We are using GlusterFS and it works great :) With some tweaking we were able to average around 90-120 megabits per second on reads and 25-35 megabits per second on writes. Configuration is as follows:<div><br></div><div>2 file servers:</div>
<div>- Supermicro server motherboard with Intel Atom D510</div><div>- 4GB DDR2 RAM</div><div>- 6 x Western Digital RE3 500GB hard drives</div><div>- Ubuntu 10.04 x64 (on a 2GB USB stick)</div><div>- RAID 10</div><div><br>
</div><div>File servers are set up with replication and we have a total of 1.5TB of storage dedicated to virtual machine storage with ability to grow it to petabytes on demand - just add more nodes!</div><div><br clear="all">
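The two-node replicated setup described above can be sketched with the standard GlusterFS CLI. Hostnames (fs1, fs2), the brick path, and the volume/mount names are illustrative assumptions, not taken from the message:<br><br>

```shell
# Sketch: create a 2-way replicated GlusterFS volume across the two file
# servers. Hostnames, brick paths, and volume name are hypothetical.
gluster peer probe fs2                      # run on fs1: join fs2 to the pool
gluster volume create vmstore replica 2 transport tcp \
    fs1:/export/brick fs2:/export/brick     # one brick per server, mirrored
gluster volume start vmstore

# On each virtualization node, mount the volume with the native client:
mount -t glusterfs fs1:/vmstore /srv/vmstore
```

Growing the volume later is a matter of `gluster volume add-brick` with another replica pair, which matches the "just add more nodes" scaling described above.<br>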
--------------------------------------------<br>Slava Yanson, CTO<br>Killer Beaver, LLC<br><br>w: <a href="http://www.killerbeaver.net">www.killerbeaver.net</a><br>c: (323) 963-4787<br>aim/yahoo/skype: urbansoot<br><br>Follow us on Facebook: <a href="http://fb.killerbeaver.net/">http://fb.killerbeaver.net/</a><br>
Follow us on Twitter: <a href="http://twitter.com/thekillerbeaver">http://twitter.com/thekillerbeaver</a><br>
<br><br><div class="gmail_quote">On Wed, Sep 1, 2010 at 8:48 PM, Huang Zhiteng <span dir="ltr"><<a href="mailto:winston.d@gmail.com">winston.d@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Hi all,<br><br>In my OpenNebula 2.0b testing, I found NFS performance unacceptable. I haven't done any tuning or optimization on NFS yet, but I doubt tuning alone can solve the problem, so I'd like to know what kind of shared storage you are using. I have considered Global File System v2 (GFSv2); it performs much better (near-native performance), but it is limited to 32 nodes and is complex to set up. So the more important question is: how can shared storage scale to a cloud of more than 100 nodes? Or, put another way: for a cloud of more than 100 nodes, what kind of storage system should be used? Any suggestions or comments are welcome. If you have already implemented or deployed such an environment, it would be great if you could share some best practices.<br>
<br>--------------<br>Below are some details about my setup and the issue:<br clear="all"><br>1 front-end, 6 nodes. All machines are dual-socket Intel Xeon X5570 2.93GHz (16 threads in total) with 12GB of memory. A SATA RAID 0 box (630GB capacity) is connected to the front-end. The network is 1Gb Ethernet.<br>
<br>OpenNebula 2.0b was installed to /srv/cloud/one on the front-end and exported via NFSv4. The front-end also exports the RAID 0 partition as /srv/cloud/one/var/images.<br><br>In my setup, the prolog stage of creating a VM always brought the front-end machine to a near freeze (slow response to input; even OpenNebula commands would time out). I strongly suspect the root cause is poor NFS performance. <br>
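For reference, the NFSv4 export layout described above would look roughly like this on the front-end. The paths are from this message; the export options are illustrative assumptions, not my actual configuration:<br><br>

```shell
# Sketch of /etc/exports on the front-end (options are illustrative).
# NFSv4 exports hang off a single pseudo-root, marked with fsid=0:
#
#   /srv/cloud/one              *(rw,sync,no_subtree_check,fsid=0)
#   /srv/cloud/one/var/images   *(rw,sync,no_subtree_check)
#
# After editing, re-export without restarting the server:
exportfs -ra
```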
-- <br>Regards<br><font color="#888888">Huang Zhiteng<br>
</font><br>_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
<br></blockquote></div><br></div>