Hi all,

In my OpenNebula 2.0b testing, I found NFS performance to be unacceptably poor. I haven't done any tuning or optimization of NFS yet, but I doubt tuning alone can solve the problem. So I'd like to know what kind of shared storage you are using. I thought about Global File System 2 (GFS2). GFS2 performs much better (near-native performance), but there's a limit of 32 nodes and setting up GFS2 is complex. So the more important question: how can shared storage scale to a >100-node cloud? Or, put another way: for a >100-node cloud, what kind of storage system should be used? Please share any suggestions or comments. If you have already implemented/deployed such an environment, it'd be great if you could share some best practices.
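For reference, the kind of NFS tuning I'd try first looks roughly like the sketch below (untested on my side; it assumes a Linux NFS server and client, and "frontend" stands in for my front-end's hostname):

    # /etc/exports on the front-end: async trades crash safety for write
    # speed, no_subtree_check skips per-request subtree verification
    /srv/cloud/one  *(rw,async,no_root_squash,no_subtree_check)

    # Bump the number of kernel nfsd threads (the default of 8 is easy to
    # saturate), e.g. RPCNFSDCOUNT=64 in /etc/default/nfs-kernel-server
    # on Debian/Ubuntu.

    # Client side: larger transfer sizes, skip access-time updates
    mount -t nfs4 -o rsize=32768,wsize=32768,noatime frontend:/ /srv/cloud/one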
--------------
Below are some details about my setup and the issue:

1 front-end, 6 nodes. All machines are two-socket Intel Xeon X5570 2.93 GHz (16 threads in total) with 12 GB of memory. There's one SATA RAID 0 box (630 GB capacity) connected to the front-end. The network is 1 Gb Ethernet.
OpenNebula 2.0b was installed to /srv/cloud/one on the front-end and then exported via NFSv4. The front-end also exports the RAID 0 partition as /srv/cloud/one/var/images.

In my setup, the Prolog stage of creating a VM always brings the front-end machine to a near freeze (slow response to input; even OpenNebula commands time out). I highly suspect the root cause is poor NFS performance.
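To confirm that, I plan to compare raw write throughput on the RAID 0 partition locally versus through the NFS mount, along these lines (testfile is just a scratch file; conv=fdatasync forces the data to disk before dd reports a rate):

    # On the front-end: local write to the RAID 0 partition
    dd if=/dev/zero of=/srv/cloud/one/var/images/testfile bs=1M count=1024 conv=fdatasync

    # On a node: the same write, but going through the NFS mount
    dd if=/dev/zero of=/srv/cloud/one/var/images/testfile bs=1M count=1024 conv=fdatasync

If the numbers differ by an order of magnitude, that would point at NFS (or the 1 Gb link) rather than the RAID box itself.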
--
Regards
Huang Zhiteng