[one-users] What kinds of shared storage are you using?
jfontan at gmail.com
Thu Sep 2 10:05:25 PDT 2010
Even if NFS with its default configuration is not the most performant
shared filesystem, we thought it was the most common shared filesystem
people could use for virtualization. You may be able to make it faster
by adding some parameters when mounting the shared filesystem: "async"
will make your VMs run faster, as the client won't need to synchronously
write to the server (at the cost of data safety if the server crashes).
Tuning "rsize" and "wsize" will also make a difference in read/write
performance.
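For example, those options could go in /etc/fstab on each node. The mount point matches the setup described later in this thread, but the rsize/wsize values are only illustrative starting points to benchmark against, not tested recommendations:

```shell
# /etc/fstab entry on each node (example path and sizes -- tune for
# your own network and workload; "async" trades crash safety for speed):
#
#   frontend:/srv/cloud/one  /srv/cloud/one  nfs  async,rsize=32768,wsize=32768  0 0
#
# Or, mounting by hand for a quick test:
mount -t nfs -o async,rsize=32768,wsize=32768 \
    frontend:/srv/cloud/one /srv/cloud/one
```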
Anyway, I think we made a mistake calling the shared filesystem drivers
"tm_nfs", as they can be used with other shared filesystems. They only
assume that files in a certain path will be accessible by the frontend
and the nodes using standard fs commands. I encourage you to use other,
more performant filesystems with the "tm_nfs" drivers.
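A quick way to verify that assumption holds is to create a marker file on the frontend and look for it at the same path from each node. Hostnames and the shared path below are just examples:

```shell
SHARED=/srv/cloud/one/var          # shared image directory (example path)
MARKER=$SHARED/.shared_check

touch "$MARKER"                    # run this on the frontend
for node in node01 node02; do      # node names are illustrative
    if ssh "$node" test -f "$MARKER"; then
        echo "$node: shared path OK"
    else
        echo "$node: $SHARED is NOT visible"
    fi
done
```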
On our machines we have been using a hybrid system (described in
http://opennebula.org/documentation:rel1.4:sm in the "Customizing and
Extending" section). This lets you have non-cloned images that can
live-migrate and local images that perform better.
I also invite you to develop new drivers for other systems if they
need changes. I hope that having the transfer commands in shell
scripts outside the core makes it easier to change them or develop new
ones. If you have any doubts or need help creating those new drivers,
contact us, as we are also interested in interacting with other
storage systems.
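Since a tm driver is just a set of shell scripts, the clone step for a shared filesystem can be sketched roughly as below. The function name, argument handling and paths are illustrative, not the real driver interface:

```shell
#!/bin/bash
# Hypothetical sketch of a tm "clone" operation in the style of
# OpenNebula's shell-based tm drivers. Arguments arrive as
# "host:path" pairs; on a shared filesystem only the path part
# matters, since it is the same on the frontend and the nodes.
clone_image() {
    local src_path=${1#*:}         # strip the "host:" prefix
    local dst_path=${2#*:}

    mkdir -p "$(dirname "$dst_path")"
    cp "$src_path" "$dst_path"     # smarter storage could clone here instead
    chmod 0660 "$dst_path"
}
```

A driver for storage with native cloning (such as the NetApp flexclones mentioned below) would replace the plain cp with a call to the array's own cloning mechanism.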
On Thu, Sep 2, 2010 at 2:03 PM, Michael Brown <mbrown1413 at gmail.com> wrote:
> I've found that NFS is unacceptably slow too. With both the front end and
> the nodes mounting NFS, copies have to go through the front end, then back
> out again, which is a bit wasteful.
> We use a NetApp storage system, which can do flexclones. We can't take
> advantage of that with OpenNebula because everything is exported via NFS. I
> have a few coworkers that have experience with the NetApp API, so we plan on
> writing a tm driver for NetApp soon.
> I'm interested to hear about other people's setups and solutions.
> --Michael Brown
> On Wed, Sep 1, 2010 at 11:48 PM, Huang Zhiteng <winston.d at gmail.com> wrote:
>> Hi all,
>> In my OpenNebula 2.0b testing, I found NFS performance was unacceptable
>> (too bad). I haven't done any tuning or optimization of NFS yet, but I doubt
>> tuning can solve the problem. So I'd like to know what kind of shared
>> storage you are using. I thought about Global File System v2 (GFSv2).
>> GFSv2 performs much better (near-native performance), but there's a limit
>> of 32 nodes and setting up GFS is complex. So the more important question
>> is: how can shared storage scale to a >100-node cloud? Or, put differently:
>> for a >100-node cloud, what kind of storage system should be used? Please
>> give any suggestions or comments. If you have already implemented/deployed
>> such an environment, it'd be great if you could share some best practices.
>> Below are some details about my setup and the issue:
>> 1 front-end, 6 nodes. All machines are two-socket Intel Xeon X5570
>> 2.93GHz (16 threads in total) with 12GB memory. There's one SATA RAID 0
>> box (630GB capacity) connected to the front-end. Network is 1Gb Ethernet.
>> OpenNebula 2.0b was installed to /srv/cloud/one on the front-end and then
>> exported via NFSv4. The front-end also exports the RAID 0 partition to
>> the nodes.
>> The Prolog stage of creating a VM always caused the front-end machine to
>> almost freeze (slow response to input; even OpenNebula commands would time
>> out) in my setup. I highly suspect the root cause is poor NFS performance.
>> Huang Zhiteng
>> Users mailing list
>> Users at lists.opennebula.org
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org