[one-users] What kinds of shared storage are you using?
slava at killerbeaver.net
Fri Sep 3 14:10:53 PDT 2010
We are using GlusterFS and it works great :) With some tweaking we were able
to average around 90-120 megabits per second on reads and 25-35 megabits per
second on writes. Configuration is as follows:
2 file servers:
- Supermicro server motherboard with Intel Atom D510
- 4GB DDR2 RAM
- 6 x Western Digital RE3 500GB hard drives
- Ubuntu 10.04 x64 (on a 2GB USB stick)
- RAID 10
The file servers are set up with replication, which gives us a total of 1.5TB
dedicated to virtual machine storage, with the ability to grow to petabytes on
demand - just add more nodes!
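In case it helps anyone, a minimal sketch of a two-node replicated GlusterFS
volume looks roughly like this (host names, brick paths and the mount point are
placeholders rather than our actual configuration, and older GlusterFS releases
use volume files instead of the gluster CLI):

  # on one file server, after GlusterFS is installed on both (names are examples)
  gluster peer probe fs2
  gluster volume create vmstore replica 2 transport tcp \
      fs1:/export/vmstore fs2:/export/vmstore
  gluster volume start vmstore

  # on each OpenNebula host, mount the volume over the VM directory
  mount -t glusterfs fs1:/vmstore /srv/cloud/one/var

Growing the volume later is a matter of adding bricks (gluster volume add-brick)
and rebalancing. With replica 2 every write goes to both servers, which is part
of why write throughput ends up lower than read throughput.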
Slava Yanson, CTO
Killer Beaver, LLC
c: (323) 963-4787
Follow us on Facebook: http://fb.killerbeaver.net/
Follow us on Twitter: http://twitter.com/thekillerbeaver
On Wed, Sep 1, 2010 at 8:48 PM, Huang Zhiteng <winston.d at gmail.com> wrote:
> Hi all,
> In my OpenNebula 2.0b testing, I found NFS performance unacceptable. I
> haven't done any tuning or optimization of NFS yet, but I doubt tuning alone
> can solve the problem. So I'd like to know what kind of shared storage you
> are using. I have considered Global File System 2 (GFS2). GFS2 performs much
> better (near-native performance), but it is limited to 32 nodes and is
> complex to set up. So the more important question is: how can shared storage
> scale to a >100 node cloud? Or, put another way: for a >100 node cloud, what
> kind of storage system should be used? Please share any suggestions or
> comments. If you have already implemented or deployed such an environment,
> it would be great if you could share some best practices.
> Below are some details about my setup and the issue:
> 1 front-end, 6 nodes. All machines are two-socket Intel Xeon X5570 2.93GHz
> (16 threads in total) with 12GB of memory. There is one SATA RAID 0 box
> (630GB capacity) connected to the front-end. The network is 1Gb Ethernet.
> OpenNebula 2.0b was installed to /srv/cloud/one on the front-end and then
> exported via NFSv4. The front-end also exports the RAID 0 partition to the
> nodes.
> In my setup, the Prolog stage of creating a VM always made the front-end
> machine almost freeze (slow response to input; even OpenNebula commands
> would time out). I strongly suspect the root cause is poor NFS performance.
> Huang Zhiteng
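Regarding the NFS setup above: before giving up on NFS completely, it is worth
checking the export and mount options. A common starting point looks something
like this (the subnet, paths and option values are only examples, not taken
from your actual configuration):

  # /etc/exports on the front-end; async trades crash safety for speed
  /srv/cloud/one  192.168.0.0/24(rw,async,fsid=0,no_subtree_check,no_root_squash)

  # on each node: larger transfer sizes usually help NFS throughput
  mount -t nfs4 -o rsize=65536,wsize=65536,hard,intr frontend:/ /srv/cloud/one

With fsid=0 the export becomes the NFSv4 pseudo-root, which is why the clients
mount frontend:/ rather than the full path. async and larger rsize/wsize are
usually the first knobs to try, though async risks losing data if the server
crashes.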