[one-users] GlusterFS on OpenNebula
Shankhadeep Shome
shank15217 at gmail.com
Tue Nov 6 23:26:22 PST 2012
We use GlusterFS with OpenNebula on two KVM nodes, mirrored across the
nodes with 6 bricks per node. We recommend Gluster 3.3 with a dedicated
back-end network connection, 10GbE or InfiniBand. For stress-testing the
file system you can use dbench on a cluster of VMs.
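For reference, here is a rough sketch of that layout and the dbench run.
The host names (kvm1, kvm2), brick paths (/bricks/b1../b6) and volume name
(vmstore) are made-up placeholders rather than our actual setup:

    # Create a replica-2 volume; consecutive bricks form mirror pairs,
    # so each brick on kvm1 is mirrored by its twin on kvm2.
    gluster volume create vmstore replica 2 \
        kvm1:/bricks/b1 kvm2:/bricks/b1 \
        kvm1:/bricks/b2 kvm2:/bricks/b2 \
        kvm1:/bricks/b3 kvm2:/bricks/b3 \
        kvm1:/bricks/b4 kvm2:/bricks/b4 \
        kvm1:/bricks/b5 kvm2:/bricks/b5 \
        kvm1:/bricks/b6 kvm2:/bricks/b6
    gluster volume start vmstore

    # Inside a test VM: mount the volume and hammer it with 8 dbench
    # clients for 60 seconds.
    mount -t glusterfs kvm1:/vmstore /mnt/vmstore
    dbench -t 60 -D /mnt/vmstore 8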
Gluster 3.2 wasn't very robust to node failure, but 3.3 is far more
stable. I would say it's ready for development lab use and moderately
heavy loads. For best performance, I would suggest using cgroups to shield
the glusterfs processes from the KVM processes, or using separate storage
hosts.
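If you want to try the cgroups shielding, here is a minimal sketch,
assuming cgroup v1 with the cpuset controller mounted and the libcgroup
tools installed; the group name and core numbers are arbitrary examples:

    # Carve out two dedicated cores for the gluster daemons.
    cgcreate -g cpuset:/gluster
    echo 0-1 > /sys/fs/cgroup/cpuset/gluster/cpuset.cpus
    echo 0 > /sys/fs/cgroup/cpuset/gluster/cpuset.mems    # must be set before tasks can join
    cgclassify -g cpuset:/gluster $(pidof glusterfsd glusterfs)

To complete the shielding you would confine the qemu-kvm processes to the
remaining cores with a second cgroup in the same way.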
On Fri, Nov 2, 2012 at 5:03 AM, Giovanni Toraldo <me at gionn.net> wrote:
> On Fri, Nov 2, 2012 at 9:46 AM, Timothy Ehlers <ehlerst at gmail.com> wrote:
> > How does an instance react to a Gluster node
> > failure? On my POC cloud, killing a server causes all the other boxes to
> > hang until the dead node times out in Gluster.
>
> This is a well-known GlusterFS configuration issue; you may want to read
> the documentation or ask about it on their mailing list.
>
> --
> Giovanni Toraldo
> http://gionn.net
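A footnote on the failure hang quoted above: it typically lasts as long as
GlusterFS's network.ping-timeout (42 seconds by default), and lowering it
per volume makes clients fail over to the surviving replica sooner. With
the same placeholder volume name as before:

    gluster volume set vmstore network.ping-timeout 10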