<div dir="ltr">Hi Alberto,<div><br></div><div style>Last february there was a very interesting discussion about distributed FS in OpenNebula's mailing list:</div><div style><a href="http://lists.opennebula.org/pipermail/users-opennebula.org/2012-February/007824.html">http://lists.opennebula.org/pipermail/users-opennebula.org/2012-February/007824.html</a><br>
</div><div style><br></div><div style>I hope it will come in handy.</div><div style><br></div><div style>cheers,<br>Jaime</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, Jan 7, 2013 at 3:27 PM, Campbell, Bill <span dir="ltr"><<a href="mailto:bcampbell@axcess-financial.com" target="_blank">bcampbell@axcess-financial.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Alberto,<br>
When initially setting up our OpenNebula environment we began with a GFS/iSCSI/LVM configuration, and while it performed well, maintaining the GFS cluster was a pain. We then investigated GlusterFS and piloted on that platform. It went well at first, but we started to see some strange performance issues (similar to what you were seeing with MooseFS) where VMs would lock up or slow to a crawl, and then some filesystem errors started happening on Gluster, which impacted availability. We then moved the pilot over to NFS (as recommended in the OpenNebula documentation) and have been running on that for some time. However, this isn't our long-term goal, as NFS isn't very scalable.
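
For what it's worth, wiring an NFS mount into OpenNebula is straightforward: it's just a datastore that uses the shared transfer driver. A minimal sketch, assuming /var/lib/one/datastores is NFS-mounted on every host (the datastore name is only an example):

    $ cat system-ds.conf
    NAME   = nfs_system
    TM_MAD = shared
    TYPE   = SYSTEM_DS
    $ onedatastore create system-ds.conf
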
I've been testing Ceph and have had pretty good success so far. It's a bit different from Gluster/Moose, but isn't terribly difficult to implement, is fault tolerant and scalable, and so far performance has been way better than I originally anticipated. We plan on implementing it going forward for the additional OpenNebula zones we are deploying. The newer versions even have some spiffy new snapshotting capabilities for RBD devices as well.
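
For example, taking and rolling back a point-in-time snapshot of a VM disk is a one-liner with the rbd tool; a quick sketch, where the pool and image names are just placeholders:

    $ rbd snap create one/vm-42-disk-0@before-upgrade
    $ rbd snap ls one/vm-42-disk-0
    $ rbd snap rollback one/vm-42-disk-0@before-upgrade
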
I submitted an in-progress driver to the developers in hopes that it will be included in the 4.0 release (KVM/QEMU/libvirt have native support for RBD block devices, which are virtual block devices striped across objects in a Ceph cluster, and the driver takes advantage of that support).
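
The guest disk definition that ends up in libvirt looks roughly like this (monitor host, pool and image names are placeholders):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='one/vm-42-disk-0'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
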
<div><div class="h5"><br>
<br>
----- Original Message -----
From: "Alberto Zuin - Liste" <liste@albertozuin.eu>
To: users@lists.opennebula.org
Sent: Saturday, January 5, 2013 4:52:26 AM
Subject: [one-users] Which is the best storage for One/KVM?

Hello all,
</gr-replace>
In the past I built a cloud for a customer with OpenNebula, Xen and MooseFS storage: with a lot of chunk servers, the VMs' I/O latency is acceptable.
Recently I built a little cloud for my personal use, also with OpenNebula and MooseFS, but with KVM instead of Xen.
Sometimes the I/O is very slow and the VM's kernel remounts the disk read-only after a timeout of 900 seconds (!!! I raised this setting in sysctl.conf).
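For reference, the guest-side knobs involved look something like the following (values are only illustrative):

    # raise the command timeout of the guest's SCSI disk (default is ~30 s)
    echo 900 > /sys/block/sda/device/timeout
    # or tell ext3/ext4 to carry on instead of remounting read-only on I/O errors
    tune2fs -e continue /dev/sda1
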
In both systems the images are always in raw format (I don't use qcow), so I don't know if the difference is caused by KVM/Xen or by the MooseFS chunk servers' hardware (2 servers with a replica count of 2, instead of 5 servers with a replica count of 3).
Now I don't have enough time to run tests: my setup is simply wrong and I have to build another one that works.
The question is: if you had to build a little cloud like mine, with OpenNebula, 2 KVM hosts and 2 storage servers (each with 2 SATA disks that I want to replace with 1 TB WD VelociRaptors to be safe), which storage technology would you choose? Size is not a problem (now I use only 1 TB, so a total of 2 TB is OK), but speed is important because mail and SQL need solid I/O, and it obviously has to be rock solid in case of failure.
MooseFS with better or more hardware? Another cluster filesystem like Gluster or Ceph? A simple active/active DRBD?
Thanks,
Alberto

--
AZ Network Specialist
via Mare, 36A
36030 Lugo di Vicenza (VI)
ITALY
P.I. IT04310790284
http://www.azns.it
Tel +39.3286268626
Fax +39.0492106654
<div class="HOEnZb"><div class="h5"><br>
<br>

_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

--
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jmelis@opennebula.org