[one-users] Which is the best storage for One/KVM?

Campbell, Bill bcampbell at axcess-financial.com
Mon Jan 7 06:27:28 PST 2013


Alberto,
When initially setting up our OpenNebula environment we began with a GFS/iSCSI/LVM configuration. It performed well, but maintaining the GFS cluster was a pain.  We then investigated GlusterFS and piloted on that platform.  At first it went well, but we started to see some strange performance issues (similar to what you were seeing with MooseFS) where VMs would lock up or slow to a crawl, and then some filesystem errors on Gluster started happening, which impacted availability.  We then moved the pilot over to NFS (as recommended in the OpenNebula documentation) and have been running on that for some time.  However, NFS isn't our long-term goal, as it doesn't scale well.

I've been testing Ceph and have had pretty good success so far.  It's a bit different from Gluster/Moose, but isn't terribly difficult to implement, is fault tolerant and scalable, and so far performance has been far better than I originally anticipated.  We plan on using it going forward for the additional OpenNebula zones we are deploying.  The newer versions even have some spiffy snapshotting capabilities for RBD devices.
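For what it's worth, the snapshot workflow on the Ceph side is just a few rbd commands. This is only a sketch: the pool name "mypool", image name "vm-disk1", and snapshot name are made up, and it assumes a working Ceph cluster with the rbd CLI installed.

```shell
# Hypothetical pool/image names; requires a running Ceph cluster.
rbd snap create mypool/vm-disk1@before-upgrade   # take a point-in-time snapshot
rbd snap ls mypool/vm-disk1                      # list snapshots of the image
rbd snap rollback mypool/vm-disk1@before-upgrade # roll the image back to the snapshot
rbd snap rm mypool/vm-disk1@before-upgrade       # delete the snapshot
```

Rollback requires the VM to be stopped (or the image otherwise unmapped) to avoid corrupting the filesystem inside it.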

I submitted an in-progress driver to the developers in the hope that it will be included in the 4.0 release.  KVM/QEMU/libvirt have native support for RBD block devices (virtual block devices striped across objects in a Ceph cluster), and the driver takes advantage of this.
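To give an idea of what the libvirt side of this looks like, here is a rough domain XML fragment attaching an RBD image as a network disk. All names here are placeholders (pool "libvirt-pool", image "vm-disk1", monitor hosts, and the secret UUID), not values from the driver or from an actual deployment:

```xml
<!-- Sketch of a libvirt <disk> element for an RBD-backed virtual disk.
     Pool/image names, monitor hosts, and the secret UUID are placeholders. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='libvirt-pool/vm-disk1'>
    <host name='mon1.example.com' port='6789'/>
    <host name='mon2.example.com' port='6789'/>
  </source>
  <auth username='libvirt'>
    <!-- UUID of a libvirt secret holding the cephx key (placeholder value) -->
    <secret type='ceph' uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```

Because QEMU talks to the Ceph monitors directly, no kernel RBD mapping or shared filesystem mount is needed on the hypervisor hosts.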


----- Original Message -----
From: "Alberto Zuin - Liste" <liste at albertozuin.eu>
To: users at lists.opennebula.org
Sent: Saturday, January 5, 2013 4:52:26 AM
Subject: [one-users] Which is the best storage for One/KVM?

Hello all,
in the past I built a cloud for a customer with OpenNebula, Xen, and MooseFS 
storage: with a lot of chunk servers, the VM I/O latency was acceptable.
Recently I built a small cloud for my own purposes, also with OpenNebula 
and MooseFS, but with KVM instead of Xen.
Sometimes the I/O is very slow and the VM's kernel remounts the disk 
read-only due to a timeout of 900 seconds (!!! I modified this setting 
in sysctl.conf).
In both systems the images are in raw format (I don't use qcow), so I 
don't know whether the difference is caused by KVM vs. Xen or by the 
MooseFS chunk servers' hardware (2 servers with a replica count of 2, 
instead of 5 servers with a replica count of 3).
Right now I don't have enough time to run tests: my setup is simply 
wrong and I have to build another one that works.
The question is: if you had to build a small cloud like mine, with 
OpenNebula, 2 KVM hosts, and 2 storage servers (each with 2 SATA disks 
that I want to replace with 1 TB WD VelociRaptors to be safe), what kind 
of storage technology would you choose? Size is not a problem (I 
currently use only 1 TB, so a total of 2 TB is fine), but speed is 
important because mail and SQL need solid I/O, and it obviously has to 
be rock solid in case of failure.
MooseFS with better or more hardware? Another cluster filesystem like 
Gluster or Ceph? A simple active/active DRBD setup?
Thanks,
Alberto

-- 
AZ Network Specialist
via Mare, 36A
36030 Lugo di Vicenza (VI)
ITALY
P.I. IT04310790284
http://www.azns.it
Tel +39.3286268626
Fax +39.0492106654







