[one-users] Distributed storage, local copy
Ruben S. Montero
rsmontero at opennebula.org
Sat Mar 17 11:38:01 PDT 2012
Hi,
This is a very timely thread. We are now working on improving the storage
capabilities of OpenNebula for the next release. OpenNebula 3.4 will feature
storage Datastores (check [1]). This will overcome some of the
limitations observed with the single Image Repository architecture in 3.2,
namely:
* A separate storage area for running VMs (the <vm_id>/images directories)
that can be backed by a different TM (shared, or ssh to always perform local
I/O on the hosts) and by a different storage server
* A host can now use multiple datastores with different TMs (you can use
ssh and several NFS exports at the same time, and you are no longer limited
to the TM used when the host was created; see the sketch after this list)
* Better storage scaling by distributing VMs across different storage
servers (so the I/O load can be balanced)
* Better storage scaling by easily adding more storage (for instance, today
running out of space in the Image Repository is not easy to solve)
* Storage planning based on the server/application type (so critical
servers are placed on better/safer storage and less important ones on
cheaper storage nodes)
* Specialized Datastores: we plan to include several datastore types:
shared, ssh, qcow, VMware and lvm-iscsi
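As a sketch of how this will look (the attribute names follow the 3.4
design in [1]; the exact template syntax may still change before the
release), a datastore would be defined in a template where DS_MAD selects
the datastore driver and TM_MAD the transfer driver, then registered with
the new onedatastore command:

  $ cat nfs_ds.tmpl
  NAME   = nfs_fast
  DS_MAD = fs
  TM_MAD = shared

  $ onedatastore create nfs_ds.tmpl
  $ onedatastore list

A second datastore with TM_MAD = ssh could be registered alongside it, so
the same host can mix shared and local-copy storage.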
As always, the system has been architected the OpenNebula way, i.e. it is
quite easy to hack and adapt ;)
This will also allow us to plan additional post-3.4 features. In
particular, we will look into a storage scheduler (similar to VMware DRS,
for those familiar with the VMware portfolio). Where in 3.4 you do
oneimage create img_template -d datastore, the storage scheduler will
suggest or pick the best datastore itself, based on the space left, the
number of VMs running, performance...
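For example (the datastore name is illustrative, and the scheduler
behavior is a post-3.4 plan, not a committed interface):

  # Explicit placement, as in 3.4:
  $ oneimage create img_template -d nfs_fast

  # With a storage scheduler, -d could be omitted and the best
  # datastore picked automatically:
  $ oneimage create img_template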
Cheers
Ruben
[1] http://blog.opennebula.org/?p=2646
On Mar 14, 2012 6:15 PM, "Frédéric Dreier" <frederic.dreier at gmail.com>
wrote:
> Hi Marshall,
>
> It all depends on how you will use your cloud.
>
> For example, in the last setup I worked on we tried distributed
> storage, but we quickly hit the IOPS maximum of our storage (see IOPS on
> Wikipedia for more info). We had a lot of VMs, and no SATA / SAS / iSCSI
> setup, with or without RAID, was able to provide the number of IOPS we
> needed. The result was very slow disk operations in the VMs (an apt-get
> upgrade could last hours). Some smart guys then tuned the whole thing with
> a centralized NFS server (OpenIndiana, ZFS, RAMDISK + an extra DDRdrive)
> and now it rocks.
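>
> As a rough back-of-envelope (with assumed round numbers, not our actual
> figures): a single 7.2k rpm SATA disk delivers on the order of 100 random
> IOPS, so 50 VMs averaging 30 IOPS each already need about 1500 IOPS, i.e.
> more than a dozen spindles. A ZFS server that absorbs synchronous writes
> in a RAM-based log device changes that math completely.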
>
> Good luck,
>
> Frederic
>
> 2012/3/13 Marshall Grillos <mgrillos at optimalpath.com>:
> > We are planning an OpenNebula deployment for a private cloud setup.
> >
> > I recently read an article on the OpenNebula blog about storage:
> > http://blog.opennebula.org/?p=2187
> >
> > This article discusses a “Distributed storage local copy” solution that
> > seems to scale the best. However, it does not contain any information or
> > hints on how to configure this type of setup. I have a few questions:
> >
> > 1) Will this setup allow live migrations?
> >
> > 2) Can someone point me to some configuration examples?
> >
> > 3) Does this require the VMware hypervisor (we were looking at a KVM
> > or Xen deployment)?
> >
> > Does anyone know what the general architecture would look like? We were
> > planning on attaching a large storage array to the OpenNebula controller
> > to house the VM images, and on setting up enough storage on each node to
> > house the running images, to reduce network latency. I found the article
> > useful in that this approach afforded faster deployment times than the
> > traditional non-shared filesystem.
> >
> > Any tips/information on how to test this deployment would be great.
> >
> > Thanks,
> >
> > Marshall