[one-users] fibre channel datastore

Denis J. Cirulis denis.cirulis at gmail.com
Wed May 22 07:51:15 PDT 2013


Hello Jaime,
thanks for your tips on configuring the LVM datastore. I went through all
the steps and I have configured the datastore. The next problem is that I
cannot understand why my virtual machines are not being provisioned to this
new datastore.

Thanks.


On Tue, May 21, 2013 at 6:43 PM, Jaime Melis <jmelis at opennebula.org> wrote:

> Hello Denis,
>
> 1. Do I have to connect the FC datastore to opennebula server ?
>>
> No. You don't need to (you could, but there's really no point).
>
>> 2. Which driver should I use on compute nodes ?
>>
> The LVM drivers.
>
>> 3. Will my virtual machines be persistent during infrastructure reboots ?
>
> Yes.
>
> The setup you have described is exactly what you want to replicate
> with OpenNebula. You need the LVM drivers for this. I have just added a
> diagram to the LVM guide [1] to show you exactly the setup you need. The
> idea is to export the same LUN to all the OpenNebula nodes and create a
> cLVM* on top of it. OpenNebula will speak to just one of the nodes of that
> cluster ($HOST parameter in the Datastore template). If the FC
> configuration is persistent, and the cLVM configuration is also persistent,
> then your VMs will persist during reboots.
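> As a rough sketch, a Datastore template for this setup could look like the
> following (the names and host are placeholders, not values from this thread;
> check the attribute names against the LVM guide [1]):

```
NAME    = fc_lvm
DS_MAD  = lvm
TM_MAD  = lvm
VG_NAME = vg-one      # clustered VG created on the shared LUN
HOST    = node1       # the node OpenNebula talks to ($HOST above)
```

> It would then be registered with `onedatastore create <template_file>`.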
>
> Note that the underlying storage doesn't affect OpenNebula at all: you
> could do this with FC, iSCSI, or even a NAS, as long as you export the same
> block device to all the hosts (and configure it to persist after reboots).
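> As a sketch, creating the clustered VG on the shared LUN could look like
> this (the device path and VG name are placeholders, and clvmd must already
> be running on all nodes):

```
# On one node: initialise the shared LUN and create a clustered VG
pvcreate /dev/mapper/shared_lun
vgcreate -cy vg-one /dev/mapper/shared_lun

# On every node: check that the clustered VG is visible
vgs vg-one
```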
>
> By the way, I strongly recommend reading this article [2], written by the
> people at MTA SZTAKI, which explains many things about LVM deployment. You
> will find a lot of hints, best practices and troubleshooting tips in there.
>
> * Maybe you don't need cLVM, after reading the article [2] you will be
> able to understand the pros and cons of using it.
>
> [1] http://opennebula.org/documentation:rel4.0:lvm_ds
> [2] http://wiki.opennebula.org/shared_lvm
>
> Cheers,
> Jaime
>
> On Tue, May 21, 2013 at 9:35 AM, Denis J. Cirulis <denis.cirulis at gmail.com
> > wrote:
>
>> Hello,
>>
>> I have to set up a proof-of-concept cloud using OpenNebula and a ZFS SAN.
>> I cannot understand the correct scenario for running virtual machines
>> from FC:
>>
>> 1. Do I have to connect the FC datastore to opennebula server ?
>> 2. Which driver should I use on compute nodes ?
>> 3. Will my virtual machines be persistent during infrastructure reboots ?
>>
>> I already had a similar setup, but with plain libvirt and KVM. The concept
>> was to use one LUN per datastore from the FC storage (via FC HBA and
>> switch), which was advertised to the compute nodes as an LVM VG; each
>> virtual machine then had 1+n logical volumes from this VG as its hard
>> drives. Backup was performed via LVM snapshots and dd/rsync. It was also
>> possible to migrate storage from one node to another without VM downtime
>> using virsh blockcopy --live.
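>> The snapshot-based backup described above can be sketched roughly as
>> follows (all LV and path names are placeholders):

```
# Create a 1 GiB copy-on-write snapshot of the VM's logical volume
lvcreate -s -L 1G -n vm1-snap /dev/vg-one/vm1-disk0

# Copy the frozen snapshot out, then drop it
dd if=/dev/vg-one/vm1-snap of=/backup/vm1-disk0.img bs=4M
lvremove -f /dev/vg-one/vm1-snap
```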
>>
>> What are the correct steps to achieve the same functionality on
>> OpenNebula 4.0?
>> I'm running CentOS 6.4 on both the OpenNebula server and the compute nodes.
>>
>> Thanks in advance!
>>
>> _______________________________________________
>> Users mailing list
>> Users at lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>>
> --
> Join us at OpenNebulaConf2013 <http://opennebulaconf.com/> in Berlin, 24-26
> September, 2013
> --
> Jaime Melis
> Project Engineer
> OpenNebula - The Open Source Toolkit for Cloud Computing
> www.OpenNebula.org | jmelis at opennebula.org
>

