<div dir="ltr">Hello Denis,<div><br></div><div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">1. Do I have to connect the FC datastore to opennebula server ?<br>
</blockquote><div style>No. You don't need to (you could, but there's really no point). </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
2. Which driver should I use on compute nodes ?<br></blockquote><div style>The LVM drivers. </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
3. Will my virtual machines be persistent during infrastructure reboots ?</blockquote><div style>Yes. </div></div><div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style>The setup you have described is exactly what you can replicate with OpenNebula, using the LVM drivers. I have just added a diagram to the LVM guide [1] showing exactly the setup you need. The idea is to export the same LUN to all of the OpenNebula nodes and create a cLVM* volume group on top of it. OpenNebula will talk to just one of the nodes of that cluster (the $HOST parameter in the Datastore template). If both the FC configuration and the cLVM configuration are persistent, then your VMs will persist across reboots.</div>
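<div style><br></div><div style>For illustration, the two pieces involved might look like this (a sketch only: the device path, hostname, VG name, and datastore name below are placeholders, and the exact attribute names should be checked against the LVM guide [1]). First, the volume group on the shared LUN, created once the LUN is visible on every node:</div><div style><pre># run on one node after the LUN appears on all hosts
pvcreate /dev/mapper/mpatha
vgcreate vg-one /dev/mapper/mpatha</pre></div><div style>Then a Datastore template pointing OpenNebula at one of those nodes:</div><div style><pre>NAME    = lvm_fc_ds
DS_MAD  = lvm
TM_MAD  = lvm
HOST    = node1.example.com
VG_NAME = vg-one</pre></div><div style>which you would register with "onedatastore create" on the front-end.</div>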
<div style><br></div><div style>Note that the underlying storage doesn't affect OpenNebula at all: you could do this with FC, iSCSI, or even a NAS, as long as you export the same block device to all the hosts (and configure it to persist after reboots).</div>
<div style><br></div><div style>By the way, I strongly recommend reading this article [2], written by the people at MTA SZTAKI, which explains many things about LVM deployment. You will find a lot of hints, best practices, and troubleshooting tips in there.</div>
<div style><br></div><div style>* You may not need cLVM at all; after reading the article [2] you will be able to weigh the pros and cons of using it.</div><div style><br></div><div style>[1] <a href="http://opennebula.org/documentation:rel4.0:lvm_ds">http://opennebula.org/documentation:rel4.0:lvm_ds</a></div>
<div style>[2] <a href="http://wiki.opennebula.org/shared_lvm">http://wiki.opennebula.org/shared_lvm</a></div><div class="gmail_extra"><br></div><div class="gmail_extra">Cheers,</div><div class="gmail_extra">Jaime<br><br>
<div class="gmail_quote">On Tue, May 21, 2013 at 9:35 AM, Denis J. Cirulis <span dir="ltr"><<a href="mailto:denis.cirulis@gmail.com" target="_blank">denis.cirulis@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hello,<div><br></div><div>I have to setup proof of concept cloud using opennebula and zfs san.</div><div>I can not understand the correct scenario of running virtual machines from FC:<br></div><div><br></div>
<div>1. Do I have to connect the FC datastore to opennebula server ?</div><div>2. Which driver should I use on compute nodes ?</div><div>3. Will my virtual machines be persistent during infrastructure reboots ?</div><div>
<br></div><div>I already had similar setup but with plain libvirt and kvm, the concept was to use one lun per datastore from fc storage using fc hba and switch, which was advertised to compute nodes as lvm vg, then each virtual machine had 1+n logical volumes from this vg as its hard drive. Backup was performed via lvm snapshot and dd/rsync. There was a possibility to migrate storage from one node to another without vm downtime using virsh block-copy --live.</div>
<div><br></div><div>What are the correct steps to achieve the same functionality on opennebula 4.0 ?</div><div>I'm running CentOS 6.4 both on opennebula server and compute nodes.</div><div><br></div><div>Thanks in advance!</div>
</div>
<br>_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
<br></blockquote></div><br>-- <br><div dir="ltr">Join us at <a href="http://opennebulaconf.com/" style="color:rgb(17,85,204)" target="_blank">OpenNebulaConf2013</a> in Berlin, <span><span>24-26 September, 2013</span></span><br>
--<div>Jaime Melis<br>Project Engineer<br>OpenNebula - The Open Source Toolkit for Cloud Computing<br><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:jmelis@opennebula.org" target="_blank">jmelis@opennebula.org</a></div>
</div>
</div></div>