Hi,

That sounds very promising, let us know if you run into any other problems.
Once it is finished, you may want to contribute the project to the ecosystem [1].

Cheers
[1] http://opennebula.org/community:ecosystem

--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmartin@opennebula.org | @OpenNebula
On Wed, Sep 5, 2012 at 1:05 PM, Nicolas AGIUS <nicolas.agius@lps-it.fr> wrote:
Hi,

Thanks for the answer. The second solution sounds good, I will try it.

The first one is not possible. This system also handles failover and has to react to a failure within a few seconds. For example, if a node crashes, the lost VMs are immediately respawned on a healthy node. This is why it runs as close as possible to the hypervisor and can't rely on any external component.
These drivers and the complete cluster stack will be released soon.

Regards,
Nicolas AGIUS

--- On Tue, 4 Sep 2012, Carlos Martín Sánchez <cmartin@opennebula.org> wrote:
From: Carlos Martín Sánchez <cmartin@opennebula.org>
Subject: Re: [one-users] Update hostname after a migration
To: nicolas.agius@lps-it.fr
Cc: users@lists.opennebula.org
Date: Tuesday, September 4, 2012, 16:03
<div class="h5"><br><br><div>Hi,<div><br></div><div>OpenNebula has the objects cached in memory, so you cannot simply inject new data into the DB. Besides, the (live)migration operation is not only a change in the VM hostname, it involves history records, accounting, and Host capacity.</div>
The ideal solution would be to make that load balancer ask OpenNebula to do the migration.

If that's not possible, you could perform a dummy migration, with the following flow:
- The load balancer migrates a VM
- You somehow detect this migration, and execute 'onevm livemigrate vm_id target_host' (a sketch of this step follows below)
- OpenNebula will try to migrate the VM, calling the driver script /var/lib/one/remotes/vmm/<vmm_driver>/migrate
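For illustration only, the second step could be a small hook that the cluster manager runs after it has moved a VM. This is just a sketch; how the OpenNebula VM id and the target host are obtained from your load balancer is an assumption:

    #!/bin/bash
    # Hypothetical hook, run by the cluster manager after it has moved a VM.
    # How VM_ID and TARGET_HOST are passed in is an assumption; adapt it to
    # whatever notification mechanism your load balancer provides.

    VM_ID="$1"         # OpenNebula id of the VM that was moved
    TARGET_HOST="$2"   # host it is now running on

    # Ask OpenNebula to "migrate" the VM to where it already runs, so the
    # history records, accounting and host capacity get updated.
    onevm livemigrate "$VM_ID" "$TARGET_HOST"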
The migrate script called in the last step must be able to detect whether the migration was already performed by the load balancer, maybe by looking for a VM with the same name on the target host, and return success directly.
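As a rough, untested sketch of such a migrate script, assuming a xen-like driver where the action runs on the source host and receives the domain name and destination host as arguments (check the stock xen driver shipped with your version for the exact interface):

    #!/bin/bash
    # Hypothetical sketch of /var/lib/one/remotes/vmm/<vmm_driver>/migrate.
    # Argument names and order are assumptions; see the stock xen driver
    # for the real interface.

    DEPLOY_ID="$1"    # Xen domain name of the VM
    DEST_HOST="$2"    # host the VM should end up on

    # If the load balancer already moved the VM, the domain is already
    # running on the destination host: just report success.
    if ssh "$DEST_HOST" xm list "$DEPLOY_ID" >/dev/null 2>&1; then
        exit 0
    fi

    # Otherwise fall back to a normal live migration from this (source) host.
    exec xm migrate -l "$DEPLOY_ID" "$DEST_HOST"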
Do you have any plans to release these new drivers?

Best regards

--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmartin@opennebula.org | @OpenNebula

On Tue, Sep 4, 2012 at 1:57 PM, Nicolas AGIUS <nicolas.agius@lps-it.fr> wrote:
Hi,

I'm currently writing a new VMM driver to use OpenNebula with a clustered Xen manager responsible for failover and load balancing.

This system, placed between Xen and OpenNebula, will automatically move VMs on its own.

Is there any way to inform oned that a VM has been migrated?

I tried to inject this information into the database, but it didn't work; the update isn't taken into account. And there is nothing about such a thing in the API.
Thanks,
Nicolas AGIUS

_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org