[one-users] cancel live migration in progress

samuel samu60 at gmail.com
Wed Oct 19 02:56:31 PDT 2011


So the only option is to somehow back up the migrating instance and manually
recover OpenNebula? The main concern is what the status of the currently
migrating VM will be... probably the best option is to recreate it?
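
Before recreating it, I guess it's worth checking what libvirt itself thinks
of the migration job. Just a sketch of what I have in mind (assuming the
domain is still defined as one-44 on the source node and that our libvirt
version has the job commands):

  # on the source node (node2), see whether a migration job is still active
  virsh domjobinfo one-44

  # if a job is reported, it can in principle be aborted, which should leave
  # the domain running on the source node
  virsh domjobabort one-44

  # check that the domain is still listed as running afterwards
  virsh list --all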

Would it be good to add a timeout in the migrate script so that it reports an
error back to OpenNebula? I'm sorry, but I'm not familiar with OpenNebula's
internals and I'm not sure what would be affected if we added a timeout to the
live migration virsh command...
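
For instance, a minimal sketch of the kind of wrapper I mean (the real driver
script name, arguments and destination URI in OpenNebula will certainly
differ; DOMAIN and DEST_HOST are just placeholders):

  #!/bin/bash
  # hypothetical wrapper around the live-migration call in the KVM vmm driver;
  # DOMAIN and DEST_HOST stand in for whatever the real driver passes down
  DOMAIN="$1"
  DEST_HOST="$2"

  # allow the migration at most 10 minutes, then fail so an error reaches
  # OpenNebula instead of the VM staying in MIGRATE forever
  if ! timeout 600 virsh migrate --live "$DOMAIN" "qemu+ssh://$DEST_HOST/system"; then
      echo "live migration of $DOMAIN to $DEST_HOST failed or timed out" >&2
      exit 1
  fi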

Thanks for your support,
Samuel.

On 18 October 2011 22:53, Javier Fontan <jfontan at opennebula.org> wrote:

> Unfortunately, I don't know of any way to stop or recover the failed
> migration, either through OpenNebula or manually.
>
> Rebooting a physical host will basically destroy the running VMs and
> most probably the disks will be corrupted.
>
> On Tue, Oct 18, 2011 at 2:34 PM, samuel <samu60 at gmail.com> wrote:
> > I'm adding more information so you can follow the steps taken and the
> > resulting problem.
> >
> > 1) Segfault on node 2:
> > [620617.517308] kvm[28860]: segfault at 420 ip 0000000000413714 sp
> > 00007fff9136ea70 error 4 in qemu-system-x86_64[400000+335000]
> >
> > VMs work OK
> >
> > 2) Restarted libvirt on node 2:
> > /etc/init.d/libvirt-bin restart
> >
> > libvirt is able to list the local VMs:
> > # virsh list
> >  Id Name                 State
> > ----------------------------------
> >   2 one-47               running
> >   3 one-44               running
> >
> > 3) Tried to live-migrate one-44 from node2 to node3 using the Sunstone
> > web interface.
> >
> > vm.log:
> > Tue Oct 18 11:00:31 2011 [LCM][I]: New VM state is MIGRATE
> >
> > oned.log:
> > Tue Oct 18 11:00:31 2011 [DiM][D]: Live-migrating VM 44
> > Tue Oct 18 11:00:31 2011 [ReM][D]: VirtualMachineInfo method invoked
> >
> > 4) The end situation is:
> > one-44 is in MIGRATE state for OpenNebula (there's no timeout parameter
> > set for the virsh live migration, so it will stay there forever (?))
> >
> > root at node3:# virsh list
> >  Id Name                 State
> > ----------------------------------
> >
> > root at node2:# virsh list
> >  Id Name                 State
> > ----------------------------------
> >   2 one-47               running
> >   3 one-44               running
> >
> >
> > /var/log/libvirt/qemu/one-44.log is empty on both nodes (node2 and
> > node3).
> >
> > My questions are:
> >
> > i) How can I stop the live migration from the OpenNebula point of view, so
> > that it does not lose the whole picture of the cloud and keeps consistency?
> > ii) Is it safe to restart node2 or node3?
> >
> > Thank you in advance for any hint on this issue.
> >
> > Samuel.
> > On 18 October 2011 11:58, samuel <samu60 at gmail.com> wrote:
> >>
> >> Hi all,
> >>
> >> I'm having an issue with live migration. There was a running instance on
> >> a node that had a qemu segfault (I only noticed it afterwards, because the
> >> instances kept working). I've tried to live migrate the instance to another
> >> node without apparent problems, but the instance remains in MIGRATE state
> >> "forever".
> >> * Is there any method to stop the live migration?
> >> * If I restart the node with the qemu segfault, will the instances run OK
> >> again? They have been running, but the communication between OpenNebula
> >> and KVM is broken, so I'm not sure whether the cloud will keep consistency.
> >> I think I read that if the name of the instance is the same and the node
> >> is the same, OpenNebula will keep consistency.
> >>
> >> Can anyone help me, please?
> >>
> >> Thanks in advance,
> >> Samuel.
> >>
> >
> >
>
>
>
> --
> Javier Fontán Muiños
> Project Engineer
> OpenNebula Toolkit | opennebula.org
>

