[one-users] live migration fails on ubuntu 11.04

samuel samu60 at gmail.com
Fri Jun 17 08:58:00 PDT 2011


The error turned out to be a wrong entry in the file /etc/hosts: the
remote node's IP was set to the local node's address, which caused
several errors.
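
For reference, after the fix the entries look something like this (the
IPs below are placeholders, not my real ones):

192.168.0.1    node1
192.168.0.2    node2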

However, live migration is still not possible in the same scenario
(normal migration works perfectly); I always end up with the following
error:
Fri Jun 17 17:47:58 2011 [LCM][I]: New VM state is MIGRATE
Fri Jun 17 17:51:09 2011 [VMM][I]: Command execution fail: 'if [ -x
"/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-21
node2; else                              exit 42; fi'
Fri Jun 17 17:51:09 2011 [VMM][I]: STDERR follows.
Fri Jun 17 17:51:09 2011 [VMM][I]: error: operation failed: migration job:
unexpectedly failed
Fri Jun 17 17:51:09 2011 [VMM][I]: ExitCode: 1
Fri Jun 17 17:51:09 2011 [VMM][E]: Error live-migrating VM, error: operation
failed: migration job: unexpectedly failed
Fri Jun 17 17:51:09 2011 [LCM][I]: Fail to life migrate VM. Assuming that
the VM is still RUNNING (will poll VM).
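
If it helps, my understanding is that the migrate driver script boils
down to a virsh call like the one below (assuming the qemu+ssh
transport from my original message; one-21 is the deploy id and node2
the destination), so the failure can be reproduced by hand:

$ virsh --connect qemu:///system migrate --live one-21 qemu+ssh://node2/system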

This is the content of the file /var/log/libvirt/qemu/one-21.log:
2011-06-17 17:48:02.232: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -cpu qemu32 -enable-kvm -m
2048 -smp 1,sockets=1,cores=1,threads=1 -name one-21 -uuid
b9330d8d-3d2e-666a-c9e5-5e32e81c29dc -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-21.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c
-drive
file=/srv/cloud/one/var//21/images/disk.0,if=none,id=drive-ide0-0-0,format=raw
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=18,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=02:00:c0:a8:32:03,bus=pci.0,addr=0x3
-usb -vnc 0.0.0.0:21 -vga cirrus -incoming tcp:0.0.0.0:49152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
2011-06-17 17:51:11.997: shutting down
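
To get more detail than the qemu log provides, I am turning up
libvirtd's own logging on both nodes with something along these lines
in /etc/libvirt/libvirtd.conf (paths assume the stock Ubuntu package),
followed by a restart of the daemon:

log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"

$ service libvirt-bin restart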

And in /var/log/syslog, the following line:
Jun 17 17:51:46 node2 libvirtd: 17:51:46.798: 1200: error :
qemuDomainWaitForMigrationComplete:4218 : operation failed: migration job:
unexpectedly failed
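
Since the qemu command line above shows -incoming tcp:0.0.0.0:49152, I
assume the migration data travels over that TCP port, so one thing I
plan to check is whether the port is reachable from the source node
while the receiving qemu is running (port number taken from the log
above):

$ nc -zv node2 49152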

Can anyone provide help on this issue? How can I debug the live migration?

Thank you very much in advance,
Samuel.

On 7 June 2011 17:22, samuel <samu60 at gmail.com> wrote:

> Hi folks,
>
> After a few tweaks to the standard configuration (the controller
> exporting the OpenNebula directories via NFS to two other nodes),
> everything seems to work except for one point: live migration.
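> In case it matters, the NFS export is a single line in /etc/exports
> along these lines (the options shown are a guess at a typical shared
> setup, not necessarily my exact ones):
>
> /srv/cloud/one    node1(rw,sync,no_subtree_check) node2(rw,sync,no_subtree_check)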
>
> When starting live migration (from sunstone web interface), the following
> problem appears:
>
> Tue Jun  7 17:12:51 2011 [VMM][I]: Command execution fail: 'if [ -x
> "/var/tmp/one/vmm/kvm/migrate" ]; then /var/tmp/one/vmm/kvm/migrate one-131
> node1; else                              exit 42; fi'
> Tue Jun  7 17:12:51 2011 [VMM][I]: STDERR follows.
> Tue Jun  7 17:12:51 2011 [VMM][I]: error: Requested operation is not valid:
> domain is already active as 'one-131'
> Tue Jun  7 17:12:51 2011 [VMM][I]: ExitCode: 1
> Tue Jun  7 17:12:51 2011 [VMM][E]: Error live-migrating VM, error:
> Requested operation is not valid: domain is already active as 'one-131'
> Tue Jun  7 17:12:51 2011 [LCM][I]: Fail to life migrate VM. Assuming that
> the VM is still RUNNING (will poll VM).
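>
> As a sanity check, listing the domains on the destination shows
> whether a copy is already active there (node1 is the destination in
> this attempt; the destroy is only needed if a leftover copy is
> running):
>
> $ virsh --connect qemu+ssh://node1/system list --all
> $ virsh --connect qemu+ssh://node1/system destroy one-131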
>
> I'm using qemu+ssh transport with the following version:
> $ virsh version
> Compiled against library: libvir 0.8.8
> Using library: libvir 0.8.8
> Using API: QEMU 0.8.8
> Running hypervisor: QEMU 0.14.0
>
> The installed version of OpenNebula is 2.2.
>
> Could anyone shed some light on this issue? I've looked on the Internet
> and found some posts relating to QEMU bugs, but I'd like to know how I
> can get more information about this problem.
>
> Thank you very much in advance,
> Samuel.
>


More information about the Users mailing list