[one-users] problem doing migration

Tino Vazquez tinova at fdi.ucm.es
Thu Jul 16 03:57:49 PDT 2009


Hi there,

It is very likely a permissions problem, but I also see some
connection issues in the log files you sent over. Let's see if we can
figure this out.

Which authentication method did you configure for libvirt, or did you leave the default?

Also, can you perform a live migration directly with the virsh command?
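To test this outside of OpenNebula, something along these lines should work (a sketch: the domain name one-53 and the qemu+ssh transport are assumptions based on your logs and setup; adjust to match yours):

```shell
# Try the live migration by hand, bypassing OpenNebula.
# one-53 is assumed to be the libvirt domain name for VM 53,
# and test2 the destination host from your logs.
virsh -c qemu:///system migrate --live one-53 qemu+ssh://test2/system
```

If this fails with the same "Connection refused", the problem is in the libvirt setup rather than in OpenNebula itself.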

How many libvirt daemons do you see running on the nodes? (Perhaps one
belongs to oneadmin and another to root.)
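A quick way to check this on each node (assuming the daemon process is named libvirtd, as on Ubuntu):

```shell
# List every running libvirtd process together with its owning user.
# The [l] bracket trick keeps grep from matching itself.
ps -eo user,pid,cmd | grep '[l]ibvirtd'
```

If two daemons show up under different users, oneadmin's virsh may be talking to the wrong one.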

Based on previous experience, please try setting the following
environment variable on the nodes and the front-end:

export VIRSH_DEFAULT_CONNECT_URI='qemu+unix:///system?socket=/var/run/libvirt/libvirt-sock'
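With that variable exported, you can verify that virsh really picks up the system socket with something like:

```shell
# Should print the qemu+unix:///system URI set above,
# then list all domains known to the system daemon.
virsh uri
virsh list --all
```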

Hope it helps,

-Tino

--
Constantino Vázquez, Grid Technology Engineer/Researcher:
http://www.dsa-research.org/tinova
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org



On Fri, Jun 19, 2009 at 4:44 PM, Shi Jin <jinzishuai at yahoo.com> wrote:
>
> Hi there,
>
> I've set up an Ubuntu Jaunty cluster to run OpenNebula, where the VM_DIR is shared via NFS.
> I tried to do a live migration but got this error message:
> Fri Jun 19 08:03:51 2009 [LCM][I]: New VM state is MIGRATE
> Fri Jun 19 08:03:51 2009 [VMM][I]: Connecting to uri: qemu:///system
> Fri Jun 19 08:03:51 2009 [VMM][I]: ExitCode: 0
> Fri Jun 19 08:03:52 2009 [VMM][I]: Connecting to uri: qemu:///system
> Fri Jun 19 08:03:52 2009 [VMM][I]: error: unable to connect to 'test2': Connection refused
> Fri Jun 19 08:03:52 2009 [VMM][I]: ExitCode: 1
> Fri Jun 19 08:03:52 2009 [VMM][E]: Error live-migrating VM, unable to connect to 'test2': Connection refused
> Fri Jun 19 08:03:52 2009 [LCM][I]: Fail to life migrate VM. Assuming that the VM is still RUNNING (will poll VM).
> Fri Jun 19 08:03:52 2009 [VMM][I]: Connecting to uri: qemu:///system
> Fri Jun 19 08:03:52 2009 [VMM][I]: ExitCode: 0
> Fri Jun 19 08:03:52 2009 [VMM][I]: Monitor Information:
>        CPU   : -1
>        Memory: 890880
>        Net_TX: -1
>        Net_RX: -1
>
>
>
> I also tried to do a regular migration but got this error message:
> Fri Jun 19 08:06:24 2009 [LCM][I]: New VM state is SAVE_MIGRATE
> Fri Jun 19 08:06:31 2009 [VMM][I]: Connecting to uri: qemu:///system
> Fri Jun 19 08:06:31 2009 [VMM][I]: ExitCode: 0
> Fri Jun 19 08:06:31 2009 [LCM][I]: New VM state is PROLOG_MIGRATE
> Fri Jun 19 08:06:31 2009 [TM][I]: tm_mv.sh: Will not move, source and destination are equal
> Fri Jun 19 08:06:31 2009 [LCM][I]: New VM state is BOOT
> Fri Jun 19 08:06:31 2009 [VMM][I]: Connecting to uri: qemu:///system
> Fri Jun 19 08:06:31 2009 [VMM][I]: error: Failed to restore domain from /mnt/OpenNebula/VMDir/53/images/checkpoint
> Fri Jun 19 08:06:31 2009 [VMM][I]: error: operation failed: cannot read domain image
> Fri Jun 19 08:06:31 2009 [VMM][I]: ExitCode: 1
> Fri Jun 19 08:06:31 2009 [VMM][E]: Error restoring VM, Failed to restore domain from /mnt/OpenNebula/VMDir/53/images/checkpoint
> Fri Jun 19 08:06:31 2009 [LCM][I]: Fail to boot VM.
> Fri Jun 19 08:06:31 2009 [DiM][I]: New VM state is FAILED
> oneadmin at frontend:/var/log/one$
>
> I think this problem is related to file access rights.
> root at test2:/mnt/OpenNebula/VMDir/53/images#  ls -l;
> total 12797424
> -rw------- 1 root     root      206845156 2009-06-19 08:06 checkpoint
> -rw-r--r-- 1 oneadmin nogroup         550 2009-06-18 23:36 deployment.0
> -rwxrwxrwx 1 oneadmin nogroup 12884901888 2009-06-19 07:40 disk.0
>
> As you can see, the generated checkpoint file is owned by root, while I believe OpenNebula operates everything under the oneadmin account. Is this a bug in the OpenNebula code or in my own setup?
>
> Thank you very much.
> --
> Shi Jin, PhD
>
>
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
