[one-users] live migration using occi-storage fails
Strutz, Marco
marco.strutz at fokus.fraunhofer.de
Wed Jun 23 05:23:32 PDT 2010
I have added read permission... now the live migration works! (My setup uses KVM as the hypervisor.)
Thanks!
oneadmin at v:~/var/36/images$ ls -la disk.0
-rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:13 disk.0
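For reference, comparing with the listing in the quoted mail below
(-rw--w--w- before the fix), the change amounts to adding read
permission for group and other:

  # one-off fix on the cloned disk image
  chmod go+r /srv/cloud/one/var/36/images/disk.0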
What can I do to have the read permission automatically set by OpenNebula every time a virtual machine is deployed via OCCI? Is this missing read permission a bug in the OCCI implementation? Should I open a ticket?
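One likely place for a permanent fix (a sketch, assuming the stock
OpenNebula 1.4 NFS transfer manager is in use and that its clone script
only adds write permission; please check your local copy before
editing) is the final chmod in
$ONE_LOCATION/lib/tm_commands/nfs/tm_clone.sh:

  # before: the clone only gains write permission, so an image uploaded
  # via OCCI with mode 0600 ends up as 0622 (-rw--w--w-)
  exec_and_log "chmod a+w $DST_PATH"

  # after: also grant read, so the hypervisor can open the disk as a
  # squashed root user over NFS
  exec_and_log "chmod a+rw $DST_PATH"

That would also explain the difference seen below: ttylinux.img is
0644, so its clone came out 0666 and id=38 migrated fine, while the
OCCI-uploaded image 2 is 0600 and its clone came out write-only for
group and other.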
Marco
-----Original Message-----
From: Javier Fontan [mailto:jfontan at gmail.com]
Sent: Tuesday, June 22, 2010 4:54 PM
To: Strutz, Marco
Cc: users at lists.opennebula.org
Subject: Re: [one-users] live migration using occi-storage fails
Hello,
Those write-only permissions are probably causing that error. The
hypervisor daemon uses root permissions to read disk image files. As
the filesystem is NFS-mounted, those permissions are enforced even for
the root user (root_squash), and that can cause the problem.
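A quick way to confirm this (a sketch; the export line is only an
example, adjust it to your setup):

  # on node "v": with root_squash (the NFS default) root is mapped to
  # nobody, so this read fails unless the mode bits allow it
  sudo head -c1 /srv/cloud/one/var/36/images/disk.0 >/dev/null && echo OK

  # alternative on the NFS server (less secure): disable root squashing
  # in /etc/exports, e.g.  /srv/cloud  *(rw,no_root_squash,sync)
  exportfs -ra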
On Mon, Jun 21, 2010 at 9:15 PM, Strutz, Marco
<marco.strutz at fokus.fraunhofer.de> wrote:
> Hello Javier.
>
>
> The destination node "v" uses shared storage (via NFS) to access
> /srv/cloud, and "disk.0" can be accessed from both machines ("v" and "b"). A
> symlink does not seem to be used for the image(s):
>
>
> id=36:
>
> oneadmin at v:~/var/36/images$ ls -la /srv/cloud/one/var/36/images/disk.0
> -rw--w--w- 1 oneadmin cloud 41943040 2010-06-11 14:13
> /srv/cloud/one/var/36/images/disk.0
>
>
>
> id=38:
>
> oneadmin at v:~/var/36/images$ ls -la /srv/cloud/one/var/38/images/disk.0
> -rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:55
> /srv/cloud/one/var/38/images/disk.0
>
> oneadmin at v:~/var/36/images$ ls -la /srv/cloud/images/2
> -rw------- 1 oneadmin cloud 41943040 2010-06-09 10:36 /srv/cloud/images/2
>
> oneadmin at v:~/var/36/images$ ls -la /srv/cloud/images/ttylinux.img
> -rw-r--r-- 1 oneadmin cloud 41943040 2010-03-30 13:57
> /srv/cloud/images/ttylinux.img
>
>
>
> The file permissions seem to be different. Could that be the
> problem?
>
>
>
>
> thanks
> Marco
>
>
>
>
> -----Original Message-----
> From: Javier Fontan [mailto:jfontan at gmail.com]
> Sent: Mon 21.06.2010 17:58
> To: Strutz, Marco
> Cc: users at lists.opennebula.org
> Subject: Re: [one-users] live migration using occi-storage fails
>
> Hello,
>
> Can you check that /srv/cloud/one/var//36/images/disk.0 is accessible
> from the destination node (I suppose "v")? Also, if it is a symlink,
> check that the target file is readable there.
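>
> A sketch of those two checks, run from the front-end:
>
>   # verify the file is readable on "v", following symlinks (-L)
>   ssh v 'ls -laL /srv/cloud/one/var//36/images/disk.0'
>   # print the resolved target, in case disk.0 is a symlink
>   ssh v 'readlink -f /srv/cloud/one/var//36/images/disk.0'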
>
> Bye
>
> On Fri, Jun 11, 2010 at 3:21 PM, Strutz, Marco
> <marco.strutz at fokus.fraunhofer.de> wrote:
>> Hi everyone.
>>
>> I have deployed ttyLinux twice, once via OCCI (id=36) and once via the
>> CLI (onevm create ...).
>> Both machines are up and running.
>>
>> Unfortunately, live migration doesn't work for the OCCI machine (id=36),
>> BUT the live migration for id=38 works like a charm.
>>
>>
>> The ttyLinux image for id=36 was uploaded via OCCI as a storage resource
>> (disk-id=2).
>> The ttyLinux image for id=38 never came in contact with OCCI ->
>> /srv/cloud/images/ttyLinux.img
>>
>> (both images are identical, confirmed via the 'diff' command)
>>
>> Strange: if I deploy a third ttyLinux (same configuration as id=38) but
>> point its source to the OCCI storage "SOURCE=/srv/cloud/images/2", then
>> the live migration fails as well.
>>
>>
>> Any guesses? (Log files below.)
>>
>>
>>
>> thanks in advance
>> Marco
>>
>>
>>
>> environment:
>> Linux b 2.6.28-19-server #61-Ubuntu SMP Thu May 27 00:22:27 UTC 2010
>> x86_64 GNU/Linux
>> OpenNebula v1.4 (Last Stable Release)
>>
>>
>>
>> -------------------------/srv/cloud/one/var/36/vm.log--------------
>> (...)
>> Fri Jun 11 14:24:05 2010 [LCM][I]: New VM state is MIGRATE
>> Fri Jun 11 14:24:35 2010 [VMM][I]: Command execution fail: virsh
>> --connect qemu:///system migrate --live one-36 qemu+ssh://v/session
>> Fri Jun 11 14:24:35 2010 [VMM][I]: STDERR follows.
>> Fri Jun 11 14:24:35 2010 [VMM][I]: /usr/lib/ruby/1.8/open3.rb:67:
>> warning: Insecure world writable dir /srv/cloud in PATH, mode 040777
>> Fri Jun 11 14:24:35 2010 [VMM][I]: Connecting to uri: qemu:///system
>> Fri Jun 11 14:24:35 2010 [VMM][I]: error: operation failed: failed to
>> start listening VM
>> Fri Jun 11 14:24:35 2010 [VMM][I]: ExitCode: 1
>> Fri Jun 11 14:24:35 2010 [VMM][E]: Error live-migrating VM, -
>> Fri Jun 11 14:24:35 2010 [LCM][I]: Fail to life migrate VM. Assuming
>> that the VM is still RUNNING (will poll VM).
>> (...)
>> -------------------------------------------------------------------
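>>
>> (To dig further, one can re-run the failing command from the log
>> above by hand as oneadmin on "b" and then check libvirt's per-domain
>> log on "v" for the destination-side reason; the log path assumes a
>> stock Ubuntu libvirt:)
>>
>>   virsh --connect qemu:///system migrate --live one-36 qemu+ssh://v/session
>>   ssh v 'tail -n 50 /var/log/libvirt/qemu/one-36.log'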
>>
>>
>> -------------------------/srv/cloud/one/var/38/vm.log--------------
>> (...)
>> Fri Jun 11 14:56:52 2010 [LCM][I]: New VM state is MIGRATE
>> Fri Jun 11 14:56:53 2010 [LCM][I]: New VM state is RUNNING
>> (...)
>> -------------------------------------------------------------------
>>
>>
>>
>> -----------------------------$onevm list---------------------------
>>   ID     USER     NAME STAT CPU    MEM HOSTNAME        TIME
>>   36 oneadmin ttyLinux runn   0  65536        b 00 00:01:03
>>   38 oneadmin ttylinux runn   0  65536        b 00 00:01:14
>> -------------------------------------------------------------------
>>
>>
>>
>> ----------------------------$onehost list--------------------------
>>  ID NAME  RVM TCPU FCPU ACPU    TMEM    FMEM STAT
>>   2 v       0  400  400  400 8078448 8006072   on
>>   3 b       2  400  394  394 8078448 7875748   on
>> -------------------------------------------------------------------
>>
>>
>>
>>
>> ---------------------------$ onevm show 36-------------------------
>> VIRTUAL MACHINE 36 INFORMATION
>>
>> ID : 36
>> NAME : ttyLinux01
>> STATE : ACTIVE
>> LCM_STATE : RUNNING
>> START TIME : 06/11 14:11:15
>> END TIME : -
>> DEPLOY ID: : one-36
>>
>> VIRTUAL MACHINE TEMPLATE
>>
>> CPU=1
>> DISK=[
>> IMAGE_ID=2,
>> READONLY=no,
>> SOURCE=/srv/cloud/images/2,
>> TARGET=hda ]
>> FEATURES=[
>> ACPI=no ]
>> INSTANCE_TYPE=small
>> MEMORY=64
>> NAME=ttyLinux01
>> NIC=[
>> BRIDGE=br0,
>> IP=10.0.0.2,
>> MAC=00:03:c1:00:00:ca,
>> NETWORK=network,
>> VNID=0 ]
>> VMID=36
>> -------------------------------------------------------------------
>>
>>
>>
>>
>>
>>
>> -----------------------------$ virsh dumpxml one-36----------------
>> Connecting to uri: qemu:///system
>> <domain type='kvm' id='9'>
>> <name>one-36</name>
>> <uuid>fd9dde78-1033-986e-003b-b353b9eaf8b3</uuid>
>> <memory>65536</memory>
>> <currentMemory>65536</currentMemory>
>> <vcpu>1</vcpu>
>> <os>
>> <type arch='x86_64' machine='pc'>hvm</type>
>> <boot dev='hd'/>
>> </os>
>> <clock offset='utc'/>
>> <on_poweroff>destroy</on_poweroff>
>> <on_reboot>restart</on_reboot>
>> <on_crash>destroy</on_crash>
>> <devices>
>> <emulator>/usr/bin/kvm</emulator>
>> <disk type='file' device='disk'>
>> <source file='/srv/cloud/one/var//36/images/disk.0'/>
>> <target dev='hda' bus='ide'/>
>> </disk>
>> <interface type='bridge'>
>> <mac address='00:03:c1:00:00:ca'/>
>> <source bridge='br0'/>
>> <target dev='vnet0'/>
>> </interface>
>> </devices>
>> </domain>
>> -------------------------------------------------------------------
>>
>>
>> ---------------------------$ onevm show 38-------------------------
>> VIRTUAL MACHINE 38 INFORMATION
>>
>> ID : 38
>> NAME : ttylinux
>> STATE : ACTIVE
>> LCM_STATE : RUNNING
>> START TIME : 06/11 14:54:30
>> END TIME : -
>> DEPLOY ID: : one-38
>>
>> VIRTUAL MACHINE TEMPLATE
>>
>> CPU=0.1
>> DISK=[
>> READONLY=no,
>> SOURCE=/srv/cloud/images/ttylinux.img,
>> TARGET=hda ]
>> FEATURES=[
>> ACPI=no ]
>> MEMORY=64
>> NAME=ttylinux
>> NIC=[
>> BRIDGE=br0,
>> IP=10.0.0.3,
>> MAC=00:03:c1:00:00:cb,
>> NETWORK=network,
>> VNID=0 ]
>> VMID=38
>> -------------------------------------------------------------------
>>
>>
>>
>>
>> -----------------------------$ virsh dumpxml one-38----------------
>> <domain type='kvm' id='8'>
>> <name>one-38</name>
>> <uuid>c2b88adf-80d1-abf8-b3b2-4babfd1ebff4</uuid>
>> <memory>65536</memory>
>> <currentMemory>65536</currentMemory>
>> <vcpu>1</vcpu>
>> <os>
>> <type arch='x86_64' machine='pc'>hvm</type>
>> <boot dev='hd'/>
>> </os>
>> <clock offset='utc'/>
>> <on_poweroff>destroy</on_poweroff>
>> <on_reboot>restart</on_reboot>
>> <on_crash>destroy</on_crash>
>> <devices>
>> <emulator>/usr/bin/kvm</emulator>
>> <disk type='file' device='disk'>
>> <source file='/srv/cloud/one/var//38/images/disk.0'/>
>> <target dev='hda' bus='ide'/>
>> </disk>
>> <interface type='bridge'>
>> <mac address='00:03:c1:00:00:cb'/>
>> <source bridge='br0'/>
>> <target dev='vnet0'/>
>> </interface>
>> </devices>
>> </domain>
>> -------------------------------------------------------------------
>
>
>
>
>
--
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org