[one-users] live migration using occi-storage fails

Harder, Stefan stefan.harder at fokus.fraunhofer.de
Fri Jul 23 05:00:30 PDT 2010


Hi Javier,

I've added the chmod line to the script but nothing changes. The images
still have 600 permissions instead of the 644 needed for a working live
migration. I also restarted one (the OpenNebula daemon).
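
For what it's worth, this is how I check the resulting mode (a minimal
sketch; the path is just the example image from this thread):

--8<------
# prints the permission bits of the image, e.g. "600" or "644"
printf("%o\n", File.stat('/srv/cloud/images/2').mode & 0777)
------>8--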

By the way, thank you all again for the tutorial!

Regards,

Stefan

> -----Original Message-----
> From: users-bounces at lists.opennebula.org [mailto:users-bounces at lists.opennebula.org] On Behalf Of Javier Fontan
> Sent: Thursday, July 22, 2010 15:53
> To: Strutz, Marco
> Cc: users at lists.opennebula.org
> Subject: Re: [one-users] live migration using occi-storage fails
> 
> Hello Marco,
> 
> To change the permissions of the image uploaded by the OCCI server you
> can edit $ONE_LOCATION/lib/ruby/cloud/image.rb. Around line 102 there
> is this function:
> 
> --8<------
>         def copy_image(path, move=false)
>             if move
>                 FileUtils.mv(path, image_path)
>             else
>                 FileUtils.cp(path, image_path)
>             end
>             self.path=image_path
>         end
> ------>8--
> 
> You have to add a line there so that it looks like this:
> 
> --8<------
>         def copy_image(path, move=false)
>             if move
>                 FileUtils.mv(path, image_path)
>             else
>                 FileUtils.cp(path, image_path)
>                 FileUtils.chmod(0666, image_path)
>             end
>             self.path=image_path
>         end
> ------>8--
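> 
> Note that the chmod above only runs in the copy branch. If the OCCI
> server ends up taking the move path (move=true), the mode of the
> original file (for uploads, often a 0600 temporary file) is kept. A
> variant that covers both branches could look like this (a sketch, same
> file and method as above):
> 
> --8<------
>         def copy_image(path, move=false)
>             if move
>                 FileUtils.mv(path, image_path)
>             else
>                 FileUtils.cp(path, image_path)
>             end
>             # make the image readable regardless of which branch ran
>             FileUtils.chmod(0644, image_path)
>             self.path=image_path
>         end
> ------>8--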
> 
> Feel free to change the permissions parameter to suit your needs and
> tell me if that solves the problem.
> 
> Bye
> 
> On Fri, Jun 25, 2010 at 11:13 AM, Strutz, Marco
> <marco.strutz at fokus.fraunhofer.de> wrote:
> > Hello Javier.
> >
> > As described in the documentation[1], no umask is set in "/etc/exports":
> >        /srv/cloud      10.0.0.6(rw)
> >
> > If I upload an image via "occi-storage create <occi xml file>", an image is created in "/srv/cloud/images". This image has rw permission only for "oneadmin":
> >        -rw------- 1 oneadmin cloud
> >
> > The migration fails with those permissions until I change them to
> >        -rw-r--r-- 1 oneadmin cloud
> > Then the migration works fine.
> >
> > If I manually create a file as oneadmin in "/srv/cloud/images" via "touch testfile", then "testfile" has the correct (read) permissions, which work fine for migration:
> >        oneadmin at b:/srv/cloud/images$ touch testfile && ls -la testfile
> >        -rw-r--r-- 1 oneadmin cloud 0 2010-06-25 10:57 testfile
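> >
> > A possible explanation for the difference, assuming the OCCI server
> > stages uploads through Ruby's Tempfile (an assumption on my part):
> > Tempfile always creates its files with mode 0600, ignoring the umask,
> > and both FileUtils.mv and FileUtils.cp keep that mode on the
> > destination:
> >
> > --8<------
> > require 'tempfile'
> >
> > t = Tempfile.new('occi-upload')
> > # prints "600" no matter what the umask is
> > printf("%o\n", File.stat(t.path).mode & 0777)
> > ------>8--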
> >
> >
> > The OCCI server runs as the "oneadmin" user:
> >        $ ps aux | grep "ruby"
> >        oneadmin  3038  0.0  0.0  31032  4472 ?  SNl  Jun11   8:17 ruby /srv/cloud/one/lib/mads/one_vmm_kvm.rb
> >        oneadmin  3049  0.0  0.0  37860  5140 ?  SNl  Jun11   9:39 ruby /srv/cloud/one/lib/mads/one_im_ssh.rb im_kvm/im_kvm.conf
> >        oneadmin  3063  0.0  0.0  30560  3988 ?  SNl  Jun11   7:44 ruby /srv/cloud/one/lib/mads/one_tm.rb tm_nfs/tm_nfs.conf
> >        oneadmin  3077  0.0  0.0  30320  3652 ?  SNl  Jun11   7:35 ruby /srv/cloud/one/lib/mads/one_hm.rb
> >        oneadmin  3091  0.1  0.4 115116 37400 ?  Rl   Jun11  35:22 ruby /srv/cloud/one/lib/ruby/cloud/occi/occi-server.rb
> >
> >
> > I'm at a loss for further tests. Could you please assist? I would
> > appreciate it.
> >
> >
> >
> > [1] http://www.opennebula.org/documentation:rel1.4:plan --> Preparing the Cluster : Storage:
> >    $ cat /etc/exports
> >    /srv/cloud 192.168.0.0/255.255.255.0(rw)
> >
> >
> >
> > Thanks + bye
> > Marco
> >
> >
> > -----Original Message-----
> > From: Javier Fontan [mailto:jfontan at gmail.com]
> > Sent: Thursday, June 24, 2010 12:23 PM
> > To: Strutz, Marco
> > Cc: users at lists.opennebula.org
> > Subject: Re: [one-users] live migration using occi-storage fails
> >
> > Hello,
> >
> > We don't explicitly set image file permissions; take a look at the
> > umask of the oneadmin user.
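> >
> > A quick way to see what the umask produces (a sketch):
> >
> > --8<------
> > umask = File.umask
> > # e.g. umask 0022 -> newly created files get 0644
> > printf("umask %04o -> new files get %04o\n", umask, 0666 & ~umask)
> > ------>8--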
> >
> > Bye
> >
> >
> > On Wed, Jun 23, 2010 at 2:23 PM, Strutz, Marco
> > <marco.strutz at fokus.fraunhofer.de> wrote:
> >> I have added read permission... now the live migration works! (My
> >> setup uses KVM as the hypervisor.)
> >> Thanks!
> >>
> >> oneadmin at v:~/var/36/images$ ls -la disk.0
> >> -rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:13 disk.0
> >>
> >>
> >> What can I do to have the read permission automatically set by
> >> OpenNebula every time a virtual machine is deployed via OCCI? Are
> >> these restrictive file permissions a bug in the OCCI implementation?
> >> Should I open a ticket?
> >>
> >>
> >>
> >> Marco
> >>
> >> -----Original Message-----
> >> From: Javier Fontan [mailto:jfontan at gmail.com]
> >> Sent: Tuesday, June 22, 2010 4:54 PM
> >> To: Strutz, Marco
> >> Cc: users at lists.opennebula.org
> >> Subject: Re: [one-users] live migration using occi-storage fails
> >>
> >> Hello,
> >>
> >> Those write-only permissions are probably causing that error. The
> >> xen daemon uses root permissions to read disk image files. As the
> >> filesystem is NFS-mounted, these permissions are enforced even for
> >> the root user (with the default root_squash export option, root on
> >> the client is mapped to an anonymous user), and that can cause the
> >> problem.
> >>
> >> On Mon, Jun 21, 2010 at 9:15 PM, Strutz, Marco
> >> <marco.strutz at fokus.fraunhofer.de> wrote:
> >>> Hello Javier.
> >>>
> >>>
> >>> The destination node "v" uses shared storage (via NFS) to access
> >>> /srv/cloud, and "disk.0" can be accessed from both machines ("v" and
> >>> "b"). A symlink does not seem to be used for the image(s):
> >>>
> >>>
> >>> id=36:
> >>>
> >>> oneadmin at v:~/var/36/images$ ls -la /srv/cloud/one/var/36/images/disk.0
> >>> -rw--w--w- 1 oneadmin cloud 41943040 2010-06-11 14:13 /srv/cloud/one/var/36/images/disk.0
> >>>
> >>>
> >>>
> >>> id=38:
> >>>
> >>> oneadmin at v:~/var/36/images$ ls -la /srv/cloud/one/var/38/images/disk.0
> >>> -rw-rw-rw- 1 oneadmin cloud 41943040 2010-06-11 14:55 /srv/cloud/one/var/38/images/disk.0
> >>>
> >>> oneadmin at v:~/var/36/images$ ls -la /srv/cloud/images/2
> >>> -rw------- 1 oneadmin cloud 41943040 2010-06-09 10:36 /srv/cloud/images/2
> >>>
> >>> oneadmin at v:~/var/36/images$ ls -la /srv/cloud/images/ttylinux.img
> >>> -rw-r--r-- 1 oneadmin cloud 41943040 2010-03-30 13:57 /srv/cloud/images/ttylinux.img
> >>>
> >>>
> >>>
> >>> The file permissions seem to be different. Could that be a potential
> >>> problem?
> >>>
> >>>
> >>>
> >>>
> >>> thanks
> >>> Marco
> >>>
> >>>
> >>>
> >>>
> >>> -----Original Message-----
> >>> From: Javier Fontan [mailto:jfontan at gmail.com]
> >>> Sent: Monday, June 21, 2010 17:58
> >>> To: Strutz, Marco
> >>> Cc: users at lists.opennebula.org
> >>> Subject: Re: [one-users] live migration using occi-storage fails
> >>>
> >>> Hello,
> >>>
> >>> Can you check that /srv/cloud/one/var//36/images/disk.0 is
> >>> accessible from the destination node (I suppose "v")? Also check
> >>> that, if it is a symlink, the target file is readable there.
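> >>>
> >>> Something like this, run as oneadmin on "v", would check both (a
> >>> sketch; note that readlink may return a relative target):
> >>>
> >>> --8<------
> >>> f = '/srv/cloud/one/var//36/images/disk.0'
> >>> target = File.symlink?(f) ? File.readlink(f) : f
> >>> puts "#{target} readable: #{File.readable?(target)}"
> >>> ------>8--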
> >>>
> >>> Bye
> >>>
> >>> On Fri, Jun 11, 2010 at 3:21 PM, Strutz, Marco
> >>> <marco.strutz at fokus.fraunhofer.de> wrote:
> >>>> Hi everyone.
> >>>>
> >>>> I have deployed ttyLinux twice, once via OCCI (id=36) and once via
> >>>> the CLI (onevm create ...).
> >>>> Both machines are up and running.
> >>>>
> >>>> Unfortunately, live migration doesn't work with the OCCI machine
> >>>> id=36, BUT the live migration for id=38 works like a charm.
> >>>>
> >>>>
> >>>> The ttyLinux image for id=36 was uploaded via OCCI as a storage
> >>>> resource (disk-id=2).
> >>>> The ttyLinux image for id=38 never came in contact with OCCI ->
> >>>> /srv/cloud/images/ttylinux.img
> >>>>
> >>>> (both images are identical, confirmed via the 'diff' command)
> >>>>
> >>>> Strange: if I deploy a third ttyLinux (same configuration as id=38)
> >>>> but point its source to the OCCI storage
> >>>> "SOURCE=/srv/cloud/images/2", then the live migration fails as
> >>>> well.
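> >>>>
> >>>> To compare the two sources directly (a quick sketch):
> >>>>
> >>>> --8<------
> >>>> %w[/srv/cloud/images/2 /srv/cloud/images/ttylinux.img].each do |f|
> >>>>   # print each source image next to its permission bits
> >>>>   printf("%-35s %o\n", f, File.stat(f).mode & 0777)
> >>>> end
> >>>> ------>8--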
> >>>>
> >>>>
> >>>> Any guesses? (log files below)
> >>>>
> >>>>
> >>>>
> >>>> thanks in advance
> >>>> Marco
> >>>>
> >>>>
> >>>>
> >>>> environment:
> >>>> Linux b 2.6.28-19-server #61-Ubuntu SMP Thu May 27 00:22:27 UTC 2010 x86_64 GNU/Linux
> >>>> OpenNebula v1.4 (Last Stable Release)
> >>>>
> >>>>
> >>>>
> >>>> -------------------------/srv/cloud/one/var/36/vm.log--------------
> >>>> (...)
> >>>> Fri Jun 11 14:24:05 2010 [LCM][I]: New VM state is MIGRATE
> >>>> Fri Jun 11 14:24:35 2010 [VMM][I]: Command execution fail: virsh --connect qemu:///system migrate --live one-36 qemu+ssh://v/session
> >>>> Fri Jun 11 14:24:35 2010 [VMM][I]: STDERR follows.
> >>>> Fri Jun 11 14:24:35 2010 [VMM][I]: /usr/lib/ruby/1.8/open3.rb:67: warning: Insecure world writable dir /srv/cloud in PATH, mode 040777
> >>>> Fri Jun 11 14:24:35 2010 [VMM][I]: Connecting to uri: qemu:///system
> >>>> Fri Jun 11 14:24:35 2010 [VMM][I]: error: operation failed: failed to start listening VM
> >>>> Fri Jun 11 14:24:35 2010 [VMM][I]: ExitCode: 1
> >>>> Fri Jun 11 14:24:35 2010 [VMM][E]: Error live-migrating VM, -
> >>>> Fri Jun 11 14:24:35 2010 [LCM][I]: Fail to life migrate VM. Assuming that the VM is still RUNNING (will poll VM).
> >>>> (...)
> >>>> -------------------------------------------------------------------
> >>>>
> >>>>
> >>>> -------------------------/srv/cloud/one/var/38/vm.log--------------
> >>>> (...)
> >>>> Fri Jun 11 14:56:52 2010 [LCM][I]: New VM state is MIGRATE
> >>>> Fri Jun 11 14:56:53 2010 [LCM][I]: New VM state is RUNNING
> >>>> (...)
> >>>> -------------------------------------------------------------------
> >>>>
> >>>>
> >>>>
> >>>> -----------------------------$ onevm list--------------------------
> >>>>  ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
> >>>>  36 oneadmin ttyLinux runn   0   65536               b 00 00:01:03
> >>>>  38 oneadmin ttylinux runn   0   65536               b 00 00:01:14
> >>>> -------------------------------------------------------------------
> >>>>
> >>>>
> >>>>
> >>>> ----------------------------$ onehost list-------------------------
> >>>>  ID NAME                      RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
> >>>>   2 v                           0    400    400    400 8078448 8006072   on
> >>>>   3 b                           2    400    394    394 8078448 7875748   on
> >>>> -------------------------------------------------------------------
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> ---------------------------$ onevm show 36-------------------------
> >>>> VIRTUAL MACHINE 36 INFORMATION
> >>>>
> >>>> ID             : 36
> >>>> NAME           : ttyLinux01
> >>>> STATE          : ACTIVE
> >>>> LCM_STATE      : RUNNING
> >>>> START TIME     : 06/11 14:11:15
> >>>> END TIME       : -
> >>>> DEPLOY ID:     : one-36
> >>>>
> >>>> VIRTUAL MACHINE TEMPLATE
> >>>>
> >>>> CPU=1
> >>>> DISK=[
> >>>>  IMAGE_ID=2,
> >>>>  READONLY=no,
> >>>>  SOURCE=/srv/cloud/images/2,
> >>>>  TARGET=hda ]
> >>>> FEATURES=[
> >>>>  ACPI=no ]
> >>>> INSTANCE_TYPE=small
> >>>> MEMORY=64
> >>>> NAME=ttyLinux01
> >>>> NIC=[
> >>>>  BRIDGE=br0,
> >>>>  IP=10.0.0.2,
> >>>>  MAC=00:03:c1:00:00:ca,
> >>>>  NETWORK=network,
> >>>>  VNID=0 ]
> >>>> VMID=36
> >>>> -------------------------------------------------------------------
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> -----------------------------$ virsh dumpxml one-36----------------
> >>>> Connecting to uri: qemu:///system
> >>>> <domain type='kvm' id='9'>
> >>>>  <name>one-36</name>
> >>>>  <uuid>fd9dde78-1033-986e-003b-b353b9eaf8b3</uuid>
> >>>>  <memory>65536</memory>
> >>>>  <currentMemory>65536</currentMemory>
> >>>>  <vcpu>1</vcpu>
> >>>>  <os>
> >>>>    <type arch='x86_64' machine='pc'>hvm</type>
> >>>>    <boot dev='hd'/>
> >>>>  </os>
> >>>>  <clock offset='utc'/>
> >>>>  <on_poweroff>destroy</on_poweroff>
> >>>>  <on_reboot>restart</on_reboot>
> >>>>  <on_crash>destroy</on_crash>
> >>>>  <devices>
> >>>>    <emulator>/usr/bin/kvm</emulator>
> >>>>    <disk type='file' device='disk'>
> >>>>      <source file='/srv/cloud/one/var//36/images/disk.0'/>
> >>>>      <target dev='hda' bus='ide'/>
> >>>>    </disk>
> >>>>    <interface type='bridge'>
> >>>>      <mac address='00:03:c1:00:00:ca'/>
> >>>>      <source bridge='br0'/>
> >>>>      <target dev='vnet0'/>
> >>>>    </interface>
> >>>>  </devices>
> >>>> </domain>
> >>>> -------------------------------------------------------------------
> >>>>
> >>>>
> >>>> ---------------------------$ onevm show 38-------------------------
> >>>> VIRTUAL MACHINE 38 INFORMATION
> >>>>
> >>>> ID             : 38
> >>>> NAME           : ttylinux
> >>>> STATE          : ACTIVE
> >>>> LCM_STATE      : RUNNING
> >>>> START TIME     : 06/11 14:54:30
> >>>> END TIME       : -
> >>>> DEPLOY ID:     : one-38
> >>>>
> >>>> VIRTUAL MACHINE TEMPLATE
> >>>>
> >>>> CPU=0.1
> >>>> DISK=[
> >>>>  READONLY=no,
> >>>>  SOURCE=/srv/cloud/images/ttylinux.img,
> >>>>  TARGET=hda ]
> >>>> FEATURES=[
> >>>>  ACPI=no ]
> >>>> MEMORY=64
> >>>> NAME=ttylinux
> >>>> NIC=[
> >>>>  BRIDGE=br0,
> >>>>  IP=10.0.0.3,
> >>>>  MAC=00:03:c1:00:00:cb,
> >>>>  NETWORK=network,
> >>>>  VNID=0 ]
> >>>> VMID=38
> >>>> -------------------------------------------------------------------
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> -----------------------------$ virsh dumpxml one-38----------------
> >>>> <domain type='kvm' id='8'>
> >>>>  <name>one-38</name>
> >>>>  <uuid>c2b88adf-80d1-abf8-b3b2-4babfd1ebff4</uuid>
> >>>>  <memory>65536</memory>
> >>>>  <currentMemory>65536</currentMemory>
> >>>>  <vcpu>1</vcpu>
> >>>>  <os>
> >>>>    <type arch='x86_64' machine='pc'>hvm</type>
> >>>>    <boot dev='hd'/>
> >>>>  </os>
> >>>>  <clock offset='utc'/>
> >>>>  <on_poweroff>destroy</on_poweroff>
> >>>>  <on_reboot>restart</on_reboot>
> >>>>  <on_crash>destroy</on_crash>
> >>>>  <devices>
> >>>>    <emulator>/usr/bin/kvm</emulator>
> >>>>    <disk type='file' device='disk'>
> >>>>      <source file='/srv/cloud/one/var//38/images/disk.0'/>
> >>>>      <target dev='hda' bus='ide'/>
> >>>>    </disk>
> >>>>    <interface type='bridge'>
> >>>>      <mac address='00:03:c1:00:00:cb'/>
> >>>>      <source bridge='br0'/>
> >>>>      <target dev='vnet0'/>
> >>>>    </interface>
> >>>>  </devices>
> >>>> </domain>
> >>>> -------------------------------------------------------------------
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
> >>> DSA Research Group: http://dsa-research.org
> >>> Globus GridWay Metascheduler: http://www.GridWay.org
> >>> OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
> >>>
> >>>
> >>>
> >>>
> >>
> >>
> >>
> >> --
> >> Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
> >> DSA Research Group: http://dsa-research.org
> >> Globus GridWay Metascheduler: http://www.GridWay.org
> >> OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
> >>
> >
> >
> >
> > --
> > Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
> > DSA Research Group: http://dsa-research.org
> > Globus GridWay Metascheduler: http://www.GridWay.org
> > OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
> >
> 
> 
> 
> --
> Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
> DSA Research Group: http://dsa-research.org
> Globus GridWay Metascheduler: http://www.GridWay.org
> OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org