[one-users] live migration using occi-storage fails

Strutz, Marco marco.strutz at fokus.fraunhofer.de
Fri Jun 11 06:21:57 PDT 2010


Hi everyone.

I have deployed ttyLinux twice: once via OCCI (id=36) and once via the
CLI (onevm create ...).
Both machines are up and running.
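
For reference, the template behind the CLI deployment (id=38) looks
roughly like this, reconstructed from the 'onevm show 38' output below
(the file name is arbitrary; BRIDGE/IP/MAC get filled in by OpenNebula
from the 'network' vnet):

  $ cat ttylinux.one
  NAME=ttylinux
  CPU=0.1
  MEMORY=64
  DISK=[ SOURCE=/srv/cloud/images/ttylinux.img, TARGET=hda, READONLY=no ]
  NIC=[ NETWORK=network ]
  FEATURES=[ ACPI=no ]
  $ onevm create ttylinux.one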

Unfortunately live migration doesn't work for the OCCI machine, id=36.
BUT live migration of id=38 works like a charm.
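
Both migrations are triggered identically, using the usual
'onevm livemigrate <vmid> <hostid>' form (the target is host id 2,
shown as 'v' in the onehost list below, i.e. vodka):

  $ onevm livemigrate 36 2   # fails
  $ onevm livemigrate 38 2   # works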


The ttyLinux image for id=36 was uploaded via OCCI as a storage
resource (disk-id=2).
The ttyLinux image for id=38 never touched OCCI; it is used directly
from /srv/cloud/images/ttylinux.img.
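
So the interesting difference between the two VM templates is the DISK
section (full 'onevm show' dumps below):

  id=36 (occi): DISK=[ IMAGE_ID=2, SOURCE=/srv/cloud/images/2, TARGET=hda, READONLY=no ]
  id=38 (cli):  DISK=[ SOURCE=/srv/cloud/images/ttylinux.img, TARGET=hda, READONLY=no ]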

(both images are identical, confirmed via the 'diff' command)
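
Concretely, this reports no differences:

  $ diff /srv/cloud/images/2 /srv/cloud/images/ttylinux.img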

Strange: if I deploy a third ttyLinux (same configuration as id=38) but
point its source to the occi-storage path "SOURCE=/srv/cloud/images/2",
then live migration fails as well.
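
In other words, the third VM is the id=38 template with only the disk
source swapped:

  DISK=[ SOURCE=/srv/cloud/images/2, TARGET=hda, READONLY=no ]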


Any guesses? (Log files below.)



Thanks in advance
Marco



environment:
Linux b 2.6.28-19-server #61-Ubuntu SMP Thu May 27 00:22:27 UTC 2010
x86_64 GNU/Linux
OpenNebula v1.4 (Last Stable Release)



-------------------------/srv/cloud/one/var/36/vm.log--------------
(...)
Fri Jun 11 14:24:05 2010 [LCM][I]: New VM state is MIGRATE
Fri Jun 11 14:24:35 2010 [VMM][I]: Command execution fail: virsh
--connect qemu:///system migrate --live one-36 qemu+ssh://vodka/session
Fri Jun 11 14:24:35 2010 [VMM][I]: STDERR follows.
Fri Jun 11 14:24:35 2010 [VMM][I]: /usr/lib/ruby/1.8/open3.rb:67:
warning: Insecure world writable dir /srv/cloud in PATH, mode 040777
Fri Jun 11 14:24:35 2010 [VMM][I]: Connecting to uri: qemu:///system
Fri Jun 11 14:24:35 2010 [VMM][I]: error: operation failed: failed to
start listening VM
Fri Jun 11 14:24:35 2010 [VMM][I]: ExitCode: 1
Fri Jun 11 14:24:35 2010 [VMM][E]: Error live-migrating VM, -
Fri Jun 11 14:24:35 2010 [LCM][I]: Fail to life migrate VM. Assuming
that the VM is still RUNNING (will poll VM).
(...)
-------------------------------------------------------------------
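
For convenience, the exact command the VMM driver runs (copied from the
log above), in case someone wants to try it by hand from host b:

  $ virsh --connect qemu:///system migrate --live one-36 qemu+ssh://vodka/session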


-------------------------/srv/cloud/one/var/38/vm.log--------------
(...)
Fri Jun 11 14:56:52 2010 [LCM][I]: New VM state is MIGRATE
Fri Jun 11 14:56:53 2010 [LCM][I]: New VM state is RUNNING
(...)
-------------------------------------------------------------------



-----------------------------$onevm list---------------------------
  ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
  36 oneadmin ttyLinux runn   0   65536               b 00 00:01:03
  38 oneadmin ttylinux runn   0   65536               b 00 00:01:14
-------------------------------------------------------------------



----------------------------$onehost list--------------------------
  ID NAME                      RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   2 v                           0    400    400    400 8078448 8006072   on
   3 b                           2    400    394    394 8078448 7875748   on
-------------------------------------------------------------------




---------------------------$ onevm show 36-------------------------
VIRTUAL MACHINE 36 INFORMATION

ID             : 36                  
NAME           : ttyLinux01          
STATE          : ACTIVE              
LCM_STATE      : RUNNING             
START TIME     : 06/11 14:11:15      
END TIME       : -                   
DEPLOY ID:     : one-36              

VIRTUAL MACHINE TEMPLATE

CPU=1
DISK=[
  IMAGE_ID=2,
  READONLY=no,
  SOURCE=/srv/cloud/images/2,
  TARGET=hda ]
FEATURES=[
  ACPI=no ]
INSTANCE_TYPE=small
MEMORY=64
NAME=ttyLinux01
NIC=[
  BRIDGE=br0,
  IP=10.0.0.2,
  MAC=00:03:c1:00:00:ca,
  NETWORK=network,
  VNID=0 ]
VMID=36
-------------------------------------------------------------------






-----------------------------$ virsh dumpxml one-36----------------
Connecting to uri: qemu:///system
<domain type='kvm' id='9'>
  <name>one-36</name>
  <uuid>fd9dde78-1033-986e-003b-b353b9eaf8b3</uuid>
  <memory>65536</memory>
  <currentMemory>65536</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/srv/cloud/one/var//36/images/disk.0'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:03:c1:00:00:ca'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
    </interface>
  </devices>
</domain>
-------------------------------------------------------------------


---------------------------$ onevm show 38-------------------------
VIRTUAL MACHINE 38 INFORMATION

ID             : 38                  
NAME           : ttylinux            
STATE          : ACTIVE              
LCM_STATE      : RUNNING             
START TIME     : 06/11 14:54:30      
END TIME       : -                   
DEPLOY ID:     : one-38              

VIRTUAL MACHINE TEMPLATE

CPU=0.1
DISK=[
  READONLY=no,
  SOURCE=/srv/cloud/images/ttylinux.img,
  TARGET=hda ]
FEATURES=[
  ACPI=no ]
MEMORY=64
NAME=ttylinux
NIC=[
  BRIDGE=br0,
  IP=10.0.0.3,
  MAC=00:03:c1:00:00:cb,
  NETWORK=network,
  VNID=0 ]
VMID=38
-------------------------------------------------------------------




-----------------------------$ virsh dumpxml one-38----------------
<domain type='kvm' id='8'>
  <name>one-38</name>
  <uuid>c2b88adf-80d1-abf8-b3b2-4babfd1ebff4</uuid>
  <memory>65536</memory>
  <currentMemory>65536</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/srv/cloud/one/var//38/images/disk.0'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:03:c1:00:00:cb'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
    </interface>
  </devices>
</domain>
-------------------------------------------------------------------


