Hi,

I am using OpenNebula as a Virtual Infrastructure Manager on Ubuntu 9.10 (kernel: 2.6.32-16-generic, arch: x86_64). Currently I am using the KVM hypervisor, with one front-end and two worker nodes in a private cloud.
When I submit a VM at the front end with

    onevm create myfirstVM.template

after some time the status shows:

     ID NAME     STAT CPU MEM HOSTNAME      TIME
     11 vm-examp prol   0   0 192.168.1.193 03 19:20:21
     18 vm-examp fail   0   0 192.168.1.193 00 00:31:29

(the second VM, ID 18, ends up in the "fail" state; that is the one this mail is about).
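If more detail is needed, I can attach the output of the following from the front-end (assuming the standard onevm CLI; 18 is the ID of the failed VM above):

    onevm show 18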
The error log for it ( /var/log/one/18.log ) is as follows:

Mon Apr 5 14:22:41 2010 [DiM][I]: New VM state is ACTIVE.
Mon Apr 5 14:22:41 2010 [LCM][I]: New VM state is PROLOG.
Mon Apr 5 14:22:41 2010 [TM][I]: tm_clone.sh: sandeep-laptop:/tmp/disk.img 192.168.1.193:/var/lib/one//18/images/disk.0
Mon Apr 5 14:22:41 2010 [TM][I]: tm_clone.sh: DST: /var/lib/one//18/images/disk.0
Mon Apr 5 14:22:41 2010 [TM][I]: tm_clone.sh: Creating directory /var/lib/one//18/images
Mon Apr 5 14:22:41 2010 [TM][I]: tm_clone.sh: Executed "ssh 192.168.1.193 mkdir -p /var/lib/one//18/images".
Mon Apr 5 14:22:41 2010 [TM][I]: tm_clone.sh: Cloning sandeep-laptop:/tmp/disk.img
Mon Apr 5 14:53:24 2010 [TM][I]: tm_clone.sh: Executed "scp sandeep-laptop:/tmp/disk.img 192.168.1.193:/var/lib/one//18/images/disk.0".
Mon Apr 5 14:53:24 2010 [TM][I]: tm_clone.sh: Executed "ssh 192.168.1.193 chmod a+w /var/lib/one//18/images/disk.0".
Mon Apr 5 14:53:25 2010 [TM][I]: tm_mkswap.sh: Creating 1024Mb image in /var/lib/one//18/images/disk.1
Mon Apr 5 14:53:25 2010 [TM][I]: tm_mkswap.sh: Executed "ssh 192.168.1.193 mkdir -p /var/lib/one//18/images".
Mon Apr 5 14:53:25 2010 [TM][I]: tm_mkswap.sh: Executed "ssh 192.168.1.193 dd if=/dev/zero of=/var/lib/one//18/images/disk.1 bs=1 count=1 seek=1024M".
Mon Apr 5 14:53:25 2010 [TM][I]: tm_mkswap.sh: Initializing swap space
Mon Apr 5 14:53:26 2010 [TM][I]: tm_mkswap.sh: Executed "ssh 192.168.1.193 /sbin/mkswap /var/lib/one//18/images/disk.1".
Mon Apr 5 14:53:26 2010 [TM][I]: tm_mkswap.sh: Executed "ssh 192.168.1.193 chmod a+w /var/lib/one//18/images/disk.1".
Mon Apr 5 14:53:26 2010 [LCM][I]: New VM state is BOOT
Mon Apr 5 14:53:26 2010 [VMM][I]: Generating deployment file: /var/lib/one/18/deployment.0
Mon Apr 5 14:53:26 2010 [VMM][I]: Command: scp /var/lib/one/18/deployment.0 192.168.1.193:/var/lib/one//18/images/deployment.0
Mon Apr 5 14:53:26 2010 [VMM][I]: Copy success
Mon Apr 5 14:53:57 2010 [VMM][I]: Connecting to uri: qemu:///system
Mon Apr 5 14:53:57 2010 [VMM][I]: error: Failed to create domain from /var/lib/one//18/images/deployment.0
Mon Apr 5 14:53:57 2010 [VMM][I]: error: monitor socket did not show up.: Connection refused
Mon Apr 5 14:53:57 2010 [VMM][I]: ExitCode: 1
Mon Apr 5 14:53:57 2010 [VMM][E]: Error deploying virtual machine: Failed to create domain from /var/lib/one//18/images/deployment.0
Mon Apr 5 14:53:57 2010 [LCM][I]: Fail to boot VM.
Mon Apr 5 14:53:57 2010 [DiM][I]: New VM state is FAILED

The relevant errors are "Failed to create domain from /var/lib/one//18/images/deployment.0" and "monitor socket did not show up.: Connection refused", but I cannot work out the underlying reason for them. The KVM hypervisor is running on both worker nodes. What should I do to avoid this error?
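Is running the failing step by hand on the worker node a reasonable way to see the underlying libvirt/qemu error? A sketch of what I mean, assuming the libvirt URI and deployment file path from the log above:

    # on the worker node 192.168.1.193
    pgrep -l libvirtd                     # is libvirtd actually running?
    lsmod | grep kvm                      # are kvm and kvm_intel (or kvm_amd) loaded?
    virsh --connect qemu:///system create /var/lib/one//18/images/deployment.0

My hope is that virsh run interactively would print a more detailed error than the "monitor socket did not show up" message in the OpenNebula log.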
I am using the following template files:

myfirstVM.template:

NAME = vm-example
CPU = 0.5
MEMORY = 1024
OS = [
  kernel = "/boot/vmlinuz-2.6.32-16-generic",
  initrd = "/boot/initrd.img-2.6.32-16-generic",
  root = "sda1" ]
DISK = [
  source = "/tmp/disk.img",
  target = "sda",
  readonly = "no" ]
DISK = [
  type = "swap",
  size = 1024,
  target = "sdb" ]
NIC = [ NETWORK = "Private LAN" ]

vmnet.template:

NAME = "Private LAN"
TYPE = FIXED
BRIDGE = virbr0
LEASES = [ IP=192.166.122.1 ]

Can someone suggest a modification to overcome this error?
--
Thanks & Regards,
Sandeep Kapse