Hi Xasima,<div><br></div><div>Sorry for the late response. I'm bookmarking this thread so I can link to it whenever someone wants to know how to install a VM based on the EC2 ubuntu one. Thanks a lot!</div><div><br></div><div>Answering your questions:</div>
<div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><b>1) What is the proper correspondence between DISK DEV_PREFIX , VM OS ROOT and actual image partition mappings. <br>
</b></blockquote><div><br></div><div>What you're doing with those parameters is letting the hypervisor know which virtual bus you are plugging your disks into. In other words, on a physical server you could plug your HDs into an IDE bus or a SATA bus. To specify that to the hypervisor you can do: DEV_PREFIX=hd => IDE bus *or* DEV_PREFIX=sd => SATA bus.<br>
<br>However, what the OS does with that is entirely up to the OS. For example, Ubuntu will always map the devices to /dev/sdX and never to /dev/hdX. With CentOS 5, for instance (if I remember correctly), an IDE drive will be mapped to /dev/hdX. So this is a bit tricky and confusing. The best way to understand this is by trial and error for each hypervisor-VM pair.</div>
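As a rough illustration only (this is not the actual driver code, just the rule of thumb described above):

```shell
# Rule-of-thumb sketch of DEV_PREFIX -> virtual bus (illustrative only;
# the real mapping is done by the hypervisor drivers).
prefix_to_bus() {
  case "$1" in
    hd*) echo "ide"  ;;   # DEV_PREFIX=hd => IDE bus
    sd*) echo "sata" ;;   # DEV_PREFIX=sd => SATA bus (scsi on some setups)
    *)   echo "unknown" ;;
  esac
}

prefix_to_bus hd   # -> ide
prefix_to_bus sd   # -> sata
```

Remember that what the guest finally calls the device (/dev/hdX vs /dev/sdX) is still decided by the guest OS.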
<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><b></b><b>2) Where is my contextualization cdrom ? </b></blockquote>
</div><div><b><br></b></div><div>Your CD is, as you already know, in /dev/sr0. A very nice and quick way to figure out which device your CD is attached to is to run this command:</div>CDROM_DEVICE=$(ls /dev/cdrom* /dev/scd* /dev/sr* | sort | head -n 1)<div>
<br></div><div>However, your problem here of not being able to load the iso9660 module is probably related to the fact that the ramdisk for that image has been altered. I reckon the best way to solve that is by inspecting the ramdisk and the kernel, but we cannot be of much help here.</div>
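For completeness, a hedged sketch of putting that one-liner to use inside the guest (it assumes the iso9660 module can actually be loaded, which is exactly what fails in your case):

```shell
# Find the first CD device and mount the context CD on /mnt.
# Safe no-op when no CD device exists in the guest.
CDROM_DEVICE=$(ls /dev/cdrom* /dev/scd* /dev/sr* 2>/dev/null | sort | head -n 1)
if [ -n "$CDROM_DEVICE" ]; then
    modprobe iso9660 2>/dev/null || true    # harmless if iso9660 is built in
    mount -t iso9660 "$CDROM_DEVICE" /mnt || echo "mount failed: check iso9660 support"
fi
```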
<div><br></div><div>Thanks again for your thorough guide and I hope you've been finally lucky in importing the VM.</div><div><br></div><div>regards,<br>Jaime</div><div class="gmail_extra"><br><br><div class="gmail_quote">
On Wed, Nov 14, 2012 at 3:59 PM, Xasima <span dir="ltr"><<a href="mailto:xasima@gmail.com" target="_blank">xasima@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>Sorry for the very long letter, but some clarifications may be needed</div><div><br></div><div>a) My step "<b>Test the type of the image" </b>needs to be </div><div><div><div>weblab@metrics:~/ubuntu-kvm$ sudo qemu-img info <u>base-m1s.qcow2</u></div>
<div>image: base-m1s.qcow2</div><div><u>file format: qcow2</u></div><div class="im"><div>virtual size: 5.0G (5368709120 bytes)</div></div><div>disk size: 580M</div><div>cluster_size: 65536</div></div></div><div><br></div>
<div>Note that I have actually tested both the raw image "base-m1s.img" and the qcow2 image "base-m1s.qcow2". The latest working configuration uses qcow2.</div>
<div><br></div><div>b) In the question "w<b style="font-family:arial,sans-serif;font-size:12.727272033691406px">here is my contextualization cdrom " </b><span style="font-family:arial,sans-serif;font-size:12.727272033691406px">I got an error when trying to manually mount the sr0 device. </span></div>
<div><div>weblab@vm3:~$ sudo mount /dev/sr0 /mnt</div><div>mount: unknown filesystem type 'iso9660'</div></div><div><br></div><div>So it's not at all clear to me what the proper configuration is to mount the context cdrom on startup on ubuntu (virtual kernel, per <span style="font-size:14px;font-family:'Segoe UI',Frutiger,Tahoma,Helvetica,'Helvetica Neue',Arial,sans-serif">sudo vmbuilder kvm ubuntu --suite=precise </span><u style="font-size:14px;font-family:'Segoe UI',Frutiger,Tahoma,Helvetica,'Helvetica Neue',Arial,sans-serif">--flavour=virtual</u><span style="font-size:14px;font-family:'Segoe UI',Frutiger,Tahoma,Helvetica,'Helvetica Neue',Arial,sans-serif"> --arch=amd64) </span></div>
<div class="HOEnZb"><div class="h5">
<div class="gmail_extra"><br><div class="gmail_quote">On Wed, Nov 14, 2012 at 5:41 PM, Xasima <span dir="ltr"><<a href="mailto:xasima@gmail.com" target="_blank">xasima@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Hello, thank you. I have successfully updated opennebula to 3.8.1, and this fixed the cdrom mapping issue. <div>However, it took me a long time and many attempts to set up ubuntu 12.04 with serial access (virsh console enabled). I have included my steps at the bottom of the mail in case they help anyone else. </div>
<div><br></div><div>Although I was able to set up my guest, I have a few questions on the configuration. </div><div><br></div><div><b>1) What is the proper correspondence between DISK DEV_PREFIX , VM OS ROOT and actual image partition mappings. </b></div>
<div>My already prepared qcow2 image has a /dev/sda1 partition mapping inside. </div><div>But I left the opennebula image template unchanged, with </div><div><div>DEV_PREFIX="hd"</div><div>DRIVER = qcow2</div></div>
<div><br></div>
<div>Meanwhile the opennebula vm template has </div><div><div><div><div>OS = [ ARCH = x86_64,</div><div> BOOT = hd,</div></div><div> ROOT = sda1,</div></div></div><div>...]</div><div><br></div>
<div>While "onevm show" display </div>
<div> TARGET="hda" </div><div>for this disk<br></div><div><br></div><div>Don't I need to set DEV_PREFIX to "sd" instead? </div><div><br></div><div><b>2) Where is my contextualization cdrom ? </b></div>
<div>My contextualization cd-rom is automatically mapped to TARGET="hdb" (as displayed by "onevm show")</div><div>But the guest only knows about sda devices</div><div><div>weblab@vm3:~$ ls /dev/ | grep hd</div>
<div>weblab@vm3:~$ ls /dev/ | grep sd</div><div>sda</div><div>sda1</div><div>sda2</div></div><div><br></div><div><div>weblab@vm3:~$ sudo dmesg | grep -i cd</div><div>[ 0.328988] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver</div>
<div>[ 0.329619] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver</div><div>[ 0.330202] uhci_hcd: USB Universal Host Controller Interface driver</div><div>[ 0.330828] uhci_hcd 0000:00:01.2: PCI INT D -> Link[LNKD] -> GSI 11 (level, high) -> IRQ 11</div>
<div>[ 0.331665] uhci_hcd 0000:00:01.2: setting latency timer to 64</div><div>[ 0.331674] uhci_hcd 0000:00:01.2: UHCI Host Controller</div><div>[ 0.336156] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1</div>
<div>[ 0.336936] uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c100</div><div>[ 0.495645] scsi 0:0:1:0: CD-ROM QEMU QEMU DVD-ROM 1.0 PQ: 0 ANSI: 5</div><div>[ 0.497029] sr0: scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray</div>
<div>[ 0.497686] cdrom: Uniform CD-ROM driver Revision: 3.20</div><div>[ 0.501810] sr 0:0:1:0: Attached scsi CD-ROM sr0</div></div><div><br></div><div><div>weblab@vm3:~$ cat /boot/config-3.2.0-32-virtual | grep -i iso9660</div>
<div>CONFIG_ISO9660_FS=m</div></div><div><br></div><div><div>weblab@vm3:~$ sudo modprobe iso9660</div><div>FATAL: Could not load /lib/modules/3.2.0-32-generic/modules.dep</div></div><div><br></div><div>It seems that either the contextualization init.sh is not ok, or that I need to use the generic rather than the virtual kernel? </div>
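One possible explanation, judging from the paths above: modprobe searches /lib/modules/3.2.0-32-generic (the running kernel, i.e. whatever the /vmlinuz passed via KERNEL = /vmlinuz points to) while the image only ships modules for the -virtual kernel. A quick in-guest check (hedged sketch):

```shell
# modprobe only looks under /lib/modules/$(uname -r); if the booted kernel
# doesn't match the module tree installed in the image, no module can load.
running=$(uname -r)
echo "running kernel : $running"
echo "module trees   : $(ls /lib/modules/ 2>/dev/null | tr '\n' ' ')"
if [ ! -d "/lib/modules/$running" ]; then
    echo "mismatch: no modules installed for $running"
fi
```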
<div><br></div><div>The relevant part of my init.sh contextualization script (just copied from the ttylinux template):</div><div><div>if [ -f /mnt/context/context.sh ]</div><div>then</div><div> . /mnt/context/context.sh</div><div>else</div>
<div> mount -t iso9660 /dev/sr0 /mnt</div><div> . /mnt/context/context.sh</div><div>fi</div></div><div><br></div><div><br></div><div>--------------<br></div><div>Steps to enable serial console on ubuntu 12.04 with opennebula</div>
<div>--------------</div><div><b>Preparation</b><br>
</div><div>1) Install (apt-get install) guestfish + libguestfs-tools with dependencies to be able to easily manage the guest fs, since some changes to the guest grub and init.d are required to enable the serial console</div>
<div>2) Create a qcow2 image using vm-builder with a predefined ip/gateway (in case the contextualization script fails)</div><div>3) Start the image with libvirt, change the xml to include a serial console and test that it works. Ensure it behaves well with both ssh and virsh console access. </div>
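A sketch of step 3 (the VM name vm3 is from my setup): the serial/console fragment I add to the libvirt XML via virsh edit, emitted here by a heredoc; it is the same pair of devices that later goes into the RAW attribute of the OpenNebula VM template.

```shell
# Emit the serial-console fragment for the libvirt devices section;
# paste it into the domain XML with "virsh edit vm3", then start the VM
# and verify with "virsh console vm3" (Ctrl+] detaches).
serial_xml=$(cat <<'EOF'
<serial type='pty'>
  <target port='0'/>
</serial>
<console type='pty'>
  <target port='0' type='serial'/>
</console>
EOF
)
printf '%s\n' "$serial_xml"
```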
<div><br></div><div><b>Test the type of the image</b><br></div><div><div>weblab@metrics:~/ubuntu-kvm$ qemu-img info base-m1s.img</div><div>image: base-m1s.img</div><div>file format: raw</div><div>virtual size: 5.0G (5368709120 bytes)</div>
<div>disk size: 565M</div><div>weblab@metrics:~/ubuntu-kvm$</div><div><br></div><div><b>Check if the grub is configured to use serial console</b></div><div>weblab@metrics:~/ubuntu-kvm$ sudo virt-edit base-m1s.img /boot/grub/menu.lst</div>
<div>...</div><div>title Ubuntu 12.04.1 LTS, kernel 3.2.0-32-virtual</div><div>uuid c645f23d-9d48-43d3-b042-7b06ae9f56b3</div><div>kernel /boot/vmlinuz-3.2.0-32-virtual root=UUID=c645f23d-9d48-43d3-b042-7b06ae9f56b3 ro quiet splash console=tty1 console=ttyS0,115200n8</div>
<div>initrd /boot/initrd.img-3.2.0-32-virtual</div><div>...</div><div><br></div><div><b>Check that initrd and vmlinuz are present, since we will explicitly point to them in the opennebula vm template</b></div><div>weblab@metrics:~/ubuntu-kvm$ sudo virt-ls -la base-m1s.img / | grep boot</div>
<div>drwxr-xr-x 3 0 0 4096 Nov 12 14:58 boot</div><div>lrwxrwxrwx 1 0 0 33 Nov 12 14:58 initrd.img -> /boot/initrd.img-3.2.0-32-virtual</div><div>lrwxrwxrwx 1 0 0 29 Nov 12 14:58 vmlinuz -> boot/vmlinuz-3.2.0-32-virtual</div>
<div><br></div><div><b>Double check that the ttyS0 service will be up </b>(I just followed some instructions)</div><div>weblab@metrics:~/ubuntu-kvm$ sudo virt-cat -a base-m1s.img /etc/init/ttyS0.conf</div><div># ttyS0 - getty</div>
<div>#</div><div># This service maintains a getty on ttyS0 from the point the system is</div><div># started until it is shut down again.</div><div>start on stopped rc or RUNLEVEL=[2345]</div><div>stop on runlevel [!2345]</div>
<div><br></div><div>respawn</div><div>exec /sbin/getty -L 115200 ttyS0 vt102</div><div><br></div><div>or use guestfish to create such a file. </div><div><br></div><div><b>Important! Note that the partition is sda, not hda, so we will change this accordingly in the opennebula image and vm templates</b></div>
<div>weblab@metrics:~/ubuntu-kvm$ sudo virt-filesystems -a base-m1s.img --all --long --uuid -h</div><div>Name Type VFS Label MBR Size Parent UUID</div><div>/dev/sda1 filesystem ext4 - - 3.8G - c645f23d-9d48-43d3-b042-7b06ae9f56b3</div>
<div>/dev/sda2 filesystem swap - - 976M - 6ad7b5f6-9503-413b-a660-99dfb7686459</div><div>/dev/sda1 partition - - 83 3.8G /dev/sda -</div><div>/dev/sda2 partition - - 82 976M /dev/sda -</div>
<div>/dev/sda device - - - 5.0G - -</div><div><br></div><div><b>Check that network settings are already in place, so that even if the opennebula contextualization script fails, the VM will come up with the predefined ip. </b></div>
<div>weblab@metrics:~/ubuntu-kvm$ sudo virt-cat -a base-m1s.img /etc/network/interfaces</div><div># This file describes the network interfaces available on your system</div><div># and how to activate them. For more information, see interfaces(5).</div>
<div><br></div><div># The loopback network interface</div><div>auto lo</div><div>iface lo inet loopback</div><div><br></div><div># The primary network interface</div><div>auto eth0</div><div>iface eth0 inet static</div><div>
address 10.0.0.95</div><div> netmask 255.128.0.0</div><div> network 10.0.0.0</div><div> broadcast 10.127.255.255</div><div> gateway 10.0.0.1</div><div> # dns-* options are implemented by the resolvconf package, if installed</div>
<div> dns-nameservers 10.0.0.1</div><div> dns-search defaultdomain</div><div><br></div><div><b>Opennebula Image template </b></div><div>weblab@metrics:~/ubuntu-kvm$ cat base-m1s.image.template</div><div><div>
NAME = "base-m1.small - qcow"<br></div><div>PATH = /home/weblab/ubuntu-kvm/base-m1s.qcow2</div><div>TYPE = OS</div><div>DRIVER = qcow2</div></div><div><br></div><div>sudo -u oneadmin oneimage create base-m1s.image.template -d default</div>
<div>sudo -u oneadmin oneimage show 18</div><div>..</div><div>DEV_PREFIX="hd"</div><div>DRIVER = qcow2<br></div><div><br></div><div><b>Opennebula VM template</b><br></div><div><div>weblab@metrics:~/ubuntu-kvm$ cat base-m1s.vm.template</div>
<div>NAME = vm3-on-qcow</div><div><div>CPU = 0.6</div><div>MEMORY = 512</div><div><br></div><div>OS = [ ARCH = x86_64,</div><div> BOOT = hd,</div></div><div> ROOT = sda1,</div><div>
KERNEL = /vmlinuz,</div>
<div> INITRD = /initrd.img,</div><div> KERNEL_CMD = "ro console=tty1 console=ttyS0,115200n8" ]</div><div><br></div><div>DISK = [ IMAGE_ID = 18,</div><div> DRIVER = qcow2,</div><div>
READONLY = no ]</div><div><br></div><div>NIC = [ NETWORK_ID = 14 ]</div><div><div><br></div><div>FEATURES = [ acpi = yes ]</div><div><br></div><div>REQUIREMENTS = "FALSE"</div><div><br>
</div><div><br></div>
<div>CONTEXT = [</div><div> HOSTNAME = "$NAME",</div><div> IP_PUBLIC = "$NIC[IP]",</div><div> DNS = "$NETWORK[DNS, NETWORK_ID=9]",</div><div> GATEWAY = "$NETWORK[GATEWAY, NETWORK_ID=9]",</div>
<div> NETMASK = "$NETWORK[NETWORK_MASK, NETWORK_ID=9]",</div><div> FILES = "/tmp/ttylinux/init.sh /tmp/ttylinux/id_rsa.pub",</div></div><div> ROOT_PUBKEY = "id_rsa.pub" ]</div>
<div><br></div><div>RAW = [ type = "kvm",</div><div> data = "<devices><serial type=\"pty\"><target port=\"0\"/></serial><console type=\"pty\"><target port=\"0\" type=\"serial\"/></console></devices>" ]</div>
</div><div><br></div><div><br></div><div><b>Show</b></div></div><div><div>-----------</div><div>sudo -u oneadmin onevm show 77</div><div><div>VIRTUAL MACHINE TEMPLATE<br></div><div>CONTEXT=[</div><div> DISK_ID="1",</div>
</div><div> FILES="/tmp/ttylinux/init.sh /tmp/ttylinux/id_rsa.pub",</div><div> HOSTNAME="vm3-on-qcow",</div><div> IP_PUBLIC="10.0.0.95",</div><div> ROOT_PUBKEY="id_rsa.pub",</div>
<div>
TARGET="hdb" ]</div><div><div>CPU="0.6"</div><div>DISK=[</div><div> CLONE="YES",</div><div> DATASTORE="default",</div><div> DATASTORE_ID="1",</div><div> DEV_PREFIX="hd",</div>
<div> DISK_ID="0",</div><div> DRIVER="qcow2",</div></div><div> IMAGE="base-m1.small - qcow",</div><div> IMAGE_ID="18",</div><div><div> READONLY="NO",</div>
<div> SAVE="NO",</div>
</div><div> SOURCE="/var/lib/one/datastores/1/3fdc724b56b20346ed18687e677d6ae8",</div><div><div> TARGET="hda",</div><div> TM_MAD="ssh",</div><div> TYPE="FILE" ]</div>
<div>FEATURES=[</div>
<div>
ACPI="yes" ]</div><div>MEMORY="512"</div></div><div>NAME="vm3-on-qcow"</div><div>NIC=[</div><div> BRIDGE="br0",</div><div> IP="10.0.0.95",</div><div> MAC="02:00:0a:00:00:5f",</div>
<div> NETWORK="m1 network",</div><div> NETWORK_ID="14",</div><div><div> VLAN="NO" ]</div><div>OS=[</div><div> ARCH="x86_64",</div><div> BOOT="hd",</div><div>
INITRD="/initrd.img",</div>
</div><div> KERNEL="/vmlinuz",</div><div> KERNEL_CMD="ro console=tty1 console=ttyS0,115200n8",</div><div> ROOT="sda1" ]</div><div>RAW=[</div><div> DATA="<devices><serial type=\"pty\"><target port=\"0\"/></serial><console type=\"pty\"><target port=\"0\" type=\"serial\"/></console></devices>",</div>
<div> TYPE="kvm" ]</div><div>REQUIREMENTS="FALSE"</div><div>VMID="77"</div><div><br></div></div><div><b>Checking with virsh on nodehost</b></div><div>frontend >> ssh nodehost </div><div>
<div>nodehost >> sudo virsh --connect qemu:///system</div></div><div><br></div><div><div>virsh # list --all</div><div> Id Name State</div><div>----------------------------------</div><div> 11 one-77 running </div>
<div> - vm3 shut off</div></div><div><br></div><div><div>virsh # ttyconsole 11</div><div>/dev/pts/0</div><div><br></div></div><div><div>virsh # console 11</div><div>Connected to domain one-77</div><div>
Escape character is ^]</div><div>(--- Press Enter) </div><div>Ubuntu 12.04.1 LTS vm3 ttyS0</div><div><br></div><div>vm3 login: </div></div><div>...</div><div>(-- Press "Ctr + ]" to logout from vm to virsh )<br>
</div>
<div><br></div><div>virsh # dumpxml 11<br></div><div><div><domain type='kvm' id='11'></div><div> <name>one-77</name></div><div> <uuid>9f5fe3e7-5abd-1a45-6df8-84c91fb0af9e</uuid></div>
<div> <memory>524288</memory></div><div> <currentMemory>524288</currentMemory></div><div> <vcpu>1</vcpu></div><div> <cputune></div><div> <shares>615</shares></div>
<div> </cputune></div><div> <os></div><div> <type arch='x86_64' machine='pc-1.0'>hvm</type></div><div> <kernel>/vmlinuz</kernel></div><div> <initrd>/initrd.img</initrd></div>
<div> <cmdline>root=/dev/sda1 ro console=tty1 console=ttyS0,115200n8</cmdline></div><div> <boot dev='hd'/></div><div> </os></div><div> <features></div><div> <acpi/></div>
<div> </features></div><div> <clock offset='utc'/></div><div> <on_poweroff>destroy</on_poweroff></div><div> <on_reboot>restart</on_reboot></div><div> <on_crash>destroy</on_crash></div>
<div> <devices></div><div> <emulator>/usr/bin/kvm</emulator></div><div> <disk type='file' device='disk'></div><div> <driver name='qemu' type='qcow2'/></div>
<div> <source file='/var/lib/one/datastores/0/77/disk.0'/></div><div> <target dev='hda' bus='ide'/></div><div> <alias name='ide0-0-0'/></div><div> <address type='drive' controller='0' bus='0' unit='0'/></div>
<div> </disk></div><div> <disk type='file' device='cdrom'></div><div> <driver name='qemu' type='raw'/></div><div> <source file='/var/lib/one/datastores/0/77/disk.1'/></div>
<div> <target dev='hdb' bus='ide'/></div><div> <readonly/></div><div> <alias name='ide0-0-1'/></div><div> <address type='drive' controller='0' bus='0' unit='1'/></div>
<div> </disk></div><div> <controller type='ide' index='0'></div><div> <alias name='ide0'/></div><div> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/></div>
<div> </controller></div><div> <interface type='bridge'></div><div> <mac address='02:00:0a:00:00:5f'/></div><div> <source bridge='br0'/></div><div> <target dev='vnet0'/></div>
<div> <alias name='net0'/></div><div> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/></div><div> </interface></div>
<div> <serial type='pty'></div><div> <source path='/dev/pts/0'/></div><div> <target port='0'/></div><div> <alias name='serial0'/></div><div> </serial></div>
<div> <console type='pty' tty='/dev/pts/0'></div><div> <source path='/dev/pts/0'/></div><div> <target type='serial' port='0'/></div><div> <alias name='serial0'/></div>
<div> </console></div><div> <memballoon model='virtio'></div><div> <alias name='balloon0'/></div><div> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/></div>
<div> </memballoon></div><div> </devices></div><div></domain></div></div><div><div><div><br></div><div class="gmail_extra"><div class="gmail_extra"> </div><div><br></div><div class="gmail_quote">
On Fri, Nov 2, 2012 at 1:50 PM, Jaime Melis <span dir="ltr"><<a href="mailto:j.melis@gmail.com" target="_blank">j.melis@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hello,<div><br></div><div>I believe you are affected by the bug that incorrectly maps the context cdroms. I recommend you update to 3.8.1 where this bug is fixed.</div>
<div><br></div><div>More info on the problem: <a href="http://dev.opennebula.org/issues/1594" target="_blank">http://dev.opennebula.org/issues/1594</a></div>
<div>
<br></div><div>cheers,<br>Jaime</div>
<div class="gmail_extra"><br><br><div class="gmail_quote"><div><div>On Fri, Nov 2, 2012 at 11:05 AM, Xasima <span dir="ltr"><<a href="mailto:xasima@gmail.com" target="_blank">xasima@gmail.com</a>></span> wrote:<br>
</div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div><div>
Hello. I have some p<span style="font-family:arial,sans-serif;font-size:12.727272033691406px">roblems with booting a cloud-based Ubuntu. </span><span style="font-family:arial,sans-serif;font-size:12.727272033691406px">There are two ubuntu 12.04 servers (front-end and node) with opennebula upgraded to 3.8. </span><span style="font-family:arial,sans-serif;font-size:12.727272033691406px">I have successfully deployed opennebula-ttylinux with qemu / kvm as a first try. I now want to deploy an already prepared EC2-compatible image of recent ubuntu. </span><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">
<br></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">Actually the image and VM are deployed with no errors (the logs are ok), but the VM doesn't consume CPU at all. I think it doesn't boot properly. </div>
<div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><br></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><div style="font-family:arial;font-size:small"><i>> sudo -u oneadmin onevm list</i></div>
<div style="font-family:arial;font-size:small"> ID USER GROUP NAME STAT UCPU UMEM HOST TIME</div><div style="font-family:arial;font-size:small"> 61 oneadmin oneadmin ttylinux runn 6 64M metrics-ba 0d 01h29</div>
<div style="font-family:arial;font-size:small"> 62 oneadmin oneadmin ubuntu-cloud64- runn <b>0 </b>512M<b> </b>metrics-ba 0d 00h10</div></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">
<br></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">The only thing that seems strange to me in the logs is the drive mapping (available from the libvirt-qemu log on the node): both the disk and the cdrom drives point to disk.0. </div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">
<br></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><div style="font-family:arial;font-size:small"><i>> ssh node && cat /var/log/libvirt/qemu/one-62.log</i></div><div style="font-family:arial;font-size:small">
2012-11-02 09:16:56.096+0000: starting up</div><div style="font-family:arial;font-size:small">LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name one-62 -uuid 2c15ca04-7d5f-ab4c-8bdb-43d2add1a2fe -nographic -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-62.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -kernel /vmlinuz -initrd /initrd.img <b>-drive file=/var/lib/one/datastores/0/62/disk.0,if=none,id=drive-ide0-0-0,format=qcow2</b> -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive <b>file=/var/lib/one/datastores/0/62/disk.0,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw</b> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:0a:00:00:5d,bus=pci.0,addr=0x3 -usb -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4</div>
</div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><br></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">Could anyone help to determine what is the cause of the failure and how to resolve this?</div>
<div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><br></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">-----------------</div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">
Here is the full information on my steps </div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><br></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><b>1. Image specific information</b></div>
<div style="font-family:arial,sans-serif;font-size:12.727272033691406px">Download <a href="http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img" target="_blank">http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img</a> to the front-end. The manifest and ovf are available at <a href="http://cloud-images.ubuntu.com/releases/precise/release/" target="_blank">http://cloud-images.ubuntu.com/releases/precise/release/</a> as well, to check what is installed on the image.</div>
<div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><br></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><b>2. Image file format information</b></div><div><font face="arial, sans-serif">> <i>qemu-img info precise-server-cloudimg-amd64-disk1.img</i><br>
</font></div><div><font face="arial, sans-serif">image: precise-server-cloudimg-amd64-disk1.img</font></div><div><font face="arial, sans-serif">file format: qcow2</font></div><div><font face="arial, sans-serif">virtual size: 2.0G (2147483648 bytes)</font></div>
<div><font face="arial, sans-serif">disk size: 222M</font></div><div><font face="arial, sans-serif">cluster_size: 65536</font></div><div><font face="arial, sans-serif"><br></font></div><div><b style="font-family:arial,sans-serif;font-size:12.727272033691406px">3. Content of the image</b><font face="arial, sans-serif"><br>
</font></div><div><font face="arial, sans-serif">Using <i>qemu-img convert (to raw) && </i></font><i><span style="font-family:arial,sans-serif;font-size:12.727272033691406px">kpartx -a -v precise...img && </span><span style="font-family:arial,sans-serif;font-size:12.727272033691406px"> mount /dev/mapper/loop1p1 /mnt/</span></i></div>
<div><span style="font-family:arial,sans-serif;font-size:12.727272033691406px">I have verified the content of the image</span></div><div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><i>> ls /mnt/</i></div>
<div style="font-family:arial,sans-serif;font-size:12.727272033691406px">bin dev home lib lost+found mnt proc run selinux sys usr <b>vmlinuz</b></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">
boot etc <b>initrd.img</b> lib64 media opt root sbin srv tmp var</div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><br></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">
<i>> cat /mnt/etc/fstab</i></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px"> LABEL=cloudimg-rootfs / ext4 defaults 0 0</div></div><div><font face="arial, sans-serif"><br>
</font></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><i>> umount && kpartx -d</i></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><br></div><div><font face="arial, sans-serif">4. <b>Opennebula Image template</b></font></div>
<div><font face="arial, sans-serif"><i> > cat 64base-image.one</i></font></div><div><font face="arial, sans-serif">NAME = ubuntu-cloud64-qcow2</font></div><div><font face="arial, sans-serif">PATH = "/tmp/ttylinux/precise-server-cloudimg-amd64-disk1.img"</font></div>
<div><font face="arial, sans-serif">TYPE = OS</font></div><div><font face="arial, sans-serif">FSTYPE= "qcow2"</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">The state of drive on opennebula</font></div>
<div><font face="arial, sans-serif"><div><i>> sudo -u oneadmin oneimage show 12</i></div><div> IMAGE 12 INFORMATION</div><div>ID : 12</div><div>NAME : ubuntu-cloud64-qcow2</div><div>USER : oneadmin</div>
<div>GROUP : oneadmin</div><div>DATASTORE : default</div><div>TYPE : OS</div><div>REGISTER TIME : 11/02 12:04:47</div><div>PERSISTENT : No</div><div>SOURCE : /var/lib/one/datastores/1/a4d9b6af3313f826d9113b4e3b0ac25b</div>
<div>PATH : /tmp/ttylinux/precise-server-cloudimg-amd64-disk1.img</div><div>SIZE : 223M</div><div>STATE : used</div><div>RUNNING_VMS : 1</div><div><br></div><div>PERMISSIONS</div><div>OWNER : um-</div>
<div>GROUP : ---</div><div>OTHER : ---</div><div><br></div><div>IMAGE TEMPLATE</div><div>DEV_PREFIX="hd"</div><div>FSTYPE="qcow2"</div></font></div><div><font face="arial, sans-serif"><br>
</font></div><div><font face="arial, sans-serif">5. <b>Opennebula VM template</b></font></div><div><font face="arial, sans-serif"><i>> cat 64base.one</i></font></div><div><font face="arial, sans-serif">NAME = ubuntu-cloud64-on-qcow2</font></div>
<div><font face="arial, sans-serif">CPU = 0.6</font></div><div><font face="arial, sans-serif">MEMORY = 512</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">OS = [ ARCH = x86_64,</font></div>
<div><font face="arial, sans-serif"> BOOT = hd,</font></div><div><font face="arial, sans-serif"> KERNEL = /vmlinuz,</font></div><div><font face="arial, sans-serif"> INITRD = /initrd.img ]</font></div>
<div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">DISK = [ IMAGE_ID = 12,</font></div><div><font face="arial, sans-serif"> DRIVER = qcow2,</font></div><div><font face="arial, sans-serif"> TYPE = disk,</font></div>
<div><font face="arial, sans-serif"> READONLY = no ]</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">NIC = [ NETWORK_ID = 9 ]</font></div><div><font face="arial, sans-serif"><br>
</font></div><div><font face="arial, sans-serif">FEATURES = [ acpi = yes ]</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">REQUIREMENTS = "FALSE"</font></div>
<div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">CONTEXT = [</font></div><div><font face="arial, sans-serif"> HOSTNAME = "$NAME",</font></div><div><font face="arial, sans-serif"> IP_PUBLIC = "$NIC[IP]",</font></div>
<div><font face="arial, sans-serif"> DNS = "$NETWORK[DNS, NETWORK_ID=9]",</font></div><div><font face="arial, sans-serif"> GATEWAY = "$NETWORK[GATEWAY, NETWORK_ID=9]",</font></div><div><font face="arial, sans-serif"> NETMASK = "$NETWORK[NETWORK_MASK, NETWORK_ID=9]",</font></div>
<div><font face="arial, sans-serif"> FILES = "/tmp/ttylinux/init.sh /tmp/ttylinux/id_rsa.pub",</font></div><div><font face="arial, sans-serif"> TARGET = "hdc",</font></div><div><font face="arial, sans-serif"> ROOT_PUBKEY = "id_rsa.pub"</font></div>
<div><font face="arial, sans-serif">]</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">6. <b>Log of VM deployment (on front-end) </b></font></div><div><font face="arial, sans-serif"><div>
<i>> sudo -u oneadmin onevm deploy 62 5</i></div><div><i>> tail -f /var/log/one/62.log</i></div><div>Fri Nov 2 12:11:01 2012 [DiM][I]: New VM state is ACTIVE.</div><div>Fri Nov 2 12:11:02 2012 [LCM][I]: New VM state is PROLOG.</div>
<div>Fri Nov 2 12:17:05 2012 [TM][I]: clone: Cloning metrics:/var/lib/one/datastores/1/a4d9b6af3313f826d9113b4e3b0ac25b in /var/lib/one/datastores/0/62/disk.0</div><div>Fri Nov 2 12:17:05 2012 [TM][I]: ExitCode: 0</div>
<div>Fri Nov 2 12:17:09 2012 [TM][I]: context: Generating context block device at metrics-backend:/var/lib/one/datastores/0/62/disk.1</div><div>Fri Nov 2 12:17:09 2012 [TM][I]: ExitCode: 0</div><div>Fri Nov 2 12:17:09 2012 [LCM][I]: New VM state is BOOT</div>
<div>Fri Nov 2 12:17:09 2012 [VMM][I]: Generating deployment file: /var/lib/one/62/deployment.0</div><div>Fri Nov 2 12:17:11 2012 [VMM][I]: ExitCode: 0</div><div>Fri Nov 2 12:17:11 2012 [VMM][I]: Successfully execute network driver operation: pre.</div>
<div>Fri Nov 2 12:17:13 2012 [VMM][I]: ExitCode: 0</div><div>Fri Nov 2 12:17:13 2012 [VMM][I]: Successfully execute virtualization driver operation: deploy.</div><div>Fri Nov 2 12:17:13 2012 [VMM][I]: ExitCode: 0</div>
<div>Fri Nov 2 12:17:13 2012 [VMM][I]: Successfully execute network driver operation: post.</div><div>Fri Nov 2 12:17:13 2012 [LCM][I]: New VM state is RUNNING</div><div><br></div></font></div><div><i>> ssh node && cat /var/log/libvirt/qemu/one-62.log</i></div>
<div>2012-11-02 09:16:56.096+0000: starting up</div><div>LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name one-62 -uuid 2c15ca04-7d5f-ab4c-8bdb-43d2add1a2fe -nographic -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-62.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -kernel /vmlinuz -initrd /initrd.img -drive file=/var/lib/one/datastores/0/62/disk.0,if=none,id=drive-ide0-0-0,format=qcow2 -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive file=/var/lib/one/datastores/0/62/disk.0,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:0a:00:00:5d,bus=pci.0,addr=0x3 -usb -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4</div>
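As a hedged aside (not part of the original log): the qemu line above attaches the context image on the second IDE bus (ide.1, i.e. hdc), which Linux guests typically expose as /dev/sr0. Once the guest can load the iso9660 module, the CD can be located and mounted along these lines; the `pick_cdrom` helper name is made up here and simply wraps the `sort | head -n 1` trick from the reply:

```shell
# Hypothetical helper: pick the first matching CD device name; device
# naming varies per hypervisor/OS pair, so we just take the first match.
pick_cdrom() {
  printf '%s\n' "$@" | sort | head -n 1
}

# On a real guest the globs expand to the actual device nodes:
CDROM_DEVICE=$(pick_cdrom /dev/cdrom* /dev/scd* /dev/sr*)
# mkdir -p /mnt/context
# mount -t iso9660 "$CDROM_DEVICE" /mnt/context  # needs the iso9660 module
# . /mnt/context/context.sh                      # context variables (HOSTNAME, ...)
```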
<div><br></div><div>7. <b>Status of the machine on OpenNebula</b></div><div><div><i>> sudo -u oneadmin onevm list</i></div><div> ID USER GROUP NAME STAT UCPU UMEM HOST TIME</div>
<div> 61 oneadmin oneadmin ttylinux runn 6 64M metrics-ba 0d 01h29</div><div> 62 oneadmin oneadmin ubuntu-cloud64- runn <b>0 </b>512M<b> </b>metrics-ba 0d 00h10</div></div><div><br></div><div>
<div><i>> sudo -u oneadmin onevm show 62</i></div><div>VIRTUAL MACHINE 62 INFORMATION</div><div>ID : 62</div><div>NAME : ubuntu-cloud64-on-qcow2</div><div>USER : oneadmin</div>
<div>GROUP : oneadmin</div><div>STATE : ACTIVE</div><div>LCM_STATE : RUNNING</div><div>RESCHED : No</div><div>HOST : metrics-backend</div><div>START TIME : 11/02 12:08:37</div>
<div>END TIME : -</div><div>DEPLOY ID : one-62</div><div><br></div><div>VIRTUAL MACHINE MONITORING</div><div>USED CPU : 0</div><div>NET_RX : 1M</div><div>USED MEMORY : 512M</div>
<div>NET_TX : 0K</div><div><br></div><div>PERMISSIONS</div><div>OWNER : um-</div><div>GROUP : ---</div><div>OTHER : ---</div><div><br></div><div>VIRTUAL MACHINE TEMPLATE</div>
<div>CONTEXT=[</div><div> DISK_ID="1",</div><div> DNS="10.0.0.20",</div><div> FILES="/tmp/ttylinux/init.sh /tmp/ttylinux/id_rsa.pub",</div><div> GATEWAY="10.0.0.1",</div><div> HOSTNAME="ubuntu-cloud64-on-qcow2",</div>
<div> IP_PUBLIC="10.*.*.*" ,</div><div> NETMASK="255.128.0.0",</div><div> ROOT_PUBKEY="id_rsa.pub",</div><div> TARGET="hdc" ]</div><div>CPU="0.6"</div><div>DISK=[</div>
<div> CLONE="YES",</div><div> DATASTORE="default",</div><div> DATASTORE_ID="1",</div><div> DEV_PREFIX="hd",</div><div> DISK_ID="0",</div><div> DRIVER="qcow2",</div>
<div> IMAGE="ubuntu-cloud64-qcow2",</div><div> IMAGE_ID="12",</div><div> READONLY="NO",</div><div> SAVE="NO",</div><div> SOURCE="/var/lib/one/datastores/1/a4d9b6af3313f826d9113b4e3b0ac25b",</div>
<div> TARGET="hda",</div><div> TM_MAD="ssh",</div><div> TYPE="FILE" ]</div><div>FEATURES=[</div><div> ACPI="yes" ]</div><div>MEMORY="512"</div><div>NAME="ubuntu-cloud64-on-qcow2"</div>
<div>NIC=[</div><div> BRIDGE="br0",</div><div> IP="10.*.*.*",</div><div> MAC="02:00:0a:00:00:5d",</div><div> NETWORK="Server 10.0.0.x network with br0",</div><div> NETWORK_ID="9",</div>
<div> VLAN="NO" ]</div><div>OS=[</div><div> ARCH="x86_64",</div><div> BOOT="hd",</div><div> INITRD="/initrd.img",</div><div> KERNEL="/vmlinuz" ]</div><div>REQUIREMENTS="FALSE"</div>
<div>VMID="62"</div><div><br></div><div>VIRTUAL MACHINE HISTORY</div><div> SEQ HOST REASON START TIME PROLOG_TIME</div><div> 0 metrics-backend none 11/02 12:11:01 0d 00h28m06s 0d 00h06m08s</div>
</div><div><br></div><div><br></div><div>Thank you. </div><span><font color="#888888"><div>-- <br></div>Best regards,<br> ~ Xasima ~<br>
</font></span><br></div></div>_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org" target="_blank">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
<br></blockquote></div><br></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>Best regards,<br> ~ Xasima ~<br>
</div>
</div></div></blockquote></div>
</div>
</div></div>
<br></blockquote></div><br></div><br clear="all"><div><br></div>-- <br>Jaime Melis<br>Project Engineer<br>OpenNebula - The Open Source Toolkit for Cloud Computing<br><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:jmelis@opennebula.org" target="_blank">jmelis@opennebula.org</a><br>