[one-users] ubuntu-cloud boot (?) problems
Jaime Melis
jmelis at opennebula.org
Fri Dec 7 03:20:18 PST 2012
Hi Xasima,
sorry for the late response. I'm bookmarking this thread so I can link to it
if someone wants to know how to install a VM based on the EC2 ubuntu image.
Thanks a lot!
Answering your questions:
> *1) What is the proper correspondence between DISK DEV_PREFIX, VM OS
> ROOT and actual image partition mappings?*
What you're doing with those parameters is letting the hypervisor know which
virtual bus you are plugging your disks into. In other words, on a physical
server you could plug your HDs into an IDE bus or a SATA bus. To specify that
to the hypervisor you use: DEV_PREFIX=hd => IDE bus *or* DEV_PREFIX=sd =>
SATA bus.
However, what the OS does with that is entirely up to the OS. For example,
Ubuntu will always map the devices to /dev/sdX and never to /dev/hdX. With
CentOS 5, for instance (if I remember correctly), an IDE drive will be mapped
to /dev/hdX. So this is a bit tricky and confusing. The best way to
understand this is by trial and error for each hypervisor-VM pair.
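For illustration, with KVM those prefixes roughly translate to the following
libvirt targets (a sketch; the exact bus names depend on the hypervisor and
OpenNebula version):

  DEV_PREFIX = "hd"   # => <target dev='hda' bus='ide'/>      (IDE)
  DEV_PREFIX = "sd"   # => <target dev='sda' bus='scsi'/>     (SCSI/SATA)
  DEV_PREFIX = "vd"   # => <target dev='vda' bus='virtio'/>   (paravirtualized)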
> *2) Where is my contextualization cdrom?*
Your CD is, as you already know, in /dev/sr0. A very nice and quick way to
figure out what your CD is, is to run this command:
CDROM_DEVICE=$(ls /dev/cdrom* /dev/scd* /dev/sr* 2>/dev/null | sort | head -n 1)
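You can then mount it with something like:
  mount -t iso9660 "$CDROM_DEVICE" /mnt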
However, your problem here of not being able to load the iso9660 module is
probably related to the fact that the ramdisk for that image has been
altered. I reckon the best way to solve that is by inspecting the ramdisk
and the kernel, but we cannot be of much help here.
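If it helps, one way to inspect the ramdisk is to copy it out of the image and
unpack it (a sketch, assuming a gzip-compressed initramfs and the kernel
version from your logs):

  virt-copy-out -a base-m1s.img /boot/initrd.img-3.2.0-32-virtual /tmp
  mkdir /tmp/initrd && cd /tmp/initrd
  zcat /tmp/initrd.img-3.2.0-32-virtual | cpio -idmv
  find . -name 'iso9660*'   # check whether the module was bundled at all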
Thanks again for your thorough guide, and I hope you finally succeeded in
importing the VM.
regards,
Jaime
On Wed, Nov 14, 2012 at 3:59 PM, Xasima <xasima at gmail.com> wrote:
> Sorry for the very long letter, but some clarifications may be needed
>
> a) My step "*Test the type of the image*" needs to be
> weblab at metrics:~/ubuntu-kvm$ sudo qemu-img info *base-m1s.qcow2*
> image: base-m1s.qcow2
> *file format: qcow2*
> virtual size: 5.0G (5368709120 bytes)
> disk size: 580M
> cluster_size: 65536
>
> I have actually tested both the raw image "base-m1s.img" and the qcow2 image
> "base-m1s.qcow2". The latest working configuration uses qcow2.
>
> b) Regarding the question "*where is my contextualization cdrom*", I got an
> error when trying to manually mount the sr0 device.
> weblab at vm3:~$ sudo mount /dev/sr0 /mnt
> mount: unknown filesystem type 'iso9660'
>
> So it's not at all clear to me what the proper configuration is to mount the
> context cdrom on startup on ubuntu (virtual kernel, per sudo
> vmbuilder kvm ubuntu --suite=precise *--flavour=virtual* --arch=amd64)
>
> On Wed, Nov 14, 2012 at 5:41 PM, Xasima <xasima at gmail.com> wrote:
>
>> Hello, thank you. I have successfully updated opennebula to 3.8.1, and this
>> fixed the cdrom mapping issue.
>> However, it took me a long time to set up ubuntu 12.04 with serial access
>> (virsh console enabled). I have included my steps at the bottom of the mail
>> in case they help anyone else.
>>
>> Although I was able to set up my guest, I have a little question on the
>> configuration.
>>
>> *1) What is the proper correspondence between DISK DEV_PREFIX, VM OS ROOT
>> and actual image partition mappings?*
>> My already prepared qcow2 image has a /dev/sda1 partition mapping inside.
>> But I left the opennebula image template unchanged, with
>> DEV_PREFIX="hd"
>> DRIVER = qcow2
>>
>> While the opennebula vm template has
>> OS = [ ARCH = x86_64,
>> BOOT = hd,
>> ROOT = sda1,
>> ...]
>>
>> While "onevm show" display
>> TARGET="hda"
>> against this disk
>>
>> Don't I need to set DEV_PREFIX to "sd" instead?
>>
>> *2) Where is my contextualization cdrom?*
>> My contextualization cd-rom is automatically mapped to TARGET="hdb" (as
>> displayed by "onevm show")
>> But the guest only knows about sda devices
>> weblab at vm3:~$ ls /dev/ | grep hd
>> weblab at vm3:~$ ls /dev/ | grep sd
>> sda
>> sda1
>> sda2
>>
>> weblab at vm3:~$ sudo dmesg | grep -i cd
>> [ 0.328988] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
>> [ 0.329619] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
>> [ 0.330202] uhci_hcd: USB Universal Host Controller Interface driver
>> [ 0.330828] uhci_hcd 0000:00:01.2: PCI INT D -> Link[LNKD] -> GSI 11
>> (level, high) -> IRQ 11
>> [ 0.331665] uhci_hcd 0000:00:01.2: setting latency timer to 64
>> [ 0.331674] uhci_hcd 0000:00:01.2: UHCI Host Controller
>> [ 0.336156] uhci_hcd 0000:00:01.2: new USB bus registered, assigned
>> bus number 1
>> [ 0.336936] uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c100
>> [ 0.495645] scsi 0:0:1:0: CD-ROM QEMU QEMU DVD-ROM
>> 1.0 PQ: 0 ANSI: 5
>> [ 0.497029] sr0: scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
>> [ 0.497686] cdrom: Uniform CD-ROM driver Revision: 3.20
>> [ 0.501810] sr 0:0:1:0: Attached scsi CD-ROM sr0
>>
>> weblab at vm3:~$ cat /boot/config-3.2.0-32-virtual | grep -i iso9660
>> CONFIG_ISO9660_FS=m
>>
>> weblab at vm3:~$ sudo modprobe iso9660
>> FATAL: Could not load /lib/modules/3.2.0-32-generic/modules.dep
>>
>> It seems that either the contextualization init.sh is not OK, or I need to
>> use the generic rather than the virtual kernel?
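>>
>> Note the mismatch above: CONFIG_ISO9660_FS=m means iso9660 is built as a
>> loadable module, but modprobe is looking under /lib/modules/3.2.0-32-generic
>> while the image ships modules for 3.2.0-32-virtual. A quick check (a sketch):
>>
>> uname -r          # the kernel actually booted (here the host's /vmlinuz)
>> ls /lib/modules/  # the module trees the guest actually has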
>>
>> The relevant part of my init.sh contextualization script (just copied from
>> the ttylinux template):
>> if [ -f /mnt/context/context.sh ]
>> then
>>   . /mnt/context/context.sh
>> else
>>   mount -t iso9660 /dev/sr0 /mnt
>>   . /mnt/context/context.sh
>> fi
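>>
>> A slightly more robust variant (a sketch) could load the module and detect
>> the cdrom device first:
>>
>> modprobe iso9660 2>/dev/null
>> CDROM_DEVICE=$(ls /dev/cdrom* /dev/scd* /dev/sr* 2>/dev/null | sort | head -n 1)
>> mount -t iso9660 "$CDROM_DEVICE" /mnt
>> . /mnt/context/context.sh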
>>
>>
>> --------------
>> Steps to enable serial console on ubuntu 12.04 with opennebula
>> --------------
>> *Preparation*
>> 1) Install (apt-get install) guestfish + libguestfs-tools with
>> dependencies, to be able to easily manage the guest fs, since some changes
>> to the guest grub and init.d are required to enable the serial console
>> 2) Create a qcow2 image using vm-builder with a predefined ip/gateway (in
>> case the contextualization script fails)
>> 3) Start the image with libvirt, change the xml to include a serial console
>> (see the sketch below) and test that it works. Ensure it behaves well both
>> with ssh and virsh console access.
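>>
>> For reference, the serial console fragment of the domain XML looks like
>> this (the same fragment goes into the RAW section of the vm template below):
>>
>> <serial type='pty'>
>>   <target port='0'/>
>> </serial>
>> <console type='pty'>
>>   <target type='serial' port='0'/>
>> </console>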
>>
>> *Test the type of the image*
>> weblab at metrics:~/ubuntu-kvm$ qemu-img info base-m1s.img
>> image: base-m1s.img
>> file format: raw
>> virtual size: 5.0G (5368709120 bytes)
>> disk size: 565M
>> weblab at metrics:~/ubuntu-kvm$
>>
>> *Check if the grub is configured to use serial console*
>> weblab at metrics:~/ubuntu-kvm$ sudo virt-edit base-m1s.img
>> /boot/grub/menu.lst
>> ...
>> title Ubuntu 12.04.1 LTS, kernel 3.2.0-32-virtual
>> uuid c645f23d-9d48-43d3-b042-7b06ae9f56b3
>> kernel /boot/vmlinuz-3.2.0-32-virtual
>> root=UUID=c645f23d-9d48-43d3-b042-7b06ae9f56b3 ro quiet splash console=tty1
>> console=ttyS0,115200n8
>> initrd /boot/initrd.img-3.2.0-32-virtual
>> ...
>>
>> *Check that initrd and vmlinuz are present, so we can explicitly point to
>> them in the opennebula vm template*
>> weblab at metrics:~/ubuntu-kvm$ sudo virt-ls -la base-m1s.img / | grep boot
>> drwxr-xr-x 3 0 0 4096 Nov 12 14:58 boot
>> lrwxrwxrwx 1 0 0 33 Nov 12 14:58 initrd.img ->
>> /boot/initrd.img-3.2.0-32-virtual
>> lrwxrwxrwx 1 0 0 29 Nov 12 14:58 vmlinuz ->
>> boot/vmlinuz-3.2.0-32-virtual
>>
>> *Double check that the ttyS0 service will be up* (just followed some
>> instructions)
>> weblab at metrics:~/ubuntu-kvm$ sudo virt-cat -a base-m1s.img
>> /etc/init/ttyS0.conf
>> # ttyS0 - getty
>> #
>> # This service maintains a getty on ttyS0 from the point the system is
>> # started until it is shut down again.
>> start on stopped rc or RUNLEVEL=[2345]
>> stop on runlevel [!2345]
>>
>> respawn
>> exec /sbin/getty -L 115200 ttyS0 vt102
>>
>> or use guestfish to create such a file.
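>>
>> For example, a guestfish sketch (assuming a local ttyS0.conf with the
>> content above):
>>
>> guestfish -a base-m1s.img -i <<'EOF'
>> upload ttyS0.conf /etc/init/ttyS0.conf
>> EOF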
>>
>> *Important! Note that the partition is sda, not hda, so change this
>> correspondingly in the opennebula image and vm templates*
>> weblab at metrics:~/ubuntu-kvm$ sudo virt-filesystems -a base-m1s.img --all
>> --long --uuid -h
>> Name       Type        VFS   Label  MBR  Size   Parent     UUID
>> /dev/sda1  filesystem  ext4  -      -    3.8G   -          c645f23d-9d48-43d3-b042-7b06ae9f56b3
>> /dev/sda2  filesystem  swap  -      -    976M   -          6ad7b5f6-9503-413b-a660-99dfb7686459
>> /dev/sda1  partition   -     -      83   3.8G   /dev/sda   -
>> /dev/sda2  partition   -     -      82   976M   /dev/sda   -
>> /dev/sda   device      -     -      -    5.0G   -          -
>>
>> *Check that network settings are already in place, so that even if the
>> opennebula contextualization script fails, the VM will come up with the
>> predefined ip.*
>> weblab at metrics:~/ubuntu-kvm$ sudo virt-cat -a base-m1s.img
>> /etc/network/interfaces
>> # This file describes the network interfaces available on your system
>> # and how to activate them. For more information, see interfaces(5).
>>
>> # The loopback network interface
>> auto lo
>> iface lo inet loopback
>>
>> # The primary network interface
>> auto eth0
>> iface eth0 inet static
>> address 10.0.0.95
>> netmask 255.128.0.0
>> network 10.0.0.0
>> broadcast 10.127.255.255
>> gateway 10.0.0.1
>> # dns-* options are implemented by the resolvconf package, if
>> installed
>> dns-nameservers 10.0.0.1
>> dns-search defaultdomain
>>
>> *Opennebula Image template *
>> weblab at metrics:~/ubuntu-kvm$ cat base-m1s.image.template
>> NAME = "base-m1.small - qcow"
>> PATH = /home/weblab/ubuntu-kvm/base-m1s.qcow2
>> TYPE = OS
>> DRIVER = qcow2
>>
>> sudo -u oneadmin oneimage create base-m1s.image.template -d default
>> sudo -u oneadmin oneimage show 18
>> ..
>> DEV_PREFIX="hd"
>> DRIVER = qcow2
>>
>> *Opennebula VM template*
>> weblab at metrics:~/ubuntu-kvm$ cat base-m1s.vm.template
>> NAME = vm3-on-qcow
>> CPU = 0.6
>> MEMORY = 512
>>
>> OS = [ ARCH = x86_64,
>> BOOT = hd,
>> ROOT = sda1,
>> KERNEL = /vmlinuz,
>> INITRD = /initrd.img,
>> KERNEL_CMD = "ro console=tty1 console=ttyS0,115200n8" ]
>>
>> DISK = [ IMAGE_ID = 18,
>> DRIVER = qcow2,
>> READONLY = no ]
>>
>> NIC = [ NETWORK_ID = 14 ]
>>
>> FEATURES = [ acpi = yes ]
>>
>> REQUIREMENTS = "FALSE"
>>
>>
>> CONTEXT = [
>> HOSTNAME = "$NAME",
>> IP_PUBLIC = "$NIC[IP]",
>> DNS = "$NETWORK[DNS, NETWORK_ID=9]",
>> GATEWAY = "$NETWORK[GATEWAY, NETWORK_ID=9]",
>> NETMASK = "$NETWORK[NETWORK_MASK, NETWORK_ID=9]",
>> FILES = "/tmp/ttylinux/init.sh /tmp/ttylinux/id_rsa.pub",
>> ROOT_PUBKEY = "id_rsa.pub" ]
>>
>> RAW = [ type = "kvm",
>> data = "<devices><serial type=\"pty\"><target
>> port=\"0\"/></serial><console type=\"pty\"><target port=\"0\"
>> type=\"serial\"/></console></devices>" ]
>>
>>
>> *Show*
>> -----------
>> sudo -u oneadmin onevm show 77
>> VIRTUAL MACHINE TEMPLATE
>> CONTEXT=[
>> DISK_ID="1",
>> FILES="/tmp/ttylinux/init.sh /tmp/ttylinux/id_rsa.pub",
>> HOSTNAME="vm3-on-qcow",
>> IP_PUBLIC="10.0.0.95",
>> ROOT_PUBKEY="id_rsa.pub",
>> TARGET="hdb" ]
>> CPU="0.6"
>> DISK=[
>> CLONE="YES",
>> DATASTORE="default",
>> DATASTORE_ID="1",
>> DEV_PREFIX="hd",
>> DISK_ID="0",
>> DRIVER="qcow2",
>> IMAGE="base-m1.small - qcow",
>> IMAGE_ID="18",
>> READONLY="NO",
>> SAVE="NO",
>> SOURCE="/var/lib/one/datastores/1/3fdc724b56b20346ed18687e677d6ae8",
>> TARGET="hda",
>> TM_MAD="ssh",
>> TYPE="FILE" ]
>> FEATURES=[
>> ACPI="yes" ]
>> MEMORY="512"
>> NAME="vm3-on-qcow"
>> NIC=[
>> BRIDGE="br0",
>> IP="10.0.0.95",
>> MAC="02:00:0a:00:00:5f",
>> NETWORK="m1 network",
>> NETWORK_ID="14",
>> VLAN="NO" ]
>> OS=[
>> ARCH="x86_64",
>> BOOT="hd",
>> INITRD="/initrd.img",
>> KERNEL="/vmlinuz",
>> KERNEL_CMD="ro console=tty1 console=ttyS0,115200n8",
>> ROOT="sda1" ]
>> RAW=[
>> DATA="<devices><serial type=\"pty\"><target
>> port=\"0\"/></serial><console type=\"pty\"><target port=\"0\"
>> type=\"serial\"/></console></devices>",
>> TYPE="kvm" ]
>> REQUIREMENTS="FALSE"
>> VMID="77"
>>
>> *Checking with virsh on nodehost*
>> frontend >> ssh nodehost
>> nodehost >> sudo virsh --connect qemu:///system
>>
>> virsh # list --all
>> Id Name State
>> ----------------------------------
>> 11 one-77 running
>> - vm3 shut off
>>
>> virsh # ttyconsole 11
>> /dev/pts/0
>>
>> virsh # console 11
>> Connected to domain one-77
>> Escape character is ^]
>> (--- Press Enter)
>> Ubuntu 12.04.1 LTS vm3 ttyS0
>>
>> vm3 login:
>> ...
>> (-- Press "Ctrl + ]" to log out from the vm back to virsh)
>>
>> virsh # dumpxml 11
>> <domain type='kvm' id='11'>
>> <name>one-77</name>
>> <uuid>9f5fe3e7-5abd-1a45-6df8-84c91fb0af9e</uuid>
>> <memory>524288</memory>
>> <currentMemory>524288</currentMemory>
>> <vcpu>1</vcpu>
>> <cputune>
>> <shares>615</shares>
>> </cputune>
>> <os>
>> <type arch='x86_64' machine='pc-1.0'>hvm</type>
>> <kernel>/vmlinuz</kernel>
>> <initrd>/initrd.img</initrd>
>> <cmdline>root=/dev/sda1 ro console=tty1
>> console=ttyS0,115200n8</cmdline>
>> <boot dev='hd'/>
>> </os>
>> <features>
>> <acpi/>
>> </features>
>> <clock offset='utc'/>
>> <on_poweroff>destroy</on_poweroff>
>> <on_reboot>restart</on_reboot>
>> <on_crash>destroy</on_crash>
>> <devices>
>> <emulator>/usr/bin/kvm</emulator>
>> <disk type='file' device='disk'>
>> <driver name='qemu' type='qcow2'/>
>> <source file='/var/lib/one/datastores/0/77/disk.0'/>
>> <target dev='hda' bus='ide'/>
>> <alias name='ide0-0-0'/>
>> <address type='drive' controller='0' bus='0' unit='0'/>
>> </disk>
>> <disk type='file' device='cdrom'>
>> <driver name='qemu' type='raw'/>
>> <source file='/var/lib/one/datastores/0/77/disk.1'/>
>> <target dev='hdb' bus='ide'/>
>> <readonly/>
>> <alias name='ide0-0-1'/>
>> <address type='drive' controller='0' bus='0' unit='1'/>
>> </disk>
>> <controller type='ide' index='0'>
>> <alias name='ide0'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
>> function='0x1'/>
>> </controller>
>> <interface type='bridge'>
>> <mac address='02:00:0a:00:00:5f'/>
>> <source bridge='br0'/>
>> <target dev='vnet0'/>
>> <alias name='net0'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
>> function='0x0'/>
>> </interface>
>> <serial type='pty'>
>> <source path='/dev/pts/0'/>
>> <target port='0'/>
>> <alias name='serial0'/>
>> </serial>
>> <console type='pty' tty='/dev/pts/0'>
>> <source path='/dev/pts/0'/>
>> <target type='serial' port='0'/>
>> <alias name='serial0'/>
>> </console>
>> <memballoon model='virtio'>
>> <alias name='balloon0'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
>> function='0x0'/>
>> </memballoon>
>> </devices>
>> </domain>
>>
>>
>>
>> On Fri, Nov 2, 2012 at 1:50 PM, Jaime Melis <j.melis at gmail.com> wrote:
>>
>>> Hello,
>>>
>>> I believe you are affected by the bug that incorrectly maps the context
>>> cdroms (in your log both -drive lines point at disk.0; the context cdrom
>>> should be disk.1). I recommend you update to 3.8.1, where this bug is fixed.
>>>
>>> More info on the problem: http://dev.opennebula.org/issues/1594
>>>
>>> cheers,
>>> Jaime
>>>
>>>
>>> On Fri, Nov 2, 2012 at 11:05 AM, Xasima <xasima at gmail.com> wrote:
>>>
>>>> Hello. I have some problems booting a cloud-based Ubuntu. There are two
>>>> ubuntu 12.04 servers (front-end and node) with opennebula upgraded to 3.8.
>>>> I have successfully deployed opennebula-ttylinux with qemu / kvm as a
>>>> first try. I now want to deploy an already prepared EC2-compatible image
>>>> of a recent ubuntu.
>>>>
>>>> Actually the image and VM deploy with no errors (logs are ok), but the VM
>>>> doesn't consume CPU at all. I think it doesn't boot properly.
>>>>
>>>> *> sudo -u oneadmin onevm list*
>>>> ID USER     GROUP    NAME            STAT UCPU  UMEM HOST       TIME
>>>> 61 oneadmin oneadmin ttylinux        runn    6   64M metrics-ba 0d 01h29
>>>> 62 oneadmin oneadmin ubuntu-cloud64- runn   *0* 512M metrics-ba 0d 00h10
>>>>
>>>> The only thing that seems strange to me in the logs is the drive mapping
>>>> (available from the libvirt-qemu log on the node).
>>>>
>>>> *> ssh node && cat /var/log/libvirt/qemu/one-62.log*
>>>> 2012-11-02 09:16:56.096+0000: starting up
>>>> LC_ALL=C
>>>> PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
>>>> /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 512 -smp
>>>> 1,sockets=1,cores=1,threads=1 -name one-62 -uuid
>>>> 2c15ca04-7d5f-ab4c-8bdb-43d2add1a2fe -nographic -nodefconfig -nodefaults
>>>> -chardev
>>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-62.monitor,server,nowait
>>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
>>>> -kernel /vmlinuz -initrd /initrd.img *-drive
>>>> file=/var/lib/one/datastores/0/62/disk.0,if=none,id=drive-ide0-0-0,format=qcow2
>>>> * -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0
>>>> -drive *
>>>> file=/var/lib/one/datastores/0/62/disk.0,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
>>>> * -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
>>>> -netdev tap,fd=19,id=hostnet0 -device
>>>> rtl8139,netdev=hostnet0,id=net0,mac=02:00:0a:00:00:5d,bus=pci.0,addr=0x3
>>>> -usb -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
>>>>
>>>> Could anyone help determine the cause of the failure and how to resolve
>>>> it?
>>>>
>>>> -----------------
>>>> Here is the full information on my steps
>>>>
>>>> *1. Image specific information*
>>>> Downloaded
>>>> http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img
>>>> to the front-end. The manifest and ovf are available at
>>>> http://cloud-images.ubuntu.com/releases/precise/release/ as well, to check
>>>> what is installed on the image.
>>>>
>>>> *2. Image file format information*
>>>> > *qemu-img info precise-server-cloudimg-amd64-disk1.img*
>>>> image: precise-server-cloudimg-amd64-disk1.img
>>>> file format: qcow2
>>>> virtual size: 2.0G (2147483648 bytes)
>>>> disk size: 222M
>>>> cluster_size: 65536
>>>>
>>>> *3. Content of the image*
>>>> Using *qemu-img convert (to raw) && kpartx -a -v precise...img && mount
>>>> /dev/mapper/loop1p1 /mnt/*
>>>> I verified the content of the image
>>>> *> ls /mnt/*
>>>> bin dev home lib lost+found mnt proc run selinux sys
>>>> usr *vmlinuz*
>>>> boot etc *initrd.img* lib64 media opt root sbin srv
>>>> tmp var
>>>>
>>>> *> cat /mnt/etc/fstab*
>>>> LABEL=cloudimg-rootfs / ext4 defaults 0 0
>>>>
>>>> *> umount && kpartx -d*
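>>>>
>>>> Spelled out, the inspection steps were roughly (a sketch; the loop device
>>>> number may differ):
>>>>
>>>> qemu-img convert -O raw precise-server-cloudimg-amd64-disk1.img precise.raw
>>>> sudo kpartx -a -v precise.raw        # maps partitions to /dev/mapper/loopXpY
>>>> sudo mount /dev/mapper/loop1p1 /mnt
>>>> sudo umount /mnt && sudo kpartx -d precise.raw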
>>>>
>>>> 4. *Opennebula Image template*
>>>> * > cat 64base-image.one*
>>>> NAME = ubuntu-cloud64-qcow2
>>>> PATH = "/tmp/ttylinux/precise-server-cloudimg-amd64-disk1.img"
>>>> TYPE = OS
>>>> FSTYPE= "qcow2"
>>>>
>>>> The state of the image in opennebula
>>>> *> sudo -u oneadmin oneimage show 12*
>>>> IMAGE 12 INFORMATION
>>>> ID : 12
>>>> NAME : ubuntu-cloud64-qcow2
>>>> USER : oneadmin
>>>> GROUP : oneadmin
>>>> DATASTORE : default
>>>> TYPE : OS
>>>> REGISTER TIME : 11/02 12:04:47
>>>> PERSISTENT : No
>>>> SOURCE :
>>>> /var/lib/one/datastores/1/a4d9b6af3313f826d9113b4e3b0ac25b
>>>> PATH : /tmp/ttylinux/precise-server-cloudimg-amd64-disk1.img
>>>> SIZE : 223M
>>>> STATE : used
>>>> RUNNING_VMS : 1
>>>>
>>>> PERMISSIONS
>>>> OWNER : um-
>>>> GROUP : ---
>>>> OTHER : ---
>>>>
>>>> IMAGE TEMPLATE
>>>> DEV_PREFIX="hd"
>>>> FSTYPE="qcow2"
>>>>
>>>> 5. *Opennebula VM template*
>>>> *> cat 64base.one*
>>>> NAME = ubuntu-cloud64-on-qcow2
>>>> CPU = 0.6
>>>> MEMORY = 512
>>>>
>>>> OS = [ ARCH = x86_64,
>>>> BOOT = hd,
>>>> KERNEL = /vmlinuz,
>>>> INITRD = /initrd.img ]
>>>>
>>>> DISK = [ IMAGE_ID = 12,
>>>> DRIVER = qcow2,
>>>> TYPE = disk,
>>>> READONLY = no ]
>>>>
>>>> NIC = [ NETWORK_ID = 9 ]
>>>>
>>>> FEATURES = [ acpi = yes ]
>>>>
>>>> REQUIREMENTS = "FALSE"
>>>>
>>>> CONTEXT = [
>>>> HOSTNAME = "$NAME",
>>>> IP_PUBLIC = "$NIC[IP]",
>>>> DNS = "$NETWORK[DNS, NETWORK_ID=9]",
>>>> GATEWAY = "$NETWORK[GATEWAY, NETWORK_ID=9]",
>>>> NETMASK = "$NETWORK[NETWORK_MASK, NETWORK_ID=9]",
>>>> FILES = "/tmp/ttylinux/init.sh /tmp/ttylinux/id_rsa.pub",
>>>> TARGET = "hdc",
>>>> ROOT_PUBKEY = "id_rsa.pub"
>>>> ]
>>>>
>>>> 6. *Log of VM deployment (on front-end) *
>>>> *> sudo -u oneadmin onevm deploy 62 5*
>>>> *> tail -f /var/log/one/62.log*
>>>> Fri Nov 2 12:11:01 2012 [DiM][I]: New VM state is ACTIVE.
>>>> Fri Nov 2 12:11:02 2012 [LCM][I]: New VM state is PROLOG.
>>>> Fri Nov 2 12:17:05 2012 [TM][I]: clone: Cloning
>>>> metrics:/var/lib/one/datastores/1/a4d9b6af3313f826d9113b4e3b0ac25b in
>>>> /var/lib/one/datastores/0/62/disk.0
>>>> Fri Nov 2 12:17:05 2012 [TM][I]: ExitCode: 0
>>>> Fri Nov 2 12:17:09 2012 [TM][I]: context: Generating context block
>>>> device at metrics-backend:/var/lib/one/datastores/0/62/disk.1
>>>> Fri Nov 2 12:17:09 2012 [TM][I]: ExitCode: 0
>>>> Fri Nov 2 12:17:09 2012 [LCM][I]: New VM state is BOOT
>>>> Fri Nov 2 12:17:09 2012 [VMM][I]: Generating deployment file:
>>>> /var/lib/one/62/deployment.0
>>>> Fri Nov 2 12:17:11 2012 [VMM][I]: ExitCode: 0
>>>> Fri Nov 2 12:17:11 2012 [VMM][I]: Successfully execute network driver
>>>> operation: pre.
>>>> Fri Nov 2 12:17:13 2012 [VMM][I]: ExitCode: 0
>>>> Fri Nov 2 12:17:13 2012 [VMM][I]: Successfully execute virtualization
>>>> driver operation: deploy.
>>>> Fri Nov 2 12:17:13 2012 [VMM][I]: ExitCode: 0
>>>> Fri Nov 2 12:17:13 2012 [VMM][I]: Successfully execute network driver
>>>> operation: post.
>>>> Fri Nov 2 12:17:13 2012 [LCM][I]: New VM state is RUNNING
>>>>
>>>> *> ssh node && cat /var/log/libvirt/qemu/one-62.log*
>>>> 2012-11-02 09:16:56.096+0000: starting up
>>>> LC_ALL=C
>>>> PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
>>>> /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 512 -smp
>>>> 1,sockets=1,cores=1,threads=1 -name one-62 -uuid
>>>> 2c15ca04-7d5f-ab4c-8bdb-43d2add1a2fe -nographic -nodefconfig -nodefaults
>>>> -chardev
>>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-62.monitor,server,nowait
>>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
>>>> -kernel /vmlinuz -initrd /initrd.img -drive
>>>> file=/var/lib/one/datastores/0/62/disk.0,if=none,id=drive-ide0-0-0,format=qcow2
>>>> -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive
>>>> file=/var/lib/one/datastores/0/62/disk.0,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
>>>> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
>>>> tap,fd=19,id=hostnet0 -device
>>>> rtl8139,netdev=hostnet0,id=net0,mac=02:00:0a:00:00:5d,bus=pci.0,addr=0x3
>>>> -usb -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
>>>>
>>>> 7. *Status of the machine on opennebula*
>>>> *> sudo -u oneadmin onevm list*
>>>> ID USER     GROUP    NAME            STAT UCPU  UMEM HOST       TIME
>>>> 61 oneadmin oneadmin ttylinux        runn    6   64M metrics-ba 0d 01h29
>>>> 62 oneadmin oneadmin ubuntu-cloud64- runn   *0* 512M metrics-ba 0d 00h10
>>>>
>>>> *> sudo -u oneadmin onevm show 62*
>>>> VIRTUAL MACHINE 62 INFORMATION
>>>> ID : 62
>>>> NAME : ubuntu-cloud64-on-qcow2
>>>> USER : oneadmin
>>>> GROUP : oneadmin
>>>> STATE : ACTIVE
>>>> LCM_STATE : RUNNING
>>>> RESCHED : No
>>>> HOST : metrics-backend
>>>> START TIME : 11/02 12:08:37
>>>> END TIME : -
>>>> DEPLOY ID : one-62
>>>>
>>>> VIRTUAL MACHINE MONITORING
>>>> USED CPU : 0
>>>> NET_RX : 1M
>>>> USED MEMORY : 512M
>>>> NET_TX : 0K
>>>>
>>>> PERMISSIONS
>>>> OWNER : um-
>>>> GROUP : ---
>>>> OTHER : ---
>>>>
>>>> VIRTUAL MACHINE TEMPLATE
>>>> CONTEXT=[
>>>> DISK_ID="1",
>>>> DNS="10.0.0.20",
>>>> FILES="/tmp/ttylinux/init.sh /tmp/ttylinux/id_rsa.pub",
>>>> GATEWAY="10.0.0.1",
>>>> HOSTNAME="ubuntu-cloud64-on-qcow2",
>>>> IP_PUBLIC="10.*.*.*" ,
>>>> NETMASK="255.128.0.0",
>>>> ROOT_PUBKEY="id_rsa.pub",
>>>> TARGET="hdc" ]
>>>> CPU="0.6"
>>>> DISK=[
>>>> CLONE="YES",
>>>> DATASTORE="default",
>>>> DATASTORE_ID="1",
>>>> DEV_PREFIX="hd",
>>>> DISK_ID="0",
>>>> DRIVER="qcow2",
>>>> IMAGE="ubuntu-cloud64-qcow2",
>>>> IMAGE_ID="12",
>>>> READONLY="NO",
>>>> SAVE="NO",
>>>> SOURCE="/var/lib/one/datastores/1/a4d9b6af3313f826d9113b4e3b0ac25b",
>>>> TARGET="hda",
>>>> TM_MAD="ssh",
>>>> TYPE="FILE" ]
>>>> FEATURES=[
>>>> ACPI="yes" ]
>>>> MEMORY="512"
>>>> NAME="ubuntu-cloud64-on-qcow2"
>>>> NIC=[
>>>> BRIDGE="br0",
>>>> IP="10.*.*.*",
>>>> MAC="02:00:0a:00:00:5d",
>>>> NETWORK="Server 10.0.0.x network with br0",
>>>> NETWORK_ID="9",
>>>> VLAN="NO" ]
>>>> OS=[
>>>> ARCH="x86_64",
>>>> BOOT="hd",
>>>> INITRD="/initrd.img",
>>>> KERNEL="/vmlinuz" ]
>>>> REQUIREMENTS="FALSE"
>>>> VMID="62"
>>>>
>>>> VIRTUAL MACHINE HISTORY
>>>> SEQ HOST REASON START TIME
>>>> PROLOG_TIME
>>>> 0 metrics-backend none 11/02 12:11:01 0d 00h28m06s 0d
>>>> 00h06m08s
>>>>
>>>>
>>>> Thank you.
>>>> --
>>>> Best regards,
>>>> ~ Xasima ~
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> Best regards,
>> ~ Xasima ~
>>
>
>
>
> --
> Best regards,
> ~ Xasima ~
>
>
>
--
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jmelis at opennebula.org