[one-users] OpenNebula 3.4 with Clustered LVM
Jaime Melis
jmelis at opennebula.org
Mon May 14 06:56:48 PDT 2012
Hi Yves,
I'm glad it finally worked out. I've added it to the OpenNebula 3.4
documentation:
http://opennebula.org/documentation:rel3.4:lvm
Thanks for your input!
cheers,
Jaime
On Sat, May 12, 2012 at 4:56 PM, Vogl, Yves <vogl at adesso-mobile.de> wrote:
> Hi Jaime,
>
> awesome - now it works. It's totally awesome how fast I can deploy a VM
> now.
> You made me so happy, dude!
>
> Thanks a lot and keep up the great work!
>
> Yves
> ------------------------------
> From: Jaime Melis [jmelis at opennebula.org]
> Sent: Friday, 11 May 2012 17:26
> To: Vogl, Yves
> Cc: users at lists.opennebula.org
> Subject: Re: [one-users] OpenNebula 3.4 with Clustered LVM
>
> Hi,
>
> thanks for your quick response.
>
> No worries, I'm interested in seeing them working in OpenNebula 3.4.
>
> Have you enabled the LVM drivers in oned.conf? (You will need to restart
> one.)
>
> DATASTORE_MAD = [
>     executable = "one_datastore",
>     arguments  = "-t 15 -d fs,vmware,iscsi,lvm"
> ]
>
> TM_MAD = [
>     executable = "one_tm",
>     arguments  = "-t 15 -d dummy,lvm,shared,qcow2,ssh,vmware,iscsi"
> ]
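>
> For reference, restarting it as oneadmin is usually just the following
> (or via your init scripts, depending on how OpenNebula was installed):
>
> $ one stop
> $ one start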
>
> cheers,
> Jaime
>
> On Fri, May 11, 2012 at 5:18 PM, Vogl, Yves <vogl at adesso-mobile.de> wrote:
>
>> Hi,
>>
>> thanks for your quick response.
>>
>> Now there's something going on after creating an image. It fails with
>> an error. I'll investigate this and get back to you soon with results :)
>>
>> Error copying image in the repository: Datastore driver 'lvm' not
>> available
>>
>>
>>
>>
>>
>> On 11.05.2012, at 17:14, Jaime Melis wrote:
>>
>> Hi Yves,
>>
>> in the image template (im.conf) use PATH = /var/lib/one/Master.raw
>> instead of SOURCE = /var/lib/one/Master.raw.
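>>
>> i.e. the image template would look roughly like this:
>>
>> NAME        = "kvm02"
>> PATH        = /var/lib/one/Master.raw
>> TYPE        = OS
>> BUS         = virtio
>> DRIVER      = raw
>> DESCRIPTION = "CentOS 6.2 64-Bit LVM Snapshot Master"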
>>
>> The rest looks fine.
>>
>> cheers,
>> Jaime
>>
>> On Fri, May 11, 2012 at 5:10 PM, Vogl, Yves <vogl at adesso-mobile.de>
>> wrote:
>>
>> Hi Jaime,
>>
>>
>> thanks for your patience. I'm still failing :-( I've attached all the
>> steps I've done and simplified my setup for this example.
>>
>>
>> Two servers:
>>
>> one01 => Running OpenNebula 3.4
>> kvm02 => Running Linux KVM
>>
>>
>> Both systems are members of an LVM cluster.
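>>
>> (A quick way to double-check that the volume group is really clustered
>> is to look at its attributes, e.g.:
>>
>> $ sudo vgs -o vg_name,vg_attr vg1
>>
>> the attribute string should end in "c" for a clustered VG.)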
>>
>>
>> Now my steps:
>>
>>
>> 1. Create a datastore on one01
>>
>>
>> sudo -uoneadmin onedatastore create ds.conf
>>
>> ds.conf:
>>
>> NAME    = kvm02
>> DS_MAD  = lvm
>> TM_MAD  = lvm
>> VG_NAME = vg1
>> HOST    = kvm02
>>
>> $ onedatastore list
>>   ID NAME            CLUSTER  IMAGES TYPE TM
>>    0 system          -             0 fs   ssh
>>    1 default         -             0 fs   ssh
>>  123 kvm02           -             0 lvm  lvm
>>
>> DATASTORE 123 INFORMATION
>> ID        : 123
>> NAME      : kvm02
>> USER      : oneadmin
>> GROUP     : oneadmin
>> CLUSTER   : -
>> DS_MAD    : lvm
>> TM_MAD    : lvm
>> BASE PATH : /var/lib/one/datastores/123
>>
>> PERMISSIONS
>> OWNER     : um-
>> GROUP     : u--
>> OTHER     : ---
>>
>> DATASTORE TEMPLATE
>> DS_MAD="lvm"
>> HOST="kvm02.example.org"
>> TM_MAD="lvm"
>> VG_NAME="vg1"
>>
>> IMAGES
>> 23
>>
>> 2. Create an image template
>>
>> sudo -uoneadmin oneimage create im.conf -d kvm02
>>
>> im.conf:
>>
>> NAME        = "kvm02"
>> SOURCE      = /var/lib/one/Master.raw
>> TYPE        = OS
>> BUS         = virtio
>> DRIVER      = raw
>> DESCRIPTION = "CentOS 6.2 64-Bit LVM Snapshot Master"
>>
>> Master.raw exists.
>>
>>
>>
>> $ oneimage list
>>   ID USER     GROUP    NAME       DATASTORE    SIZE TYPE PER STAT RVMS
>>   23 oneadmin oneadmin kvm02      kvm02          0M OS    No rdy     0
>>
>> $ oneimage show kvm02
>> IMAGE 23 INFORMATION
>> ID            : 23
>> NAME          : kvm02
>> USER          : oneadmin
>> GROUP         : oneadmin
>> DATASTORE     : kvm02
>> TYPE          : OS
>> REGISTER TIME : 05/11 16:44:10
>> PERSISTENT    : No
>> SOURCE        : /var/lib/one/Master.raw
>> SIZE          : 0
>> STATE         : rdy
>> RUNNING_VMS   : 0
>>
>> PERMISSIONS
>> OWNER         : um-
>> GROUP         : ---
>> OTHER         : ---
>>
>> IMAGE TEMPLATE
>> BUS="virtio"
>> DESCRIPTION="CentOS 6.2 64-Bit LVM Snapshot Master"
>> DEV_PREFIX="hd"
>> DRIVER="raw"
>>
>> At this point, nothing has happened yet: OpenNebula neither created a
>> logical volume nor dumped the image, even though the image is shown as
>> ready.
>>
>> Is that the correct behaviour?
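>>
>> A quick way to verify from the front-end is to list the logical volumes
>> in the volume group on the host, e.g.:
>>
>> $ ssh kvm02 sudo lvs vg1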
>>
>>
>> By the way... the same thing is true for virtual networks. When I
>> created a virtual network in OpenNebula I also had to create the
>> corresponding virtual network on the KVM host manually.
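>>
>> For a plain Linux bridge setup that basically means something like the
>> following on the KVM host (br0 is the bridge used by the virtual network
>> below, eth0 stands in for whatever the physical interface is):
>>
>> $ sudo brctl addbr br0
>> $ sudo brctl addif br0 eth0
>> $ sudo ip link set br0 up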
>>
>>
>> Maybe there's something wrong at this point?
>>
>>
>> Anyway... I went on and created a VM template like this:
>>
>>
>> sudo -uoneadmin onetemplate create vm.conf
>>
>> vm.conf:
>>
>> CPU="100"
>> DISK=[
>>   BUS="virtio",
>>   DRIVER="raw",
>>   IMAGE="kvm02",
>>   IMAGE_UNAME="oneadmin" ]
>> FEATURES=[
>>   PAE="no" ]
>> MEMORY="1024"
>> NAME="kvm02"
>> NIC=[
>>   MODEL="virtio",
>>   NETWORK="kvm02",
>>   NETWORK_UNAME="oneadmin" ]
>> OS=[
>>   ARCH="x86_64",
>>   BOOT="hd" ]
>> RAW=[
>>   TYPE="kvm" ]
>> VCPU="1"
>>
>> $ onetemplate list
>>   ID USER     GROUP    NAME         REGTIME
>>   27 oneadmin oneadmin kvm02        05/11 16:59:15
>>
>>
>> $ onetemplate show kvm02
>> TEMPLATE 27 INFORMATION
>> ID            : 27
>> NAME          : kvm02
>> USER          : oneadmin
>> GROUP         : oneadmin
>> REGISTER TIME : 05/11 16:59:15
>>
>> PERMISSIONS
>> OWNER         : um-
>> GROUP         : ---
>> OTHER         : ---
>>
>> TEMPLATE CONTENTS
>> CPU="100"
>> DISK=[
>>   BUS="virtio",
>>   DRIVER="raw",
>>   IMAGE="kvm02",
>>   IMAGE_UNAME="oneadmin" ]
>> FEATURES=[
>>   PAE="no" ]
>> MEMORY="1024"
>> NAME="kvm02"
>> NIC=[
>>   MODEL="virtio",
>>   NETWORK="kvm02",
>>   NETWORK_UNAME="oneadmin" ]
>> OS=[
>>   ARCH="x86_64",
>>   BOOT="hd" ]
>> RAW=[
>>   TYPE="kvm" ]
>> TEMPLATE_ID="27"
>> VCPU="1"
>>
>>
>> Next I try to instantiate a VM from this template:
>>
>> $ onetemplate instantiate kvm02
>> VM ID: 440
>>
>> $ onevm list
>>   ID USER     GROUP    NAME         STAT CPU     MEM        HOSTNAME        TIME
>>  440 oneadmin oneadmin one-440      pend   0      0K                    0d 00:00
>>
>>
>>
>> $ onevm show 440
>> VIRTUAL MACHINE 440 INFORMATION
>> ID         : 440
>> NAME       : one-440
>> USER       : oneadmin
>> GROUP      : oneadmin
>> STATE      : PENDING
>> LCM_STATE  : LCM_INIT
>> START TIME : 05/11 17:03:11
>> END TIME   : -
>> DEPLOY ID  : -
>>
>> VIRTUAL MACHINE MONITORING
>> USED CPU    : 0
>> NET_RX      : 0
>> USED MEMORY : 0
>> NET_TX      : 0
>>
>> PERMISSIONS
>> OWNER : um-
>> GROUP : ---
>> OTHER : ---
>>
>> VIRTUAL MACHINE TEMPLATE
>> CPU="100"
>> DISK=[
>>   BUS="virtio",
>>   CLONE="YES",
>>   DATASTORE="kvm02",
>>   DATASTORE_ID="123",
>>   DISK_ID="0",
>>   DRIVER="raw",
>>   IMAGE="kvm02",
>>   IMAGE_ID="23",
>>   IMAGE_UNAME="oneadmin",
>>   READONLY="NO",
>>   SAVE="NO",
>>   SOURCE="/var/lib/one/Master.raw",
>>   TARGET="hda",
>>   TM_MAD="lvm",
>>   TYPE="DISK" ]
>> FEATURES=[
>>   PAE="no" ]
>> MEMORY="1024"
>> NAME="one-440"
>> NIC=[
>>   BRIDGE="br0",
>>   IP="176.1.1.1",
>>   MAC="02:00:b0:02:a3:01",
>>   MODEL="virtio",
>>   NETWORK="kvm02",
>>   NETWORK_ID="10",
>>   NETWORK_UNAME="oneadmin",
>>   VLAN="NO" ]
>> OS=[
>>   ARCH="x86_64",
>>   BOOT="hd" ]
>> RAW=[
>>   TYPE="kvm" ]
>> TEMPLATE_ID="27"
>> VCPU="1"
>> VMID="440"
>>
>>
>>
>>
>> Now deploy:
>>
>> $ onevm deploy 440 kvm02.example.org
>>
>> It fails instantly:
>>
>> MESSAGE="Error executing image transfer script: Error cloning
>> /dev//var/lib/one/Master/raw to /dev//var/lib/one/Master/raw-440-0"
>>
>>
>>
>> That's the log file:
>>
>> Fri May 11 17:04:41 2012 [DiM][I]: New VM state is ACTIVE.
>> Fri May 11 17:04:42 2012 [LCM][I]: New VM state is PROLOG.
>> Fri May 11 17:04:42 2012 [VM][I]: Virtual Machine has no context
>> Fri May 11 17:04:42 2012 [TM][I]: Command execution fail: /var/lib/one/remotes/tm/lvm/clone one01:/var/lib/one/Master.raw kvm02.example.org:/var/lib/one//datastores/0/440/disk.0
>> Fri May 11 17:04:42 2012 [TM][I]: /var/lib/one/remotes/tm/lvm/../../datastore/xpath.rb: unrecognized option `--stdin'
>> Fri May 11 17:04:42 2012 [TM][E]: clone: Command " set -e
>> Fri May 11 17:04:42 2012 [TM][I]: mkdir -p /var/lib/one/datastores/0/440
>> Fri May 11 17:04:42 2012 [TM][I]: sudo lvcreate -n raw-440-0 -L40960 -s /dev//var/lib/one/Master/raw
>> Fri May 11 17:04:42 2012 [TM][I]: ln -s "/dev//var/lib/one/Master/raw-440-0" "/var/lib/one/datastores/0/440/disk.0"" failed: "/dev//var/lib/one/Master/raw": Invalid path for Logical Volume
>> Fri May 11 17:04:42 2012 [TM][I]: The origin name should include the volume group.
>> Fri May 11 17:04:42 2012 [TM][I]: Run `lvcreate --help' for more information.
>> Fri May 11 17:04:42 2012 [TM][E]: Error cloning /dev//var/lib/one/Master/raw to /dev//var/lib/one/Master/raw-440-0
>> Fri May 11 17:04:42 2012 [TM][I]: ExitCode: 3
>> Fri May 11 17:04:42 2012 [TM][E]: Error executing image transfer script: Error cloning /dev//var/lib/one/Master/raw to /dev//var/lib/one/Master/raw-440-0
>> Fri May 11 17:04:42 2012 [DiM][I]: New VM state is FAILED
>>
>>
>>
>>
>> First thing that's wrong:
>>
>> # sudo lvcreate -n raw-440-0 -L40960 -s /dev//var/lib/one/Master/raw
>>
>> It should be:
>>
>> lvcreate -n raw-440-0 -L40960 -s /dev/vg1/master
>>
>> The next issue:
>>
>> # ln -s "/dev//var/lib/one/Master/raw-440-0" "/var/lib/one/datastores/0/440/disk.0""
>>
>> It should be:
>>
>> ln -s /dev/vg1/raw-440-0 /var/lib/one/datastores/0/440/disk.0
>>
>>
>>
>>
>>
>>
>> Cheers, Yves
>>
>>
>>
>>
>>
>> On 11.05.2012, at 14:19, Jaime Melis wrote:
>>
>>
>> Hi Yves,
>>
>>
>> no sorry, I didn't explain myself correctly. The LVM workflow is as
>> follows:
>>
>> You have an already existing file containing an image (in your case
>> Master.qcow2, although for these drivers it needs to be in RAW format).
>> When you do 'oneimage create' with PATH (instead of SOURCE) pointing to
>> the file, the drivers will automatically do what you did manually:
>> create a new LV of the appropriate size and dump the file into it
>> (using 'dd').
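>>
>> Under the hood that is roughly equivalent to something like this (just a
>> sketch, the LV name and size are filled in by the driver):
>>
>> # create an LV of the image's size in the datastore's volume group
>> sudo lvcreate -L <size>M -n <lv_name> vg1
>> # dump the source file into it
>> sudo dd if=/var/lib/one/Master.raw of=/dev/vg1/<lv_name> bs=64k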
>>
>>
>> Once it's created, you can instantiate VMs with that image. If the
>> image is persistent, the VM will access the LV directly. If it's not
>> persistent, the drivers will create a new LV as a snapshot of the
>> original image and the VM will access that one, so the copy is created
>> almost instantaneously. If you execute saveas on that image, it will be
>> saved to a new image so that you don't lose your changes.
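>>
>> In CLI terms that is something like this (from memory, so double-check
>> the exact syntax):
>>
>> $ oneimage persistent <image_id>       # VMs use the LV directly
>> $ oneimage nonpersistent <image_id>    # each VM gets its own snapshot LV
>> $ onevm saveas <vm_id> <disk_id> "name-for-the-new-image"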
>>
>>
>> I hope I explained it better this time.
>>
>>
>> Cheers,
>>
>> Jaime
>>
>>
>> On Fri, May 11, 2012 at 1:37 PM, Vogl, Yves <vogl at adesso-mobile.de>
>> wrote:
>>
>> Hi Jaime,
>>
>>
>> thanks for pointing out a solution to the sudo error. This was clear to
>> me from the well-documented instructions - I think there's no need to
>> improve them. I just forgot to do this step.
>>
>>
>> On 11.05.2012, at 13:31, Jaime Melis wrote:
>>
>>
>> About the way you registered the image: that's not the way it's usually
>> meant to be done. You should use PATH=/path/to/Master.raw instead of
>> setting SOURCE directly.
>>
>>
>> So if I do not want to deal with images at all, I'd have to create a
>> "pure snapshot" driver myself?
>>
>>
>> The idea is to populate a master image as a clustered LVM volume and to
>> get faster deployment by just snapshotting the logical volume.
>>
>> If I have to use an image after creating a logical volume, there's no
>> need to snapshot anything beforehand.
>>
>>
>> Yves
>>
>>
>>
>>
>>
>>
>>
>> --
>> Jaime Melis
>> Project Engineer
>> OpenNebula - The Open Source Toolkit for Cloud Computing
>> www.OpenNebula.org | jmelis at opennebula.org
>>
>>
>>
>>
>>
>> --
>> Jaime Melis
>> Project Engineer
>> OpenNebula - The Open Source Toolkit for Cloud Computing
>> www.OpenNebula.org | jmelis at opennebula.org
>>
>>
>>
>
>
> --
> Jaime Melis
> Project Engineer
> OpenNebula - The Open Source Toolkit for Cloud Computing
> www.OpenNebula.org | jmelis at opennebula.org
>
--
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jmelis at opennebula.org