[one-users] Error with TM_LVM

Anoop L anpl1980 at gmail.com
Tue Feb 22 19:31:20 PST 2011


Hi Jaime,

Thank you for the patch.

It all went well and I launched the second VM. The error I mentioned was due
to the VNC configuration in the VM template: I had used the same VNC port for
the second VM.

TM_LVM works fine now and it creates the LV with VMID.

I have another question: how can I force the on-the-fly SWAP and other disks
to be created on LVM as well?

Once again thanks for the help.

Thanks,
Anoop

On Tue, Feb 22, 2011 at 9:02 AM, Anoop L <anpl1980 at gmail.com> wrote:

> Thanks Jaime.
>
> I have edited the  tm_lvmrc and now I get this error on launching a second
> VM:
>
> Tue Feb 22 08:51:33 2011 [LCM][I]: New VM state is BOOT
> Tue Feb 22 08:51:33 2011 [VMM][I]: Generating deployment file:
> /opt/cloud/one/var/87/deployment.0
> Tue Feb 22 08:53:21 2011 [VMM][I]: Command execution fail: 'if [ -x
> "/var/tmp/one/vmm/xen/deploy" ]; then /var/tmp/one/vmm/xen/deploy
> /opt/cloud/one/var//87/images/deployment.0;
> else                              exit 42; fi'
> Tue Feb 22 08:53:21 2011 [VMM][I]: STDERR follows.
> Tue Feb 22 08:53:21 2011 [VMM][I]: Error: Device 0 (vkbd) could not be
> connected. Hotplug scripts not working.
> Tue Feb 22 08:53:21 2011 [VMM][I]: ExitCode: 1
> Tue Feb 22 08:53:21 2011 [VMM][E]: Error deploying virtual machine: Error:
> Device 0 (vkbd) could not be connected. Hotplug scripts not working.
>
>
> To test everything from scratch I have deleted the first VM and trying it
> again.
>
> Thanks,
> Anoop
>
>
> On Mon, Feb 21, 2011 at 10:08 PM, Jaime Melis <jmelis at opennebula.org> wrote:
>
>> Hi Anoop,
>>
>> we have figured out what's wrong. A while back we changed the $SED
>> variable to use the '-r' option (extended regexps) by default, but that
>> change broke the sed script which parses the VM ID that is passed to the
>> tm_clone script. We have reported a bug and submitted a patch:
>> http://dev.opennebula.org/issues/496
>>
>> In order to fix it manually, edit "$ONE_LOCATION/etc/tm_lvm/tm_lvmrc" and
>> change this line:
>>     echo $1 |$SED -e 's%^.*/\([^/]*\)/images.*$%\1%'
>> to this one:
>>     echo $1 |$SED -e 's%^.*/([^/]*)/images.*$%\1%'
>>
>> The link to the patch:
>> http://dev.opennebula.org/projects/opennebula/repository/revisions/ef738d5d51cefff55f7ee2aec4f7721473ebeb0e/diff/src/tm_mad/lvm/tm_lvmrc
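To illustrate the difference between the two expressions, here is a quick
sketch (the sample path is taken from the tm_clone log earlier in this
thread):

```shell
# Sample path in the format tm_clone receives (copied from the log above):
path="ast-wks-348:/opt/cloud/one/var//86/images/disk.0"

# Plain sed (basic regexps): capture groups use escaped parentheses \( \).
echo "$path" | sed -e 's%^.*/\([^/]*\)/images.*$%\1%'      # prints 86

# 'sed -r' (extended regexps): parentheses group unescaped. Under -r the
# escaped form \( matches a literal '(' instead, so the old expression
# produced the "invalid reference \1" error seen in the logs.
echo "$path" | sed -r -e 's%^.*/([^/]*)/images.*$%\1%'     # prints 86
```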
>>
>> Thanks for reporting this!
>>
>> cheers,
>> Jaime
>>
>> On Wed, Feb 16, 2011 at 6:08 PM, Anoop L <anpl1980 at gmail.com> wrote:
>>
>>> sure Jaime.
>>>
>>> I have attached the vm.log as well.
>>>
>>> Vm.log:
>>>
>>> Wed Feb 16 20:49:43 2011 [DiM][I]: New VM state is ACTIVE.
>>> Wed Feb 16 20:49:43 2011 [LCM][I]: New VM state is PROLOG.
>>> Wed Feb 16 20:49:43 2011 [VM][I]: Virtual Machine has no context
>>> Wed Feb 16 20:49:54 2011 [TM][I]: Command execution fail:
>>> /opt/cloud/one/lib/tm_commands/lvm/tm_clone.sh
>>> ast462:/opt/cloud/one/var//images/fd34675a5656d3b5b92e01b11dbae9c819e5f4c4
>>> ast-wks-348:/opt/cloud/one/var//86/images/disk.0
>>> Wed Feb 16 20:49:54 2011 [TM][I]: STDERR follows.
>>> Wed Feb 16 20:49:54 2011 [TM][I]: /bin/sed: -e expression #1, char 29:
>>> invalid reference \1 on `s' command's RHS
>>> Wed Feb 16 20:49:54 2011 [TM][I]: ERROR MESSAGE --8<------
>>> Wed Feb 16 20:49:54 2011 [TM][I]: Logical volume "lv-one--0" already
>>> exists in volume group "vg00"
>>> Wed Feb 16 20:49:54 2011 [TM][I]: ERROR MESSAGE ------>8--
>>> Wed Feb 16 20:49:54 2011 [TM][I]: ExitCode: 5
>>> Wed Feb 16 20:49:54 2011 [TM][I]: tm_clone.sh:
>>> ast462:/opt/cloud/one/var//images/fd34675a5656d3b5b92e01b11dbae9c819e5f4c4
>>> ast-wks-348:/opt/cloud/one/var//86/images/disk.0
>>> Wed Feb 16 20:49:54 2011 [TM][I]: tm_clone.sh: DST:
>>> /opt/cloud/one/var//86/images/disk.0
>>> Wed Feb 16 20:49:54 2011 [TM][I]: tm_clone.sh: Creating directory
>>> /opt/cloud/one/var//86/images
>>> Wed Feb 16 20:49:54 2011 [TM][I]: tm_clone.sh: Executed "/usr/bin/ssh
>>> ast-wks-348 mkdir -p /opt/cloud/one/var//86/images".
>>> Wed Feb 16 20:49:54 2011 [TM][I]: tm_clone.sh: Creating LV lv-one--0
>>> Wed Feb 16 20:49:54 2011 [TM][I]: tm_clone.sh: ERROR: Command
>>> "/usr/bin/ssh ast-wks-348 /usr/bin/sudo /sbin/lvcreate -L20G -n lv-one--0
>>> vg00" failed.
>>> Wed Feb 16 20:49:54 2011 [TM][I]: tm_clone.sh: ERROR:   Logical volume
>>> "lv-one--0" already exists in volume group "vg00"
>>> Wed Feb 16 20:49:54 2011 [TM][E]: Error excuting image transfer script:
>>> Logical volume "lv-one--0" already exists in volume group "vg00"
>>> Wed Feb 16 20:49:55 2011 [DiM][I]: New VM state is FAILED
>>> Wed Feb 16 20:50:00 2011 [TM][W]: Ignored: LOG - 86 Command execution
>>> fail: /opt/cloud/one/lib/tm_commands/lvm/tm_delete.sh
>>> ast-wks-348:/opt/cloud/one/var//86/images
>>>
>>> Wed Feb 16 20:50:00 2011 [TM][W]: Ignored: LOG - 86 STDERR follows.
>>>
>>> Wed Feb 16 20:50:00 2011 [TM][W]: Ignored: LOG - 86 /bin/sed: -e
>>> expression #1, char 29: invalid reference \1 on `s' command's RHS
>>>
>>> Wed Feb 16 20:50:00 2011 [TM][W]: Ignored: LOG - 86 ERROR MESSAGE
>>> --8<------
>>>
>>> Wed Feb 16 20:50:00 2011 [TM][W]: Ignored: LOG - 86 Can't remove open
>>> logical volume "lv-one--0"
>>>
>>> Wed Feb 16 20:50:00 2011 [TM][W]: Ignored: LOG - 86 ERROR MESSAGE
>>> ------>8--
>>> Wed Feb 16 20:50:00 2011 [TM][W]: Ignored: LOG - 86 ExitCode: 5
>>>
>>> Wed Feb 16 20:50:00 2011 [TM][W]: Ignored: LOG - 86 tm_delete.sh:
>>> Deleting remote LVs
>>>
>>> Wed Feb 16 20:50:00 2011 [TM][W]: Ignored: LOG - 86 tm_delete.sh: ERROR:
>>> Command "/usr/bin/ssh ast-wks-348 /usr/bin/sudo /sbin/lvremove -f $(echo
>>> vg00/$(/usr/bin/sudo /sbin/lvs --noheadings vg00|awk '{print $1}'|grep
>>> lv-one-))" failed.
>>>
>>> Wed Feb 16 20:50:00 2011 [TM][W]: Ignored: LOG - 86 tm_delete.sh:
>>> ERROR:   Can't remove open logical volume "lv-one--0"
>>>
>>> Wed Feb 16 20:50:00 2011 [TM][W]: Ignored: TRANSFER FAILURE 86 Can't
>>> remove open logical volume "lv-one--0"
>>>
>>> Thanks,
>>> Anoop
>>>
>>>
>>>
>>> On Wed, Feb 16, 2011 at 9:12 PM, Jaime Melis <jmelis at opennebula.org> wrote:
>>>
>>>> Hi Anoop,
>>>>
>>>> could you please send me the full vm.log of that VM?
>>>>
>>>> cheers,
>>>> Jaime
>>>>
>>>>
>>>> On Wed, Feb 16, 2011 at 4:31 PM, Anoop L <anpl1980 at gmail.com> wrote:
>>>>
>>>>> Hi Jaime,
>>>>>
>>>>> Thanks for the reply. My VM ID is 86, but ONE still uses 0 as the LV
>>>>> identifier. Even the working VM with ID  is using lv-one--0.
>>>>>
>>>>> I have already tried removing the LV manually. I guess the issue is
>>>>> that ONE is trying to create an LV with the same name, and somehow the
>>>>> VM ID is not appended to the LV_NAME.
>>>>>
>>>>> One more thing: when the second VM fails, it deletes lv-one--0. Also,
>>>>> any idea how I can set up a VM template so that a swap space/disk is
>>>>> created on an LV?
>>>>>
>>>>> Some more information:
>>>>>
>>>>>  onevm list
>>>>>    ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
>>>>>    85 oneadmin centos55 runn   0      2G     ast-wks-348 00 02:00:16
>>>>>    86 oneadmin centos55 fail   0      0K     ast-wks-348 00 00:00:42
>>>>>
>>>>>
>>>>> onevm show 86 gives:
>>>>>
>>>>> VIRTUAL MACHINE 86
>>>>> INFORMATION
>>>>> ID             : 86
>>>>> NAME           : centos55
>>>>> STATE          : FAILED
>>>>> LCM_STATE      : LCM_INIT
>>>>> START TIME     : 02/16 20:49:13
>>>>> END TIME       : 02/16 20:49:55
>>>>> DEPLOY ID:     : -
>>>>>
>>>>> VIRTUAL MACHINE
>>>>> MONITORING
>>>>> NET_TX         : 0
>>>>> NET_RX         : 0
>>>>> USED MEMORY    : 0
>>>>> USED CPU       : 0
>>>>>
>>>>> VIRTUAL MACHINE
>>>>> TEMPLATE
>>>>> CPU=1
>>>>> DISK=[
>>>>>   CLONE=YES,
>>>>>   DISK_ID=0,
>>>>>   IMAGE=centos5564_Base.img,
>>>>>   IMAGE_ID=8,
>>>>>   READONLY=NO,
>>>>>   SAVE=NO,
>>>>>
>>>>> SOURCE=/opt/cloud/one/var//images/fd34675a5656d3b5b92e01b11dbae9c819e5f4c4,
>>>>>   TARGET=hda,
>>>>>   TYPE=DISK ]
>>>>> DISK=[
>>>>>   DISK_ID=1,
>>>>>   SIZE=5120,
>>>>>   TARGET=hdd,
>>>>>   TYPE=swap ]
>>>>> FEATURES=[
>>>>>   ACPI=no ]
>>>>> GRAPHICS=[
>>>>>   LISTEN=0.0.0.0,
>>>>>   PORT=5916,
>>>>>   TYPE=vnc ]
>>>>> MEMORY=2048
>>>>> NAME=centos55
>>>>> NIC=[
>>>>>   BRIDGE=xenbr0,
>>>>>   IP=10.20.30.1,
>>>>>   MAC=02:00:0a:14:1e:01,
>>>>>   NETWORK=LAN2,
>>>>>   NETWORK_ID=2 ]
>>>>> OS=[
>>>>>   BOOTLOADER=/usr/bin/pygrub ]
>>>>> VMID=86
>>>>>
>>>>>
>>>>>
>>>>> lvdisplay on node:
>>>>>
>>>>>   --- Logical volume ---
>>>>>   LV Name                /dev/vg00/lv-one--0
>>>>>   VG Name                vg00
>>>>>   LV UUID                0ENQKV-L8qi-APmD-iGiv-uRfj-vVNv-nkWud4
>>>>>   LV Write Access        read/write
>>>>>   LV Status              available
>>>>>   # open                 2
>>>>>   LV Size                20.00 GB
>>>>>   Current LE             640
>>>>>   Segments               1
>>>>>   Allocation             inherit
>>>>>   Read ahead sectors     auto
>>>>>   - currently set to     256
>>>>>   Block device           253:2
>>>>>
>>>>>
>>>>>
>>>>> Any help would be appreciated.
>>>>>
>>>>> Thanks,
>>>>> Anoop
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Feb 16, 2011 at 8:44 PM, Jaime Melis <jmelis at opennebula.org> wrote:
>>>>>
>>>>>> Hello Anoop,
>>>>>>
>>>>>> Is the ID of the VM you're trying to deploy '0'? When you deploy a
>>>>>> new VM, the following LV will be created: lv-one-<ID>. If that LV
>>>>>> already exists, the deployment fails. My guess is that it's failing
>>>>>> because for some reason OpenNebula didn't get the chance to do a
>>>>>> 'delete' and remove the LV partition.
>>>>>>
>>>>>> I suggest you remove all the LVM partitions manually. You can find
>>>>>> the existing ones with "lvs" and remove them with "lvremove" (do that
>>>>>> as root on the Xen node).
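A minimal sketch of that cleanup, using the VG name "vg00" and the "lv-one-"
prefix seen in the logs above. Shown here as a dry run over simulated `lvs`
output so nothing is actually removed; on the real node you would pipe the
output of `lvs` itself and drop the final `echo`:

```shell
vg="vg00"

# Simulated 'lvs --noheadings -o lv_name vg00' output, for illustration only.
lvs_output="  lv-one--0
  lv_root"

# Keep only the OpenNebula-created LVs and print the lvremove commands to run:
echo "$lvs_output" | awk '{print $1}' | grep '^lv-one-' | while read -r lv; do
    echo "lvremove -f $vg/$lv"
done
```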
>>>>>>
>>>>>> Cheers,
>>>>>> Jaime
>>>>>>
>>>>>> On Wed, Feb 16, 2011 at 2:26 PM, Anoop L <anpl1980 at gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I have successfully created a VM from the Front-End on a Xen node.
>>>>>>> However, on creating the second VM I get this error:
>>>>>>>
>>>>>>> Wed Feb 16 18:45:24 2011 [TM][I]: tm_clone.sh: Creating LV lv-one--0
>>>>>>> Wed Feb 16 18:45:24 2011 [TM][I]: tm_clone.sh: ERROR: Command
>>>>>>> "/usr/bin/ssh node-1 /usr/bin/sudo /sbin/lvcreate -L20G -n lv-one--0 vg00"
>>>>>>> failed.
>>>>>>>
>>>>>>> My VM template:
>>>>>>> NAME   = centos55
>>>>>>> CPU    = 1
>>>>>>> MEMORY = 2048
>>>>>>> OS      =   [ bootloader = "/usr/bin/pygrub" ]
>>>>>>>
>>>>>>> DISK  = [  image = "centos5564_Base.img"]
>>>>>>>
>>>>>>> DISK   = [
>>>>>>>  type     = swap,
>>>>>>>  size     = 5120
>>>>>>>  #target   = sdb
>>>>>>> ]
>>>>>>>
>>>>>>>
>>>>>>> NIC = [ BRIDGE = "xenbr0", MAC = "00:16:3E:02:03:05" ]
>>>>>>> FEATURES=[ acpi="no" ]
>>>>>>>
>>>>>>> GRAPHICS = [
>>>>>>>  type    = "vnc",
>>>>>>>  listen  = "0.0.0.0",
>>>>>>>  port    = "5916" ]
>>>>>>>
>>>>>>> The same template was used to create the first VM, and everything
>>>>>>> worked fine, except that the swap is not created as an LV.
>>>>>>>
>>>>>>> How can I change this template so that a new LV is created for the
>>>>>>> swap partition? Please note that I have not created any LV manually.
>>>>>>> If I create an LV manually, how can I specify it in the VM template?
>>>>>>>
>>>>>>> I have been stuck on this for some time and have posted it multiple
>>>>>>> times with no answer.
>>>>>>>
>>>>>>> Any help would be appreciated.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Anoop
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Users mailing list
>>>>>>> Users at lists.opennebula.org
>>>>>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Jaime Melis, Cloud Technology Engineer/Researcher
>>>>>> Major Contributor
>>>>>> OpenNebula - The Open Source Toolkit for Cloud Computing
>>>>>> www.OpenNebula.org | jmelis at opennebula.org
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Jaime Melis, Cloud Technology Engineer/Researcher
>>>> Major Contributor
>>>> OpenNebula - The Open Source Toolkit for Cloud Computing
>>>> www.OpenNebula.org | jmelis at opennebula.org
>>>>
>>>
>>>
>>
>>
>> --
>> Jaime Melis, Cloud Technology Engineer/Researcher
>> Major Contributor
>> OpenNebula - The Open Source Toolkit for Cloud Computing
>> www.OpenNebula.org | jmelis at opennebula.org
>>
>
>
