[one-users] Problem with the format of qcow2
Ruben S. Montero
rsmontero at opennebula.org
Mon Jul 15 07:06:58 PDT 2013
Hi,
It seems that the problem is the volatile disk you are creating:
Fri Jul 12 11:16:55 2013 [TM][I]: mkimage: Making filesystem of 22016M and
type ext4 at cldwn07:/vz/one/datastores/0/80/disk.1
But you are using qcow2 to access the disk:
Fri Jul 12 11:16:57 2013 [VMM][I]: error: internal error process exited
while connecting to monitor: qemu-kvm: -drive
file=/vz/one/datastores/0/80/disk.1,if=none,id=drive-ide0-1-1,format=qcow2,cache=none:
could not open disk image /vz/one/datastores/0/80/disk.1: Invalid argument
So either generate the disk as qcow2 (and mkfs it inside the guest) or use
driver=raw to access it.
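For the second option, a minimal sketch of the volatile disk in the VM
template (attribute names taken from the usual OpenNebula volatile-disk
syntax; adjust the size and filesystem to your case):

  DISK = [ TYPE   = fs,
           SIZE   = 22016,
           FORMAT = ext4,
           DRIVER = raw ]

With DRIVER=raw the generated deployment file should use format=raw on that
-drive line instead of format=qcow2.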
Cheers
Ruben
On Mon, Jul 15, 2013 at 3:45 PM, Alexandr Baranov
<telecastcloud at gmail.com> wrote:
> You need to execute
>
> /var/tmp/one/tm/ssh/clone vm158.jinr.ru:/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07:/vz/one/datastores/0/80/disk.0 80 1
>
> This is what the script gives me:
> $ /var/lib/one/remotes/tm/ssh/clone vm158
> :/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07
> :/vz/one/datastores/0/80/disk.0 80 1
> INFO: clone: Cloning vm158
> /var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105
> ERROR: clone: Command "scp -r vm158
> :/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105" failed: cp:
> cannot stat `vm158': No such file or directory
> ERROR MESSAGE --8<------
> Error copying vm158 to
> :/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105
> ERROR MESSAGE ------>8--
>
> As I understand it can not find the directory, but it is:
> $ ls -la /var/lib/one/datastores/1/
> total 4745108
> drwxr-x--- 2 oneadmin oneadmin 4096 Jul 12 11:11 .
> drwxr-x--- 6 oneadmin oneadmin 4096 May 17 16:49 ..
> -rw-r--r-- 1 oneadmin oneadmin 21474836481 Jul 11 11:19
> 2f74b5c000d9f12dcd20c93a8f461416
> -rw-r--r-- 1 oneadmin oneadmin 41943040 Jul 11 12:18
> 6afbcca141a5e41387fa611ecb149012
> -rw-r--r-- 1 oneadmin oneadmin 4342759424 Jul 10 16:46
> c03de7e41ccf4434b65324c3b91c6105
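>
> (Judging from the scp error above, "vm158" and the ":/var/lib/..." path
> were passed as two separate arguments; a hypothetical re-run with the host
> and path joined would be:
> /var/lib/one/remotes/tm/ssh/clone vm158:/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07:/vz/one/datastores/0/80/disk.0 80 1 )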
>
> Are you also getting errors with TM=ssh? Are the same errors as before?
>
> Yes, the datastore works with TM=ssh:
>
> $ onedatastore list
> ID NAME CLUSTER IMAGES TYPE DS TM
>
> 0 system - 0 sys - ssh
> 1 default - 3 img fs ssh
> 2 files - 0 fil fs ssh
>
> The error is still present:
> $ less /var/log/one/80.log
> Fri Jul 12 11:15:02 2013 [DiM][I]: New VM state is ACTIVE.
> Fri Jul 12 11:15:02 2013 [LCM][I]: New VM state is PROLOG.
> Fri Jul 12 11:15:33 2013 [LCM][E]: monitor_done_action, VM in a wrong state
> Fri Jul 12 11:16:46 2013 [TM][I]: clone: Cloning vm158.jinr.ru:/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105
> in /vz/one/datastores/0/80/disk.0
> Fri Jul 12 11:16:46 2013 [TM][I]: ExitCode: 0
> Fri Jul 12 11:16:55 2013 [TM][I]: mkimage: Making filesystem of 22016M and
> type ext4 at cldwn07:/vz/one/datastores/0/80/disk.1
> Fri Jul 12 11:16:55 2013 [TM][I]: ExitCode: 0
> Fri Jul 12 11:16:56 2013 [TM][I]: context: Generating context block device
> at cldwn07:/vz/one/datastores/0/80/disk.2
> Fri Jul 12 11:16:56 2013 [TM][I]: ExitCode: 0
> Fri Jul 12 11:16:56 2013 [LCM][I]: New VM state is BOOT
> Fri Jul 12 11:16:56 2013 [VMM][I]: Generating deployment file:
> /var/lib/one/vms/80/deployment.0
> Fri Jul 12 11:16:56 2013 [VMM][I]: ExitCode: 0
> Fri Jul 12 11:16:56 2013 [VMM][I]: Successfully execute network driver
> operation: pre.
> Fri Jul 12 11:16:57 2013 [VMM][I]: Command execution fail: cat << EOT |
> /var/tmp/one/vmm/kvm/deploy /vz/one/datastores/0/80/deployment.0 cldwn07 80
> cldwn07
> Fri Jul 12 11:16:57 2013 [VMM][I]: error: Failed to create domain from
> /vz/one/datastores/0/80/deployment.0
> Fri Jul 12 11:16:57 2013 [VMM][I]: error: internal error process exited
> while connecting to monitor: qemu-kvm: -drive
> file=/vz/one/datastores/0/80/disk.1,if=none,id=drive-ide0-1-1,format=qcow2,cache=none:
> could not open disk image /vz/one/datastores/0/80/disk.1: Invalid argument
> Fri Jul 12 11:16:57 2013 [VMM][I]:
> Fri Jul 12 11:16:57 2013 [VMM][E]: Could not create domain from
> /vz/one/datastores/0/80/deployment.0
> Fri Jul 12 11:16:57 2013 [VMM][I]: ExitCode: 255
> Fri Jul 12 11:16:57 2013 [VMM][I]: Failed to execute virtualization driver
> operation: deploy.
> Fri Jul 12 11:16:57 2013 [VMM][E]: Error deploying virtual machine: Could
> not create domain from /vz/one/datastores/0/80/deployment.0
> Fri Jul 12 11:16:57 2013 [DiM][I]: New VM state is FAILED
>
> Are there any ideas?
>
>
> 2013/7/13 Ruben S. Montero <rsmontero at opennebula.org>
>
>> You need to execute
>>
>> /var/tmp/one/tm/ssh/clone vm158.jinr.ru:/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07:/vz/one/datastores/0/80/disk.0 80 1
>>
>> "CLONE ssh" is translated to "/var/tmp/one/tm/ssh/clone", so you can
>> easily debug the others....
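>>
>> (Presumably the other prolog lines map the same way, e.g. "MKIMAGE ssh ..."
>> to /var/tmp/one/tm/ssh/mkimage and "CONTEXT ssh ..." to
>> /var/tmp/one/tm/ssh/context.)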
>>
>> Are you also getting errors with TM=ssh? Are the same errors as before?
>>
>> Cheers
>>
>>
>>
>> On Fri, Jul 12, 2013 at 11:13 AM, Alexandr Baranov <
>> telecastcloud at gmail.com> wrote:
>>
>>> The contents of the file vms/80/transfer.0.prolog:
>>>
>>>
>>> CLONE ssh vm158.jinr.ru:/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07:/vz/one/datastores/0/80/disk.0 80 1
>>> MKIMAGE ssh 22016 ext4 cldwn07:/vz/one/datastores/0/80/disk.1 80 0
>>> CONTEXT ssh /var/lib/one/vms/80/context.sh cldwn07:/vz/one/datastores/0/80/disk.2 80 0
>>>
>>>
>>> 2013/7/12 Alexandr Baranov <telecastcloud at gmail.com>
>>>
>>>> Hi, Ruben,
>>>>
>>>> Following your recommendation, I checked the datastore TM_MAD:
>>>>
>>>> 1. I used TM_MAD = ssh
>>>>
>>>> 2. I cannot get the command to run:
>>>> CLONE ssh vm158.jinr.ru:/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07:/vz/one/datastores/0/80/disk.0 80 1
>>>> -bash: CLONE: command not found
>>>>
>>>> from the file vms/80/transfer.0.prolog
>>>>
>>>> Could you detail how to execute this script?
>>>> Thank you
>>>> On 04.07.2013 at 16:00, "Ruben S. Montero" <
>>>> rsmontero at opennebula.org> wrote:
>>>>
>>>> Hi Alexandr,
>>>>
>>>> This may depend on the storage backend you are using. If the Datastore
>>>> is using TM_MAD=shared, it may be a problem with the NFS mount options or
>>>> user mapping. You can:
>>>>
>>>> 1.- Try with TM_MAD=ssh (create another system datastore, and a cluster
>>>> with a node and that system DS to make the test)
>>>>
>>>> 2.- Execute the TM commands directly to check whether this is a storage
>>>> problem. Look for vms/50/transfer.0.prolog. In that file there is a clone
>>>> statement, like:
>>>>
>>>> CLONE qcow2 vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 cldwn07:/vz/one/datastores/0/50/disk.0
>>>>
>>>> Execute the script (probably with -xv to debug):
>>>>
>>>> /var/lib/one/remotes/tm/qcow2/clone vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 cldwn07:/vz/one/datastores/0/50/disk.0
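>>>>
>>>> (For example, a hypothetical debug run that traces every command the
>>>> script executes:
>>>> bash -xv /var/lib/one/remotes/tm/qcow2/clone vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 cldwn07:/vz/one/datastores/0/50/disk.0 )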
>>>>
>>>> If that script creates the file on cldwn07 with the right permissions,
>>>> then you have a problem with libvirt (try restarting it, and double-check
>>>> the configuration and oneadmin membership....)
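>>>>
>>>> (For example, on the host, something along these lines; the exact service
>>>> name and expected group membership may differ on your distribution:
>>>> /etc/init.d/libvirtd restart
>>>> id oneadmin
>>>> ls -l /vz/one/datastores/0/50/disk.0 )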
>>>>
>>>>
>>>> Cheers and good luck
>>>>
>>>> Ruben
>>>>
>>>>
>>>>
>>>>
>>>> On Wed, Jul 3, 2013 at 11:14 AM, Alexandr Baranov <
>>>> telecastcloud at gmail.com> wrote:
>>>>
>>>>> Hello everyone!
>>>>> The essence is that I cannot yet get a successful deployment of a disk
>>>>> of type qcow2.
>>>>> In the logs I see the following:
>>>>> *********************
>>>>>
>>>>> Wed Jul 3 11:59:39 2013 [ReM][D]: Req:8976 UID:0 VirtualMachinePoolInfo invoked, -2, -1, -1, -1
>>>>> Wed Jul 3 11:59:39 2013 [ReM][D]: Req:8976 UID:0 VirtualMachinePoolInfo result SUCCESS, "<VM_POOL><VM><ID>50<..."
>>>>> Wed Jul 3 11:59:39 2013 [TM][D]: Message received: LOG I 50 clone: Cloning vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 in /vz/one/datastores/0/50/disk.0
>>>>>
>>>>> Wed Jul 3 11:59:39 2013 [TM][D]: Message received: LOG I 50 ExitCode: 0
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [TM][D]: Message received: LOG I 50 context: Generating context block device at cldwn07:/vz/one/datastores/0/50/disk.1
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [TM][D]: Message received: LOG I 50 ExitCode: 0
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [TM][D]: Message received: TRANSFER SUCCESS 50 -
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [VMM][D]: Message received: LOG I 50 ExitCode: 0
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [VMM][D]: Message received: LOG I 50 Successfully execute network driver operation: pre.
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [VMM][D]: Message received: LOG I 50 Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy /vz/one/datastores/0/50/deployment.0 cldwn07 50 cldwn07
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [VMM][D]: Message received: LOG I 50 error: Failed to create domain from /vz/one/datastores/0/50/deployment.0
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [VMM][D]: Message received: LOG I 50 error: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/vz/one/datastores/0/50/disk.0,if=none,id=drive-ide0-0-0,format=qcow2,cache=none: could not open disk image /vz/one/datastores/0/50/disk.0: Invalid argument
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [VMM][D]: Message received: LOG I 50
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [VMM][D]: Message received: LOG E 50 Could not create domain from /vz/one/datastores/0/50/deployment.0
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [VMM][D]: Message received: LOG I 50 ExitCode: 255
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [VMM][D]: Message received: LOG I 50 Failed to execute virtualization driver operation: deploy.
>>>>>
>>>>> Wed Jul 3 11:59:40 2013 [VMM][D]: Message received: DEPLOY FAILURE 50 Could not create domain from /vz/one/datastores/0/50/deployment.0
>>>>>
>>>>> ***********************
>>>>>
>>>>> The disk /vz/one/datastores/0/50/disk.0 is created as root:root.
>>>>> So it seems the monitor cannot handle it, hence the error.
>>>>>
>>>>> Now, what is actually not clear is why it is created as root.
>>>>> I checked the hypervisor configuration:
>>>>> [root@vm158 ~]# grep -vE '^($|#)' /etc/libvirt/qemu.conf
>>>>> user = "oneadmin"
>>>>> group = "oneadmin"
>>>>> dynamic_ownership = 0
>>>>> Admittedly, there is also one caveat: oneadmin could not read it.
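>>>>>
>>>>> (A quick way to confirm the ownership and readability, assuming the same
>>>>> paths as above:
>>>>> ls -l /vz/one/datastores/0/50/disk.0
>>>>> sudo -u oneadmin head -c 1 /vz/one/datastores/0/50/disk.0 > /dev/null )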
>>>>>
>>>>
>>>
>>
>>
>
>
--
--
Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013
--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula