[one-users] Problem with the format of qcow2

Ruben S. Montero rsmontero at opennebula.org
Thu Jul 4 04:59:51 PDT 2013


Hi Alexandr,

This may depend on the storage backend you are using. If the Datastore is
using TM_MAD=shared, it may be a problem with the NFS mount options or
user mapping. You can:

1.- Try with TM_MAD=ssh (create another system datastore, and a cluster with
a node and that system datastore to run the test; see the P.S. at the end of
this message)

2.- Execute the TM commands directly to check whether this is a storage
problem. Look for vms/50/transfer.0.prolog. In that file there is a clone
statement, like:

CLONE qcow2 vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 cldwn07:/vz/one/datastores/0/50/disk.0

Execute the script directly (probably with -xv to debug):

/var/lib/one/remotes/tm/qcow2/clone vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 cldwn07:/vz/one/datastores/0/50/disk.0

If that script creates the file on cldwn07 with the right permissions, then
you have a problem with libvirt (try restarting it, and double-check the
configuration and oneadmin group membership...).
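
For the libvirt/permissions side, a quick check on the node could look like
this. It is only a sketch: the paths are the ones from your log, the commands
are standard ones, adapt them to your distribution:

# on cldwn07, as root
ls -l /vz/one/datastores/0/50/disk.0             # expect oneadmin:oneadmin, not root:root
qemu-img info /vz/one/datastores/0/50/disk.0     # expect "file format: qcow2"
sudo -u oneadmin head -c 512 /vz/one/datastores/0/50/disk.0 >/dev/null && echo readable
grep -vE '^($|#)' /etc/libvirt/qemu.conf         # user/group = "oneadmin", dynamic_ownership = 0
id oneadmin                                      # check kvm / qemu group membership
service libvirtd restart                         # after any change to qemu.conf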


Cheers and good luck

Ruben
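
P.S. For point 1, a minimal sketch of setting up the ssh system datastore and
a test cluster (the names ssh_system and ssh-test are just examples, and the
exact template attributes may vary with your OpenNebula version):

cat > ssh_system.ds <<EOF
NAME   = ssh_system
TM_MAD = ssh
TYPE   = SYSTEM_DS
EOF
onedatastore create ssh_system.ds

onecluster create ssh-test
onecluster addhost ssh-test cldwn07
onecluster adddatastore ssh-test ssh_system

Then instantiate a small test VM in that cluster and check whether the
prolog/deploy steps succeed there.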




On Wed, Jul 3, 2013 at 11:14 AM, Alexandr Baranov
<telecastcloud at gmail.com> wrote:

> Hello everyone!
> The essence is that I still cannot get a successful deployment of a disk
> using the qcow2 format.
> In the logs I see the following:
> *********************
>
> Wed Jul 3 11:59:39 2013 [ReM] [D]: Req: 8976 UID: 0 VirtualMachinePoolInfo
> invoked, -2, -1, -1, -1
> Wed Jul 3 11:59:39 2013 [ReM] [D]: Req: 8976 UID: 0 VirtualMachinePoolInfo
> result SUCCESS, "<VM_POOL> <VM> <ID> 50 <..."
> Wed Jul 3 11:59:39 2013 [TM] [D]: Message received: LOG I 50 clone:
> Cloning vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667
> in /vz/one/datastores/0/50/disk.0
>
> Wed Jul 3 11:59:39 2013 [TM] [D]: Message received: LOG I 50 ExitCode: 0
>
> Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: LOG I 50 context:
> Generating context block device at cldwn07:/vz/one/datastores/0/50/disk.1
>
> Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: LOG I 50 ExitCode: 0
>
> Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: TRANSFER SUCCESS 50 -
>
> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 ExitCode: 0
>
> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Successfully
> execute network driver operation: pre.
>
> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Command
> execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy
> /vz/one/datastores/0/50/deployment.0 cldwn07 50 cldwn07
>
> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 error:
> Failed to create domain from /vz/one/datastores/0/50/deployment.0
>
> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 error:
> internal error process exited while connecting to monitor: qemu-kvm: -drive
> file=/vz/one/datastores/0/50/disk.0,if=none,id=drive-ide0-0-0,format=qcow2,cache=none:
> could not open disk image /vz/one/datastores/0/50/disk.0: Invalid argument
>
> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50
>
> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG E 50 Could not
> create domain from /vz/one/datastores/0/50/deployment.0
>
> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 ExitCode: 255
>
> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Failed to
> execute virtualization driver operation: deploy.
>
> Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: DEPLOY FAILURE 50
> Could not create domain from /vz/one/datastores/0/50/deployment.0
>
> ***********************
>
> In /vz/one/datastores/0/50 the disk.0 file is created as root:root.
> So it seems the monitor cannot handle it, hence the error.
>
> Now what is actually not clear is why it is created as root.
> In the hypervisor configuration I checked:
> [root@vm158 ~]# grep -vE '^($|#)' /etc/libvirt/qemu.conf
> user = "oneadmin"
> group = "oneadmin"
> dynamic_ownership = 0
> True, there is also a caveat: as oneadmin I could not read it.
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
> --
> Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013
> --
> Ruben S. Montero, PhD
> Project co-Lead and Chief Architect
> OpenNebula - The Open Source Solution for Data Center Virtualization
> www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula
>