<div dir="ltr">Hi Alexandr,<div><br></div><div>This may depend on the storage backend you are using. If the Datastore is using TM_MAD=shared, it may be a problem with the NFS mount options or user mapping. You can:</div><div>
<br></div><div>1.- Try with TM_MAD=ssh (create another system datastore, and a cluster with a node and that system datastore to run the test)</div><div><br></div><div>2.- Execute the TM commands directly to check whether this is a storage problem. Look for vms/50/transfer.0.prolog. That file contains a clone statement like:</div>
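For step 1, a minimal system datastore template might look like the fragment below (the datastore name is just an example; TYPE and TM_MAD are the attributes that matter):

```
NAME   = system_ssh
TYPE   = SYSTEM_DS
TM_MAD = ssh
```

Save it to a file and run onedatastore create on it, then use onecluster to group the new datastore with one test node, so the test VM is forced onto the ssh transfer driver.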
<div><br></div><div><div>CLONE qcow2 vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 cldwn07:/vz/one/datastores/0/50/disk.0</div></div><div><br></div><div>Execute (probably with -xv to debug) the script </div>
<div><br></div><div>/var/lib/one/remotes/tm/qcow2/clone vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 cldwn07:/vz/one/datastores/0/50/disk.0</div><div><br></div><div>If that script creates the file on cldwn07 with the right permissions, then you have a problem with libvirt (try restarting it, and double check the configuration and the oneadmin group membership...)</div>
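As a sketch, the manual run for step 2 could be assembled like this (the source image hash, paths, and the cldwn07 hostname are taken from your log; this snippet only builds and prints the command, since it has to be executed as oneadmin on the real front-end):

```shell
# Source and destination exactly as they appear in vms/50/transfer.0.prolog
SRC="vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667"
DST="cldwn07:/vz/one/datastores/0/50/disk.0"

# -xv traces each command the TM script runs, showing where it fails
echo "bash -xv /var/lib/one/remotes/tm/qcow2/clone $SRC $DST"
```

After running the script for real, checking ownership on the node (for example with stat -c '%U:%G' /vz/one/datastores/0/50/disk.0 on cldwn07) should show oneadmin:oneadmin; if it does, the storage path is fine and libvirt is the suspect.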
<div><br></div><div><br></div><div>Cheers and good luck</div><div><br></div><div>Ruben</div><div><br></div><div><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jul 3, 2013 at 11:14 AM, Alexandr Baranov <span dir="ltr"><<a href="mailto:telecastcloud@gmail.com" target="_blank">telecastcloud@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p>Hello everyone!<br>
The gist is that I cannot yet get a successful deployment of a disk using the qcow2 type.<br>
In the logs I see the following:<br>
*********************</p>
<p>Wed Jul 3 11:59:39 2013 [ReM] [D]: Req: 8976 UID: 0 VirtualMachinePoolInfo invoked, -2, -1, -1, -1<br>
Wed Jul 3 11:59:39 2013 [ReM] [D]: Req: 8976 UID: 0 VirtualMachinePoolInfo result SUCCESS, "<VM_POOL> <VM> <ID> 50 <..."<br>
Wed Jul 3 11:59:39 2013 [TM] [D]: Message received: LOG I 50 clone: Cloning vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 in /vz/one/datastores/0/50/disk.0</p>
<p>Wed Jul 3 11:59:39 2013 [TM] [D]: Message received: LOG I 50 ExitCode: 0</p>
<p>Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: LOG I 50 context: Generating context block device at cldwn07:/vz/one/datastores/0/50/disk.1</p>
<p>Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: LOG I 50 ExitCode: 0</p>
<p>Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: TRANSFER SUCCESS 50 -</p>
<p>Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 ExitCode: 0</p>
<p>Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Successfully execute network driver operation: pre.</p>
<p>Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy /vz/one/datastores/0/50/deployment.0 cldwn07 50 cldwn07</p>
<p>Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 error: Failed to create domain from /vz/one/datastores/0/50/deployment.0</p>
<p>Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 error: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/vz/one/datastores/0/50/disk.0,if=none,id=drive-ide0-0-0,format=qcow2,cache=none: could not open disk image /vz/one/datastores/0/50/disk.0: Invalid argument</p>
<p>Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50</p>
<p>Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG E 50 Could not create domain from /vz/one/datastores/0/50/deployment.0</p>
<p>Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 ExitCode: 255</p>
<p>Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Failed to execute virtualization driver operation: deploy.</p>
<p>Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: DEPLOY FAILURE 50 Could not create domain from /vz/one/datastores/0/50/deployment.0</p>
<p>***********************</p>
<p>In the folder /vz/one/datastores/0/50 the file disk.0 is created as root:root.<br>
So it seems the monitor cannot open it, hence the error.</p>
<p>Now what is actually unclear is why it is created as root.<br>
In the hypervisor configuration I checked:<br>
[root@vm158 ~]# grep -vE '^($|#)' /etc/libvirt/qemu.conf<br>
user = "oneadmin"<br>
group = "oneadmin"<br>
dynamic_ownership = 0<br>
It is true there is also a caveat there about oneadmin that I could not make out.<br>
</p>
<br>_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org" target="_blank">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br clear="all"><div><br></div>-- <br><div dir="ltr"><div>
<div><div>-- </div><div>Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013</div></div><div>-- </div></div>Ruben S. Montero, PhD<br>Project co-Lead and Chief Architect<br>OpenNebula - The Open Source Solution for Data Center Virtualization<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a> | @OpenNebula</div>
</blockquote></div></div></div>