<div dir="ltr"><span id="result_box" class="" lang="en"><span class="">The contents of</span> <span class="">the file</span> <span class="">vms/80/transfer.0.prolog</span><span class="">:</span><br class=""><br class=""><span class="">CLONE ssh <a href="http://vm158.jinr.ru">vm158.jinr.ru</a> :/ var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07 :/ vz/one/datastores/0/80/disk.0</span> <span class="">80 1</span><br class="">
<span class="">MKIMAGE ssh</span> <span class="">22016</span> <span class="">ext4 cldwn07 :/ vz/one/datastores/0/80/disk.1</span> <span class="">80</span> <span class="">0</span><br class=""><span class="">CONTEXT ssh / var/lib/one/vms/80/context.sh cldwn07 :/ vz/one/datastores/0/80/disk.2</span> <span class="">80</span> <span class="">0</span></span></div>
<div class="gmail_extra"><br><br><div class="gmail_quote">2013/7/12 Alexandr Baranov <span dir="ltr"><<a href="mailto:telecastcloud@gmail.com" target="_blank">telecastcloud@gmail.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi, Ruben,

Following your recommendation I checked the datastore TM_MAD:

1. I used TM_MAD=ssh.

2. I cannot get the command to run:

CLONE ssh vm158.jinr.ru:/var/lib/one/datastores/1/c03de7e41ccf4434b65324c3b91c6105 cldwn07:/vz/one/datastores/0/80/disk.0 80 1
-bash: CLONE: command not found

(from the file vms/80/transfer.0.prolog)

Could you detail how to execute this script?
Thank you
<div class="gmail_quote">04.07.2013 16:00 ÐÏÌØÚÏ×ÁÔÅÌØ "Ruben S. Montero" <<a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a>> ÎÁÐÉÓÁÌ:<div><div class="h5"><br type="attribution">
<blockquote style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hi Alexandr,<div><br></div><div>This may depend on the storage backend you are using. If the Datastore is uising TM_MAD=shared, it may be a problem with the NFS mount options or user mapping. You can:</div>
<div>
<br></div><div>1.- Try with TM_MAD=ssh (create other system datastore, and a cluster with a node and that system ds to make the test)</div><div><br></div><div>2.- Execute directly the TM commands to check that this is an storage problem. Look for vms/50/transfer.0.prolog. In that file there is a clone statement. Like</div>
<div><br></div><div><div>CLONE qcow2 vm158 :/ var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 šcldwn07:/vz/one/datastores/0/50/disk.0</div></div><div><br></div><div>Execute (probably with -xv to debug) the scriptš</div>
<div><br></div><div>/var/lib/one/remotes/tm/qcow2/clonešvm158 :/ var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 šcldwn07:/vz/one/datastores/0/50/disk.0</div><div><br></div><div>If that scripts creates the file on cldwn07 with the right permissions the you have a problem with libvirt (try restarting it, double check configuration and oneadmin membership....)</div>
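(Concretely, with shell tracing; a sketch assuming a bash shell on the front-end, and that the script is run as oneadmin so the resulting file gets the right ownership:)

    # Trace every line of the TM clone script as it executes
    sudo -u oneadmin bash -xv /var/lib/one/remotes/tm/qcow2/clone \
        vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 \
        cldwn07:/vz/one/datastores/0/50/disk.0
    # Then check owner and permissions of the result on the node
    ssh cldwn07 'ls -l /vz/one/datastores/0/50/disk.0'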
Cheers and good luck

Ruben

On Wed, Jul 3, 2013 at 11:14 AM, Alexandr Baranov <telecastcloud@gmail.com> wrote:

Hello everyone!
The essence is that I still cannot get a successful deployment of a qcow2 disk.
In the logs I see the following:
*********************
Wed Jul 3 11:59:39 2013 [ReM] [D]: Req:8976 UID:0 VirtualMachinePoolInfo invoked, -2, -1, -1, -1
Wed Jul 3 11:59:39 2013 [ReM] [D]: Req:8976 UID:0 VirtualMachinePoolInfo result SUCCESS, "<VM_POOL><VM><ID>50<..."
Wed Jul 3 11:59:39 2013 [TM] [D]: Message received: LOG I 50 clone: Cloning vm158:/var/lib/one/datastores/1/e67952a1b1b91f1bdca0de1cba21d667 in /vz/one/datastores/0/50/disk.0
Wed Jul 3 11:59:39 2013 [TM] [D]: Message received: LOG I 50 ExitCode: 0
Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: LOG I 50 context: Generating context block device at cldwn07:/vz/one/datastores/0/50/disk.1
Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: LOG I 50 ExitCode: 0
Wed Jul 3 11:59:40 2013 [TM] [D]: Message received: TRANSFER SUCCESS 50 -
Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 ExitCode: 0
Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Successfully execute network driver operation: pre.
Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy /vz/one/datastores/0/50/deployment.0 cldwn07 50 cldwn07
Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 error: Failed to create domain from /vz/one/datastores/0/50/deployment.0
Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 error: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/vz/one/datastores/0/50/disk.0,if=none,id=drive-ide0-0-0,format=qcow2,cache=none: could not open disk image /vz/one/datastores/0/50/disk.0: Invalid argument
Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50
Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG E 50 Could not create domain from /vz/one/datastores/0/50/deployment.0
Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 ExitCode: 255
Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: LOG I 50 Failed to execute virtualization driver operation: deploy.
Wed Jul 3 11:59:40 2013 [VMM] [D]: Message received: DEPLOY FAILURE 50 Could not create domain from /vz/one/datastores/0/50/deployment.0
***********************
The disk /vz/one/datastores/0/50/disk.0 is created as root:root.
So it seems the monitor cannot open it, hence the error.
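(A quick way to confirm that it is the ownership and not the image itself that is at fault; a sketch, run on cldwn07, assuming qemu-img is installed there:)

    ls -l /vz/one/datastores/0/50/disk.0          # who owns the image?
    sudo -u oneadmin qemu-img info /vz/one/datastores/0/50/disk.0
    # If qemu-img reads it fine as oneadmin but the deploy still fails,
    # the problem is on the libvirt/qemu side rather than the file.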
What is actually unclear now is why it is created as root.
In the hypervisor configuration I checked:

[root@vm158 ~]# grep -vE '^($|#)' /etc/libvirt/qemu.conf
user = "oneadmin"
group = "oneadmin"
dynamic_ownership = 0
Admittedly there is one caveat: as oneadmin I could not read that file.
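(Following Ruben's checklist, a sketch of the libvirt-side checks; command names assume a 2013-era RHEL/CentOS node, adjust to your init system:)

    # qemu.conf changes only take effect after a libvirt restart
    service libvirtd restart        # or: /etc/init.d/libvirtd restart
    # Check oneadmin's group memberships (expected groups vary by distro)
    id oneadmin
    # With dynamic_ownership = 0 libvirt will not chown images itself,
    # so the TM scripts must create disk.0 as oneadmin; as a one-off fix:
    chown oneadmin:oneadmin /vz/one/datastores/0/50/disk.0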
_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

--
Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013
--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | rsmontero@opennebula.org | @OpenNebula