Hi Carlos,

In your case the actual script used to do the cp is /var/lib/one/remotes/tm/shared/clone. Look for:

    ssh_exec_and_log $DST_HOST \
        "cd $DST_DIR; cp -r $SRC_PATH $DST_PATH" \
        "Error copying $SRC to $DST"

This is the command executed at the remote node to cp the file. Just prefix the cp itself with ionice, so the copy runs in the idle I/O class:

    "cd $DST_DIR; ionice -c 3 cp -r $SRC_PATH $DST_PATH"
Note that:

1.- Any modification to the script must be distributed to the hosts with onehost sync (you need to wait for a monitoring cycle); see the example after this list.

2.- If OpenNebula is reinstalled, it will overwrite your changes.

3.- The command is executed as oneadmin, in case you need to set up sudo...
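For example, the sync step from note 1 is just (run as oneadmin on the front-end):

    onehost sync    # marks /var/lib/one/remotes (including the edited clone script)
                    # for update; it is copied to the hosts on the next monitoring cycle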
Cheers

Ruben


On Wed, Nov 14, 2012 at 11:17 AM, Carlos Jiménez <cjimenez@eneotecnologia.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
Thanks Ruben.

I've realised that the lower performance occurs each time a new VM is in PROLOG status or when I create/clone an image, so I guess it is all related to disk access. As a test, I cloned an image and ran iotop on the server where OpenNebula and the NFS server are running.

This is a sample output:
    PID  PRIO  USER      DISK READ     DISK WRITE    SWAPIN   IO>      COMMAND
    3238 be/7  oneadmin  1783.88 K/s   1783.88 K/s   0.00 %   93.98 %  cp -f /var/lib/one/datastores/1/9fedfe0d5cb02961~ne/datastores/1/72da3ba9d86fdc573b944c03253561ae

Sometimes there is another process (flush-147:0) with significant I/O usage:

    PID  PRIO  USER      DISK READ     DISK WRITE    SWAPIN   IO>      COMMAND
    8028 be/4  root      1358.87 B/s   7.96 K/s      0.00 %   4.15 %   [flush-147:0]

How could I apply "ionice -c 3" to that cp command? I would like to do it in a global and persistent way, so that future cp operations (due to the creation or cloning of a VM or an image) run with a lower I/O priority, not just this once.
Perhaps I should set ionice in the driver configuration by "exporting" a new global variable in /etc/one/defaultrc, in the same way as the priority, shouldn't I? But how do I configure ionice to interact with the proper driver?

Thanks for your help.

Carlos.

On 11/13/2012 11:51 AM, Ruben S. Montero wrote:

Hi

OpenNebula drivers are the pieces of software that deal with the underlying subsystems. Each driver starts a new thread/process for each operation, and that process should inherit the driver's priority.

If you take a look at /etc/one/defaultrc, you can change the CPU priority assigned to the drivers. It defaults to 19, the least favorable to the process. You may want to try setting a different I/O scheduling class with ionice, or the blkio controller of cgroups, or any other tool to adjust the I/O priority of a process.
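For example, the driver priority is the PRIORITY variable in /etc/one/defaultrc (assuming the stock 3.x file; check your install), and ionice can wrap any command to run it in the idle I/O class:

    # /etc/one/defaultrc (excerpt) -- nice value inherited by the driver processes
    PRIORITY=19

    # ionice example: run a copy in the idle I/O scheduling class, so it only
    # gets disk time when no other process needs it (paths are just placeholders)
    ionice -c 3 cp -r /var/lib/one/datastores/1/source /var/lib/one/datastores/1/copy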

Cheers

Ruben


On Mon, Nov 12, 2012 at 5:59 PM, Carlos Jiménez <cjimenez@eneotecnologia.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi all,<br>
<br>
I have a server running OpenNebula 3.8 and acting as NFS
server for storaging with another host (running KVM and
acting as a NFS client for the storage). I have one VM
running and then I try to create another VM using Sunstone.
Then, the running VM reduces its performance while the
creation of the new VM takes places. I guess OpenNebula I/O
processes on the shared disk have better
priority/nice/ionice than disk access of the already running
VM.<br>
The question is: Is there any way to control it so running
VMs don't decrease their performance? How do you reduce
priority/nice/ionice of the creation of the new VMs?<br>
<br>
Thanks in advance,<br>
<br>
Carlos.<br>
<br clear="all"><span class="HOEnZb"><font color="#888888">
<div><br>
</div>
-- <br>
Ruben S. Montero, PhD<br>
Project co-Lead and Chief Architect<br>
OpenNebula - The Open Source Solution for Data Center
Virtualization<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a> | @OpenNebula<br>
</font></span></div>
</blockquote>
<br>
</div>

_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

-- 
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | rsmontero@opennebula.org | @OpenNebula