Javier

Yes, we are using KVM and OpenNebula 1.4.

We have had this problem for a long time, and we were doing all kinds of validations ourselves before submitting the request to OpenNebula (there should be enough memory in the cloud to cover the requested memory, and there should be at least one host whose free memory is greater than the requested memory). We had to do this because OpenNebula would otherwise schedule to an arbitrary host based on its existing logic.

So in the end we concluded that we needed to make OpenNebula aware of the memory allocated to the running VMs on each host, and started this discussion.

Thanks for taking up this issue as a priority. We appreciate it.

Shashank came up with the patch to kvm.rb below. Please take a look and let us know if it will work until we get a permanent solution.
====================================================================================

$mem_allocated_for_running_vms = 0

# Sum the "Max memory" value of every running domain reported by virsh.
running_vm_ids = `virsh list | grep running | tr -s ' ' ' ' | cut -f2 -d' '`.split("\n")

running_vm_ids.each do |i|
  $dominfo = `virsh dominfo #{i}`
  $dominfo.split(/\n/).each { |line|
    if line.match('^Max memory')
      $mem_allocated_for_running_vms += line.split(" ")[2].strip.to_i
    end
  }
end

# Memory (in KB) that we want to set aside for the hypervisor itself.
$mem_used_by_base_hypervisor = [some xyz kb that we want to set aside for hypervisor]

$free_memory = $total_memory.to_i - ( $mem_allocated_for_running_vms.to_i + $mem_used_by_base_hypervisor.to_i )

====================================================================================
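The idea is that the probe would then report this adjusted figure instead of the one derived from the free command. As a rough sketch of how the end of the probe might emit it (the KEY=VALUE lines follow the monitoring data format quoted below; the variable names are just the ones from the snippet above and are purely illustrative):

# Illustrative only: emit the adjusted value in the probe's KEY=VALUE output format.
puts "TOTALMEMORY=#{$total_memory}"
puts "FREEMEMORY=#{$free_memory}"
puts "USEDMEMORY=#{$total_memory.to_i - $free_memory}"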
Ranga

On Wed, Nov 3, 2010 at 2:16 PM, Javier Fontan <jfontan@gmail.com> wrote:
Hello,

Sorry for the delay in the response.

It looks like the problem is in how OpenNebula calculates the available memory.
For Xen >= 3.2 there is a reliable way to get the available memory: calling
"xm info" and reading the "max_free_memory" attribute. Unfortunately, for KVM
or Xen < 3.2 there is no such attribute. I suppose you are using KVM since you
mention the "free" command.
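(For reference, on Xen >= 3.2 the probe can do something along these lines. This is only a rough illustrative sketch, not the actual probe code; the regex and the assumption that the value is reported in MB may need adjusting for your Xen version:)

# Rough sketch: read max_free_memory from "xm info" on Xen >= 3.2
# (assuming the value is reported in MB).
xm_info = `xm info`
if xm_info =~ /^max_free_memory\s*:\s*(\d+)/
  max_free_memory_kb = $1.to_i * 1024
end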

I began analyzing the KVM IM probe that gets memory information, and there is
a problem in the way it gets total memory. Here is how it currently gets the
memory information:

TOTALMEMORY: runs "virsh info", which reports the real physical memory
installed in the machine
FREEMEMORY: runs the "free" command and takes the free column, without
buffers and cache
USEDMEMORY: runs the "top" command and takes the used memory from it (this
counts buffers and cache)

This is a big problem, as those values do not match one another (I don't
really know how I failed to see this before). Here is the monitoring data
from a host without VMs:

--8<------
TOTALMEMORY=8193988
USEDMEMORY=7819952
FREEMEMORY=7911924
------>8--
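(To make the mismatch explicit, here is a tiny arithmetic check using the figures above; it is illustrative only and the variable names are mine:)

# Illustrative only: the three figures above cannot all describe the same thing,
# since "used" (which includes buffers/cache) plus "free" exceeds the total.
total = 8193988   # TOTALMEMORY, from virsh
used  = 7819952   # USEDMEMORY, from top (includes buffers and cache)
free  = 7911924   # FREEMEMORY, from free (without buffers and cache)
puts "inconsistent: #{used + free} KB > #{total} KB" if used + free > total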

As you can see, it makes no sense at all. Even the TOTALMEMORY obtained from
"virsh info" is very misleading for oned, as the host Linux instance does not
have access to all of that memory (some is consumed by the hypervisor itself),
as can be seen by calling the free command:

--8<------
             total       used       free     shared    buffers     cached
Mem:       8193988    7819192     374796          0      64176    7473992
------>8--

I am also copying this text into an issue to track this problem:
http://dev.opennebula.org/issues/388. It is marked to be solved for 2.0.1, but
the change will be compatible with 1.4, as it seems the only change needed is
in the IM probe.

I cannot offer you an immediate solution, but we'll try to come up with one as
soon as possible.

Bye

On Wed, Nov 3, 2010 at 7:08 PM, Rangababu Chakravarthula
<rbabu@hexagrid.com> wrote:
> Hello Javier
> Please let us know if you want us to provide more detailed information with
> examples?
>
> Ranga
>
> On Fri, Oct 29, 2010 at 9:46 AM, Rangababu Chakravarthula
> <rbabu@hexagrid.com> wrote:
>>
>> Javier
>>
>> We saw that VMs were being deployed to a host where the allocated
>> memory of all the VMs was higher than the available memory on the host.
>>
>> We think OpenNebula is executing the free command on the host to determine
>> whether there is any room, and since free always returns the memory actually
>> being consumed rather than the memory allocated, OpenNebula would push the
>> new jobs to the host.
>>
>> That's the reason we want OpenNebula to be aware of the memory allocated
>> to the VMs on the host.
>>
>> Ranga
>>
>> On Thu, Oct 28, 2010 at 2:02 PM, Javier Fontan <jfontan@gmail.com> wrote:
>>>
>>> Hello,
>>>
>>> Could you describe the problem you had? By default the scheduler will
>>> not overcommit CPU nor memory.
>>>
>>> Bye
>>>
>>> On Thu, Oct 28, 2010 at 4:50 AM, Shashank Rachamalla
>>> <shashank.rachamalla@hexagrid.com> wrote:
>>> > Hi
>>> >
>>> > We have a requirement wherein the scheduler should not allow memory
>>> > overcommitting while choosing a host for a new VM. In order to achieve
>>> > this, we have changed the way in which FREEMEMORY is calculated for
>>> > each host:
>>> >
>>> > FREE MEMORY = TOTAL MEMORY - [ sum of memory values allocated to VMs
>>> > which are currently running on the host ]
>>> >
>>> > Please let us know if the above approach is fine or if there is a
>>> > better way to accomplish the task. We are using OpenNebula 1.4.
>>> >
>>> > --
>>> > Regards,
>>> > Shashank Rachamalla
>>> >
>>> > _______________________________________________
>>> > Users mailing list
>>> > Users@lists.opennebula.org
>>> > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>> >
>>>
>>>
>>> --
>>> Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
>>> DSA Research Group: http://dsa-research.org
>>> Globus GridWay Metascheduler: http://www.GridWay.org
>>> OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
>>> _______________________________________________
>>> Users mailing list
>>> Users@lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>
>

--
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org