[one-users] cpu overcommit with kvm

Carlos A. caralla at upv.es
Mon May 9 05:52:22 PDT 2011


If you submit a template using the CPU parameter, the virtual machine 
is never deployed to a VMware host. If you use VCPU instead, it is 
deployed properly...
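
For example, with a minimal template along these lines (NAME and MEMORY 
are just placeholder values):

    NAME   = test-vm
    MEMORY = 512
    CPU    = 1       # scheduler reservation; never deploys on our VMware hosts

the VM never deploys, while replacing the CPU line with

    VCPU   = 1       # virtual CPUs passed to the hypervisor; deploys fine

works as expected.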


On 09/05/11 14:18, Tino Vazquez wrote:
> Hi Carlos,
>
> Which bug are you referring to? AFAIK, the CPU parameter is only used
> by the scheduler, and the VMware integration does honor the VCPU
> parameter, so overcommitment should work as expected.
>
> Regards,
>
> -Tino
>
> --
> Constantino Vázquez Blanco, MSc
> OpenNebula Major Contributor
> www.OpenNebula.org | @tinova79
>
>
>
>> On Wed, May 4, 2011 at 6:57 PM, Carlos A. <caralla at upv.es> wrote:
>> There was a bug where the CPU template parameter did not work for VMware,
>> so VCPU had to be used instead. As a result, I have no way to manage the
>> overcommitment of VMs on a host.
>>
>> Is this issue solved in the current version? (I have noticed that the
>> memory issues are supposed to be (partially) patched.)
>>
>> On 04/05/2011 18:39, Ruben S. Montero wrote:
>>
>> Hi,
>> You can guide the overcommitment by using the CPU attribute of the template.
>> For example, if you want to fit 16 VMs on nebula02, which has 8 cores, just
>> define the VMs with:
>>
>>    CPU = 0.5
>>
>> If you need those VMs to have 2 virtual cores, use:
>>
>>    CPU  = 0.5
>>    VCPU = 2
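>>
>> (To sketch the arithmetic: OpenNebula accounts host CPU in units of 100
>> per physical core, so an 8-core host shows TCPU = 800. Each VM defined
>> with CPU = 0.5 reserves 50 of that, and 800 / 50 = 16 VMs fit, while
>> VCPU = 2 still presents two cores to each guest.)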
>> Cheers
>> Ruben
>>
>> On Wed, May 4, 2011 at 4:32 PM, Giovanni Toraldo <gt at libersoft.it> wrote:
>>> Hi,
>>>
>>> I only just noticed that I've exhausted the available CPU resources on
>>> my OpenNebula hosts:
>>>
>>>>   ID NAME              CLUSTER  RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
>>>>    2 nebula01          default    2    400    369      0   11.8G   10.7G   on
>>>>    3 nebula02          default    4    800    792      0   11.8G    7.4G   on
>>>>    4 nebula03          default    4    800    796      0   11.8G    9.9G   on
>>>>    5 nebula04          default    4    800    774      0   11.8G   10.4G   on
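>>>
>>> (Reading nebula02 as an example: TCPU 800 means 8 cores at 100 apiece, and
>>> ACPU 0 means the 4 running VMs have reserved all of it, presumably at
>>> CPU = 2 each since 4 x 200 = 800, even though FCPU 792 shows the cores are
>>> almost idle.)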
>>> However, the CPU isn't really being used that much. Is there a way to let
>>> the scheduler allocate new VMs? I assumed that using RANK = FREEMEMORY in
>>> the VM template would solve this, but it does not.
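>>>
>>> (What I had tried was simply this line in the VM template:
>>>
>>>     RANK = FREEMEMORY
>>>
>>> but, as far as I can tell, RANK only orders the hosts that already pass
>>> the capacity check, i.e. those with enough ACPU for the requested CPU, so
>>> with ACPU = 0 everywhere it has nothing left to order.)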
>>>
>>> Any hints?
>>>
>>> --
>>> Giovanni Toraldo
>>> http://www.libersoft.it/
>>>
>>
>>
>> --
>> Dr. Ruben Santiago Montero
>> Associate Professor (Profesor Titular), Complutense University of Madrid
>>
>> URL: http://dsa-research.org/doku.php?id=people:ruben
>> Weblog: http://blog.dsa-research.org/?author=7
>>


-- 

Carlos de Alfonso Laguna
R&D Engineer
Tel. +34 963877007, ext. 88254
mailto: caralla at upv.es




