[one-users] CPU and Memory Overcommitment

Igor Laskovy igor.laskovy at gmail.com
Thu Sep 5 06:24:09 PDT 2013


Ok, thanks. And one more: is it true that if I lose the front-end host at the
same time as some hypervisor hosts, the VMs that died with those hosts will
not be restarted on the surviving hosts?


On Thu, Sep 5, 2013 at 1:34 PM, Carlos Martín Sánchez <
cmartin at opennebula.org> wrote:

> Hi,
>
> On Wed, Sep 4, 2013 at 12:03 PM, Igor Laskovy <igor.laskovy at gmail.com>
>  wrote:
>
>> One further question - how does the Fault Tolerance mechanism (via
>> HOST_HOOK) deal with these reservations? Does it reserve some "slots" for
>> VM recovery?
>>
>
> OpenNebula does not have any kind of reservation scheduling.
>
>
>> If not, and if I don't manually keep track of the available resources on
>> the hosts, my automatically recreated VMs may end up stuck in the pending
>> placement state.
>>
>
> Yes, that may happen. But you can still implement a reservation mechanism.
> I'm sure there are better alternatives, but this is a quick idea:
>
> - Disable half of your hosts.
> - For each enabled host, define a failover_host = <id> attribute in its
> template, pointing to one of the disabled hosts.
> - Modify the fault tolerance hook so that it also enables the defined
> failover host.
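The hook modification could be sketched roughly like this; a minimal sketch,
assuming the failover host id is stored in a FAILOVER_HOST template attribute
(the attribute name and the XML parsing here are illustrative assumptions,
not part of the stock OpenNebula hook):

```shell
#!/bin/sh
# Sketch of the lookup step for an extended host error hook.
# Assumption: each enabled host carries a FAILOVER_HOST attribute
# in its template, pointing at a disabled spare host.

# Pull the FAILOVER_HOST id out of a host's XML template.
extract_failover_id() {
    sed -n 's:.*<FAILOVER_HOST><!\[CDATA\[\([0-9]*\)\]\]></FAILOVER_HOST>.*:\1:p'
}

# In a real hook the XML would come from: onehost show -x "$FAILED_HOST_ID"
sample_xml='<HOST><TEMPLATE><FAILOVER_HOST><![CDATA[7]]></FAILOVER_HOST></TEMPLATE></HOST>'

failover_id=$(echo "$sample_xml" | extract_failover_id)

if [ -n "$failover_id" ]; then
    # onehost enable "$failover_id"   # the real action in the hook
    echo "would enable host $failover_id"
fi
```

In the real hook you would replace the sample XML with the output of
`onehost show -x` for the failed host, and run `onehost enable` instead of
the echo.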
>
> Another quick hack is to create a dummy VM for each VM that needs a
> guaranteed slot, using the requirements and current_vms features to deploy
> it on a host different from the original VM's. Then delete the dummy VM
> when the hook recreates the original VM...
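A dummy "slot holder" template along those lines might look like the
following sketch; the sizing values and the VM id 42 are placeholders, and
the scheduling attribute may be named REQUIREMENTS or SCHED_REQUIREMENTS
depending on your OpenNebula version:

```
# Dummy reservation VM, sized like the VM it protects (VM id 42 here).
NAME   = "reservation-for-vm-42"
CPU    = 1
MEMORY = 1024

# Keep the dummy off the host where the protected VM currently runs,
# so the reserved capacity sits on a different host.
SCHED_REQUIREMENTS = "CURRENT_VMS != 42"
```

The dummy holds CPU and memory capacity on some other host; when the hook
recreates the real VM, deleting the dummy frees exactly one matching slot.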
>
>
> Regards.
>
> --
> Join us at OpenNebulaConf2013 <http://opennebulaconf.com> in Berlin,
> 24-26 September, 2013
> --
> Carlos Martín, MSc
> Project Engineer
> OpenNebula - The Open-source Solution for Data Center Virtualization
> www.OpenNebula.org <http://www.opennebula.org/> | cmartin at opennebula.org
> | @OpenNebula <http://twitter.com/opennebula>
>
>
> On Wed, Sep 4, 2013 at 12:03 PM, Igor Laskovy <igor.laskovy at gmail.com> wrote:
>
>> Thank you for reply!
>>
>> One further question - how does the Fault Tolerance mechanism (via
>> HOST_HOOK) deal with these reservations? Does it reserve some "slots" for
>> VM recovery? If not, and if I don't manually keep track of the available
>> resources on the hosts, my automatically recreated VMs may end up stuck
>> in the pending placement state.
>>
>>
>> On Wed, Sep 4, 2013 at 12:23 PM, Carlos Martín Sánchez <
>> cmartin at opennebula.org> wrote:
>>
>>> Hi,
>>>
>>> On Tue, Sep 3, 2013 at 1:32 PM, Igor Laskovy <igor.laskovy at gmail.com> wrote:
>>>
>>>> Hello all!
>>>>
>>>> I found that this has already been discussed recently -
>>>> http://comments.gmane.org/gmane.comp.distributed.opennebula.user/10568
>>>>
>>>> As I understand it, for now I can forget about memory over-commitment
>>>> in production ;)
>>>>
>>>> For CPU over-commitment I can only use CPU & VCPU attributes, right?
>>>>
>>>
>>> That's right.
>>>
>>>
>>>> If I set CPU to 0.2, for example, will the host only reserve processor
>>>> time for that VM, or will it also enforce a limit, so that the VM is
>>>> limited to 1/5 of one physical/logical hardware core?
>>>>
>>>
>>> We enforce the reserved CPU at the hypervisor level: cgroups for kvm,
>>> the credit scheduler for xen, and the esx cpu scheduler for vmware.
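For kvm, this amounts to translating the CPU attribute into a proportional
cpu.shares weight for the VM's cgroup. A minimal sketch of that mapping,
assuming the commonly documented base of 1024 shares per full core (the
exact base your deployment uses may differ):

```shell
#!/bin/sh
# Sketch: map an OpenNebula CPU attribute to a cgroup cpu.shares weight.
# Assumption: shares are proportional to CPU with a base of 1024 per core,
# the usual cgroups convention; verify against your own setup.

cpu_to_shares() {
    # $1 is the CPU attribute, e.g. 0.2
    awk -v cpu="$1" 'BEGIN { printf "%d\n", cpu * 1024 }'
}

cpu_to_shares 0.2   # a VM with CPU = 0.2 gets ~1/5 the weight of CPU = 1.0
cpu_to_shares 1.0
```

Note that cpu.shares is a relative weight, not a hard cap: the CPU = 0.2 VM
is only held to roughly 1/5 of a core when the host's CPUs are contended;
if the host is otherwise idle, it can use more.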
>>>
>>> Regards
>>>
>>> --
>>> Join us at OpenNebulaConf2013 <http://opennebulaconf.com/> in Berlin,
>>> 24-26 September, 2013
>>> --
>>> Carlos Martín, MSc
>>> Project Engineer
>>> OpenNebula - The Open-source Solution for Data Center Virtualization
>>> www.OpenNebula.org <http://www.opennebula.org/> | cmartin at opennebula.org
>>>  | @OpenNebula <http://twitter.com/opennebula>
>>>
>>
>>
>>
>> --
>> Igor Laskovy
>> facebook.com/igor.laskovy
>> studiogrizzly.com
>>
>
>


-- 
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com