[one-users] Opennebula response time increasing
Carlos Martín Sánchez
cmartin at opennebula.org
Fri Sep 21 03:28:56 PDT 2012
Hi Christoph,
I'm sorry, but I've run out of ideas. The only thing left I can do is
replicate your complete setup and inspect it with gdb when I find the time.
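For reference, what I have in mind is something like the following (a sketch only; it assumes gdb is installed and that you are allowed to attach to the oned process, e.g. as root or as the oneadmin user that owns it):

```shell
# Dump backtraces of all oned threads to see where requests get stuck.
# Guarded so the sketch is harmless when oned is not running.
ONED_PID=$(pgrep -x oned | head -n 1)
if [ -n "$ONED_PID" ]; then
    gdb -p "$ONED_PID" -batch -ex "thread apply all bt" > oned-backtrace.txt
    echo "backtrace written to oned-backtrace.txt"
else
    echo "oned is not running"
fi
```

Running this during one of the 60+ second hangs should show which thread (database, monitoring, request manager) everything is waiting on.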
Could you please share your distribution, your OpenNebula installation mode
(from packages or source), and the versions of
- xmlrpc-c library
- ruby
- nokogiri ruby gem
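In the meantime, a quick check that may separate server-side slowness from client-side XML parsing: compare a normal listing against the raw XML output of the same pool (a sketch; it assumes the CLI is in the PATH and that your onevm build supports the --xml flag):

```shell
# Compare a full "onevm list" (xml-rpc call + client-side parsing and
# table formatting) against the raw XML dump of the same pool.
# Guarded so the sketch is a no-op where the CLI is not installed.
if command -v onevm >/dev/null 2>&1; then
    time onevm list > /dev/null        # call + parse + format
    time onevm list --xml > /dev/null  # mostly just the raw call
else
    echo "onevm CLI not found in PATH"
fi
```

If both runs are equally slow, the time is being spent in oned (or the database) rather than in the ruby OCA / nokogiri parsing on the client.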
Regards
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmartin at opennebula.org | @OpenNebula <http://twitter.com/opennebula>
On Wed, Sep 5, 2012 at 2:53 PM, Christoph Robbert
<chrobbert at googlemail.com>wrote:
> Hey,
>
> my oned.conf is very close to the default one. I only changed the DB
> backend, so the values around MANAGER_TIMER look like this:
>
> #MANAGER_TIMER = 30
>
> HOST_MONITORING_INTERVAL = 600
> #HOST_PER_INTERVAL = 15
> #HOST_MONITORING_EXPIRATION_TIME = 86400
>
> VM_POLLING_INTERVAL = 600
> #VM_PER_INTERVAL = 5
> #VM_MONITORING_EXPIRATION_TIME = 86400
>
> I assume that the default values are used, since those lines are commented
> out.
>
> I didn't enable any external authentication or authorization drivers by
> hand, and couldn't see any but the default ones in oned.conf.
> My user is oneadmin.
>
> Before using MySQL I used SQLite as the database. My first step to tackle
> the increasing response time was replacing SQLite with MySQL.
>
> I have attached the output of the MySQL command to this email. I executed
> it during a 400-second hang.
>
> Thanks for your help.
>
> Regards,
>
> Christoph Robbert
>
>
> On 05.09.2012 14:17, Carlos Martín Sánchez wrote:
>
> Hi,
>
> Let's try to rule out one thing at a time.
>
> Did you set any timer values in oned.conf that may overload OpenNebula?
> If the values of MANAGER_TIMER, HOST_MONITORING_INTERVAL and
> VM_POLLING_INTERVAL are too low, OpenNebula could choke.
>
> Do you have any external authentication or authorization drivers enabled
> in oned.conf? Are you using oneadmin to make the requests, or a regular user?
> A call out to external drivers for each request could be a possible
> cause...
>
> Is the communication with MySQL the problem? Next time you see
> OpenNebula slowing down, you could try to execute, from the front-end
> machine, the following:
>
> $ mysql -u oneadmin -poneadmin -h localhost -P 0 opennebula -e "SELECT
> body FROM vm_pool WHERE state<>6;"
>
>
> Thanks for your feedback
>
> --
> Carlos Martín, MSc
> Project Engineer
> OpenNebula - The Open-source Solution for Data Center Virtualization
> www.OpenNebula.org | cmartin at opennebula.org | @OpenNebula<http://twitter.com/opennebula>
>
>
>
> On Tue, Sep 4, 2012 at 1:25 PM, Christoph Robbert <
> chrobbert at googlemail.com> wrote:
>
>> Hello,
>>
>> I use MySQL as the database. I query OpenNebula from Python via pyoca [1],
>> but I observed the same effect using the command "onevm list".
>>
>> The effect also depends on the number of running VMs, but I run at most
>> 30 VMs. The effect starts at around 6 VMs. Usually the response time
>> increases to around one or two seconds, but then it suddenly responds
>> very slowly (>60 seconds) or doesn't answer at all.
>>
>> Creating a new VM also gets stuck (its response time likewise increases
>> to over 60 seconds).
>>
>> Sometimes the time increases to around 240 seconds for one call; the
>> next call then takes about one or two seconds again.
>>
>> I couldn't see an xml-rpc request in oned.log because my GUI waits
>> until the last xml-rpc request has finished.
>>
>> I profiled every part of my code with time measurements and traced the
>> delay down to the xml-rpc requests to OpenNebula.
>>
>> Hope this helps.
>>
>> Regards,
>>
>> Christoph Robbert
>>
>>
>>
>> [1] https://github.com/lukaszo/python-oca
>>
>>
>>
>> On 04.09.2012 12:59, Carlos Martín Sánchez wrote:
>>
>> Hi,
>>
>> Can you share some more information about your scenario? Are you using
>> SQLite or MySQL? MySQL can drastically improve performance over SQLite.
>>
>> How are you querying OpenNebula: are you using the CLI, or our Ruby/Java
>> OCA? The response time can be affected by the XML processing that the OCA
>> has to do. If you are using Ruby, it is crucial that you have the nokogiri
>> gem installed.
>>
>> Does the response time always increase over time, or is it related to
>> the number of existing VMs? If so, how many VMs does it take to make it
>> unresponsive?
>>
>> Can you still see the xml-rpc requests in oned.log each second?
>>
>> I'm trying to reproduce the problem with over 1,000 running VMs. I'm
>> doing a onevm create & shutdown every 5 seconds while checking the time
>> a onevm list takes each second, but can't see any response taking more
>> than one or two seconds.
>>
>> Regards
>> --
>> Carlos Martín, MSc
>> Project Engineer
>> OpenNebula - The Open-source Solution for Data Center Virtualization
>> www.OpenNebula.org | cmartin at opennebula.org | @OpenNebula<http://twitter.com/opennebula>
>>
>>
>>
>> On Tue, Aug 28, 2012 at 1:40 PM, Christoph Robbert <
>> chrobbert at googlemail.com> wrote:
>>
>>> Hello,
>>>
>>> I'm working on a project with OpenNebula 3.6 as the cloud controller. We
>>> start and stop VMs via xml-rpc roughly every 15 seconds. To monitor the
>>> actions in real time, I implemented a GUI which calls OpenNebula every
>>> second via xml-rpc. Now I notice a very large increase in response time
>>> after 10 minutes: it grows from roughly 1 second to 5 minutes. Sometimes
>>> I have to restart OpenNebula because the response time grows without
>>> bound.
>>> Could you give me a hint where I should start tracing the bottleneck in
>>> OpenNebula?
>>>
>>>
>>> Best Regards,
>>>
>>> Christoph Robbert
>>> _______________________________________________
>>> Users mailing list
>>> Users at lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>
>>
>>
>>
>
>