Hi,<br><br>I am proposing an active-active mode in which the proxy acts as a load balancer and grants an authority token to a selected oned daemon.<br>Each oned supports a dry run, i.e. it performs all the operations, such as updating its cache, but committing an operation on a synchronized resource is restricted by the token granted by the proxy.<br>
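A minimal sketch of this token-granting scheme (the `OnedProxy` and `Oned` classes and their methods are purely illustrative names, not OpenNebula code):

```python
# Hypothetical sketch of the proposed active-active token scheme;
# OnedProxy, Oned and their methods are illustrative, not OpenNebula code.

class Oned:
    """Stand-in for an oned daemon with an in-memory cache."""
    def __init__(self, name):
        self.name, self.alive = name, True
        self.cache = {}

    def handle(self, request, commit):
        # Dry run: every daemon updates its cache for every request.
        self.cache[request["id"]] = request
        # Only the daemon holding the authority token commits.
        return request["id"] if commit else None


class OnedProxy:
    """Load balancer that grants the commit token to one daemon."""
    def __init__(self, daemons):
        self.daemons = daemons
        self.token_holder = daemons[0]

    def dispatch(self, request):
        # All daemons see the request (keeping caches in sync);
        # only the token holder actually commits it.
        return [d.handle(request, commit=(d is self.token_holder))
                for d in self.daemons]

    def failover(self):
        # Hand the token to the first live daemon; its cache is
        # already warm because every daemon dry-ran every request.
        self.token_holder = next(d for d in self.daemons if d.alive)
```

With two daemons, a dispatched request updates both caches but is committed only by the token holder; after a failover the token simply moves to the surviving daemon.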
This keeps the cache in sync across all the oned daemons, and the proxy retains control for both failover and load balancing.<br><br>Regards,<br>Mani.<br><br><div class="gmail_quote">On Thu, Feb 17, 2011 at 5:28 PM, Danny Sternkopf <span dir="ltr"><<a href="mailto:danny.sternkopf@csc.fi">danny.sternkopf@csc.fi</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">Hi,<br>
<br>
that would be an active-passive oned configuration. The 2nd oned only jumps in when needed, so it might even be started only once the 1st oned has failed. HA software could manage that, assuming the oned config directory is shared. The proxy makes sure that there is a single point of access, but this could also be done by the HA management. What could go wrong if oned dies and another oned on a different machine takes over? Is there any possibility that information is lost because the 1st oned's cache is gone?<br>
<br>
Active-active would probably only make sense if oned had integrated support for a redundancy mode, so that both oneds could exchange heartbeats and negotiate which one is the master taking care of the running configuration.<br>
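A toy sketch of what such a heartbeat/negotiation mode could look like (the `RedundantOned` class, the timeout value, and the tie-breaking rule are all assumptions for illustration, not anything oned provides):

```python
# Illustrative only: each oned tracks the peer's last heartbeat and
# claims mastership when the peer goes silent. RedundantOned and the
# timeout are hypothetical, not part of oned.

import time

HEARTBEAT_TIMEOUT = 5.0  # seconds without a peer heartbeat before takeover


class RedundantOned:
    def __init__(self, node_id, peer_id):
        self.node_id, self.peer_id = node_id, peer_id
        self.last_peer_beat = time.monotonic()
        # Lower node id wins the initial negotiation deterministically.
        self.is_master = node_id < peer_id

    def on_heartbeat(self, now=None):
        # Record that the peer is alive.
        self.last_peer_beat = now if now is not None else time.monotonic()

    def check_peer(self, now=None):
        # Take over as master if the peer has been silent too long.
        now = now if now is not None else time.monotonic()
        if now - self.last_peer_beat > HEARTBEAT_TIMEOUT:
            self.is_master = True
        return self.is_master
```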
<br>
Regards,<br><font color="#888888">
<br>
Danny</font><div><div></div><div class="h5"><br>
<br>
On 2011-02-17 13:00, Tino Vazquez wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Hi again,<br>
<br>
To add up to my previous email, it is worth noting that there is another<br>
option that would avoid fiddling with the cache. For this, both<br>
oneds have to be active. One way to go could be:<br>
<br>
1) Set oned in two machines<br>
2) CLI, EC2 tools connect via a proxy that forwards the requests to<br>
the first daemon<br>
3) If this fails, the proxy should start forwarding to the second.<br>
Also, a coherence check for VMs in intermediate states needs to be<br>
in place to avoid missed driver callbacks.<br>
4) The scheduler should be on a third, separate machine<br>
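Steps 2) and 3) could be sketched roughly like this (`FailoverProxy` and its interface are hypothetical; a real proxy would forward oned's XML-RPC calls and also run the coherence check mentioned above):

```python
# Hypothetical sketch of the forwarding proxy in steps 2)-3).
# The backends are stand-ins for the two oned endpoints.

class FailoverProxy:
    def __init__(self, primary, secondary):
        self.backends = [primary, secondary]
        self.active = 0  # index of the oned currently in use

    def forward(self, request):
        try:
            return self.backends[self.active](request)
        except ConnectionError:
            # Primary failed: switch to the secondary and retry.
            # A real deployment would also re-check VMs left in
            # intermediate states to avoid missed driver callbacks.
            self.active = 1 - self.active
            return self.backends[self.active](request)
```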
<br>
Regards,<br>
<br>
-Tino<br>
<br>
--<br>
Constantino Vázquez Blanco, MSc<br>
OpenNebula Major Contributor / Cloud Researcher<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | @tinova79<br>
<br>
<br>
<br>
On Wed, Feb 16, 2011 at 4:06 PM, Tino Vazquez<<a href="mailto:tinova@opennebula.org" target="_blank">tinova@opennebula.org</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Hi Steven,<br>
<br>
There may be incoherences between the two ONEs. Because of the cache (which<br>
can be disabled in ONE, with a performance penalty), two ONEs can hold<br>
the same VM record in memory, so if one instance of ONE writes<br>
to the DB, those changes won't be reflected in the other ONE until it<br>
refreshes its cache, or worse still, the second instance of ONE may<br>
overwrite the changes. I am by no means saying this is not achievable,<br>
but there are several things (like the one in this email) to consider.<br>
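A toy illustration of that overwrite scenario (plain dictionaries standing in for the ONE caches and the shared DB; not OpenNebula code):

```python
# Toy model of the lost-update problem: two daemons each cache a VM
# record, one writes through to the DB, and the other later flushes
# its stale copy, clobbering the change. Not OpenNebula code.

db = {"vm-42": {"state": "RUNNING"}}

cache_one = {"vm-42": dict(db["vm-42"])}  # daemon 1's in-memory copy
cache_two = {"vm-42": dict(db["vm-42"])}  # daemon 2's in-memory copy

# Daemon 1 updates the VM and writes through to the database.
cache_one["vm-42"]["state"] = "SHUTDOWN"
db["vm-42"] = dict(cache_one["vm-42"])

# Daemon 2, unaware of the change, still holds the old record...
stale_state = cache_two["vm-42"]["state"]

# ...and when it flushes its cache it overwrites daemon 1's update.
db["vm-42"] = dict(cache_two["vm-42"])
```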
<br>
We have been thinking of a setup like the one you propose, and<br>
we would love to hear how it works in practice, as it is<br>
theoretically possible but we haven't got around to trying it out.<br>
<br>
Regards,<br>
<br>
-Tino<br>
<br>
--<br>
Constantino Vázquez Blanco, MSc<br>
OpenNebula Major Contributor / Cloud Researcher<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | @tinova79<br>
<br>
<br>
<br>
On Wed, Feb 16, 2011 at 3:59 PM, Steven Timm<<a href="mailto:timm@fnal.gov" target="_blank">timm@fnal.gov</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Tino--are you saying that there is state information in the oned<br>
that is not on disk at any given time?<br>
We were thinking of setting up an active-passive failover<br>
of our oned via heartbeat and DRBD. Is there any reason<br>
why that might not work?<br>
<br>
Steve Timm<br>
<br>
<br>
On Wed, 16 Feb 2011, Tino Vazquez wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Hi Luis,<br>
<br>
That setup is not easily achievable. Operations are not transactional,<br>
and ONE also keeps a cache, so the information in multiple ONEs won't<br>
stay in sync.<br>
<br>
It can be achieved, but not out of the box; a fair amount of fiddling<br>
is involved.<br>
<br>
Regards,<br>
<br>
-Tino<br>
<br>
--<br>
Constantino Vázquez Blanco, MSc<br>
OpenNebula Major Contributor / Cloud Researcher<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | @tinova79<br>
<br>
<br>
<br>
On Mon, Jan 31, 2011 at 6:15 PM, Luis M. Carril<<a href="mailto:lmcarril@cesga.es" target="_blank">lmcarril@cesga.es</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<br>
Hello,<br>
We have an OpenNebula installation and we wanted to deploy another ONE<br>
server for redundancy, monitoring the same hosts and VMs. Could this be<br>
achieved if both ONE installations use the same MySQL database? Are all<br>
the operations transactional?<br>
<br>
Cheers<br>
<br>
--<br>
Luis M. Carril<br>
Project Technician<br>
Galicia Supercomputing Center (CESGA)<br>
Avda. de Vigo s/n<br>
15706 Santiago de Compostela<br>
SPAIN<br>
<br>
Tel: 34-981569810 ext 249<br>
<a href="mailto:lmcarril@cesga.es" target="_blank">lmcarril@cesga.es</a><br>
<a href="http://www.cesga.es" target="_blank">www.cesga.es</a><br>
<br>
<br>
==================================================================<br>
<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org" target="_blank">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
<br>
</blockquote>
<br>
</blockquote>
<br>
--<br>
------------------------------------------------------------------<br>
Steven C. Timm, Ph.D (630) 840-8525<br>
<a href="mailto:timm@fnal.gov" target="_blank">timm@fnal.gov</a> <a href="http://home.fnal.gov/%7Etimm/" target="_blank">http://home.fnal.gov/~timm/</a><br>
Fermilab Computing Division, Scientific Computing Facilities,<br>
Grid Facilities Department, FermiGrid Services Group, Group Leader.<br>
Lead of FermiCloud project.<br>
<br>
<br>
</blockquote>
<br>
</blockquote>
</blockquote>
<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>K Manikanta Swamy<br>+919059014442<br>