[one-users] Private cloud with Front-end on Virtual Machine HA

Bart bart at pleh.info
Mon Nov 17 02:52:31 PST 2014


Hi Daniel,

Thanks for the insight.

So basically, to sum it up: there is currently no way to run the
OpenNebula management node (with all functionality inside one VM) on its
own virtualisation cluster (and thus have it manage itself along with the
rest of the cluster).

This would mean that you need two physical servers to create a proper
redundant (active/passive) setup?!

That's a shame, since we'd like to keep hardware to a minimum for this. But
if the best practice is to have two physical servers, then I guess we'll
have to live with that and just make it happen.

Although I'm still hoping for a better solution. So I'd be very interested
in hearing how others have created their redundant OpenNebula management
node: is 2x server hardware "the best practice", or are there other
solutions (with less hardware) to achieve this?
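
For reference, here is a minimal sketch (in crmsh terms) of the kind of
two-node active/passive setup I have in mind, assuming a floating IP and an
"opennebula" init script; the address, resource names and service name are
placeholders, not a tested configuration:

#+begin_src sh
# Minimal active/passive sketch for two physical servers, using crmsh.
# Assumes /var/lib/one and the OpenNebula database live on storage (or
# replication) visible to both nodes; IP and names are placeholders.
crm configure primitive ONE-IP ocf:heartbeat:IPaddr2 \
        params ip="192.0.2.10" cidr_netmask="24" \
        op monitor interval="30s"
crm configure primitive ONE-Daemon lsb:opennebula \
        op monitor interval="60s"
crm configure group ONE-Stack ONE-IP ONE-Daemon
#+end_src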



2014-11-13 12:33 GMT+01:00 Daniel Dehennin <daniel.dehennin at baby-gnu.org>:

> Giancarlo De Filippis <gdefilippis at ltbl.it> writes:
>
> > I hope (like you) that someone (users or OpenNebula Team) have best
> > practices on how to run OpenNebula in a VM.
>
> Hello,
>
> The ONE frontend VM cannot manage itself; you must use something else.
>
> I made a test with pacemaker/corosync and it can be quite easy[1]:
>
> #+begin_src conf
> primitive Stonith-ONE-Frontend stonith:external/libvirt \
>         params hostlist="one-frontend" hypervisor_uri="qemu:///system" \
>         pcmk_host_list="one-frontend" pcmk_host_check="static-list" \
>         op monitor interval="30m"
> primitive ONE-Frontend-VM ocf:heartbeat:VirtualDomain \
>         params config="/var/lib/one/datastores/one/one.xml" \
>         op start interval="0" timeout="90" \
>         op stop interval="0" timeout="100" \
>         utilization cpu="1" hv_memory="1024"
> group ONE-Frontend Stonith-ONE-Frontend ONE-Frontend-VM
> location ONE-Frontend-run-on-hypervisor ONE-Frontend \
>         rule $id="ONE-Frontend-run-on-hypervisor-rule" 40: #uname eq nebula1 \
>         rule $id="ONE-Frontend-run-on-hypervisor-rule-0" 30: #uname eq nebula3 \
>         rule $id="ONE-Frontend-run-on-hypervisor-rule-1" 20: #uname eq nebula2
> #+end_src
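>
> A snippet like that can typically be applied with the crm shell, for
> example by saving it to a file and loading it (the file name below is
> arbitrary):
>
> #+begin_src sh
> # Load the configuration above into the live CIB and check the result.
> crm configure load update one-frontend.crm
> crm configure show ONE-Frontend   # verify the group exists
> crm_mon -1r                       # one-shot status including inactive resources
> #+end_src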
>
> I have trouble with my cluster because my nodes _and_ the ONE frontend
> need to access the same SAN.
>
> My nodes have two LUNs over multipath FC (/dev/mapper/SAN-FS{1,2}); both
> are PVs of a clustered volume group (cLVM) with a GFS2 on top.
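>
> Roughly, such a stack is built along these lines (the VG/LV names, the
> cluster name and the journal count below are made-up example values, not
> my actual ones):
>
> #+begin_src sh
> # Both LUNs become PVs of one clustered VG, with GFS2 on an LV on top.
> pvcreate /dev/mapper/SAN-FS1 /dev/mapper/SAN-FS2
> vgcreate --clustered y vg_san /dev/mapper/SAN-FS1 /dev/mapper/SAN-FS2
> lvcreate -n datastores -l 100%FREE vg_san
> mkfs.gfs2 -p lock_dlm -t mycluster:datastores -j 4 /dev/vg_san/datastores
> #+end_src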
>
> So I need (see the sketch after this list):
>
> - corosync for messaging
> - dlm for cLVM and GFS2
> - cLVM
> - GFS2
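>
> A hedged sketch of that stack as cloned pacemaker resources (resource
> agent names vary between distributions, e.g. clvmd may be ocf:lvm2:clvmd
> instead of ocf:heartbeat:clvm; the device and mount point are the example
> values from above):
>
> #+begin_src sh
> # Run the DLM, clvmd and the GFS2 mount on every node as one cloned group.
> crm configure primitive p-dlm ocf:pacemaker:controld \
>         op monitor interval="60s"
> crm configure primitive p-clvm ocf:heartbeat:clvm \
>         op monitor interval="60s"
> crm configure primitive p-gfs2 ocf:heartbeat:Filesystem \
>         params device="/dev/vg_san/datastores" \
>                directory="/var/lib/one/datastores" fstype="gfs2" \
>         op monitor interval="60s"
> crm configure group g-storage p-dlm p-clvm p-gfs2
> crm configure clone cl-storage g-storage meta interleave="true"
> #+end_src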
>
> I added the LUNs as raw block disks to my frontend VM and installed the
> whole stack in it, but I'm facing some communication issues and managed
> to solve some of them[3].
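>
> For reference, passing the LUNs through as raw block devices can be done
> along these lines (the vdb/vdc target names are examples):
>
> #+begin_src sh
> # Attach both multipath LUNs to the frontend domain as raw block devices;
> # --persistent also writes them into the domain definition.
> virsh attach-disk one-frontend /dev/mapper/SAN-FS1 vdb --persistent
> virsh attach-disk one-frontend /dev/mapper/SAN-FS2 vdc --persistent
> #+end_src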
>
> According to the pacemaker mailing list, having the nodes _and_ a VM in
> the same pacemaker/corosync cluster “sounds like a recipe for
> disaster”[2].
>
> Hope this helps you get a picture of the topic.
>
> Regards.
>
> Footnotes:
> [1]
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-crmsh/html-single/Clusters_from_Scratch/index.html
>
> [2]
> http://oss.clusterlabs.org/pipermail/pacemaker/2014-November/023000.html
>
> [3]
> http://oss.clusterlabs.org/pipermail/pacemaker/2014-November/022964.html
>
> --
> Daniel Dehennin
> Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
> Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>


-- 
Bart G.