[one-users] Front-End node

Carlos Martín Sánchez cmartin at opennebula.org
Wed Dec 18 03:54:27 PST 2013


Hi,

On Tue, Dec 17, 2013 at 3:27 PM, Giancarlo De Filippis <gdefilippis at ltbl.it> wrote:
>
> Is it possible to have a doc on installing the front-end as a VM on a host
> (already in a cluster)?
>
The installation of OpenNebula itself will be the same for a physical and a
virtual machine.

One thing to keep in mind is that OpenNebula assumes full control of the
hosts. This means the scheduler will consider all of a host's memory and
CPU available to OpenNebula VMs, so you may end up overcommitting the host
that runs your front-end VM. This may or may not be a problem for you; if
it is, you can tune the scheduler configuration (hypervisor_mem) [1].
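For reference, a sketch of what that tuning could look like in the scheduler configuration file (/etc/one/sched.conf; see the guide linked in [1]). The 0.2 value here is purely illustrative -- size it to the memory your front-end VM actually consumes:

```
# /etc/one/sched.conf (fragment) -- illustrative values only.
# HYPERVISOR_MEM reserves a fraction of each host's total memory for
# the hypervisor itself; the scheduler will not allocate it to VMs.
# With 0.2, only 80% of a host's memory is considered schedulable,
# leaving headroom for a front-end VM running on that host.
HYPERVISOR_MEM = 0.2
```

Restart the scheduler after changing the file so the new value takes effect.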

> And... can OpenNebula manage itself (manage the VM installed on the node)?
>
No, OpenNebula will report it as a "wild VM" inside the host. The VM will
not be destroyed or managed in any way; it will simply be ignored.

Regards

[1]
http://docs.opennebula.org/stable/administration/references/schg.html#configuration
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmartin at opennebula.org |
@OpenNebula <http://twitter.com/opennebula>


On Tue, Dec 17, 2013 at 3:27 PM, Giancarlo De Filippis
<gdefilippis at ltbl.it> wrote:

>  Thanks very much for the "rapid" answer.
>
> Is it possible to have a doc on installing the front-end as a VM on a host
> (already in a cluster)?
>
> And... can OpenNebula manage itself (manage the VM installed on the node)?
>
> We have a shared filesystem with glusterfs
>
> ... sorry for my english!! ;)
>
>
>
> On 17/12/2013 15:10, Steven Timm wrote:
>
> OpenNebula Frontend in a VM: it can be done, but how well it performs
> depends on what data store you are using. We do it in our development
> OpenNebula cloud but not
> in production yet because we are using a shared image store with
> GFS2 and CLVM and it is difficult with our current situation to
> get a VM to be a member of a cluster like that, particularly since
> our fibre channel host bus adapters don't support pass-through.
> We are exploring other options for our data store as we migrate to
> OpenNebula 4 series so we can make it work more smoothly.
>
> With NFS data store or ssh-based data store I expect you would not
> have any problems.
>
> So right now in the meantime our high availability OpenNebula head nodes
> run on bare metal, in an active-passive configuration,
> using Red Hat Clustering to manage the high availability of the service.
> HA is not perfect--if one head node goes down while a VM operation is in
> progress, the other one will not pick it up, but it's good enough for
> most things.
>
> There was a High Availability guide recently published.
>
> http://opennebula.org/documentation:archives:rel4.2:oneha
>
> We were doing most of this stuff in version 3 before it was formalized in
> the guide.
>
> Steve Timm
>
>
> On Tue, 17 Dec 2013, Giancarlo De Filippis wrote:
>
> Hi all, i'd know if the front-end node can be installed in a VM (in HA
> mode) onto a compute-node? If yes .... there is some docs? Thanks to
> all.....
>
> ------------------------------------------------------------------
> Steven C. Timm, Ph.D | (630) 840-8525 | timm at fnal.gov | http://home.fnal.gov/~timm/
> Fermilab Scientific Computing Division, Scientific Computing Services Quad.
> Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing
>
>
> --
>
> *Giancarlo De Filippis*
> LTBL S.r.L.
> Cell. +39 320 8155325
> Uff.  +39 02 89604424
> Fax  +39 02 89954500
> __________________
>
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>

