[one-users] Sharing compute nodes with other applications (i.e. HPC)

Carlos Martín Sánchez cmartin at opennebula.org
Fri Aug 2 02:24:15 PDT 2013


Hi,

The monitoring probes report the total CPU and memory of the host, and
OpenNebula then assumes all of it is available for VMs. You can make static
adjustments in the scheduler configuration [1], or you could look into the
monitoring scripts and modify them to suit your needs [2].
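For example, as a rough sketch only (please check the exact parameter names
for 4.2 against [1]), sched.conf lets you keep a fraction of each host's
memory out of the scheduler's reach, so it stays free for the hypervisor and
for non-VM jobs:

    # /etc/one/sched.conf (illustrative values)
    SCHED_INTERVAL = 30
    # Fraction of total host MEMORY the scheduler will never hand out to VMs,
    # e.g. keep 40% aside for the hypervisor and the HPC workload
    HYPERVISOR_MEM = 0.4

This is a static reservation, so it will not follow the actual HPC load; for
that you would have to adapt the monitoring probes [2] so they only report
the capacity you really want to offer to VMs.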

A simpler approach would be to disable the hosts [3] when you plan to use
them for other jobs.
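Something along these lines, with a placeholder host name:

    $ onehost disable node01   # scheduler stops placing new VMs on node01
    ... run the HPC jobs ...
    $ onehost enable node01    # node01 is schedulable for VMs again

Note that disabling a host only prevents new deployments; VMs already running
there keep running unless you migrate them away, e.g. with onehost flush as
described in [3].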

Regards

[1] http://opennebula.org/documentation:rel4.2:schg#configuration
[2] http://opennebula.org/documentation:rel4.2:img
[3]
http://opennebula.org/documentation:rel4.2:host_guide#enable_disable_and_flush

--
Join us at OpenNebulaConf2013 <http://opennebulaconf.com> in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmartin at opennebula.org |
@OpenNebula <http://twitter.com/opennebula>


On Thu, Aug 1, 2013 at 5:59 PM, Dmitri Chebotarov <dchebota at gmu.edu> wrote:

>  Hi
>
>  I've noticed that ONED continuously monitors compute nodes for available
> resources (CPU/MEM).
> So I had this idea of sharing compute nodes between an HPC cluster and
> OpenNebula.
> The HPC cluster is not necessarily loaded at 100% all the time and may have
> spare resources to host VMs.
> If I added those nodes to ONE, do you think sharing resources would work?
> The idea is that when the HPC cluster assigns jobs to a compute/ONE node,
> the scheduler will monitor the node, "see" that it doesn't have CPU/MEM
> resources available, and won't use it for new VMs....
>
>
>  Thanks.
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>