[one-users] Using Requirements and Rank to create only one guest of type X per host
Karl Katzke
kkatzke at sentryds.com
Thu Oct 27 14:10:03 PDT 2011
Howdy! We’re currently implementing OpenNebula as a test to see if it fits our infrastructure needs.
We have a specific requirement for some guests to have direct access to a RAID 0 array on a host, and we obviously want only one of these guests running on each host. (Yes, RAID 0: the work they're doing needs fast disk access with bulletproof locking, but the results get shipped elsewhere when a job finishes, which makes it an ideal fit for a cloud.) I can't tell from the Requirements/Rank manual page how to express a requirement that this disk space is not already in use by another VM.
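From the scheduler documentation I gather that host attributes such as RUNNING_VMS can be referenced in the REQUIREMENTS expression, so I imagine something like the following in the VM template would get partway there. I haven't verified the syntax, and the hostname pattern is just an example:

    # Hypothetical: only place this guest on an otherwise-empty "dd" host
    REQUIREMENTS = "HOSTNAME = \"dd*\" & RUNNING_VMS = 0"

But that says "no other VMs at all," not "no other VM of this type using the array," which is closer to what we actually need.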
Worse, some of our dom0 hosts have two of these resources to allocate, each of a slightly different type.
Note that the machines will all be named in a particular way, so I could, for example, add an exclusion that only selects a host named like "dd" where no "gdd" machine is running, but I'm not sure how to express the second half of that.
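If custom attributes in a host's template can be used in those expressions, I suppose I could tag each host with the number and type of arrays it carries and match on that. The attribute names below are pure invention on my part:

    # Hypothetical custom attributes added to a host's template
    # (e.g. via 'onehost update'); the names are made up:
    RAID0_ARRAYS = 2
    RAID0_TYPE   = "gdd"

    # ...and a matching requirement in the VM template:
    REQUIREMENTS = "RAID0_TYPE = \"gdd\""

That still doesn't express "and no 'gdd' guest is already running there," which is the part I'm stuck on.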
We also want those guests to start first when a host or the cluster comes back up. If I'm reading the documentation correctly, that would mean using the Rank expression somehow, but as I read it, Rank only influences host selection, not boot priority.
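The only workaround I can think of is a small boot-time script that brings these guests up before everything else. A rough sketch, with placeholder VM IDs:

    #!/bin/sh
    # Hypothetical startup ordering: resume the RAID 0-bound guests
    # first, then the rest. The IDs are examples.
    for vmid in 12 13 14; do
        onevm resume "$vmid"
    done
    # ...then resume the remaining guests

That feels like working around the scheduler rather than with it, though.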
Manually pinning a particular VM to a host would also work, but I haven't figured out how to do that within the scope of the "VM Template" scheme either.
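The two mechanisms I've spotted so far, with example names and IDs:

    # In the VM template: restrict the scheduler to a single host
    REQUIREMENTS = "HOSTNAME = \"dd01\""

    # Or bypass the scheduler from the CLI (IDs are examples):
    onevm deploy 42 7    # deploy VM 42 onto host 7

Neither obviously prevents a second guest of the same type from landing on that host later, though.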
Does anyone have any suggestions on how to implement this resource constraint in “the OpenNebula way”?
Last but not least, I'd like to have the SSH, LVM, and shared transfer managers all enabled. Do I simply uncomment all three TM_MAD sections in /etc/one/oned.conf, or do I need some other syntax? The manual is not clear on this.
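For reference, I'm assuming this means one TM_MAD block per driver, uncommented side by side, along the lines of the stock file:

    TM_MAD = [
        name       = "tm_ssh",
        executable = "one_tm",
        arguments  = "tm_ssh/tm_ssh.conf" ]

    TM_MAD = [
        name       = "tm_shared",
        executable = "one_tm",
        arguments  = "tm_shared/tm_shared.conf" ]

    TM_MAD = [
        name       = "tm_lvm",
        executable = "one_tm",
        arguments  = "tm_lvm/tm_lvm.conf" ]

Is that all there is to it, or do the drivers conflict with one another?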
Thanks,
Karl Katzke