[one-dev] Scheduler decisions regarding storage

Ruben S. Montero rsmontero at opennebula.org
Thu Jun 12 02:56:16 PDT 2014


Hi Stuart

That code, as you suggest in your first email, is actually in the
scheduler. You are probably interested in the VirtualMachineXML class, especially:

Computation of the storage needed by the VM [1]:
  * It considers the DS and the specific mechanisms of the CLONE and LN
operations (i.e. whether the storage needed by CLONE and LN takes space
from the system DS or the image DS)
  * The updated version of the get_requirements function is also in that
class [2]
  * Available capacity of the DS is initialized here [3] (DatastoreXML
class)
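As a rough illustration of the accounting idea (hypothetical names and types, not the actual VirtualMachineXML code): for each disk, charge its size to either the system DS or the image DS depending on where the CLONE/LN operation places the data; a plain link that stays on shared storage consumes nothing extra:

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch only -- the real logic lives in
// VirtualMachineXML.cc [1]; these names are invented for the example.
enum class TransferMode { CLONE, LN };  // copy the image vs. link it in place

struct Disk
{
    long long    size_mb;        // size of the backing image
    TransferMode mode;           // how the TM driver makes it available
    bool         uses_system_ds; // does CLONE/LN consume system DS space?
};

// Split the VM's storage demand between the system DS and the
// image datastores, mirroring the bullet points above.
void vm_storage_requirements(const std::vector<Disk>& disks,
                             long long& system_ds_mb,
                             long long& image_ds_mb)
{
    system_ds_mb = 0;
    image_ds_mb  = 0;

    for (const Disk& d : disks)
    {
        if (d.mode == TransferMode::LN && !d.uses_system_ds)
        {
            continue; // a plain link consumes no extra space
        }

        if (d.uses_system_ds)
        {
            system_ds_mb += d.size_mb; // e.g. a clone made in the system DS
        }
        else
        {
            image_ds_mb += d.size_mb;  // e.g. a clone kept in the image DS
        }
    }
}
```

The point is that the same disk can be "free" or cost its full size depending on the DS drivers, which is why the computation has to look at both the DS type and the CLONE/LN semantics.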

The actual scheduling is done in two phases:

  * Datastores are filtered based on capacity (using previous functions)
and requirements [4]
  * System DS + host are jointly scheduled later; the DS part is at [5]
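The two phases can be sketched like this (hypothetical types; the real implementation is in Scheduler.cc, see [4] and [5]): first filter the datastores that can hold the VM, then pick a host and system DS together so both constraints are satisfied at once:

```cpp
#include <cassert>
#include <vector>

// Invented minimal types for illustration only.
struct Datastore { int id; long long free_mb; };
struct Host      { int id; int free_cpu;     };

// Phase 1: drop datastores that cannot hold the VM's storage.
std::vector<Datastore> filter_by_capacity(const std::vector<Datastore>& pool,
                                          long long needed_mb)
{
    std::vector<Datastore> out;
    for (const Datastore& ds : pool)
        if (ds.free_mb >= needed_mb)
            out.push_back(ds);
    return out;
}

// Phase 2: pick a (host, system DS) pair jointly. Here we take the
// first feasible combination; the real scheduler ranks candidates by
// the configured placement policy.
bool schedule(const std::vector<Host>& hosts,
              const std::vector<Datastore>& system_ds,
              int needed_cpu, long long needed_mb,
              int& host_id, int& ds_id)
{
    for (const Host& h : hosts)
    {
        if (h.free_cpu < needed_cpu)
            continue;

        for (const Datastore& ds : system_ds)
        {
            if (ds.free_mb < needed_mb)
                continue;

            host_id = h.id;
            ds_id   = ds.id;
            return true;
        }
    }
    return false; // no feasible (host, system DS) pair
}
```

Scheduling host and system DS together matters because a host with free CPU is useless if no reachable system DS has room for the VM's disks, and vice versa.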

[1]
https://github.com/OpenNebula/one/blob/master/src/scheduler/src/pool/VirtualMachineXML.cc#L306

[2]
https://github.com/OpenNebula/one/blob/master/src/scheduler/src/pool/VirtualMachineXML.cc#L272

[3]
https://github.com/OpenNebula/one/blob/master/src/scheduler/src/pool/DatastoreXML.cc#L35

[4]
https://github.com/OpenNebula/one/blob/master/src/scheduler/src/sched/Scheduler.cc#L751

[5]
https://github.com/OpenNebula/one/blob/master/src/scheduler/src/sched/Scheduler.cc#L1051



On Sun, Jun 8, 2014 at 10:24 AM, Stuart Longland <stuartl at vrt.com.au> wrote:

> On 05/06/14 12:32, Stuart Longland wrote:
> > What I observe though, is that the disk storage always returns 0.  My
> > plan was to expand on this by adding an extra parameter (int& cache),
> > and I'd use whatever code is present to calculate disk requirements to
> > figure out how much to cache and thus, calculate the cache output.
>
> I've gone digging further back into the git repository to see if there
> was previously code that calculated the disk requirements.
>
> It seems not.  The file in question was added back in 2008, "Initial
> commit of ONE code", and those lines are largely unchanged today
> according to `git blame`.  So I guess it was a place-holder that never
> got filled in, perhaps because it hasn't been needed until now.
>
> The data needed on the host is going to depend largely on what kind of
> datastore is being used.  Plain files transferred over SSH are going to
> require space for full images on the hosts, whereas datastores using
> centralised storage won't require any (unless cached).
>
> I think it makes sense to have the system work out an estimate for how
> much storage might be needed on the host to run the VM.  Now it's a
> question of do I re-use the "disk" parameter, meaning required "local
> disk on host" including cache, or do I create another variable as I was
> planning?
>
> Regards,
> --
> Stuart Longland
> Systems Engineer
>      _ ___
> \  /|_) |                           T: +61 7 3535 9619
>  \/ | \ |     38b Douglas Street    F: +61 7 3535 9699
>    SYSTEMS    Milton QLD 4064       http://www.vrt.com.au
>
>
> _______________________________________________
> Dev mailing list
> Dev at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/dev-opennebula.org
>



-- 
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula

