[one-users] Combining multiple storage backends within a single VM

Jaime Melis jmelis at opennebula.org
Mon Apr 18 01:55:46 PDT 2011


Hi Vivien,

The use of different TM mechanisms is quite interesting. As you propose,
probably the best way to handle this is a TM clever enough to choose the
corresponding storage backend for each disk. In fact, that is the
spirit of the example at [1], although it is simpler than the one you propose.

I'd say that the preferred alternatives are:

* Include the TM options in the Image template. Define an image "Debian base
system" with the proper TM options to handle it. The TM_OPTIONS will be
stored in the image definition and can be accessed through the driver: get
the VM id from the SRC_PATH, run 'onevm show <id>', find the disk that
matches the disk_id (also taken from the SRC_PATH; note that disks are named
disk.<disk_id>) to get the IMAGE, and then read the TM options from 'oneimage
show'.

* Use special URLs for your images so the TM knows how to handle them. For
example, iscsi_filer://debian. URLs are not modified by OpenNebula, so the
TM receives that URL unchanged and can map different URL schemes to
different TM options.
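As a sketch of both alternatives (the path layout and helper names here are illustrative, not the exact tm_mad argument format), a TM script could recover the VM id and disk id from the destination path and dispatch on the scheme of the source URL:

```shell
#!/bin/bash
# Hypothetical sketch: extract the VM id and disk id from a TM DST path
# of the form <host>:<datastore>/<vm_id>/images/disk.<disk_id>, then
# choose a backend from the scheme of the SRC URL. The exact layout may
# differ from what OpenNebula 2.2 passes to the driver.

parse_dst() {                       # prints "<vm_id> <disk_id>"
    local path="${1#*:}"            # strip the "host:" prefix
    local disk_id="${path##*disk.}" # disks are named disk.<disk_id>
    local dir="${path%/images/*}"   # .../<vm_id>/images/disk.N
    local vm_id="${dir##*/}"
    echo "$vm_id $disk_id"
}

backend_for() {                     # choose a backend from the SRC URL
    case "$1" in
        iscsi_filer://*) echo "iscsi" ;;
        nfs://*)         echo "nfs"   ;;
        *)               echo "local" ;;
    esac
}
```

With the vm_id in hand, the driver could then shell out to 'onevm show' and 'oneimage show' to read the stored TM options, as described above.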

NOTE: There is a "bug" in 2.2 that removes custom attributes (like the
TM_OPTIONS you propose) [2]. This will be solved in 2.4. Storing the
TM_OPTIONS in the image has some advantages: you do not have to include them
in every template that uses those options, and if you want to change them you
only have to do it in the Image Pool.

I think that this approach is pretty close to the Storage Pool proposed in
your email.

However, I think that two queries (onevm show + oneimage show) is a bit
cumbersome. My proposal here is to include a special Image attribute
(TM_OPTIONS or DISK_ATTRIBUTES) that will be copied from the IMAGE to the
DISK attribute in the VM template (when the VM is created), so a simple
onevm show will give you those values.
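A hedged sketch of what that could look like (attribute names as proposed above, all values hypothetical): the options are registered once with the image, and after instantiation they show up in the DISK section of the VM.

```
# Image template, registered once:
NAME       = "Debian base system"
PATH       = /srv/cloud/images/debian.img
TM_OPTIONS = "storage=iscsi iscsi_filer=1.2.3.4"

# DISK section of the VM, as 'onevm show' could report it after the
# proposed IMAGE -> DISK copy:
DISK = [ IMAGE      = "Debian base system",
         TM_OPTIONS = "storage=iscsi iscsi_filer=1.2.3.4" ]
```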

What do you think?

REFS
[1] http://www.opennebula.org/documentation:rel2.2:sd
[2] http://dev.opennebula.org/issues/559

Cheers,
Jaime

On Fri, Apr 15, 2011 at 7:15 PM, Vivien Bernet-Rollande <
vivien.bernet-rollande at nexen.alterway.fr> wrote:

> Hi.
> I'm new to the OpenNebula Project. I'm currently testing if it can do what
> we need. So far, it looks promising, but I have a problem.
>
> Currently, OpenNebula assumes there is a single type of storage for each
> hypervisor. It lets us write custom drivers, but only for one single storage
> type at a time.
>
> I have read
> http://lists.opennebula.org/htdig.cgi/users-opennebula.org/2011-April/004678.html,
> but this is not what I am trying to do.
>
> I want to mix several types of storage on a single virtual machine. For
> instance:
>  - swap on local disks
>  - data on NFS
> or:
>  - persistent data on shared block devices (iSCSI, Gluster, Ceph, you name
> it)
>  - temporary OS on local storage
>  - CD images on a shared filesystem (NFS)
> or even:
>  - a local SSD for a fast cache
>  - a local SATA drive for the OS
>
> I'm totally fine with the idea of writing (and sharing) a custom transfer
> manager driver for this purpose. The question is: will it be enough, or
> will I need to add some way to dispatch calls to various drivers?
>
> I've come up with several ideas regarding how to deal with this from a
> configuration standpoint.
>
> The idea I like most is fairly straightforward: we specify some options
> for the TM driver. Of course, the TM has to be custom written and do all
> the work. For example:
>
> DISK = [ IMAGE = "Debian base system",
>          TM_OPTIONS = "storage=iscsi iscsi_type=solaris_zfs
> iscsi_filer=1.2.3.4 iscsi_volume=vm_images" ]
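A minimal sketch of how a driver could split such a key=value string into shell variables (the keys come from the example above; the TMOPT_ prefix is an arbitrary convention, not an OpenNebula one):

```shell
#!/bin/bash
# Parse a TM_OPTIONS string like
#   "storage=iscsi iscsi_type=solaris_zfs iscsi_filer=1.2.3.4"
# into shell variables TMOPT_storage, TMOPT_iscsi_type, and so on.
parse_tm_options() {
    local kv key val
    for kv in $1; do
        key="${kv%%=*}"
        val="${kv#*=}"
        printf -v "TMOPT_${key}" '%s' "$val"   # dynamic variable name
    done
}
```

After `parse_tm_options "$TM_OPTIONS"`, the script can simply test `$TMOPT_storage` to decide which backend-specific code path to take.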
>
> With these options, the transfer manager could CLONE the image:
>  - check the size of the source image
>  - connect to the filer
>  - create a new LUN in the volume with the right size
>  - make sure the node sees the LUN
>  - create a link in /srv/cloud/one/5/images/disk.0 pointing to that lun
> (with some /dev/disk/by-path/ magic)
>  - copy the data from wherever the image is stored onto the LUN.
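The steps above could be sketched as a dry run (the zfs/iscsiadm invocations are illustrative placeholders for a Solaris/ZFS filer, and the LUN name is made up; a real driver would execute these commands, the filer-side ones over ssh, and check for errors):

```shell
#!/bin/bash
# Dry-run sketch of the CLONE steps above: it only assembles and prints
# the commands it would run. Filer commands are hypothetical examples.
clone_dryrun() {
    local src="$1" dst="$2" filer="$3" volume="$4"
    local size lun

    size=$(stat -c %s "$src")        # 1. size of the source image (bytes)
    lun="${volume}/vm-$$-disk"       # hypothetical LUN name

    echo "ssh $filer zfs create -V ${size} ${lun}"          # 2-3. new LUN
    echo "iscsiadm -m discovery -t sendtargets -p $filer"   # 4. node sees it
    echo "ln -s /dev/disk/by-path/... $dst"                 # 5. link (path elided)
    echo "dd if=$src of=$dst"                               # 6. copy the data in
}
```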
>
> With this in place, MKSWAP, MKIMAGE, LN and DELETE operations are
> straightforward.
>
> MV is a bit more complex if we want to be able to move data from one
> storage backend to another, but it can probably be figured out. Then I need
> to figure out where I want to store data and how, and make sure everything
> interacts well. But that's just a matter of scripting (and testing), and I
> can handle it.
>
> The key operation is to have the TM_OPTIONS variable passed to the transfer
> manager. I've checked: it's possible to add custom variables to the VM
> template, and they are simply ignored.
>
> The best way I can think of is to have the transfer manager take that
> field and add it as an environment variable. This would be 100% compatible
> with existing drivers, and allow me to do what I need.
>
>
> The thing is, I'm not quite sure about how to implement this.
>
> If I'm correct, I can get the value of TM_OPTIONS easily in
> src/tm/TransferManager.cc with a simple disk->vector_value("TM_OPTIONS").
> The problem is that the commands are then simply written to a file, and it's
> kind of hard to set an environment variable that would be forwarded to the
> driver's script.
>
> So I came up with another idea: instead of adding TM_OPTIONS as an
> environment variable, append it to the line in the command files. For
> instance, instead of:
>
> LN opennebula:/srv/cloud/images/debian.img 1.1.1.1:/var/lib/one/25/images/disk.0
>
> We would have :
>
> LN opennebula:/srv/cloud/images/debian.img 1.1.1.1:/var/lib/one/25/images/disk.0
> storage=iscsi iscsi_type=nexenta iscsi_filer=1.2.3.4 iscsi_volume=vm_images
>
> Then it would be the driver's work to parse the command line.
>
> Since most drivers use $1 and $2 to handle their parameters, this would be
> perfectly transparent to existing code. one_tm simply forwards '$*' to the
> right driver. However, it would enable building much more powerful transfer
> manager drivers.
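The backward-compatibility argument above can be illustrated with a small sketch (not the real tm_* scripts): the classic SRC/DST positionals stay in $1 and $2, and anything beyond them is treated as appended key=value options.

```shell
#!/bin/bash
# Sketch of a TM action that keeps the classic SRC/DST positional
# arguments and treats any extra arguments as key=value options
# appended to the command line, as proposed above. Existing drivers
# that only read $1 and $2 would simply ignore the extras.
tm_ln() {
    local src="$1" dst="$2"
    shift 2                          # everything left is options
    local opt storage="local"        # default when no options are given
    for opt in "$@"; do
        case "$opt" in
            storage=*) storage="${opt#storage=}" ;;
        esac
    done
    echo "LN $src -> $dst (backend: $storage)"
}
```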
>
> Oh, and "CONTEXT" could take a similar TM_OPTIONS argument.
>
> Now, a few more practical questions:
>  - is there interest in this feature from the community?
>  - is the way I propose to add it to the VM template compliant with ONE's
> way of doing things? Is another name preferred (rather than TM_OPTIONS)?
>  - is appending those options at the end of the command the right way, or
> will it break something further down the line?
>  - any chance to get this into 2.2? Or will I have to maintain my own
> patched version? (I'm using the .deb you guys provide, by the way.)
>
>
> The other method I thought of is to have a StoragePool database object,
> with a set of options. One would register such a StoragePool in a similar
> way to Virtual Networks (i.e., with a template and a command). The user
> could then say "I want this disk to be in storage pool X". This moves the
> responsibility of managing storage from the TM driver to the core. It has
> some nice implications (for instance, we could attach a default StoragePool
> to a cluster or a host, we could monitor available space in a StoragePool,
> a pool could be marked as persistent or not, etc.), but it is much more
> complex to implement, and might break existing setups. Moreover, this would
> break existing drivers, because we need a way to transfer images between
> storage pools. But I like the idea of declaring "I have an NFS share here.
> Oh, and some AoE there, and some SCSI over here. And all those machines
> have two 1 TB drives they can use", and having each VM select the type of
> storage it needs.
>
> For me, the most important difference between the two solutions is that I
> can have the first one up and running in a matter of days, while the other
> might require several weeks. I will go and try the first one. I do share the
> second one with you, however, because I think it would be an interesting,
> even more extensible, long-term solution.
>
>
> --
> Vivien Bernet-Rollande
> Systems & Networking Engineer
> Alter Way Hosting
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>


-- 
Jaime Melis, Cloud Technology Engineer/Researcher
Major Contributor
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jmelis at opennebula.org