[one-users] sparse files as partitions in OpenNebula?

Ruben S. Montero rubensm at dacya.ucm.es
Fri Nov 12 09:15:44 PST 2010


Hi

Yes, OpenNebula supports that functionality. You can define sparse file
systems that are created on the fly and then saved for later use.
There are two aspects to consider:

*Usage*

This can be implemented with the new image repository using a
DATABLOCK image; more information in [1]. Alternatively, you can define
a plain FS disk directly in the VM template: it is created the first
time the VM runs and can then be reused; see the disk options in [2].
If you use the image repository, be sure to make the image persistent;
if you use the second option, include SAVE=yes in the disk definition
so the changes are kept. (Note that SOURCE in the image definitions
above may also point to a block device.)
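
For example, a minimal sketch only (the name, size, target and the
exact unit of SIZE are assumptions, please check them against [1] and
[2] for your install), a DATABLOCK image registered in the repository
could look like:

  NAME   = "steve-data"
  TYPE   = DATABLOCK
  SIZE   = 250000        # check the unit (MB vs GB) in [1]
  FSTYPE = ext3

Once the image is made persistent (see [1]) it can be referenced from
the VM template:

  DISK = [ IMAGE = "steve-data", TARGET = vdc ]

With the second option the disk is defined directly in the VM template,
for instance:

  DISK = [ TYPE   = fs,
           SIZE   = 250000,     # size in MB, verify against [2]
           FORMAT = ext3,
           SAVE   = yes,
           TARGET = vdc ]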

*Implementation*

This depends on the storage architecture you are planning for your
cloud. The images can be backed by a block device (e.g. LVM; you need
to make the devices known to all the worker nodes) or by a shared FS
(NFS is not an option here, but maybe Ceph, GlusterFS or Lustre works
in your setup). You can also use the qcow format, which grows
dynamically and may make it easier to move the disk around. OpenNebula
2.0 can be used to implement any of these options, although you may
need to tune some of the drivers to match your system.
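
For instance (just a sketch; the path and size are placeholders), a
sparse qcow2 backing file for the KVM disk could be created with:

  qemu-img create -f qcow2 /srv/one/images/steve-data.qcow2 250G

The file starts small and only allocates blocks as the guest writes to
them, so it can be copied or migrated long before it reaches the full
250 GB. You will likely also need to tell KVM the disk format (check
the DRIVER attribute of the disk section in [2]).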

Let me know if you need more help

Cheers

Ruben

[1] http://www.opennebula.org/documentation:rel2.0:img_guide
[2] http://www.opennebula.org/documentation:rel2.0:template#disks_section

On Thu, Nov 11, 2010 at 8:00 PM, Steven Timm <timm at fnal.gov> wrote:
>
> I have a user who has a need for 250GB of disk storage (eventually)
> that he would like to migrate around with his VM.  NFS isn't
> suitable for this application.  This is an application which will
> start with a file base and then gradually grow.  On Amazon this
> could be a use case for EBS but ONE doesn't have anything like that
> as far as I can tell.
>
> My question, can I create an opennebula template that calls out
> device "vdc" as a sparse file system eventually growable to 250 GB,
> and migrate that and save that as necessary?  If so, how?
> We are running opennebula 2.0 and using KVM as our hypervisor.
>
> Steve Timm
>
>
> --
> ------------------------------------------------------------------
> Steven C. Timm, Ph.D  (630) 840-8525
> timm at fnal.gov  http://home.fnal.gov/~timm/
> Fermilab Computing Division, Scientific Computing Facilities,
> Grid Facilities Department, FermiGrid Services Group, Assistant Group
> Leader.
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>



-- 
Dr. Ruben Santiago Montero
Associate Professor (Profesor Titular), Complutense University of Madrid

URL: http://dsa-research.org/doku.php?id=people:ruben
Weblog: http://blog.dsa-research.org/?author=7


