[one-users] sparse files as partitions in OpenNebula?

Steven Timm timm at fnal.gov
Tue Nov 23 12:38:29 PST 2010


I am trying to create the example DATABLOCK volume
as shown on

http://www.opennebula.org/documentation:rel2.0:img_template

bash-3.2$ cat datablock_test.one
NAME          = "Experiment results"
TYPE          = DATABLOCK
# No PATH set, this image will start as a new empty disk
SIZE          = 2048
FSTYPE        = ext3
PUBLIC        = NO
DESCRIPTION   = "Storage for my Thesis experiments."

bash-3.2$ oneimage register datablock_test.one
Error: mkfs Image: in mkfs command.

What is missing?  The template reference page seems to
indicate that SOURCE or PATH is required, but what would
SOURCE be if I just want to start with a blank file
system?
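
For reference, this is roughly what I'd expect the repository to do
under the hood, and it does work by hand (just a sketch; the actual
driver commands may differ):

  # create a 2048 MB sparse file and put an ext3 FS on it
  dd if=/dev/zero of=datablock.img bs=1M count=0 seek=2048
  mkfs -t ext3 -F datablock.img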

------------------------

Also I have more questions on the qcow2 format:
a) We are running OpenNebula 2.0; haven't there been some
bug reports that qcow2 isn't working with the latest
OpenNebula?
b) It's my understanding that the qcow2 format does quick
copy-on-write. How does that mesh with an image that is
persistent, i.e. one that in normal circumstances only one
system would be using?
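
(As far as I can tell, the copy-on-write part only comes into play
when a qcow2 image has a backing file; a standalone qcow2 simply
grows on demand.  A sketch of both, assuming qemu-img on the node:)

  # standalone: no backing file, grows as blocks are written
  qemu-img create -f qcow2 thesis.qcow2 250G

  # copy-on-write: unmodified blocks are read from base.qcow2
  qemu-img create -f qcow2 -b base.qcow2 overlay.qcow2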

Does anyone know of a way to send a sparse image along with a KVM
VM, or can that only be done in Xen?
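
On the command line I would try to preserve sparseness with something
like this (hypothetical paths; I don't know whether the ONE transfer
drivers do the same):

  rsync --sparse disk.0 othernode:/path/to/images/
  # or, locally:
  cp --sparse=always disk.0 /path/to/images/disk.0
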
Steve


On Fri, 12 Nov 2010, Ruben S. Montero wrote:

> Hi
>
> Yes, OpenNebula supports that functionality. You can define sparse file
> systems that can be created on the fly and then saved for later usage.
> There are two aspects to be considered:
>
> *Usage*
>
> This can be implemented with the new image repository using a
> DATABLOCK image; more information in [1]. Alternatively, you can
> define a disk to be a plain FS that will be created the first time
> and can then be reused; see the disk options in [2]. If you use the
> image repo, be sure to make the image persistent, and when using the
> second option make sure to include SAVE=yes in the disk so you keep
> the changes (both options are sketched below).
> (Note that SOURCE in the previous image definitions may point to a
> block device.)
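>
> A minimal sketch of both options (attribute names as in [2]; adjust
> to your setup):
>
>   # option 1: a persistent image from the repository
>   DISK = [ IMAGE = "Experiment results" ]
>
>   # option 2: a plain FS created on the fly and saved back
>   DISK = [ TYPE   = fs,
>            SIZE   = 2048,
>            FORMAT = ext3,
>            SAVE   = yes ]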
>
> *Implementation*
>
> This depends on the storage architecture you are planning for your
> cloud. The images can be backed by a device (e.g. LVM; you need to
> make the devices known to all the WNs) or by a shared FS (NFS is not
> an option here, but perhaps Ceph, GlusterFS or Lustre works in your
> setup). You can also use qcow formats that grow dynamically and may
> help when moving the disk around. OpenNebula 2.0 can be used to
> implement any of these options, though you may need to tune some of
> the drivers to match your system (see the LVM sketch below).
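>
> For the LVM route, roughly (hypothetical volume group name; the
> device must be visible on all the WNs):
>
>   lvcreate -L 250G -n thesis_data vg_one
>   mkfs.ext3 /dev/vg_one/thesis_data
>
> and then point SOURCE at /dev/vg_one/thesis_data in the image
> definition.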
>
> Let me know if you need more help
>
> Cheers
>
> Ruben
>
> [1] http://www.opennebula.org/documentation:rel2.0:img_guide
> [2] http://www.opennebula.org/documentation:rel2.0:template#disks_section
>
> On Thu, Nov 11, 2010 at 8:00 PM, Steven Timm <timm at fnal.gov> wrote:
>>
>> I have a user who needs 250GB of disk storage (eventually)
>> that he would like to migrate around with his VM.  NFS isn't
>> suitable for this application, which will start with a file
>> base and then gradually grow.  On Amazon this could be a use
>> case for EBS, but ONE doesn't have anything like that as far
>> as I can tell.
>>
>> My question: can I create an OpenNebula template that calls out
>> device "vdc" as a sparse file system, eventually growable to 250 GB,
>> and migrate it and save it as necessary?  If so, how?
>> We are running OpenNebula 2.0 and using KVM as our hypervisor.
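>>
>> By "sparse" I mean something along these lines, which reserves no
>> blocks up front (a sketch, assuming GNU coreutils on the host):
>>
>>   truncate -s 250G vdc.img   # apparent size 250 GB, ~0 blocks used
>>   du -h vdc.img              # confirms almost nothing is allocated
>>   mkfs.ext3 -F vdc.img       # put a file system on it in place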
>>
>> Steve Timm
>>
>>
>> --
>> ------------------------------------------------------------------
>> Steven C. Timm, Ph.D  (630) 840-8525
>> timm at fnal.gov  http://home.fnal.gov/~timm/
>> Fermilab Computing Division, Scientific Computing Facilities,
>> Grid Facilities Department, FermiGrid Services Group, Assistant Group
>> Leader.
>> _______________________________________________
>> Users mailing list
>> Users at lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>

-- 
------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm at fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Assistant Group Leader.


