[one-users] sparse files as partitions in OpenNebula?

Ruben S. Montero rubensm at dacya.ucm.es
Wed Nov 24 12:36:51 PST 2010


Hi

Yes, in this case there should not be any PATH. Here it is working:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><
img> cat dblock.img
NAME          = "Experiment results"
TYPE          = DATABLOCK
# No PATH set, this image will start as a new empty disk
SIZE          = 2048
FSTYPE        = ext3
PUBLIC        = NO
DESCRIPTION   = "Storage for my Thesis experiments."

> oneimage create dblock.img
pc-ruben:img> oneimage list
  ID     USER                 NAME TYPE              REGTIME PUB PER STAT #VMS
...
   5    ruben   Experiment results   DB   Nov 24, 2010 20:30  No  No  rdy     0

<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<


It may be a Ruby version issue; which one are you using?
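
You can print it with:

> ruby -v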

About the other questions:

a.- The qcow2 feature is already in the source repo, and will be included in
the upcoming OpenNebula 2.0.1.
b.- There is no relationship between the persistent attribute and the image
format. When an image is declared persistent, only one VM is able to use it,
to prevent data inconsistencies in the image.
c.- Sparse images: I think I do not really understand your use case. Would
not a datablock do the job?
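
For instance, once the datablock above is registered, you could mark it
persistent and then reference it from a single VM template, roughly like
this (I am writing this from memory, so please check the exact subcommand
and DISK syntax against the CLI help and the template reference):

> oneimage persistent 5

and in the VM template:

DISK = [ IMAGE = "Experiment results" ]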

Cheers

Ruben

On Tue, Nov 23, 2010 at 9:38 PM, Steven Timm <timm at fnal.gov> wrote:

> I am trying to create the example DATABLOCK volume
> as shown on
>
> http://www.opennebula.org/documentation:rel2.0:img_template
>
> bash-3.2$ cat datablock_test.one
> NAME          = "Experiment results"
> TYPE          = DATABLOCK
> # No PATH set, this image will start as a new empty disk
> SIZE          = 2048
> FSTYPE        = ext3
> PUBLIC        = NO
> DESCRIPTION   = "Storage for my Thesis experiments."
>
> bash-3.2$ oneimage register datablock_test.one
> Error: mkfs Image: in mkfs command.
>
> What is missing?  The template reference page seems to
> indicate that SOURCE or PATH is required, but what would
> SOURCE be if I just want to start with a blank file system?
>
> ------------------------
>
> Also I have more questions on the qcow2 format:
> a) We are running OpenNebula 2.0; haven't there been some
> bug reports that qcow2 isn't working with the latest OpenNebula?
> b) It's my understanding that the qcow2 format does quick copy
> on write; how does that mesh with an image that is persistent, i.e.
> in normal circumstances only one system would be using it?
>
> Does anyone know of a way to send a sparse image along with KVM
> or can that only be done in Xen?
> Steve
>
>
>
> On Fri, 12 Nov 2010, Ruben S. Montero wrote:
>
>> Hi
>>
>> Yes, OpenNebula supports that functionality. You can define sparse file
>> systems that can be created on the fly and then saved for later usage.
>> There are two aspects to be considered:
>>
>> *Usage*
>>
>> This can be implemented with the new image repository using a
>> DATABLOCK image; there is more information on this in [1]. Alternatively,
>> you can define an FS image as a plain file system that will be created
>> the first time and can then be reused; the options for disks are in [2].
>> If you use the image repo, be sure to make the image persistent, and
>> when using the second option make sure to include SAVE=yes in the disk
>> so you keep the changes. (Note that SOURCE in the previous image
>> definitions may point to a block device.)
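>>
>> For instance, the second option could look roughly like this in the VM
>> template (I am writing the attribute names from memory, so double-check
>> them against [2]):
>>
>> DISK = [ TYPE   = fs,
>>          SIZE   = 4096,
>>          FORMAT = ext3,
>>          SAVE   = yes,
>>          TARGET = vdc ]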
>>
>> *Implementation*
>>
>> This depends on the storage architecture you are planning for your
>> cloud. The images can be backed by a device (e.g. LVM; you need to make
>> the devices known to all the worker nodes) or by a shared FS (NFS is not
>> an option for you, but maybe Ceph, GlusterFS or Lustre works in your
>> setup). You can also use qcow formats that grow dynamically and may help
>> to move the disk around. OpenNebula 2.0 can be used to implement these
>> options, though you may need to tune some of the drivers to match your
>> system.
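>>
>> For instance, a qcow2 file can be prepared with the standard qemu-img
>> tool (the path below is just an example); it starts out small on disk
>> and grows as data is written:
>>
>> qemu-img create -f qcow2 /srv/one/images/results.qcow2 250G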
>>
>> Let me know if you need more help
>>
>> Cheers
>>
>> Ruben
>>
>> [1] http://www.opennebula.org/documentation:rel2.0:img_guide
>> [2] http://www.opennebula.org/documentation:rel2.0:template#disks_section
>>
>> On Thu, Nov 11, 2010 at 8:00 PM, Steven Timm <timm at fnal.gov> wrote:
>>
>>>
>>> I have a user who has a need for 250GB of disk storage (eventually)
>>> that he would like to migrate around with his VM.  NFS isn't
>>> suitable for this application.  This is an application which will
>>> start with a file base and then gradually grow.  On Amazon this
>>> could be a use case for EBS but ONE doesn't have anything like that
>>> as far as I can tell.
>>>
>>> My question: can I create an OpenNebula template that calls out
>>> device "vdc" as a sparse file system eventually growable to 250 GB,
>>> and migrate and save it as necessary?  If so, how?
>>> We are running OpenNebula 2.0 and using KVM as our hypervisor.
>>>
>>> Steve Timm
>>>
>>>
>>> --
>>> ------------------------------------------------------------------
>>> Steven C. Timm, Ph.D  (630) 840-8525
>>> timm at fnal.gov  http://home.fnal.gov/~timm/
>>> Fermilab Computing Division, Scientific Computing Facilities,
>>> Grid Facilities Department, FermiGrid Services Group, Assistant Group
>>> Leader.
>>> _______________________________________________
>>> Users mailing list
>>> Users at lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>
>>>
>>
>>
>>
>>
> --
> ------------------------------------------------------------------
> Steven C. Timm, Ph.D  (630) 840-8525
> timm at fnal.gov  http://home.fnal.gov/~timm/
> Fermilab Computing Division, Scientific Computing Facilities,
> Grid Facilities Department, FermiGrid Services Group, Assistant Group
> Leader.
>
>


-- 
Dr. Ruben Santiago Montero
Associate Professor (Profesor Titular), Complutense University of Madrid

URL: http://dsa-research.org/doku.php?id=people:ruben
Weblog: http://blog.dsa-research.org/?author=7