[one-users] sparse files as partitions in OpenNebula?
Steven Timm
timm at fnal.gov
Wed Nov 24 12:51:58 PST 2010
Two surprises here:
1) oneimage help doesn't list a "create" subcommand at all, just oneimage
register.
2) I made a fresh install of opennebula on a clean machine
with nothing else installed and I still get the same error I got before,
no matter if I use oneimage create or oneimage register.
bash-3.2$ oneimage create datablock_test.one
Error: mkfs Image: in mkfs command.
What is it trying to do at this stage?
Note that it *did* make a sparse data file in the image repo at about that
time but seems to have failed to make the file system inside of it.
ls -lh reports a file size of 2.1G, du -s reports a usage of just 16K.
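That ls/du gap is exactly what a sparse file looks like. For reference, the effect can be reproduced with a plain dd (file name and size here are arbitrary):

```shell
# Seek 2 GiB past the start and write nothing: no data blocks are
# allocated, so the file is one big "hole".
dd if=/dev/zero of=/tmp/sparse.img bs=1M count=0 seek=2048

ls -lh /tmp/sparse.img   # apparent size: 2.0G
du -sh /tmp/sparse.img   # blocks actually in use: a few KB at most
```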
We are using the stock ruby that comes with Scientific Linux 5.5:
bash-3.2$ rpm -q ruby
ruby-1.8.5-5.el5_4.8.x86_64
Also note: if I need to create an OS image, I can do it.
bash-3.2$ oneimage create kernel.img
bash-3.2$ oneimage list
  ID USER NAME           TYPE REGTIME            PUB PER STAT #VMS
   4 timm Steve test OS  OS   Nov 24, 2010 20:46 Yes No  rdy     0
bash-3.2$
On Wed, 24 Nov 2010, Ruben S. Montero wrote:
> Hi
>
> Yes in this case there should not be any path. Here it is working:
> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> pc-ruben:img> cat dblock.img
> NAME = "Experiment results"
> TYPE = DATABLOCK
> # No PATH set, this image will start as a new empty disk
> SIZE = 2048
> FSTYPE = ext3
> PUBLIC = NO
> DESCRIPTION = "Storage for my Thesis experiments."
>
> pc-ruben:img> oneimage create dblock.img
>
> pc-ruben:img> oneimage list
>   ID USER  NAME               TYPE REGTIME            PUB PER STAT #VMS
> ...
>    5 ruben Experiment results DB   Nov 24, 2010 20:30 No  No  rdy     0
>
> <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
>
>
> It may be a ruby version issue; which one are you using?
>
> About the other questions:
>
> a.- qcow2 feature is already in the source repo, and will be included in the
> upcoming OpenNebula 2.0.1.
Can we use qcow2 and persistent at the same time?
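(The two features are not inherently exclusive: a qcow2 file without a backing image simply grows on demand, and copy-on-write only enters the picture with an overlay. A quick sketch with qemu-img, assuming it is installed; note the -F backing-format flag is only required by recent versions:)

```shell
# Standalone qcow2: no backing file, the image just grows as blocks are
# written, much like a sparse raw file. Fine for a persistent one-VM disk.
qemu-img create -f qcow2 standalone.qcow2 10G

# Copy-on-write overlay: reads fall through to the base, writes land in
# the overlay. This is the mode that assumes a shared read-only base.
qemu-img create -f raw base.img 1G
qemu-img create -f qcow2 -b base.img -F raw overlay.qcow2

qemu-img info overlay.qcow2   # reports the backing file
```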
> b.- there is no relationship between the persistent attribute and the
> image format. When an image is declared as persistent, only one VM is
> able to use it, to prevent data inconsistencies in the image.
That is what we would want in this case.
> c.- sparse images: I think I do not really understand your use case.
> Would not a datablock do the job?
A datablock would do the job if I could make it work.
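(One way to narrow that mkfs error down might be to repeat by hand what the repo driver presumably does, i.e. create the sparse file and then mkfs it; the paths here are scratch examples:)

```shell
# Step 1: the sparse file, sized as in the template (2048 MB).
dd if=/dev/zero of=/tmp/dblock_test.img bs=1M count=0 seek=2048

# Step 2: the filesystem inside it. -F lets mke2fs work on a regular
# file without prompting; if this step fails too, the problem is the
# mkfs environment rather than OpenNebula or ruby.
/sbin/mkfs -t ext3 -F /tmp/dblock_test.img
```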
Steve
>
> Cheers
>
> Ruben
>
> On Tue, Nov 23, 2010 at 9:38 PM, Steven Timm <timm at fnal.gov> wrote:
>
>> I am trying to create the example DATABLOCK volume
>> as shown on
>>
>> http://www.opennebula.org/documentation:rel2.0:img_template
>>
>> bash-3.2$ cat datablock_test.one
>> NAME = "Experiment results"
>> TYPE = DATABLOCK
>> # No PATH set, this image will start as a new empty disk
>> SIZE = 2048
>> FSTYPE = ext3
>> PUBLIC = NO
>> DESCRIPTION = "Storage for my Thesis experiments."
>>
>> bash-3.2$ oneimage register datablock_test.one
>> Error: mkfs Image: in mkfs command.
>>
>> What is missing? The template reference page seems to
>> indicate that SOURCE or PATH is required, but what
>> would SOURCE be if I just want to start with
>> a blank file system?
>>
>> ------------------------
>>
>> Also I have more questions on the qcow2 format:
>> a) We are running OpenNebula 2.0; haven't there been some
>> bug reports that qcow2 isn't working with the latest OpenNebula?
>> b) It's my understanding that the qcow2 format does quick
>> copy-on-write. How does that mesh with an image that is persistent,
>> i.e. one that in normal circumstances only one system would be using?
>>
>> Does anyone know of a way to send a sparse image along with KVM
>> or can that only be done in Xen?
>> Steve
>>
>>
>>
>> On Fri, 12 Nov 2010, Ruben S. Montero wrote:
>>
>> Hi
>>>
>>> Yes, OpenNebula supports that functionality. You can define sparse file
>>> systems that can be created on the fly and then saved for later usage.
>>> There are two aspects to be considered:
>>>
>>> *Usage*
>>>
>>> This can be implemented with the new image repository using a
>>> DATABLOCK image; more information on this in [1]. Alternatively, you
>>> can define an FS image as a plain FS that will be created the first
>>> time and can then be reused; see the disk options in [2]. If you use
>>> the image repo, be sure to make the image persistent, and when using
>>> the second option make sure to include SAVE=yes in the disk so you
>>> keep the changes.
>>> (Note that SOURCE in the previous image definitions may point to a
>>> block device)
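(A sketch of that second option, using the disk attributes documented in [2]; the size and the ext3 choice are just example values:)

```
DISK = [
  TYPE   = fs,     # plain FS, created empty on first use
  SIZE   = 4096,   # MB
  FORMAT = ext3,
  SAVE   = yes,    # keep the changes when the VM shuts down
  TARGET = vdb ]
```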
>>>
>>> *Implementation*
>>>
>>> This depends on the storage architecture you are planning for your
>>> cloud. The images can be backed by a device (e.g. LVM; you need to
>>> make the devices known to all the WNs) or by a shared FS (NFS is not
>>> an option, but maybe Ceph, GlusterFS, or Lustre would work in your
>>> setup). You can also use qcow formats, which grow dynamically and may
>>> help to move the disk around. OpenNebula 2.0 can be used to implement
>>> these options, though you may need to tune some of the drivers to
>>> match your system.
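(For the device-backed case, the disk definition can point straight at the device; the LV path below is hypothetical and would have to exist under the same name on every WN:)

```
DISK = [
  SOURCE = /dev/vg01/lv_results,
  TARGET = vdc,
  SAVE   = yes ]
```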
>>>
>>> Let me know if you need more help
>>>
>>> Cheers
>>>
>>> Ruben
>>>
>>> [1] http://www.opennebula.org/documentation:rel2.0:img_guide
>>> [2] http://www.opennebula.org/documentation:rel2.0:template#disks_section
>>>
>>> On Thu, Nov 11, 2010 at 8:00 PM, Steven Timm <timm at fnal.gov> wrote:
>>>
>>>>
>>>> I have a user who has a need for 250GB of disk storage (eventually)
>>>> that he would like to migrate around with his VM. NFS isn't
>>>> suitable for this application. This is an application which will
>>>> start with a file base and then gradually grow. On Amazon this
>>>> could be a use case for EBS but ONE doesn't have anything like that
>>>> as far as I can tell.
>>>>
>>>> My question, can I create an opennebula template that calls out
>>>> device "vdc" as a sparse file system eventually growable to 250 GB,
>>>> and migrate that and save that as necessary? If so, how?
>>>> We are running opennebula 2.0 and using KVM as our hypervisor.
>>>>
>>>> Steve Timm
>>>>
>>>>
>>>> --
>>>> ------------------------------------------------------------------
>>>> Steven C. Timm, Ph.D (630) 840-8525
>>>> timm at fnal.gov http://home.fnal.gov/~timm/
>>>> Fermilab Computing Division, Scientific Computing Facilities,
>>>> Grid Facilities Department, FermiGrid Services Group, Assistant Group
>>>> Leader.
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at lists.opennebula.org
>>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>>
>>>>
>>>
>>>
>>>
>>>
>
>
>
--
------------------------------------------------------------------
Steven C. Timm, Ph.D (630) 840-8525
timm at fnal.gov http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Assistant Group Leader.