[one-users] sparse files as partitions in OpenNebula?

Steven Timm timm at fnal.gov
Wed Nov 24 13:03:23 PST 2010


A follow-up:
when I try to run the mkfs command by hand on a copy of the sparse
file from the repository, here is what I get:

/sbin/mkfs.ext3 ./testsparsefile
mke2fs 1.39 (29-May-2006)
./testsparsefile is not a block special device.
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262144 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
         32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
bash-3.2$ ls -lrt
total 10349388
-rw-r--r-- 1 timm fnalgrid  2147483649 Nov 24 14:42 c440fa087fda2ddfecee250da48659db675da0e5
-rw-rw---- 1 timm fnalgrid 10485760000 Nov 24 14:50 7bf1f760a574f34a88213010ed76ef632ccffbea
-rw-r--r-- 1 timm fnalgrid  2147483649 Nov 24 14:59 testsparsefile
bash-3.2$ du -s ./testsparsefile
99352   ./testsparsefile
bash-3.2$ du -s ./c440fa087fda2ddfecee250da48659db675da0e5
16      ./c440fa087fda2ddfecee250da48659db675da0e5
bash-3.2$ file ./testsparsefile
./testsparsefile: Linux rev 1.0 ext3 filesystem data (large files)

So it could be that the script calling the mkfs command is not
expecting the y/n prompt, and that is what is producing the error.  Adding
the -F option to mkfs would fix that.  Where is the corresponding
Ruby file that calls mkfs?
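
For reference, here is roughly the non-interactive sequence I would expect
the driver to run (my own sketch, not the actual OpenNebula code; the file
name is just an example):

  # create a sparse file by writing a single byte at the 2GB offset
  # (consistent with the 2147483649-byte apparent size shown above)
  dd if=/dev/zero of=./testsparsefile bs=1 count=1 seek=2147483648
  # -F forces mkfs to proceed on a non-block device without the y/n prompt,
  # -q suppresses the interactive output
  /sbin/mkfs.ext3 -F -q ./testsparsefile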

Steve Timm




On Wed, 24 Nov 2010, Steven Timm wrote:

>
> Two surprises here:
>
> 1) oneimage help doesn't list a "create" subcommand at all, just oneimage 
> register.
>
> 2) I made a fresh install of OpenNebula on a clean machine
> with nothing else installed, and I still get the same error I got before,
> whether I use oneimage create or oneimage register.
>
> bash-3.2$ oneimage create datablock_test.one
> Error: mkfs Image: in mkfs command.
>
> What is it trying to do at this stage?
> Note that it *did* make a sparse data file in the image repo at about that
> time, but it seems to have failed to make the file system inside of it:
> ls -lh reports a file size of 2.1G, while du -s reports a usage of just 16K.
>
> We are using the stock Ruby that comes with Scientific Linux 5.5:
>
> bash-3.2$ rpm -q ruby
> ruby-1.8.5-5.el5_4.8.x86_64
>
> Also note: if I need to create an OS image, I can do it:
>
> bash-3.2$ oneimage create kernel.img
> bash-3.2$ oneimage list
>  ID     USER                 NAME TYPE              REGTIME PUB PER STAT #VMS
>   4     timm        Steve test OS   OS   Nov 24, 2010 20:46 Yes  No  rdy    0
> bash-3.2$
>
>
> On Wed, 24 Nov 2010, Ruben S. Montero wrote:
>
>> Hi
>> 
>> Yes, in this case there should not be any path. Here it is working:
>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>> img> cat dblock.img
>> NAME          = "Experiment results"
>> TYPE          = DATABLOCK
>> # No PATH set, this image will start as a new empty disk
>> SIZE          = 2048
>> FSTYPE        = ext3
>> PUBLIC        = NO
>> DESCRIPTION   = "Storage for my Thesis experiments."
>> 
>>> oneimage create dblock.img
>> pc-ruben:img> oneimage list
>>  ID     USER                 NAME TYPE              REGTIME PUB PER STAT #VMS
>> ...
>>   5    ruben   Experiment results   DB   Nov 24, 2010 20:30  No  No  rdy    0
>> 
>> <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
>> 
>> 
>> It may be a Ruby version issue; which one are you using?
>> 
>> About the other questions:
>> 
>> a.- The qcow2 feature is already in the source repo, and will be included
>> in the upcoming OpenNebula 2.0.1.
> Can we use qcow2 and persistent at the same time?
>
>> b.- There is no relationship between the persistent attribute and the
>> image format. When an image is declared as persistent, only one VM is able
>> to use it, to prevent data inconsistencies in the image.
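>> 
>> For example, the flag can be toggled from the CLI (assuming the
>> persistent/nonpersistent subcommands are available in your 2.0 install):
>> 
>>   oneimage persistent <IMAGE_ID>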
>
> That is what we would want in this case.
>
>> c.- Sparse images: I think I do not really understand your use case. Would
>> not a datablock do the job?
>
> A datablock would do the job if I could make it work.
>
> Steve
>
>
>> 
>> Cheers
>> 
>> Ruben
>> 
>> On Tue, Nov 23, 2010 at 9:38 PM, Steven Timm <timm at fnal.gov> wrote:
>> 
>>> I am trying to create the example DATABLOCK volume
>>> as shown on
>>> 
>>> http://www.opennebula.org/documentation:rel2.0:img_template
>>> 
>>> bash-3.2$ cat datablock_test.one
>>> NAME          = "Experiment results"
>>> TYPE          = DATABLOCK
>>> # No PATH set, this image will start as a new empty disk
>>> SIZE          = 2048
>>> FSTYPE        = ext3
>>> PUBLIC        = NO
>>> DESCRIPTION   = "Storage for my Thesis experiments."
>>> 
>>> bash-3.2$ oneimage register datablock_test.one
>>> Error: mkfs Image: in mkfs command.
>>> 
>>> What is missing?  The template reference page seems to
>>> indicate that SOURCE or PATH is required, but
>>> what would SOURCE be if I just want to start with
>>> a blank file system?
>>> 
>>> ------------------------
>>> 
>>> Also I have more questions on the qcow2 format:
>>> a) We are running OpenNebula 2.0; haven't there been some
>>> bug reports that qcow2 isn't working with the latest OpenNebula?
>>> b) It's my understanding that the qcow2 format does quick copy
>>> on write; how does that mesh with an image that is persistent, i.e.
>>> one that in normal circumstances only one system would be using?
>>> 
>>> Does anyone know of a way to send a sparse image along with KVM
>>> or can that only be done in Xen?
>>> Steve
>>> 
>>> 
>>> 
>>> On Fri, 12 Nov 2010, Ruben S. Montero wrote:
>>>
>>>> Hi
>>>> 
>>>> Yes, OpenNebula supports that functionality. You can define sparse file
>>>> systems that can be created on the fly and then saved for later usage.
>>>> There are two aspects to be considered:
>>>> 
>>>> *Usage*
>>>> 
>>>> This can be implemented with the new image repository using a
>>>> DATABLOCK image; more information on this in [1]. Also, you can define an
>>>> FS image to be a plain FS that will be created the first time and then
>>>> can be reused; see the options for disks in [2]. If you use the image
>>>> repo, be sure to make the image persistent, and when using the second
>>>> option make sure to include SAVE=yes in the disk so you keep the changes.
>>>> (Note that SOURCE in the previous image definitions may point to a
>>>> block device.)
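>>>> 
>>>> As a rough sketch of the second option (please double-check the exact
>>>> attribute names against [2]), the DISK section of the VM template would
>>>> look something like:
>>>> 
>>>>   DISK = [
>>>>     TYPE   = fs,
>>>>     SIZE   = 2048,
>>>>     FORMAT = ext3,
>>>>     SAVE   = yes,
>>>>     TARGET = vdc ]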
>>>> 
>>>> *Implementation*
>>>> 
>>>> This depends on the storage architecture you are planning for your
>>>> cloud. The images can be backed by a device (e.g. LVM; you need to make
>>>> the devices known to all the WNs) or by a shared FS (NFS is not an option,
>>>> but maybe Ceph, GlusterFS or Lustre works in your setup). Also you
>>>> can use qcow formats that will dynamically grow and may help to move
>>>> the disk around. OpenNebula 2.0 can be used to implement these
>>>> options. You may need to tune some of the drivers to match your system,
>>>> though.
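>>>> 
>>>> (For instance, a 250 GB qcow2 disk can be created with something like
>>>> "qemu-img create -f qcow2 disk.qcow2 250G"; the file starts out small
>>>> and only grows as data is written to it.)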
>>>> 
>>>> Let me know if you need more help
>>>> 
>>>> Cheers
>>>> 
>>>> Ruben
>>>> 
>>>> [1] http://www.opennebula.org/documentation:rel2.0:img_guide
>>>> [2] http://www.opennebula.org/documentation:rel2.0:template#disks_section
>>>> 
>>>> On Thu, Nov 11, 2010 at 8:00 PM, Steven Timm <timm at fnal.gov> wrote:
>>>> 
>>>>> 
>>>>> I have a user who needs 250GB of disk storage (eventually)
>>>>> that he would like to migrate around with his VM.  NFS isn't
>>>>> suitable for this application.  This is an application that will
>>>>> start with a file base and then gradually grow.  On Amazon this
>>>>> could be a use case for EBS, but ONE doesn't have anything like that
>>>>> as far as I can tell.
>>>>> 
>>>>> My question: can I create an OpenNebula template that calls out
>>>>> device "vdc" as a sparse file system eventually growable to 250 GB,
>>>>> and migrate and save it as necessary?  If so, how?
>>>>> We are running OpenNebula 2.0 and using KVM as our hypervisor.
>>>>> 
>>>>> Steve Timm
>>>>> 
>>>>> 
>>>>> --
>>>>> ------------------------------------------------------------------
>>>>> Steven C. Timm, Ph.D  (630) 840-8525
>>>>> timm at fnal.gov  http://home.fnal.gov/~timm/
>>>>> Fermilab Computing Division, Scientific Computing Facilities,
>>>>> Grid Facilities Department, FermiGrid Services Group, Assistant Group
>>>>> Leader.
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users at lists.opennebula.org
>>>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>> --
>>> ------------------------------------------------------------------
>>> Steven C. Timm, Ph.D  (630) 840-8525
>>> timm at fnal.gov  http://home.fnal.gov/~timm/
>>> Fermilab Computing Division, Scientific Computing Facilities,
>>> Grid Facilities Department, FermiGrid Services Group, Assistant Group
>>> Leader.
>>> 
>>> 
>> 
>> 
>> 
>
>

-- 
------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm at fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Assistant Group Leader.


