[one-users] datablocks continued
Carsten.Friedrich at csiro.au
Tue Feb 15 21:08:58 PST 2011
Hi Javier,
Thanks for that. I think this article needs to be updated to cater for the new image repository. In particular, I think it needs to be extended to cover rebasing images on saveas. If I understand things correctly, OpenNebula will currently allow a user to delete the base image, since OpenNebula thinks the image is no longer in use once the VM is done. If the saved images are not rebased, they will no longer work once the base image is deleted.
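As a sketch of the rebase step being suggested (the file names here are illustrative and qemu-img is not part of OpenNebula's own scripts): an overlay produced by saveas can be made self-contained by rebasing it onto an empty backing file, which copies the base image's data into the overlay.

```shell
# Illustrative only: make a qcow2 overlay independent of its base image.
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img create -f raw /tmp/base-demo.img 1M
    qemu-img create -f qcow2 \
        -o backing_file=/tmp/base-demo.img,backing_fmt=raw /tmp/saved-demo.qcow2
    # Rebasing onto "" (no backing file) copies the base's data into the
    # overlay, so it keeps working even if the base is later deleted.
    qemu-img rebase -b "" /tmp/saved-demo.qcow2
    backing=$(qemu-img info /tmp/saved-demo.qcow2 | grep -c '^backing file:' || true)
else
    backing=0   # qemu-img not installed; nothing to check
fi
echo "remaining backing-file references: $backing"
```

After the rebase, `qemu-img info` no longer reports a backing file, which is exactly the property that would make saved images survive deletion of the base.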
Carsten
-----Original Message-----
From: Javier Fontan [mailto:jfontan at gmail.com]
Sent: Saturday, 4 December 2010 4:34
To: Friedrich, Carsten (ICT Centre, Acton)
Cc: users at lists.opennebula.org
Subject: Re: [one-users] datablocks continued
There is a howto on the C12G pages that may help you implement this in
your infrastructure.
https://support.opennebula.pro/entries/348847-using-qcow-images
It will probably be updated from time to time, so it may be a good
idea to subscribe to the RSS feed.
On Fri, Dec 3, 2010 at 12:14 AM, <Carsten.Friedrich at csiro.au> wrote:
> How do you use the copy-on-write feature with Nebula? I use qcow images but, judging by the time it takes, I'm sure the images are currently just copied normally to /.../var/X/images/ rather than using the copy-on-write feature.
>
> Carsten
>
> -----Original Message-----
> From: users-bounces at lists.opennebula.org [mailto:users-bounces at lists.opennebula.org] On Behalf Of Javier Fontan
> Sent: Friday, 3 December 2010 8:25
> To: Steven Timm
> Cc: users at lists.opennebula.org
> Subject: Re: [one-users] datablocks continued
>
> Hello,
>
> Sparse files are supported by both ext* and NFS filesystems, but
> unfortunately the scp command (used by tm_ssh) does not know what
> sparse files are. It copies the holes, so the resulting files will be
> bigger and all the null data will be transferred.
>
> I do not recommend storing live images on NFS, as it will become a
> bottleneck for image copying (prolog and epilog times) and I/O.
>
> I see two ways to fix this. The easiest is to modify the transfer
> scripts to use a sparse-file-aware tool to copy images. One such tool
> is rsync, and since it can use ssh as its transport, I think it is the
> easiest to implement. You will have to change the "scp" calls to rsync
> calls with the parameter that makes it handle sparse files (--sparse
> or -S). This can be done in one place: the tm_common.sh file contains
> a list of commands that the scripts use, and the SCP variable can be
> changed to the path of rsync plus the sparse flag:
>
> SCP="/usr/bin/rsync --sparse"
>
> Hopefully this will solve the problems with sparse files and speed
> up image transfers.
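A quick way to see the effect described above (paths here are examples, not OpenNebula's): a sparse file's apparent size (what ls reports) and its allocated size (what du reports) differ, and rsync's --sparse flag re-creates the holes on the receiving side where scp would expand them into real zeroes.

```shell
# Example only: create a 100 MB sparse file and copy it preserving holes.
truncate -s 100M /tmp/sparse-demo.img
apparent=$(stat -c %s /tmp/sparse-demo.img)                # size ls reports
allocated=$(( $(stat -c %b /tmp/sparse-demo.img) * 512 ))  # bytes on disk

# rsync --sparse punches the holes back in at the destination; a plain
# scp of the same file would transfer and store ~100 MB of zeroes.
if command -v rsync >/dev/null 2>&1; then
    rsync --sparse /tmp/sparse-demo.img /tmp/sparse-copy.img
fi
echo "apparent=$apparent allocated=$allocated"
```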
>
> The other way to do this is to use qcow images. This image format is
> not easily manageable with standard tools, but it provides some nice
> features like compression, copy-on-write, and sparse images without
> needing filesystem support. Using these images will make scp behave
> correctly, as they are not sparse files from the filesystem's point of
> view. Qcow is a really nice format and can be useful for more than the
> sparse file problem you had, but it may take more time to implement,
> as you have to convert your images and modify your workflow.
>
> More info on the format can be found here: http://linux.die.net/man/1/qemu-img
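A minimal sketch of the copy-on-write idea mentioned above (file names are illustrative; qemu-img ships with qemu and is not invoked by OpenNebula's stock tm scripts):

```shell
# Example only: a qcow2 overlay records just the changes made on top of
# a read-only base image (copy-on-write), and is sparse by design.
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img create -f raw /tmp/cow-base.img 10M
    qemu-img create -f qcow2 \
        -o backing_file=/tmp/cow-base.img,backing_fmt=raw /tmp/cow-overlay.qcow2
    # The overlay file starts tiny; it grows only as the VM writes to it,
    # which is why scp copies it correctly while a raw sparse file balloons.
    overlay_bytes=$(stat -c %s /tmp/cow-overlay.qcow2)
else
    overlay_bytes=0   # qemu-img not installed; nothing to measure
fi
echo "overlay file size: $overlay_bytes bytes"
```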
>
> Bye
>
> On Tue, Nov 30, 2010 at 5:07 PM, Steven Timm <timm at fnal.gov> wrote:
>>
>> Thanks to help from OpenNebula staff I was able to create a 250GB
>> datablock in the image repository which initially is a sparse file,
>> i.e., ls returns a size of 250GB but du returns a size of only a few
>> megabytes, the actual space used on the filesystem. The datablock is
>> persistent and is meant to be used by only one VM, which will be
>> operating all the time.
>>
>> However, I am now discovering that once I launch the VM which uses
>> this datablock, the act of sending it through the tm_ssh adapters
>> (i.e. transferring it via scp to the host node where the VM will run)
>> makes the file no longer sparse, and it takes just as long to copy it
>> there as if the file were not sparse. I am also guessing that once I
>> stop or shut down this VM, saving the partition will take just as
>> long. Is there any way to get around this? Can datablocks live on
>> NFS? Can the whole image repository live on NFS? If so, can the host
>> nodes grab the images in question (OS or datablock) straight from the
>> NFS server without sending them through the head node?
>>
>> Steve Timm
>>
>>
>>
>> ------------------------------------------------------------------
>> Steven C. Timm, Ph.D (630) 840-8525
>> timm at fnal.gov http://home.fnal.gov/~timm/
>> Fermilab Computing Division, Scientific Computing Facilities,
>> Grid Facilities Department, FermiGrid Services Group, Assistant Group
>> Leader.
>> _______________________________________________
>> Users mailing list
>> Users at lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>
>
>
> --
> Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
> DSA Research Group: http://dsa-research.org
> Globus GridWay Metascheduler: http://www.GridWay.org
> OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
>
--
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org