[one-users] datablocks continued

Javier Fontan jfontan at gmail.com
Fri Dec 3 09:35:58 PST 2010


I was not aware of the problems rsync has with runs of null bytes
larger than 2GB. I'll definitely run some tests on my machine to find
out whether we hit this problem as well.
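
The test will probably look something like the sketch below (untested;
the 3G size, the file names, and "otherhost" are just placeholders):
create a file that is one big hole larger than 2GB, copy it with
rsync's sparse option, and check integrity and sparseness on the other
side:

    # create a 3GB file that is entirely a hole:
    # apparent size 3G, almost no blocks allocated
    truncate -s 3G sparse.img

    # compare apparent size (ls) with allocated blocks (du)
    ls -lh sparse.img
    du -h sparse.img

    # copy with sparse handling enabled (-S/--sparse)
    rsync -S sparse.img otherhost:/tmp/sparse.img

    # the checksums must match, and du on the remote side should
    # again report almost no allocated blocks
    md5sum sparse.img
    ssh otherhost 'md5sum /tmp/sparse.img; du -h /tmp/sparse.img'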

On Fri, Dec 3, 2010 at 9:45 AM,  <opennebula at nerling.ch> wrote:
> Good day Steven.
> We had the same problem with raw sparse files, so we use compressed qcow2
> now.
> Rsync handles sparse files much better than scp, but we have seen rsync
> corrupt raw files when a run of null bytes greater than 2GB was present
> in the file - which would be the case for an empty sparse file larger
> than 2GB. A patch for this bug has been available for over a year,
> although I have not retested it.
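
For anyone who wants to try the compressed qcow2 workaround described
above, the conversion is a standard qemu-img call (the file names here
are only examples):

    # convert a raw sparse image into a compressed qcow2 image;
    # -O selects the output format, -c enables compression
    qemu-img convert -c -O qcow2 datablock.raw datablock.qcow2

    # inspect the result: virtual size vs. space actually used
    qemu-img info datablock.qcow2

Since qcow2 only stores allocated clusters, the file stays small no
matter how the copy tool treats holes.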
>
> Quoting Steven Timm <timm at fnal.gov>:
>
>>
>> Thanks to help from OpenNebula staff I was able to create
>> a 250GB datablock in the image repository which is initially
>> a sparse file, i.e., ls reports a size of 250GB but
>> du reports only a few megabytes, the space actually used
>> on the file system. The datablock is persistent and is
>> meant to be used by only one VM, which will be running all the time.
>>
>> However, I am now discovering that once I launch the VM that uses
>> this datablock, sending it through the tm_ssh drivers
>> (i.e., transferring it via scp to the host node where the VM will run)
>> makes the file lose its sparseness, and copying it there takes just
>> as long as if the file were fully allocated.
>> I am also guessing that once I stop or shut down this VM, saving
>> the partition will take just as long.
>> Is there any way to get around this? Can datablocks live on NFS? Can the
>> whole image repository live on NFS? If so, can the host nodes
>> grab the images in question (OS or datablock) straight from the NFS server
>> without sending them through the head node?
>>
>> Steve Timm
>>
>>
>>
>> ------------------------------------------------------------------
>> Steven C. Timm, Ph.D  (630) 840-8525
>> timm at fnal.gov  http://home.fnal.gov/~timm/
>> Fermilab Computing Division, Scientific Computing Facilities,
>> Grid Facilities Department, FermiGrid Services Group, Assistant Group
>> Leader.
>> _______________________________________________
>> Users mailing list
>> Users at lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
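
On Steve's original question: scp has no option for preserving holes,
so the destination file always ends up fully allocated. A quick way to
check whether a given copy kept its sparseness is to compare apparent
size with allocated blocks; for a local copy, GNU cp can also re-create
the holes explicitly (the paths below are only examples):

    # -s prints allocated blocks, so a sparse 250GB file shows a
    # small block count next to its large apparent size
    ls -lsh /path/to/datablock.img

    # re-create holes while copying locally
    cp --sparse=always datablock.img datablock-sparse.img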



-- 
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org


