[one-users] datablocks continued

Steven Timm timm at fnal.gov
Tue Nov 30 08:07:46 PST 2010

Thanks to help from the OpenNebula staff I was able to create
a 250 GB datablock in the image repository which is initially
a sparse file: ls reports a size of 250 GB, but du reports
only a few megabytes, the actual space used on the file
system itself.  The datablock is persistent and is meant to be
used by only one VM, which will be running all the time.
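For reference, the ls-versus-du behavior above can be reproduced with any
sparse file; the path below is illustrative, not the actual repository layout:

```shell
# Create a sparse 250 GB file: truncate sets the apparent size
# without allocating any data blocks on disk.
truncate -s 250G /tmp/datablock.img

ls -lh /tmp/datablock.img    # apparent size: 250G
du -h  /tmp/datablock.img    # allocated size: effectively zero

# stat shows both numbers at once: %s is bytes, %b is 512-byte blocks.
stat -c 'apparent=%s bytes, allocated=%b blocks' /tmp/datablock.img
```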

However, I am now discovering that once I launch the VM that uses
this datablock, sending it through the tm_ssh adapters
(i.e., transferring it via scp to the host node where the VM will run)
makes the file no longer sparse, and it takes just as long to copy
as if the file had never been sparse.  I am also guessing that once
I stop or shut down this VM, saving the partition back will take
just as long.
Is there any way to get around this?  Can datablocks live on NFS?
Can the whole image repository live on NFS?  If so, can the host
nodes fetch the images in question (OS or datablock) straight from
the NFS server without sending them through the head node?
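For comparison, standard tools can preserve the holes during a copy where
scp cannot; this is only a sketch of the general technique (a smaller test
file and illustrative paths), not a statement about what the tm_ssh
adapters actually invoke:

```shell
# A 1 GB sparse file stands in for the real 250 GB datablock.
truncate -s 1G /tmp/datablock.img

# scp reads and writes every byte, so the destination copy loses its
# holes.  cp can instead detect runs of zeros and re-create the holes:
cp --sparse=always /tmp/datablock.img /tmp/copy.img   # GNU coreutils
du -h /tmp/copy.img                                   # still ~0 allocated

# The same idea over ssh (hostname and remote path illustrative):
# rsync --sparse /tmp/datablock.img node01:/var/lib/one/
```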

Steve Timm

Steven C. Timm, Ph.D  (630) 840-8525
timm at fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Assistant Group Leader.
