[one-users] Storage subsystem: which one?
Fabian Wenk
fabian at wenks.ch
Sun Oct 30 09:45:01 PDT 2011
Hello Humberto
Sorry for the delay.
On 18.10.2011 10:35, Humberto N. Castejon Martinez wrote:
> Thank you very much, Fabian and Carlos, for your help. Things are much
> clearer now, I think.
You're welcome.
> * Sharing the image repository.
> If I understood right, the aim with sharing the image repository between the
> front-end and the workers is to increase performance by reducing (or
> eliminating) the time needed to transfer an image from the repository to the
> worker that will run an instance of such image. I have, however, a
With a shared images folder you are able to distribute the
transfer over time, because the VM only reads the parts of the
image it needs (e.g. transferred over NFS and thus over the
network) on an as-needed basis. From a performance point of
view, however, NFS is most often slower than access to an image
on the local disk. This may be different with other storage
solutions, e.g. a distributed file system spanning all the
cluster nodes, or another backend storage solution with iSCSI
and 10 Gbit/s Ethernet to the cluster nodes. This mostly
depends on the complete setup and network infrastructure you
have available. The best would be to do performance testing
yourself, on your own site and infrastructure, to find the
solution that best matches the expectations and needs of your
VM cluster.
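As a minimal sketch of such a shared setup (the paths and the
network below are just assumptions, your OpenNebula install may
use different locations): export the image repository from the
front-end and mount it on every cluster node:

  # on the front-end, in /etc/exports
  /srv/cloud/images  192.168.0.0/24(rw,sync,no_subtree_check)

  # on each cluster node
  mount -t nfs frontend:/srv/cloud/images /srv/cloud/images

With this in place a VM can start reading its image over NFS
right away, without a full copy up front.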
> question/remark here. To really reduce or eliminate the transfer time, the
> image should already reside on the worker node or close to it. If the image
> resides on a central server (case of NFS, if I am not wrong) or on an
> external shared distributed storage space (case of MooseFS, GlusterFS,
> Lustre, and the like), there is still a need to transfer the image to the
> worker, right? In the case of a distributed storage solution like MooseFS,
> etc., the worker could itself be part of the distributed storage space. In
> that case, the image may already reside on the worker, although not
> necessarily, right? But using the worker as both a storage server and client
> may actually compromise performance, from what I have read.
With a distributed file system, it depends on how that
particular system handles this. An example (I do not have any
experience with it, but this is how I would expect such a
distributed file system to work):
In the example we would have cluster nodes 1 to 10, all set up
with e.g. MooseFS. We would also have a permanent image located
on the MooseFS storage (which for redundancy is physically
distributed over several cluster nodes, probably also in
parts). For the example, we assume that the image is physically
on nodes 3, 5 and 7. Now when you start a VM which uses this
image on node 1, it will in the beginning read the image
through MooseFS over the network from one or more of nodes 3, 5
or 7, so the VM can be used immediately. I would expect MooseFS
to rearrange the distributed file system in the background so
that, over time, the image is also physically stored on node 1.
After a while the whole image should be available from the
local disk of node 1, and thus have the same performance as a
normal local disk.
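At least the redundancy part can be controlled explicitly in
MooseFS with a per-file replication goal (the mount point below
is just an assumption):

  # keep 3 copies of everything under images/, recursively
  mfssetgoal -r 3 /mnt/mfs/images

Whether chunks then also migrate towards the node that reads
them is exactly the part I am unsure about.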
If somebody has experience with such a setup, please tell me
whether my "idea" is right or wrong.
> Am I totally wrong with my thoughts here? If not, do we really increase
> transfer performance by sharing the image repository using, e.g. NFS? Are
> there any performance numbers for the different cases that could be shared?
>
> * Sharing the <VM_dir>.
> Sharing the <VM_dir> between the front-end and the workers is not really
> needed, but it is more of a convenient solution, right?
> Sharing the <VM_dir> between the workers themselves might be needed for
> live migration. I say "might" because I have just seen that, for example,
> with KVM we may perform live migrations without shared storage [2]. Has
> anyone experimented with this?
I'm not sure, but I guess OpenNebula depends on a shared file
system for live migration, independent of the hypervisor used.
You could probably do live migration with KVM and local storage
when you are using KVM without OpenNebula.
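For KVM on its own, something along these lines should do a
live migration that also copies the disk (the domain name and
destination host are made up here, and the destination needs a
pre-created disk image of the same size):

  # live migration including storage, no shared <VM_dir> needed
  virsh migrate --live --copy-storage-all myvm qemu+ssh://node02/system

But as said, whether OpenNebula can drive this for you is a
different question.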
> Regarding the documentation, Carlos, it looks fine. I would only suggest the
> possibility of documenting the 3rd case where the image repository is not
> shared but the <VM_dir> is shared.
I am not sure, but I think OpenNebula is currently not able to
handle these two differently, as it is defined per cluster node
with the 'onehost create ...' command, where you choose whether
to use tm_nfs or tm_ssh.
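For example (the hostnames and driver names are just the usual
examples, check the docs for your OpenNebula version):

  # shared storage, images and <VM_dir> reached over NFS
  onehost create node01 im_kvm vmm_kvm tm_nfs

  # local storage, everything copied to the node via ssh
  onehost create node02 im_kvm vmm_kvm tm_ssh

So the transfer manager applies to the whole host, not
separately to the image repository and the <VM_dir>.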
bye
Fabian