[one-users] Fwd: iSCSI multipath

Mihály Héder mihaly.heder at sztaki.mta.hu
Mon Jan 21 04:11:58 PST 2013


---------- Forwarded message ----------
From: Mihály Héder <merlin at sztaki.hu>
Date: 21 January 2013 12:34
Subject: Re: [one-users] iSCSI multipath
To: Miloš Kozák <milos.kozak at lejmr.com>
Cc: users <users at lists.opennebula.org>


Hi!

Last time we had an EqualLogic to test, it did not offer an API for
creating/configuring virtual disks inside it, so I don't think the
iSCSI driver is an alternative: it would require a configuration step
on the storage for every virtual machine.

However, you can use your storage just fine in a shared LVM scenario.
You need to consider two different things:
- the LVM metadata versus the actual VM data on the logical volumes.
It is true that concurrent modification of the metadata should be
avoided, since in theory it can corrupt the whole volume group. You
could use clvm, which prevents that with clustered locking, so that
every participating machine can safely create/modify/delete LVs. In a
nebula setup, however, this is not always necessary: you can make the
LVM metadata read-only on your host servers and let only the frontend
modify it. The frontend can then use plain local locking, which does
not require clvm (see the lvm.conf sketch after this list).
- the host servers can of course still write the data inside the
logical volumes even though the metadata is read-only for them. That
works just fine as long as you don't start two VMs on the same volume.
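
In case it helps, this is roughly what the host-side configuration
looks like; a minimal sketch, assuming an LVM2 version whose lvm.conf
supports the metadata_read_only option (the volume group name is just
a placeholder):

  # /etc/lvm/lvm.conf on the *host* servers (not on the frontend)
  global {
      # refuse any operation that would write LVM metadata from this node;
      # only the frontend (which keeps the default 0) runs lvcreate/lvremove
      metadata_read_only = 1
  }

  # the frontend keeps the default file-based local locking:
  #   global { locking_type = 1 }

  # hosts only ever activate/deactivate existing LVs, e.g.
  #   lvchange -ay vg-one/<lv-of-the-vm>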

We are running this setup with a dual-controller Dell MD3600 storage
without issues so far. Before that we did the same with Xen machines
for years on an older EMC (that was before nebula). Now, with nebula,
we have been using a home-grown module for this, which I can send you
any time - we plan to submit it as a feature enhancement anyway (a
rough sketch of the datastore definition is below). There also seems
to be a similar shared LVM module in the nebula upstream, which we
could not get to work yet, though we did not try very hard.
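
To give you an idea, the datastore definition ends up looking roughly
like this; treat it as a hypothetical sketch - the DS_MAD/TM_MAD and
VG_NAME values below are placeholders for whatever your (or our
home-grown) shared-LVM transfer manager registers, not stock drivers:

  # shared_lvm_ds.txt -- datastore template (values are placeholders)
  NAME    = "shared_lvm"
  DS_MAD  = "fs"           # images are staged through the frontend
  TM_MAD  = "shared_lvm"   # name of the shared-LVM transfer manager
  VG_NAME = "vg-one"       # volume group created on the multipathed LUN

  # register it from the frontend with:
  #   onedatastore create shared_lvm_ds.txt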

The plus side of this setup is that you can make live migration work
nicely. There are two points to consider, however: once you set the
LVM metadata read-only, you won't be able to modify any local LVM
volumes on your host servers, if there are any. Also, on older
kernels, when you modified the LVM on one machine the others did not
get notified of the changes, so you had to issue an lvs command on
them. On newer kernels this seems to be solved and the LVs show up
immediately, though I don't know when and what exactly changed.
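
On the older systems the workaround was simply to refresh the LVM
view on the host before starting the VM; a minimal sketch with
standard LVM2 commands (volume group and LV names are placeholders):

  # on the host server, after the frontend has created a new LV
  lvs                          # listing the LVs re-reads the metadata from disk
  lvchange -ay vg-one/lv-vm42  # activate it so the device node exists
                               # before the VM is started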

Cheers
Mihály Héder
MTA SZTAKI ITAK

On 18 January 2013 08:57, Miloš Kozák <milos.kozak at lejmr.com> wrote:
> Hi, I am setting up a small OpenNebula installation with shared storage
> using iSCSI. The storage is an EqualLogic EMC with two controllers. At the
> moment we have only two host servers, so we use direct (back-to-back)
> connections between the storage and each server, see attachment. For this
> purpose we set up dm-multipath. In the future we want to add more servers,
> and some other technology will be necessary in the network segment, so
> these days we try to keep it as close as possible to the future topology
> from a protocol point of view.
>
> My question is about how to define the datastore: which driver and which
> TM are the best fit?
>
> My primary objective is to avoid GFS2 or any other cluster filesystem; I
> would prefer to keep the datastore as block devices. The only option I see
> is LVM, but I worry about concurrent writes - aren't they a problem? I was
> googling a bit and found that I would need to set up clvm - is it really
> necessary?
>
> Or is it better to use the iSCSI driver, drop dm-multipath and hope?
>
> Thanks, Milos
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>

