[one-users] iSCSI multipath
Miloš Kozák
milos.kozak at lejmr.com
Mon Jan 21 09:37:19 PST 2013
Thank you. Does this mean that I can distribute the metadata files located in
/etc/lvm on the frontend onto the other hosts, and these hosts will then see my
logical volumes? Is there any code in nebula which would provide this? Or do I
need to update the DS scripts to update/distribute the LVM metadata among the
servers?
Thanks, Milos
On 21.1.2013 18:29, Mihály Héder wrote:
> Hi,
>
> LVM metadata[1] is simply stored on the disk. In the setup we are
> discussing this happens to be a shared virtual disk on the storage,
> so any other host that attaches the same virtual disk should see the
> changes as they happen, provided that it re-reads the disk. This
> re-reading step is what you can trigger with lvscan, but nowadays that
> seems to be unnecessary. For us it works with CentOS 6.3, so I guess
> Scientific Linux should be fine as well.
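>
> In case the changes do not show up on a host by themselves, a re-scan
> along these lines forces the re-read (just a sketch; vg_one is an
> example volume group name):
>
>    pvscan                       # re-read physical volumes from the shared disk
>    vgscan                       # re-read volume group metadata
>    lvscan                       # re-read the list of logical volumes
>    vgchange --refresh vg_one    # reload the device-mapper tables of its LVs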
>
> Cheers
> Mihály
>
>
> [1] http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_metadata.html
>
> On 21 January 2013 12:53, Miloš Kozák <milos.kozak at lejmr.com> wrote:
>> Hi,
>> thank you for the great answer. As I wrote, my objective is to avoid as much
>> clustering software (Pacemaker, ...) as possible, so clvm is one of those
>> things I would rather not have in my configuration. Therefore I would let
>> nebula manage the LVM metadata in the first place, as you wrote. The only
>> thing I don't understand is how nebula distributes the LVM metadata?
>>
>> Is the kernel in Scientific Linux 6.3 new enough to avoid the LVM issue you mentioned?
>>
>> Thanks Milos
>>
>> On 21.1.2013 12:34, Mihály Héder wrote:
>>
>>> Hi!
>>>
>>> The last time we were able to test an EqualLogic, it had no API for
>>> creating/configuring virtual disks inside it, so I don't think the
>>> iSCSI driver is an alternative, as it would require a manual
>>> configuration step per virtual machine on the storage.
>>>
>>> However, you can use your storage just fine in a shared LVM scenario.
>>> You need to consider two different things: the LVM metadata, and the
>>> actual VM data on the partitions.
>>> - The LVM metadata: it is true that concurrent modification of the
>>> metadata should be avoided, as in theory it can damage the whole
>>> volume group. You could use clvm, which avoids that by clustered
>>> locking, and then every participating machine can safely
>>> create/modify/delete LVs. However, in a nebula setup this is not
>>> necessary in every case: you can make the LVM metadata read-only on
>>> your host servers and let only the frontend modify it. Then it can use
>>> local locking, which does not require clvm (see the lvm.conf sketch
>>> below).
>>> - The VM data: of course the host servers can write the data inside
>>> the partitions regardless of the metadata being read-only for them. It
>>> should work just fine as long as you don't start two VMs on one
>>> partition.
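>>>
>>> A minimal lvm.conf sketch of what I mean (option names as in the LVM2
>>> shipped with EL6; please double-check them against your version):
>>>
>>>    # /etc/lvm/lvm.conf on the host servers: refuse on-disk metadata changes
>>>    global {
>>>        metadata_read_only = 1   # commands that would write LVM metadata fail,
>>>                                 # activating existing LVs still works
>>>        # locking_type = 4       # stricter alternative: read-only locking
>>>    }
>>>
>>>    # /etc/lvm/lvm.conf on the frontend: default local locking, no clvm needed
>>>    global {
>>>        locking_type = 1
>>>    }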
>>>
>>> We are running this setup with a dual-controller Dell MD3600 storage
>>> without issues so far. Before that, we used to do the same with Xen
>>> machines for years on an older EMC (that was before nebula). Now with
>>> nebula we have been using a home-grown module for doing that, which I
>>> can send you any time - we plan to submit it as a feature
>>> enhancement anyway. Also, there seems to be a similar shared LVM
>>> module in the nebula upstream which we could not get to work yet, but
>>> we did not try very hard.
>>>
>>> The plus side of this setup is that you can make live migration work
>>> nicely. There are two points to consider, however: once you set the
>>> LVM metadata read-only, you won't be able to modify any local LVM
>>> volumes on your servers, if there are any. Also, in older kernels,
>>> when you modified the LVM on one machine the others did not get
>>> notified about the changes, so you had to issue an lvs command. In
>>> newer kernels this issue seems to be solved and the LVs get updated
>>> instantly. I don't know when and what exactly changed though.
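>>>
>>> On such older kernels, something along these lines on the target host,
>>> before the VM is started or migrated there, should be enough (just a
>>> sketch; vg_one/lv-vm42 is an example name):
>>>
>>>    lvscan                             # re-read LVM metadata from the shared disk
>>>    lvchange -ay vg_one/lv-vm42        # make sure the LV is active on this host
>>>    lvchange --refresh vg_one/lv-vm42  # reload its device-mapper table if already active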
>>>
>>> Cheers
>>> Mihály Héder
>>> MTA SZTAKI ITAK
>>>
>>> On 18 January 2013 08:57, Miloš Kozák <milos.kozak at lejmr.com> wrote:
>>>> Hi, I am setting up a small installation of opennebula with shared storage
>>>> using iSCSI. The storage is an EqualLogic EMC with two controllers. At the
>>>> moment we have only two host servers, so we use direct back-to-back
>>>> connections between the storage and each server, see the attachment. For
>>>> this purpose we set up dm-multipath. Because in the future we want to add
>>>> other servers, some other technology will be necessary in the network
>>>> segment. These days we try to keep it as close as possible to the future
>>>> topology from the protocol point of view.
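>>>>
>>>> For reference, the dm-multipath configuration for this kind of setup is
>>>> roughly the following /etc/multipath.conf (only a sketch; the EqualLogic
>>>> vendor/product strings and settings should be checked against what
>>>> multipath -ll reports):
>>>>
>>>>    defaults {
>>>>        user_friendly_names yes
>>>>    }
>>>>    devices {
>>>>        device {
>>>>            vendor                "EQLOGIC"
>>>>            product               "100E-00"
>>>>            path_grouping_policy  multibus    # spread I/O over both controller paths
>>>>            path_checker          tur
>>>>            failback              immediate
>>>>        }
>>>>    }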
>>>>
>>>> My question is about how to define the datastore: which driver and
>>>> which TM are best?
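>>>>
>>>> I guess the datastore definition itself would be something along these
>>>> lines, but I am not sure which MAD names to put in (the ones below are
>>>> just a guess):
>>>>
>>>>    NAME   = "san_lvm"
>>>>    DS_MAD = lvm    # datastore driver - which one?
>>>>    TM_MAD = lvm    # transfer manager driver - which one?
>>>>
>>>> and then registered with something like: onedatastore create san_lvm.ds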
>>>>
>>>> My primary objective is to avoid GFS2 or any other cluster filesystem; I
>>>> would prefer to keep the datastore as block devices. The only option I see
>>>> is to use LVM, but I worry about concurrent writes - isn't that a problem?
>>>> I was googling a bit and found that I would need to set up clvm - is it
>>>> really necessary?
>>>>
>>>> Or is it better to use the iSCSI driver, drop dm-multipath and hope?
>>>>
>>>> Thanks, Milos
>>>>