[one-dev] announcing addon plan: shared-lvm-single-lock

Javier Fontan jfontan at opennebula.org
Wed Dec 18 06:03:14 PST 2013


Hi Nicolas,

Sorry for the huge delay. Releasing a new version does not reduce the
amount of work :/

Do you want me to create the repo for the new addon? I can use the name
"addon-shared-lvm-single-lock" and add your user "nagius" as admin to the
repo.

Cheers


On Tue, Dec 3, 2013 at 6:16 PM, Nicolas AGIUS <nicolas.agius at lps-it.fr> wrote:

> Hi,
>
> Maybe you should have a look at the CXM project [1] and the CXM driver
> [2].
> It's a little bit old now, but it has been designed to work with cLVM on
> Xen hosts, and it provides load-balancing and automatic failover.
>
> It can also work without cLVM. In that setup, OpenNebula is not part of
> a cLVM cluster and all Xen nodes read and write the LVM metadata. That
> works, but it is dangerous and slower.
>
> Cheers,
> Nicolas AGIUS
>
> [1] https://github.com/nagius/cxm
> [2] http://opennebula.org/software:ecosystem:cxm_drivers
>
>
> --------------------------------------------
> On Tue, 3 Dec 2013, Jaime Melis <jmelis at opennebula.org> wrote:
>
>  Subject: Re: [one-dev] announcing addon plan: shared-lvm-single-lock
>  To: "Mihály Héder" <mihaly.heder at sztaki.mta.hu>
>  Cc: dev at lists.opennebula.org, cc at hbit.sztaki.hu
>  Date: Tuesday, 3 December 2013, 12:24
>
>  Hi Mihály,
>
>  I'm sincerely looking forward to seeing this addon and I think it would
>  help many OpenNebula users. Do you have anything done already? Maybe I
>  could take a quick look at it and give you some feedback?
>
>
>  As a general recommendation I'd like to point out that it would be
>  great if you could reuse code from other DS and TM drivers.
>
>  Cheers,
>  Jaime
>
>
>
>
>  On Wed, Nov 27, 2013 at 11:17 PM, Mihály Héder
>  <mihaly.heder at sztaki.mta.hu> wrote:
>
>
>  Hi,
>
>  I want to announce my plans to create addon-shared-lvm-single-lock,
>  which would be the current version of the patches detailed here:
>  http://wiki.opennebula.org/shared_lvm
>
>
>
>
>  In a nutshell: the patch lets OpenNebula use commercial off-the-shelf
>  SANs that provide a few pre-configured LUNs as block devices over iSCSI,
>  AoE or FC.
>
>
>  The block devices in question are mounted on all the nodes and on the
>  frontend, too. Each device is split into logical volumes that are present
>  everywhere but only active on the node that runs the VM in question. This
>  allows live migration, facilitated by migration hooks which activate and
>  deactivate volumes as the instance moves.
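>
>  Roughly, the hook logic is equivalent to the following minimal sketch
>  (Python is used here for illustration only; the real hooks are shell
>  scripts, and the host and volume names are hypothetical). It assumes
>  passwordless SSH to the nodes and sudo rights for lvchange:
>
>  #!/usr/bin/env python
>  # Illustrative sketch of the activate/deactivate step around a live
>  # migration; not the actual driver code.
>  import subprocess
>
>  def lvchange(host, flag, vg, lv):
>      # Toggle activation of a logical volume on the given node.
>      subprocess.check_call(
>          ["ssh", host, "sudo", "lvchange", flag, "%s/%s" % (vg, lv)])
>
>  def migrate(vg, lv, src_host, dst_host):
>      lvchange(dst_host, "-ay", vg, lv)   # activate on the destination first
>      # ... OpenNebula/libvirt performs the live migration here ...
>      lvchange(src_host, "-an", vg, lv)   # then deactivate on the source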
>
>
>
>  This way, in certain cases the use of cLVM can be spared, as all nodes
>  except the frontend can run with the read-only LVM metadata setting,
>  meaning that local locking is sufficient on the frontend. This holds only
>  as long as other storage drivers don't try to modify any LVM VGs on the
>  nodes - they would not work because of the read-only metadata.
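>
>  On the nodes this boils down to something along these lines in
>  /etc/lvm/lvm.conf (illustrative only; exact option names can vary
>  between LVM versions):
>
>  global {
>      # nodes may only activate/deactivate volumes, never change metadata
>      metadata_read_only = 1
>      # plain local locking; only the frontend ever modifies the VGs
>      locking_type = 1
>  }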
>
>
>
>
>  Please comment on the matter! Thanks!
>
>  Cheers
>  Mihály Héder
>  MTA SZTAKI HBIT
>
>
>
>
>
>
>
>  --
>  Jaime Melis
>  Project Engineer
>  OpenNebula - Flexible Enterprise Cloud Made Simple
>  www.OpenNebula.org | jmelis at opennebula.org
>
>
>
>
>
> _______________________________________________
> Dev mailing list
> Dev at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/dev-opennebula.org
>



-- 
Javier Fontán Muiños
Developer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | @OpenNebula | github.com/jfontan