[one-users] TransportManager for SAN storage, VMs in clvm, templates in GFS2 volume (Users Digest, Vol 49, Issue 11)

Rolandas Naujikas rolandas.naujikas at mif.vu.lt
Sat Mar 3 22:12:36 PST 2012


Hi,

Did you test the performance of LVM snapshots?
LVM performance is usually OK, but with snapshots active it can drop
around 10x if your storage doesn't have much IOPS capacity (many disk
spindles or SSDs). Even with good IOPS or SSDs it still drops 2-3x.
For reference http://www.nikhef.nl/~dennisvd/lvmcrap.html
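If you want to check this on your own storage, a quick test along these
lines shows the effect (VG and LV names are just examples, and use a
scratch LV - the dd writes destroy its contents). Compare the write
speed with and without a snapshot attached:

  # baseline write speed on the origin LV
  dd if=/dev/zero of=/dev/vg-one/lv-test bs=1M count=1024 oflag=direct
  # attach a snapshot, then repeat the same write
  lvcreate -s -L 2G -n lv-test-snap /dev/vg-one/lv-test
  dd if=/dev/zero of=/dev/vg-one/lv-test bs=1M count=1024 oflag=direct
  lvremove -f /dev/vg-one/lv-test-snap

The slowdown comes from copy-on-write: while the snapshot exists, every
write to the origin first copies the old data into the snapshot area.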

Regards, Rolandas Naujikas

P.S. For better snapshot performance I would look at ZFS (or btrfs, but
that is not yet production-ready); for that you can use
Solaris/OpenSolaris or FreeBSD (or its NAS distribution, FreeNAS).
There is already a write-up on combining it with OpenNebula:
http://mperedim.wordpress.com/2010/09/26/opennebula-zfs-and-xen-part-1-get-going/
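Just to illustrate why ZFS helps here: its snapshots and clones are
copy-on-write at the filesystem level and are nearly free to create
(pool/dataset names below are only examples):

  zfs snapshot tank/vmimages/base@golden
  zfs clone tank/vmimages/base@golden tank/vmimages/vm42

The clone shares blocks with its origin, so there is no full dd-style
copy up front and no per-write copy penalty like with LVM snapshots.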

On 2012-03-03 13:10, jan horacek wrote:
> Hello,
> 
> I have already published the updated driver. Now all images not
> uploaded by the administrator to the GFS2 volume are stored inside LVM
> volumes (inside clvm):
> 
> https://github.com/jhrcz/opennebula-tm-gfs2clvm/tree/v0.0.20120303.0
> 
> Basic tests are already done, and everything seems to work fine ;o) It
> looks like it could really be added to the OpenNebula ecosystem ;o)
> 
> Regards,
> J.Horacek
> 
> On Thu, Mar 1, 2012 at 9:01 AM, jan horacek <jahor.jhr.cz at gmail.com> wrote:
>> Hi Steve,
>>
>> The complete write-up about setting all of this up "is on my todo".
>>
>> clvm is set up in its minimal form on CentOS: just shared storage (SAS
>> infrastructure in my case, but it could be anything else like DRBD
>> primary/primary, iSCSI, etc.). The driver currently does not use
>> exclusive locks, though that may change in the near future. The global
>> volume group for all the cloud-related things is created as clustered
>> (--clustered yes).
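>>
>> Roughly, the clustered VG setup looks like this (device and VG names
>> are only placeholders):
>>
>>   lvmconf --enable-cluster          # switch LVM to cluster-wide locking
>>   service clvmd start               # clustered LVM daemon, on every node
>>   pvcreate /dev/mapper/shared-lun   # the shared SAS/iSCSI/DRBD device
>>   vgcreate -cy vg-one /dev/mapper/shared-lun   # -cy = --clustered yes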
>>
>> The GFS2 volume in one of the LVs is mounted as /var/lib/one on the
>> worker nodes; the context.sh contextualisation ISOs, VM checkpoints,
>> deployment.X files and disk.X symlinks live there. All the files for
>> oned are on the management node, in its own /var/lib/one. This storage
>> is NOT shared between the management node and the worker nodes.
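>>
>> Something along these lines, where the cluster name, LV size and the
>> journal count are only examples (one journal per node that mounts the
>> filesystem):
>>
>>   lvcreate -L 50G -n lv-one-gfs2 vg-one
>>   mkfs.gfs2 -p lock_dlm -t mycluster:one -j 4 /dev/vg-one/lv-one-gfs2
>>   # on every worker node:
>>   mount -t gfs2 /dev/vg-one/lv-one-gfs2 /var/lib/one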
>>
>> I'm currently rewriting the driver and the related parts (remotes/fs)
>> to support the next level for this setup - having all the images
>> created dynamically by OpenNebula on clvm too.
>>
>> So in the VG there are (a rough sketch of this layout follows the
>> list):
>>  * an LV for GFS2
>>  * LVs "lv-one-XXX-X" for non-persistent, dynamically created volumes
>>  * LVs "lv-oneimg-XXXXXXXXXXX" for volumes created in one (by saveas,
>> cloning etc. - a replacement for the "hash-like" named files)
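>>
>> As a rough sketch of that layout (sizes are made up, the XXX parts are
>> the usual VM/image identifiers):
>>
>>   lvcreate -L 50G -n lv-one-gfs2 vg-one           # the shared GFS2 filesystem
>>   lvcreate -L 10G -n lv-one-XXX-X vg-one          # disk X of non-persistent VM XXX
>>   lvcreate -L 10G -n lv-oneimg-XXXXXXXXXXX vg-one # image created by one (saveas/clone)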
>>
>> This brings the possibility of using persistent volumes from LVM
>> instead of the GFS2 filesystem, and in the future the possibility of
>> using snapshots for cloning (and even live snapshotting, without
>> suspending the machine). Currently it is able to create new volumes as
>> a copy of volumes from machines in the suspended state: no need to
>> wait for shutdown, just suspend the machine and create the copy. For
>> me this gives proof that a working copy has been cloned and checked
>> while the source still exists - no risk of a saveas failure with loss
>> of work.
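>>
>> A minimal sketch of that suspend-and-copy idea (domain name, IDs and
>> sizes are only examples, not the exact driver code):
>>
>>   virsh suspend one-42        # pause the source VM, its disk stops changing
>>   lvcreate -L 10G -n lv-oneimg-XXXXXXXXXXX vg-one
>>   dd if=/dev/vg-one/lv-one-42-0 of=/dev/vg-one/lv-oneimg-XXXXXXXXXXX bs=1M
>>   virsh resume one-42         # resume the source once the copy exists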
>>
>> The latest changes are not in my git repo yet; I hope they will be
>> very soon.
>>
>> To your questions...
>>
>> ad 1... Yes, the OpenNebula management/head node is a virtual machine
>> in my installation. It runs on another physical machine that is not
>> directly connected to the cluster tools. This is why I took on the
>> challenge of keeping that node off the shared filesystem, to make safe
>> fencing in the cluster possible. All fencing can go directly to IPMI,
>> with no need to use libvirt fencing for the management node (the
>> management node shares its physical machine with other critical
>> systems not related to the cloud, so IPMI fencing is not the best way
>> to fence it).
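>>
>> Fencing a worker node then boils down to something like this (address
>> and credentials are placeholders), while the management node VM simply
>> stays out of the cluster:
>>
>>   ipmitool -H node01-ipmi -U admin -P secret chassis power cycle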
>>
>> ad 2... Some description of this is in the text above. The driver in
>> the form of v0.0.20120221.0, as initially pushed to GitHub, keeps the
>> master templates (ISOs, hand-made system images) and all created
>> volumes (the files with "hash-like" filenames) on the GFS2 volume,
>> while the running VM images are in LVM, so every time you deploy a
>> machine it dd's the image from GFS2 to LVM (sketched below).
>> Persistent images are used from GFS2. But as I wrote above, this will
>> change, because I want to minimize the usage of the GFS2 storage.
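>>
>> In effect, the clone step of that driver version does something like
>> the following (paths and names are simplified, not the literal
>> script):
>>
>>   # copy the repository image from the GFS2 volume into the VM's LV
>>   lvcreate -L 10G -n lv-one-42-0 vg-one
>>   dd if=/var/lib/one/images/<hash> of=/dev/vg-one/lv-one-42-0 bs=1M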
>>
>> I hope I answered the questions sufficiently ;o)
>>
>> Regards,
>> Jan Horacek
>>
>> On Mon, Feb 27, 2012 at 11:37 PM, Steven Timm <timm at fnal.gov> wrote:
>>> This is very interesting work. Jan, do you have any write-up on how
>>> you set up GFS2 and clvm to work with this driver?
>>> It's a use case very similar to what we are considering for FermiCloud.
>>>
>>> Two other questions that weren't immediately obvious from looking
>>> at the code:
>>>
>>> 1) With this driver, could the OpenNebula head node itself be
>>> a (static) virtual machine? It looks like yes, but I want to be sure.
>>>
>>> 2) How is the notion of an image repository handled--does
>>> OpenNebula copy the OS image from a separate image repository
>>> every time the VM is instantiated, or is the repository defined
>>> to be the place that the OS image lives on disk?
>>>
>>> Steve Timm
>>>
>>>
>>>
>>>
>>>
>>> On Thu, 23 Feb 2012, Borja Sotomayor wrote:
>>>
>>>> Hi Jan,
>>>>
>>>>> I call this transfer manager driver **gfs2clvm** and have made it
>>>>> (even in its current development state - but most of the functions
>>>>> work already) available on GitHub:
>>>>> https://github.com/jhrcz/opennebula-tm-gfs2clvm
>>>>>
>>>>> If anyone is interested and wants to contribute and help, please
>>>>> contact me.
>>>>
>>>>
>>>> A good way to get more people involved and interested would be to add
>>>> it to our ecosystem catalog:
>>>>
>>>>   http://www.opennebula.org/software:ecosystem
>>>>
>>>> gfs2clvm definitely sounds like a good candidate for inclusion in the
>>>> ecosystem. If you are interested, you can find instructions here:
>>>>
>>>>   http://www.opennebula.org/community:ecosystem
>>>>
>>>> You're also welcome to write about gfs2clvm on our blog
>>>> (http://blog.opennebula.org/). If you're interested, just drop me a
>>>> line off-list and I'll set you up with an account.
>>>>
>>>> Cheers!
>>>> --
>>>> Borja Sotomayor
>>>>
>>>>  Researcher, Computation Institute
>>>>  Lecturer, Department of Computer Science
>>>>  University of Chicago
>>>>  http://people.cs.uchicago.edu/~borja/
>>>>
>>>>  Community Manager, OpenNebula project
>>>>  http://www.opennebula.org/
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at lists.opennebula.org
>>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>>
>>>
>>> ------------------------------------------------------------------
>>> Steven C. Timm, Ph.D  (630) 840-8525
>>> timm at fnal.gov  http://home.fnal.gov/~timm/
>>> Fermilab Computing Division, Scientific Computing Facilities,
>>> Grid Facilities Department, FermiGrid Services Group, Group Leader.
>>> Lead of FermiCloud project.