[one-users] Opennebula storage question

Roberto Sassu roberto.sassu at polito.it
Sat Jan 8 06:39:24 PST 2011


On Saturday, January 08, 2011 12:40:48 pm Stefan P wrote:
> On Fri, Jan 7, 2011 at 11:47 PM, Roberto Sassu <roberto.sassu at polito.it> wrote:
> 
> > I have some questions about the storage management.
> > First, even when the transfer manager driver is LVM, an
> > uploaded image is placed as a file in the directory
> > /srv/cloud/one/var/images (self-contained installation).
> > Is it possible to store uploaded images directly
> > in an LVM logical volume?
> >
> 

Hi Stefan

thanks for the code!
I'm able to move the image contents to an LVM logical volume, but I need
to change which host the LVM commands are sent to, because my
configuration is a little different.

I created an LVM volume group on top of iSCSI, so that the storage is
shared among all the nodes.
Then I created one LV called 'lv_one_shared', formatted with GFS, that
contains the OpenNebula self-contained installation, and another LV
containing an image template. When a new virtual machine is deployed,
the frontend creates an LVM snapshot of the template, the LVM
configuration is updated on the target node and, finally, that node
starts using the assigned logical volume.
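The per-VM clone step is roughly the following (a sketch only; the
volume group, template and snapshot names are examples from my setup):

```shell
# On the frontend: create a copy-on-write snapshot of the template LV
# for the new virtual machine.
lvcreate --snapshot --size 1G --name lv_one_vm42 /dev/vg_one/lv_template

# On the target node: rescan the shared LVM metadata so the new
# snapshot is visible, then activate it before the VM is started.
vgscan
lvchange -ay /dev/vg_one/lv_one_vm42
```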
However, there are some issues:
- I need to configure the iSCSI exported volumes with the write cache
  turned off, to avoid corruption of the LVM metadata;
- snapshot usage information is not available, because monitoring
  is performed on different nodes;
- when a logical volume is deleted, I need to manually update the
  device mapper configuration on the nodes other than the frontend,
  finding and removing the unused entries.
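For the last point, the manual cleanup on the non-frontend nodes looks
like this (a sketch; device-mapper entry names have the form
vgname-lvname, and the names below are examples from my setup):

```shell
# After the LV has been removed on the frontend, the other nodes can
# still hold a stale device-mapper entry for it: list the entries and
# remove the dead one by hand.
dmsetup ls
dmsetup remove vg_one-lv_one_vm42
```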

I've done some tests (creating and migrating a few virtual machines)
on a cloud with two nodes, and I didn't run into any issues.
The code is not published yet and is at a very early stage of
development, but I can send it to this mailing list for evaluation.


I also have an off-topic question: I created an account on Amazon EC2
and I want to manage it with OpenNebula. I downloaded the EC2 API
tools, created the certificate and enabled the drivers in the
configuration file 'oned.conf'. Then I added the EC2 host with the
command 'onehost create', but I cannot get any information from it,
such as the available memory. Is it possible to debug the connection
with Amazon in order to detect failures?
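So far the only checks I know of are raising DEBUG_LEVEL in 'oned.conf'
and watching var/oned.log, and verifying the credentials outside
OpenNebula with the standard EC2 API tools (this assumes
EC2_PRIVATE_KEY and EC2_CERT are exported in the environment):

```shell
# If this call fails, the problem is in the credentials or the
# connectivity, not in the OpenNebula EC2 driver.
ec2-describe-regions
```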


> Hey Roberto,
> 
> Not "directly", but the scenario you're describing is possible: for each
> image you want to be able to use that way, you need to create a logical
> volume and upload the image into it, on each computing node.
> 
> Then, in VM templates, use something like:
> 
> DISK = [ source = "/dev/vg/volumename", ... ]
> 
> Then tm_clone will indeed create an LVM snapshot from the pre-created
> image volume - but, to reiterate, the volume you're referencing in your
> template must exist on each node.
> 
> I have a (simple, relatively rough) python script that'll take care of
> this pre-deployment; you can find it here: https://gist.github.com/770767
> 
> Note - while testing this, I ran into an issue with etc/tm_lvm/tm_lvmrc
> not coming up with the right lvm volume names for some reason; it ended
> up generating the same name for all volumes, so creating multiple VMs
> failed (but creating only one worked). I had to change:
> 
> echo $1 |$SED -e 's%^.*/\([^/]*\)/images.*$%\1%'
> 
> to
> 
> echo $1 | rev | cut -d/ -f3 | rev
> 
> 
> I'd be interested to know if you need this change as well, so I can submit a
> bug.
> 

Yes, I encountered the same issue and created a similar workaround.
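For reference, both expressions are meant to extract the VM directory
name from the disk image path; on a typical path they agree (example
path only - I haven't tracked down exactly which paths break the sed
version):

```shell
# A typical per-VM disk path: the component before "/images" is the
# VM id, and so is the third path component counting from the end.
path=/srv/cloud/one/var/42/images/disk.0

echo "$path" | sed -e 's%^.*/\([^/]*\)/images.*$%\1%'   # -> 42
echo "$path" | rev | cut -d/ -f3 | rev                  # -> 42
```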

Last question, about your script: is it possible to modify the source
field of the registered image record, so that it points to the created
LVM logical volume?

Thanks

Roberto Sassu


> 
> Regards,
> Stefan Praszalowicz
> 


