[one-dev] OpenNebula LXC Addon

Valentin Bud valentin at databus.pro
Sat Oct 26 03:05:52 PDT 2013


Dear community,

I am writing this E-Mail to gather feedback and ideas about an
OpenNebula LXC addon.

I am sure that most of the people reading this list are familiar with, or
have at least heard of, LXC, but a small introduction doesn't hurt anyone I guess.

LXC provides operating-system-level virtualization, not via a virtual
machine, but rather via a contained virtual environment [1].

LXC is also a userspace interface for the Linux kernel containment
features. A list of the kernel features LXC uses to contain processes can
be found on the Linux Containers website [2].

One can create a container using either the LXC userspace tools or the
libvirt API. There are a number of lxc-* tools that drive the
creation/start/stop actions for a container. I find two articles quite
informative about what each tool does [3], [4].
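
To give a flavor, a typical container lifecycle with the userspace tools
looks roughly like this (container name and template are just examples):

    # create a Debian container named web01 using the debian template
    lxc-create -n web01 -t debian

    # start it in the background, then attach to its console
    lxc-start -n web01 -d
    lxc-console -n web01

    # stop and remove it
    lxc-stop -n web01
    lxc-destroy -n web01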

Libvirt has a page dedicated to the LXC container driver [14]. After
reading this page one has sufficient knowledge to drive containers via
the libvirt API.

The question that arises is: how does LXC fit inside OpenNebula? I have
two ideas that are worth exploring, and I would like to get feedback and
ideas for improvement.

Before talking about them, though, I would like to let you know about work
that has been done in the past to provide an OpenNebula LXC driver,
namely OneLXC [5]. The work was done by the China Mobile Research Institute
and can be found in cmri's one GitHub repository [6], on branch one-3.2.

I haven't tested the driver, but it has a 'major' drawback: it 'touches'
the Core, so anyone who wants to keep the OneLXC plugin in sync with the
latest OpenNebula release has to either repackage the OpenNebula software
or install from source.

I find it easier to install from packages rather than from source. I also
don't want my production machines to have build tools installed.

I have been talking to Javier Fontan in private about an OpenNebula LXC
addon, and we came to the conclusion that it's better to stay away from the
Core and use the XML representation of the VM [7]. The Virtual Machine
Manager (VMM) drivers generate the deployment file, which in KVM's case
is the libvirt XML representation of the VM, used to deploy the VM on a
given node with virsh(1).

If one sets the VMM driver's type to xml, the output looks like the
hastebin snippet [8].
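
For reference, the driver type is set in oned.conf. A sketch of what such
a VM_MAD section could look like, modeled on the stock KVM section (names
and arguments are illustrative, check the documentation for your version):

    VM_MAD = [
        name       = "lxc",
        executable = "one_vmm_exec",
        arguments  = "-t 15 -r 0 lxc",
        type       = "xml" ]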

Starting from that XML output, I see two possible ways to actually
implement the driver.

One of them is to use lxc-* tools inside the action scripts that get
executed via one_vmm_exec and/or one_vmm_sh.
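
As a very rough sketch of that first approach, a deploy action script
could look like the following. The argument order is an assumption modeled
on the existing KVM driver, and the XPath expression depends on the actual
layout of the XML deployment file [8]:

    #!/bin/bash
    # deploy: hypothetical VMM action script for an LXC driver
    DEP_FILE=$1     # XML deployment file generated by the Core
    VM_ID=$3

    CONTAINER="one-$VM_ID"

    # pull the rootfs location out of the deployment file
    # (xmllint ships with libxml2)
    ROOTFS=$(xmllint --xpath 'string(//ROOTFS)' "$DEP_FILE")
    [ -d "$ROOTFS" ] || { echo "missing rootfs $ROOTFS" >&2; exit 1; }

    # the lxc profile is assumed to have been placed here by the TM driver
    lxc-start -n "$CONTAINER" -d -f "/var/lib/lxc/$CONTAINER/config"

    # the Core expects the deploy ID on stdout
    echo "$CONTAINER"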

From the perspective of the LXC userspace tools, a container has a root
file system, which can either be a directory on disk, located by default
at /var/lib/lxc/<container>/rootfs, or some kind of volume. I have read
on the LXC users mailing list [9] about people using LXC successfully
with LVM and ZFS.

Each container also has a configuration file, by default located at
/var/lib/lxc/<container>/config. See the lxc.conf(5) man page for your
specific distribution.

Another concept in the LXC userspace tools that's worth mentioning is
templates: shell scripts that are called to set up the rootfs of the
container. For example, on Debian the templates use debootstrap to create
the rootfs. The templates kick in when using lxc-create(1).

Most of the templates also feature a caching mechanism, so the next time
one creates a container the script just copies a rootfs from the cache to
the container's rootfs directory, or volume in the case of LVM.

The idea is really simple. Use the ROOT sub-attribute of OS [10] to define
the rootfs directory for a new container, or define another sub-attribute
called ROOTFS to be consistent with LXC terminology. Either way would
work.
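
In a VM template that could look something like this (ROOTFS is the
hypothetical new sub-attribute, and one-200 stands in for a VM with ID
200; none of this exists yet):

    NAME   = "lxc-test"
    MEMORY = 512
    OS     = [ ROOTFS = "/var/lib/lxc/one-200/rootfs" ]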

Then, create in a files datastore [11] a so-called lxc profile, which is
nothing more than a preconfigured lxc.conf(5) file that can be transported
to the remote host and placed in the corresponding location, e.g.
/var/lib/lxc/one-200/config, after filling in the values with data
gathered from the XML output [8].
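
Such a profile could look roughly like the snippet below; the
<angle bracket> values would be filled in from the XML, and the key names
are the ones documented in lxc.conf(5):

    lxc.utsname = <vm_name>
    lxc.rootfs  = /var/lib/lxc/<vm_name>/rootfs

    lxc.network.type   = veth
    lxc.network.link   = <bridge>
    lxc.network.hwaddr = <mac>
    lxc.network.flags  = up

    lxc.tty = 4
    lxc.cgroup.memory.limit_in_bytes = <memory>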

The container base image would be stored in a datastore in compressed
form. I think a specialized Transfer Manager (TM) [12] driver is also
needed, to decompress the image under /var/lib/lxc/<container>/rootfs and
link the system datastore disk.* entries. The context disk would be bind
mounted in the container under /mnt, so the contextualization mechanism
works without modifications.
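
A minimal sketch of the interesting part of such a TM script, assuming a
gzipped tarball image and leaving out all the SSH plumbing the real TM
drivers have (CONTAINER, SRC_IMAGE and VM_DIR are placeholders):

    # unpack the compressed base image into the container rootfs
    mkdir -p "/var/lib/lxc/$CONTAINER/rootfs"
    tar -xzf "$SRC_IMAGE" -C "/var/lib/lxc/$CONTAINER/rootfs"

    # expose the rootfs through the expected system datastore layout
    ln -s "/var/lib/lxc/$CONTAINER/rootfs" "$VM_DIR/disk.0"

    # mount the context ISO and bind it to /mnt inside the container;
    # an lxc.mount.entry line in the profile would be the cleaner way
    CTX=$(mktemp -d)
    mount -o loop "$VM_DIR/disk.1" "$CTX"
    mount --bind "$CTX" "/var/lib/lxc/$CONTAINER/rootfs/mnt"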

There is also a drawback to this approach: the LXC userspace tools cannot
add the virtual network interface, of type veth by default, to an Open
vSwitch bridge.

Maybe it could work with brcompat, but that has been removed from the
latest Open vSwitch releases. A discussion about this can be found on the
Open vSwitch mailing list [13].

The second approach is to use the XML output by the Core to generate a
libvirt XML definition, to be used by virsh -c lxc:/// to deploy the
container. The libvirt XML profile could be stored in a files datastore
just like in the previous approach.
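
Going by the libvirt LXC driver page [14], such a generated definition
could look roughly like this (the name, memory value and bridge name are
placeholders):

    <domain type='lxc'>
      <name>one-200</name>
      <memory>524288</memory>
      <os>
        <type>exe</type>
        <init>/sbin/init</init>
      </os>
      <devices>
        <filesystem type='mount'>
          <source dir='/var/lib/lxc/one-200/rootfs'/>
          <target dir='/'/>
        </filesystem>
        <interface type='bridge'>
          <source bridge='ovsbr0'/>
          <virtualport type='openvswitch'/>
        </interface>
        <console type='pty'/>
      </devices>
    </domain>

It would then be started with something like:

    virsh -c lxc:/// create one-200.xml

Note the <virtualport type='openvswitch'/> element, which is what makes
the Open vSwitch case work here.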

Using libvirt, all the network drivers work, including Open vSwitch. I
have already tested this, so I speak from experience.

I find the libvirt approach to be much more flexible than the previous
one. Arguments for this are the automated creation of the /dev filesystem
for the container by libvirt, and the modular networking.

Lengthy E-Mail, I know, but hey, if you've read this far it means you're
interested in using LXC together with OpenNebula, and I kindly ask you
for feedback. Maybe you even want to take part in this venture :-).

There are also a lot of security considerations which I have not brought
into the discussion just yet. I have to do some more reading on that
topic.

I will come back with more information and a POC as soon as I can.

In the meantime, you guys can open up a repository for this. We could
call it addon-lxc.

[1]: http://en.wikipedia.org/wiki/LXC
[2]: http://linuxcontainers.org/
[3]: https://help.ubuntu.com/lts/serverguide/lxc.html
[4]: http://www.linux.org/threads/linux-containers-part-3-tools-of-the-trade.4402/
[5]: http://blog.opennebula.org/?p=3850
[6]: https://github.com/cmri/one/commit/892430ed912b4382972d1db9442494f743d8d2c3
[7]: http://opennebula.org/documentation:rel4.2:devel-vmm#deployment_file
[8]: http://hastebin.com/huxupapuya.xml
[9]: http://sourceforge.net/p/lxc/mailman/lxc-users/
[10]: http://opennebula.org/documentation:rel4.2:template#os_and_boot_options_section
[11]: http://opennebula.org/documentation:rel4.2:file_ds
[12]: http://opennebula.org/documentation:rel4.2:sd
[13]: http://www.mail-archive.com/discuss@openvswitch.org/msg05855.html
[14]: http://libvirt.org/drvlxc.html

Good Will,

--
Valentin Bud
CEO
databus.pro - We build Clouds.
http://databus.pro | valentin at databus.pro | @valentinbud

