[one-dev] OpenNebula LXC Addon
valentin at databus.pro
Sat Oct 26 03:05:52 PDT 2013
I am writing this E-Mail to gather feedback and ideas about an
OpenNebula LXC addon.
I am sure that most of the people reading this list are familiar with, or
have at least heard of, LXC, but a small introduction doesn't hurt anyone I guess.
LXC provides operating-system-level virtualization: rather than a virtual
machine, it provides a contained virtual environment.
LXC is a userspace interface for the Linux kernel's containment
features. A list of the kernel features LXC uses to contain processes can be
found on the Linux Containers website.
One can create a container using either the LXC userspace tools or the
libvirt API. There are a number of lxc-* tools that drive the
creation/start/stop actions for a container; I find two articles quite
informative about what each tool does.
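As a rough sketch (the container name and template here are made up), the lifecycle of a container driven by the userspace tools looks like this:

```shell
# Create a container named "demo" using the debian template script
lxc-create -n demo -t debian

# Start it in the background, then attach to its console
lxc-start -n demo -d
lxc-console -n demo

# Stop and remove it
lxc-stop -n demo
lxc-destroy -n demo
```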
Libvirt has a page dedicated to the LXC container driver. After
reading that page, one has sufficient knowledge to drive containers via
the libvirt API.
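For reference, a minimal libvirt LXC domain definition looks roughly like the following (the container name, rootfs path, and bridge name are made-up values for this sketch):

```xml
<domain type='lxc'>
  <name>one-200</name>
  <!-- memory is given in KiB -->
  <memory>524288</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <devices>
    <filesystem type='mount'>
      <source dir='/var/lib/lxc/one-200/rootfs'/>
      <target dir='/'/>
    </filesystem>
    <interface type='bridge'>
      <source bridge='ovsbr0'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
```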
The question that arises is how LXC fits inside OpenNebula. I have
two ideas that are worth exploring and would like to get feedback and
suggestions for improvement.
Before discussing them, though, I would like to let you know what
work has been done in the past to provide an OpenNebula LXC driver,
namely OneLXC. The work was done by the China Mobile Research Institute
and can be found in cmri's one GitHub repository, on branch one-3.2.
I haven't tested the driver, but it has a major drawback: it 'touches'
the Core, so anyone who wants to keep the OneLXC plugin in sync with the
latest OpenNebula has to either repackage the OpenNebula software
or install from source.
I find it easier to install from packages rather than source. I also
don't want my production machines to have build tools installed.
I have been talking to Javier Fontan in private about an OpenNebula LXC
addon and came to the conclusion that it's better to stay away from the
Core and use the XML representation of the VM. The Virtual Machine
Manager (VMM) drivers generate the deployment file, which in KVM's case
is the libvirt XML representation of the VM, used to deploy the
VM on a given node via virsh(1).
If one sets the VMM driver's type to xml, the output looks like the
example in the hastebin. Using that XML output, I see two possible ways to
actually implement the driver.
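To make the idea concrete, here is a small Python sketch that pulls the values an LXC driver would care about out of the Core's XML output. The element names below mirror OpenNebula's VM template but are trimmed down for this example, so treat them as assumptions:

```python
import xml.etree.ElementTree as ET

# A trimmed stand-in for the XML the Core hands to the VMM driver
# (element names follow OpenNebula's VM template, simplified here).
DEPLOYMENT_XML = """
<VM>
  <ID>200</ID>
  <NAME>lxc-test</NAME>
  <TEMPLATE>
    <MEMORY>512</MEMORY>
    <VCPU>1</VCPU>
    <DISK>
      <SOURCE>/var/lib/one/datastores/0/200/disk.0</SOURCE>
      <TARGET>hda</TARGET>
    </DISK>
    <NIC>
      <BRIDGE>ovsbr0</BRIDGE>
      <MAC>02:00:c0:a8:00:64</MAC>
    </NIC>
  </TEMPLATE>
</VM>
"""

def parse_deployment(xml_text):
    """Pull out the handful of values an LXC driver would need."""
    root = ET.fromstring(xml_text)
    t = root.find("TEMPLATE")
    return {
        # OpenNebula names deployed VMs one-<ID>
        "name": "one-" + root.findtext("ID"),
        "memory_mb": int(t.findtext("MEMORY")),
        "disk": t.find("DISK").findtext("SOURCE"),
        "bridge": t.find("NIC").findtext("BRIDGE"),
        "mac": t.find("NIC").findtext("MAC"),
    }

info = parse_deployment(DEPLOYMENT_XML)
print(info["name"], info["memory_mb"], info["bridge"])
```

From values like these, either an lxc.conf file or a libvirt domain document can be generated.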
One of them is to use lxc-* tools inside the action scripts that get
executed via one_vmm_exec and/or one_vmm_sh.
From the perspective of the LXC userspace tools, a container has a root
file system, which can be either a directory on disk, by default located
at /var/lib/lxc/<container>/rootfs, or some kind of volume. I have
read on the LXC user mailing list about people successfully using LXC
with LVM and ZFS.
Each container has a configuration file, by default located at
/var/lib/lxc/<container>/config. See the lxc.conf(5) man page for details.
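For illustration, such a config file could look like this (keys as per lxc.conf(5); the name, bridge, and MAC are example values):

```
lxc.utsname = one-200
lxc.rootfs = /var/lib/lxc/one-200/rootfs
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.hwaddr = 02:00:c0:a8:00:64
lxc.cgroup.memory.limit_in_bytes = 512M
```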
Another concept in the LXC userspace tools that's worth mentioning is
templates: shell scripts that are called to set up the rootfs of the
container. For example, on Debian the templates use debootstrap to create
the rootfs. The templates kick in when using lxc-create(1).
Most templates also feature a cache mechanism, so the next time
one creates a container the script just copies a rootfs from the cache to
the container's rootfs directory, or volume in the case of LVM.
The idea is really simple. Use the ROOT sub-attribute of OS to define the
rootfs directory for a new container, or define another sub-attribute
called ROOTFS to be consistent with LXC terminology. Either way would work.
Create in the files datastore a so-called LXC profile, which is nothing
more than a preconfigured lxc.conf(5) file that can be transported to
the remote host and placed in the corresponding location, e.g.
/var/lib/lxc/one-200/config, after filling in the values with data gathered
from the XML output.
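A minimal sketch of that fill-in step, using Python's string.Template (the placeholder names and the profile's keys are assumptions for this example):

```python
from string import Template

# A pared-down "LXC profile": an lxc.conf with placeholders, as it might
# be stored in a files datastore (keys follow lxc.conf(5)).
PROFILE = Template("""\
lxc.utsname = $name
lxc.rootfs = /var/lib/lxc/$name/rootfs
lxc.network.type = veth
lxc.network.link = $bridge
lxc.network.flags = up
lxc.network.hwaddr = $mac
""")

def render_profile(values):
    """Fill the profile with values gathered from the Core's XML output."""
    return PROFILE.substitute(values)

config = render_profile({"name": "one-200",
                         "bridge": "br0",
                         "mac": "02:00:c0:a8:00:64"})
print(config)
```

The rendered file would then be copied to /var/lib/lxc/one-200/config on the target host.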
The container base image is stored in a datastore in compressed form. I
think a specialized Transfer Manager driver is also needed to decompress
the image under /var/lib/lxc/<container>/rootfs and link the system
datastore disk.* entries. The context disk would be bind mounted in
the container under /mnt so the contextualization mechanism works.
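A sketch of what such a TM driver might do on the host, assuming a gzipped tarball as the image format (all paths are illustrative, and the datastore image is faked with a tiny tarball so the commands can run anywhere):

```shell
# Fake the compressed base image that would live in the datastore
mkdir -p datastore rootfs-src/etc
echo demo > rootfs-src/etc/hostname
tar -czf datastore/image.tar.gz -C rootfs-src .
rm -rf rootfs-src

# Decompress the base image under the container's rootfs
mkdir -p one-200/rootfs
tar -xzf datastore/image.tar.gz -C one-200/rootfs
cat one-200/rootfs/etc/hostname

# The context disk would then be bind mounted under /mnt (needs root):
#   mount --bind /path/to/context one-200/rootfs/mnt
```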
There is also a drawback to this approach: the LXC userspace tools cannot
add the virtual network interface, of type veth by default, to an Open
vSwitch bridge.
Maybe it could work with brcompat, but that has been removed from the latest
Open vSwitch releases. A discussion about this can be found on the Open
vSwitch mailing list.
The second approach is to use the XML output by the Core to
generate a libvirt XML to be used by virsh -c lxc:/// to deploy the
container. The libvirt XML profile could be stored in a files datastore
just like in the previous approach.
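A sketch of that generation step with Python's ElementTree (the choice of fields and the default init path are assumptions for this example):

```python
import xml.etree.ElementTree as ET

def libvirt_lxc_xml(name, memory_kib, rootfs, bridge):
    """Build a minimal libvirt LXC domain document from values
    gathered from the Core's XML output."""
    dom = ET.Element("domain", type="lxc")
    ET.SubElement(dom, "name").text = name
    # libvirt expects memory in KiB
    ET.SubElement(dom, "memory").text = str(memory_kib)
    os_el = ET.SubElement(dom, "os")
    ET.SubElement(os_el, "type").text = "exe"
    ET.SubElement(os_el, "init").text = "/sbin/init"
    devices = ET.SubElement(dom, "devices")
    fs = ET.SubElement(devices, "filesystem", type="mount")
    ET.SubElement(fs, "source", dir=rootfs)
    ET.SubElement(fs, "target", dir="/")
    iface = ET.SubElement(devices, "interface", type="bridge")
    ET.SubElement(iface, "source", bridge=bridge)
    ET.SubElement(devices, "console", type="pty")
    return ET.tostring(dom, encoding="unicode")

domain_xml = libvirt_lxc_xml("one-200", 524288,
                             "/var/lib/lxc/one-200/rootfs", "ovsbr0")
print(domain_xml)
# The result could then be written to a file and fed to
# virsh -c lxc:/// on the target host.
```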
Using libvirt, all the network drivers work, including Open vSwitch. I
have already tested this, so I speak from experience.
I find the libvirt approach much more flexible than the previous
one. Two arguments for it are libvirt's automated creation of the /dev
filesystem for the container and its modular networking.
Lengthy e-mail, I know, but hey, if you've read this far it means you're
interested in using LXC together with OpenNebula, and I kindly ask you
for feedback. Maybe you even want to take part in this venture :-).
There are also a lot of security considerations which I have not brought
into the discussion just yet. I have to do some more reading on this topic.
I will come back with more information and a POC as soon as I can.
In the mean time you guys can open up a repository for this. We could
call it addon-lxc.
databus.pro - We build Clouds.
http://databus.pro | valentin at databus.pro | @valentinbud