[one-users] Public IPs on an internal private network of an OpenNebula Cluster

Olivier Sallou olivier.sallou at irisa.fr
Wed Jul 4 07:49:32 PDT 2012


On 7/4/12 4:36 PM, Patrizio Dazzi wrote:
> Dear all OpenNebulers,
>
>     As a researcher of the HPCLab at ISTI-CNR, I am working on the
> CONTRAIL EU project (http://www.contrail-project.eu/), whose main aim
> is to conceive and develop a holistic system for building cloud
> federations that can be managed in an integrated and seamless way.
>
> For the reference implementation of the CONTRAIL system, we decided to
> use OpenNebula as the provider-level IaaS. As a consequence, CNR and a
> few other project partners are each setting up an OpenNebula cloud.
>
> Unfortunately, at CNR we are experiencing some issues with our
> OpenNebula configuration. These are mainly due to the limited
> availability of public IPs on our side and our consequent decision to
> reserve them for the VMs, thus avoiding assigning a public IP to each
> physical machine.
>
> Let me describe what's going on on our side.
>
> We have installed the tarball distribution of OpenNebula 3.4.1 to run
> virtual machines on a (KVM-based) cluster made of 5 computers: a
> front-end machine and 4 slave machines. Currently, the front-end has 2
> network interfaces configured, whereas the slaves each have only a
> single network interface configured. All the nodes of the cluster are
> running Ubuntu Server 12.04 64-bit.
>
> The slaves of the cluster are connected to the front-end via a gigabit
> switch. The front-end uses its second network interface to connect to
> the Internet, and it is the only machine with a public IP. The
> internal network uses a private IP range (192.168.100.X). The
> front-end's iptables has already been properly configured to forward
> and masquerade connections from the slaves to the Internet; indeed, we
> are able to reach the Ubuntu update sites directly from the slaves.
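A forward/masquerade setup of the kind described above is typically along
the following lines. This is only an illustrative sketch, not the actual
rules in use; the interface names (eth0 internal, eth1 public) and the
exact rule set are assumptions:

```shell
# Illustrative sketch only -- interface names are assumptions:
# eth0 = internal interface (192.168.100.x), eth1 = public-facing interface.

# Enable IPv4 forwarding on the front-end
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic from the private network leaving via eth1
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth1 -j MASQUERADE

# Allow forwarded traffic in both directions
iptables -A FORWARD -i eth0 -o eth1 -s 192.168.100.0/24 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```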
>
> I also have a few public IPs that I would like to assign to certain
> Virtual Machines that will be run on the cluster.
>
> Unfortunately, the slaves are connected to a private network, so
> their virtual bridges, as far as I know, can only receive packets sent
> to IPs within the same network address/mask. As a consequence,
> assigning a public IP to a VM would be useless, because packets
> addressed to that public IP would never be routed to the physical
> machine hosting it.
>
> Can you help me? Do you have any suggestions?
Can't you connect the slaves to the "public" network with that
interface down by default?
Then, in your boot script, if the template assigns a public IP,
configure the public interface with it and bring the interface up (of
course, the routing tables should also be adjusted to get direct access).
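A minimal sketch of such a boot script, assuming the VM reads its
configuration from an OpenNebula contextualization CD-ROM and that the
template exports hypothetical PUBLIC_IP / PUBLIC_GATEWAY variables (the
variable names, the /24 mask, and the eth1 interface are assumptions):

```shell
#!/bin/sh
# Sketch only -- PUBLIC_IP, PUBLIC_GATEWAY, the /24 mask and eth1 are
# assumed names, provided via the VM template's CONTEXT section.

# Read the context variables from the contextualization CD-ROM
mount -t iso9660 /dev/cdrom /mnt 2>/dev/null
[ -f /mnt/context.sh ] && . /mnt/context.sh

if [ -n "$PUBLIC_IP" ]; then
    # A public IP was assigned: configure and bring up the public interface
    ip addr add "$PUBLIC_IP/24" dev eth1
    ip link set eth1 up

    # Route outgoing traffic directly through the public gateway
    ip route replace default via "$PUBLIC_GATEWAY" dev eth1
fi
```

VMs without a public IP in their template simply leave the interface down
and keep using the masqueraded private network.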

Olivier
>
> Best Regards,
> -- Patrizio
>
>
> Dr Patrizio Dazzi, Ph.D.
> HPC Lab @ ISTI-CNR, Via Moruzzi, 1 - 56126, Pisa, Italy
> Phone: +39 050 315 30 74  --  Fax: +39 050 315 20 40
>
> "Genius is one percent inspiration, ninety-nine percent perspiration"
> - Thomas Alva Edison
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>

-- 
Olivier Sallou
IRISA / University of Rennes 1
Campus de Beaulieu, 35000 RENNES - FRANCE
Tel: 02.99.84.71.95

gpg key id: 4096R/326D8438  (keyring.debian.org)
Key fingerprint = 5FB4 6F83 D3B9 5204 6335  D26D 78DC 68DB 326D 8438





