[one-users] Problem with network bridge from VMs to physical network.

Jaime Melis jmelis at opennebula.org
Fri Aug 8 02:33:01 PDT 2014


Hi,

Did you manage to figure this out?

Otherwise, could you send us the output of "ip route" on the VM, on Host 1
and on Term 1?
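
For reference, something along these lines on each of the three machines
would do (plain iproute2 commands, nothing OpenNebula-specific):

```shell
# Run on the VM, on Host 1 and on Term 1, then compare the three outputs.
ip -4 route show   # IPv4 routing table: look for a 192.168.254.0/24 entry
ip -4 addr show    # addresses per interface, to match routes to interfaces
```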

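One guess while you check that: the three addresses you can reach (.1, .2
and .254) all belong to Host 1 or to the router, so the other clients may
simply have no route back to 192.168.254.0/24 and be sending their replies
to their default gateway instead. A sketch of the kind of return route I
mean (the "via" address is Host 1's bridge IP from your config; adding the
route on the 192.168.7.254 router instead would cover all clients at once):

```shell
# Sketch (needs root): teach a 192.168.7.0/24 client how to reach the VM
# subnet through Host 1, which forwards between the two networks.
ip route add 192.168.254.0/24 via 192.168.7.2
```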
cheers,
Jaime


On Wed, Jul 30, 2014 at 6:30 PM, Diego M. <thedragonsreborn at hotmail.com>
wrote:

> Hi all,
> I'm trying to implement OpenNebula in my personal lab, as we have some
> projects with a colleague and it is nice to have disposable VMs, and we
> are also taking the opportunity to learn about OpenNebula and keep
> up to date :)
>
> I would like to ask a question about networking, because I'm pretty sure
> that I'm missing something in the configuration but I cannot figure out
> what.
>
> We have the following infrastructure:
>
> [diagram attached: InfraVMs.jpg]
>
>   The problem is that from the clients on the 192.168.7.0/24 subnet I can
> ping the VMs on 192.168.254.0/24, but from the VMs I can only ping
> 192.168.7.1, 192.168.7.2 and 192.168.7.254 of the 192.168.7.0/24 subnet;
> all the other clients are for some reason not reachable.
>
>
> I'm surely missing something somewhere, but I cannot figure out what. I
> have already enabled IPv4 forwarding on Host 1 for all interfaces, and the
> following are the contents of the /etc/network/interfaces file:
>
> # The loopback network interface
> auto lo
> iface lo inet loopback
>
> # The primary network interface
> allow-hotplug eth0
> iface eth0 inet static
>         address 192.168.7.1
>         netmask 255.255.255.0
>         gateway 192.168.7.254
>
> auto Vbr0
> iface Vbr0 inet static
>         address 192.168.7.2
>         netmask 255.255.255.0
>         network 192.168.7.0
>         broadcast 192.168.7.255
>         gateway 192.168.7.254
>         bridge_ports eth1
>         bridge_fd 9
>         bridge_hello 2
>         bridge_maxage 12
>         bridge_maxwait 5
>         bridge_stp off
>
> auto Vbr0:1
> iface Vbr0:1 inet static
>         address 192.168.254.254
>         netmask 255.255.255.0
>         gateway 192.168.7.2
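>
> (For reference, by "enabled IPv4 forwarding" I mean roughly the usual
> sysctl setting; the exact commands I ran may have differed slightly:)

```shell
# Typical way to enable IPv4 forwarding on Debian (sketch; run as root).
sysctl -w net.ipv4.ip_forward=1    # enable at runtime
# To persist it across reboots, set in /etc/sysctl.conf:
#   net.ipv4.ip_forward=1
sysctl net.ipv4.ip_forward         # verify: should report the value 1
```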
>
>
>
> Can someone see what I'm doing wrong and give me some advice?
> It may also be that this is not the best way to bridge the VMs to the
> physical network, but I did not find another way of doing it in the
> documentation, or at least I did not understand it.
>
> More detailed information about the templates I'm using: below is the
> "Public" network template (it provides leases from 192.168.254.0/24),
> which is the one I want to bridge to the local network (192.168.7.0/24).
> After the network template comes the VM template, where the NIC using the
> "Public" network template is assigned.
> oneadmin at HOMPLMPKRSV0001:/root$ onevnet list
>   ID USER         GROUP        NAME            CLUSTER      TYPE BRIDGE LEASES
>   47 oneadmin     users        Private         -               R Vbr0        1
>   48 oneadmin     users        Public          -               R Vbr0        1
> oneadmin at HOMPLMPKRSV0001:/root$ onevnet show 48
> VIRTUAL NETWORK 48 INFORMATION
> ID             : 48
> NAME           : Public
> USER           : oneadmin
> GROUP          : users
> CLUSTER        : -
> TYPE           : RANGED
> BRIDGE         : Vbr0
> VLAN           : No
> USED LEASES    : 1
>
> PERMISSIONS
> OWNER          : um-
> GROUP          : u--
> OTHER          : ---
>
> VIRTUAL NETWORK TEMPLATE
> BRIDGE="Vbr0"
> DESCRIPTION=""
> DNS="192.168.7.254"
> GATEWAY="192.168.254.254"
> NETWORK_ADDRESS="192.168.254.0"
> NETWORK_MASK="255.255.255.0"
> PHYDEV=""
> VLAN="NO"
> VLAN_ID=""
>
> RANGE
> IP_START       : 192.168.254.1
> IP_END         : 192.168.254.253
>
> USED LEASES
> LEASE=[ MAC="02:00:c0:a8:fe:01", IP="192.168.254.1",
> IP6_LINK="fe80::400:c0ff:fea8:fe01", USED="1", VID="92" ]
>
> VIRTUAL MACHINES
>
>     ID USER     GROUP    NAME            STAT UCPU    UMEM HOST        TIME
>     92 admin    users    Debian 7.5 Base runn    0    256M HOMPLMPKRS  1d 10h57
> oneadmin at HOMPLMPKRSV0001:/root$ onetemplate list
>   ID USER            GROUP           NAME                         REGTIME
>   17 oneadmin        users           Test                         07/17 10:57:06
>   18 oneadmin        users           Debian 7.5 Base 256 MB       07/18 12:02:33
> oneadmin at HOMPLMPKRSV0001:/root$ onetemplate show 18
> TEMPLATE 18 INFORMATION
> ID             : 18
> NAME           : Debian 7.5 Base 256 MB
> USER           : oneadmin
> GROUP          : users
> REGISTER TIME  : 07/18 12:02:33
>
> PERMISSIONS
> OWNER          : um-
> GROUP          : u--
> OTHER          : ---
>
> TEMPLATE CONTENTS
> CONTEXT=[
>   HOSTNAME="$NAME",
>   NETWORK="YES" ]
> CPU="0.5"
> DISK=[
>   BUS="ide",
>   IMAGE="Debian 7.5",
>   IMAGE_UNAME="admin" ]
> MEMORY="256"
> NAME="Debian 7.5 Base 256 MB"
> NIC=[
>   NETWORK="Public",
>   NETWORK_UNAME="oneadmin" ]
> NIC=[
>   NETWORK="Private",
>   NETWORK_UNAME="oneadmin" ]
> OS=[
>   ARCH="x86_64",
>   BOOT="hd" ]
> RAW=[
>   TYPE="kvm" ]
> TEMPLATE_ID="18"
> VCPU="1"
>
> Thanks in advance and best regards!!
>
>
> _______________________________________________ Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
>


-- 
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jmelis at opennebula.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: InfraVMs.jpg
Type: image/jpeg
Size: 77876 bytes
Desc: not available
URL: <http://lists.opennebula.org/pipermail/users-opennebula.org/attachments/20140808/eb8ff04a/attachment-0001.jpg>

