[one-users] Problem with network bridge from VMs to physical network.
Jaime Melis
jmelis at opennebula.org
Wed Aug 13 01:10:44 PDT 2014
Great news!
I was a bit at a loss :)
Looking forward to reading the answer
On Wed, Aug 13, 2014 at 1:29 AM, Diego M. <thedragonsreborn at hotmail.com>
wrote:
> Hi all,
> Thanks for taking the time to read my messages. I found the source of my
> issues and I'm writing a blog post summarizing the problem and its
> resolution so it can easily be found by other people with the same issue.
>
> I will share the link to the post here as soon as I finish with it.
>
> Best regards!
> Diego Marciano
> ------------------------------
> From: thedragonsreborn at hotmail.com
> To: jmelis at opennebula.org
> Date: Fri, 8 Aug 2014 09:11:03 -0300
> CC: users at lists.opennebula.org
>
> Subject: Re: [one-users] Problem with network bridge from VMs to physical
> network.
>
> Hi Jaime,
> Thanks for your reply!
>
> Not yet, I'm still trying to figure it out. The ip route output is
> the following:
root@Host1:~# ip route show
> default via 192.168.7.254 dev eth0
> 192.168.7.0/24 dev eth0 proto kernel scope link src 192.168.7.1
> 192.168.7.0/24 dev Vbr0 proto kernel scope link src 192.168.7.2
> 192.168.254.0/24 dev Vbr0 proto kernel scope link src 192.168.254.254
>
> Term1 routes:
> C:\Users\User1>route PRINT
> IPv4 Route Table
> ===========================================================================
> Active Routes:
> Network Destination Netmask Gateway Interface
> 0.0.0.0 0.0.0.0 192.168.7.254 192.168.7.50
> 10.142.168.0 255.255.255.0 On-link 192.168.7.50
> ===========================================================================
>
> 192.168.7.254 routes:
> admin@Gateway:/tmp/home/root# ip route show
> 130.255.155.1 dev eth0 scope link
> 192.168.7.0/24 dev br0 proto kernel scope link src 192.168.7.254
> 130.255.155.0/24 dev eth0 proto kernel scope link src 130.255.155.33
> 192.168.254.0/24 via 192.168.7.2 dev br0 metric 1
> 127.0.0.0/8 dev lo scope link
> default via 130.255.155.1 dev eth0
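>
> (The 192.168.254.0/24 entry above is the static route that forwards
> traffic for the VM subnet to Host1. For reference, on a Linux-based
> gateway it would typically be added with something like:)
>
> ip route add 192.168.254.0/24 via 192.168.7.2 dev br0 metric 1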
>
> I'm thinking this could be caused by the fact that 192.168.7.254 (the
> gateway of my network) routes traffic for the virtual machines to Host1
> through 192.168.7.2, while traffic coming from Host1 actually leaves
> through 192.168.7.1. Could that asymmetry be causing my problem?
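>
> One way to test that theory (a minimal sketch, assuming the standard
> iproute2 and sysctl tools on Host1) is to ask the kernel which route
> and source address it actually picks towards a client, and whether
> strict reverse-path filtering might be dropping the replies:
>
> # Which route and source IP does Host1 use towards Term1?
> ip route get 192.168.7.50
>
> # A value of 1 means strict reverse-path filtering, which can silently
> # drop packets arriving on an interface the kernel would not reply on:
> sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter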
>
> ------------------------------
> From: jmelis at opennebula.org
> Date: Fri, 8 Aug 2014 11:33:01 +0200
> Subject: Re: [one-users] Problem with network bridge from VMs to physical
> network.
> To: thedragonsreborn at hotmail.com
> CC: users at lists.opennebula.org
>
> Hi,
>
> did you manage to figure this out?
>
> otherwise, can you send us the output of "ip route" in the VM, Host 1 and
> Term 1?
>
> cheers,
> Jaime
>
>
> On Wed, Jul 30, 2014 at 6:30 PM, Diego M. <thedragonsreborn at hotmail.com>
> wrote:
>
> Hi all,
> I'm trying to set up OpenNebula in my personal lab, as I have some
> projects with a colleague and it is nice to have disposable VMs; we are
> also taking the opportunity to learn about OpenNebula and keep up to date :)
>
> I would like to ask a question about networking, because I'm pretty sure
> I'm missing something in the configuration but I cannot work out what.
>
> We have the following infrastructure (see the attached InfraVMs.jpg
> diagram):
>
> The problem is that from the clients on the 192.168.7.0/24 subnet I can
> ping the VMs on 192.168.254.0/24, but from the VMs I can only reach
> 192.168.7.1, 192.168.7.2 and 192.168.7.254 on the 192.168.7.0/24
> subnet; for some reason all the other clients are not reachable.
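>
> A quick way to narrow down where the packets die (a sketch, assuming
> tcpdump is installed on Host1) is to ping a client from a VM and watch
> both sides of the host:
>
> # ICMP as seen on the bridge the VMs are attached to:
> tcpdump -ni Vbr0 icmp
> # ...and on Host1's physical LAN interface, to check whether the echo
> # requests ever make it onto the 192.168.7.0/24 side:
> tcpdump -ni eth0 icmp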
>
>
> I'm surely missing something somewhere, but I cannot figure out what. I
> have already enabled IPv4 forwarding on Host1 for all interfaces, and
> these are the contents of the /etc/network/interfaces file:
>
> # The loopback network interface
> auto lo
> iface lo inet loopback
>
> # The primary network interface
> allow-hotplug eth0
> iface eth0 inet static
> address 192.168.7.1
> netmask 255.255.255.0
> gateway 192.168.7.254
>
> auto Vbr0
> iface Vbr0 inet static
> address 192.168.7.2
> netmask 255.255.255.0
> network 192.168.7.0
> broadcast 192.168.7.255
> gateway 192.168.7.254
> bridge_ports eth1
> bridge_fd 9
> bridge_hello 2
> bridge_maxage 12
> bridge_maxwait 5
> bridge_stp off
>
> auto Vbr0:1
> iface Vbr0:1 inet static
> address 192.168.254.254
> netmask 255.255.255.0
> gateway 192.168.7.2
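>
> For completeness, forwarding can be verified like this (a minimal
> check; the command should print 1):
>
> sysctl net.ipv4.ip_forward
> # persisted across reboots via /etc/sysctl.conf:
> # net.ipv4.ip_forward=1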
>
> If anyone can see what I'm doing wrong, could you give me some advice?
> Also, maybe this is not the best way to bridge the VMs' connection to
> the physical network, but I did not find another way of doing it in the
> documentation, or at least I did not understand it.
>
> Below is more detailed information about the templates I'm using: first
> the "Public" network template (it provides leases from 192.168.254.0/24),
> which is the one I want to bridge to the local network (192.168.7.0/24),
> and after it the VM template, where the NIC using the "Public" network
> is assigned.
> oneadmin@HOMPLMPKRSV0001:/root$ onevnet list
>   ID USER     GROUP  NAME     CLUSTER  TYPE  BRIDGE  LEASES
>   47 oneadmin users  Private  -        R     Vbr0         1
>   48 oneadmin users  Public   -        R     Vbr0         1
> oneadmin@HOMPLMPKRSV0001:/root$ onevnet show 48
> VIRTUAL NETWORK 48 INFORMATION
> ID : 48
> NAME : Public
> USER : oneadmin
> GROUP : users
> CLUSTER : -
> TYPE : RANGED
> BRIDGE : Vbr0
> VLAN : No
> USED LEASES : 1
>
> PERMISSIONS
> OWNER : um-
> GROUP : u--
> OTHER : ---
>
> VIRTUAL NETWORK TEMPLATE
> BRIDGE="Vbr0"
> DESCRIPTION=""
> DNS="192.168.7.254"
> GATEWAY="192.168.254.254"
> NETWORK_ADDRESS="192.168.254.0"
> NETWORK_MASK="255.255.255.0"
> PHYDEV=""
> VLAN="NO"
> VLAN_ID=""
>
> RANGE
> IP_START : 192.168.254.1
> IP_END : 192.168.254.253
>
> USED LEASES
> LEASE=[ MAC="02:00:c0:a8:fe:01", IP="192.168.254.1",
> IP6_LINK="fe80::400:c0ff:fea8:fe01", USED="1", VID="92" ]
>
> VIRTUAL MACHINES
>
>   ID USER  GROUP  NAME             STAT  UCPU  UMEM  HOST        TIME
>   92 admin users  Debian 7.5 Base  runn     0  256M  HOMPLMPKRS  1d 10h57
> oneadmin@HOMPLMPKRSV0001:/root$ onetemplate list
>   ID USER     GROUP  NAME                    REGTIME
>   17 oneadmin users  Test                    07/17 10:57:06
>   18 oneadmin users  Debian 7.5 Base 256 MB  07/18 12:02:33
> oneadmin@HOMPLMPKRSV0001:/root$ onetemplate show 18
> TEMPLATE 18 INFORMATION
> ID : 18
> NAME : Debian 7.5 Base 256 MB
> USER : oneadmin
> GROUP : users
> REGISTER TIME : 07/18 12:02:33
>
> PERMISSIONS
> OWNER : um-
> GROUP : u--
> OTHER : ---
>
> TEMPLATE CONTENTS
> CONTEXT=[
> HOSTNAME="$NAME",
> NETWORK="YES" ]
> CPU="0.5"
> DISK=[
> BUS="ide",
> IMAGE="Debian 7.5",
> IMAGE_UNAME="admin" ]
> MEMORY="256"
> NAME="Debian 7.5 Base 256 MB"
> NIC=[
> NETWORK="Public",
> NETWORK_UNAME="oneadmin" ]
> NIC=[
> NETWORK="Private",
> NETWORK_UNAME="oneadmin" ]
> OS=[
> ARCH="x86_64",
> BOOT="hd" ]
> RAW=[
> TYPE="kvm" ]
> TEMPLATE_ID="18"
> VCPU="1"
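>
> (For reference, VM 92 above would have been launched from this template
> with something along the lines of:)
>
> onetemplate instantiate 18 --name "Debian 7.5 Base"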
>
> Thanks in advance and best regards!!
>
>
>
> --
> Jaime Melis
> Project Engineer
> OpenNebula - Flexible Enterprise Cloud Made Simple
> www.OpenNebula.org | jmelis at opennebula.org
>
>
--
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jmelis at opennebula.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: InfraVMs.jpg
Type: image/jpeg
Size: 77876 bytes
Desc: not available
URL: <http://lists.opennebula.org/pipermail/users-opennebula.org/attachments/20140813/07b6ae03/attachment-0001.jpg>