[one-users] virtual network cannot get out

Ionut Popovici ionut at hackaserver.com
Thu May 8 14:53:40 PDT 2014


Question: did you activate ip_forward? Even if you have a bridge, you
need ip_forward.
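
A quick way to check and enable it on the host, as a sketch (assuming a
sysctl-based Linux distro):

# check the current value (0 = off, 1 = on)
sysctl net.ipv4.ip_forward
# enable it for the running kernel
sysctl -w net.ipv4.ip_forward=1
# persist it across reboots
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf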
On 5/8/2014 11:07 PM, Neil Schneider wrote:
> If I ping the gateway, I see only an ARP request (who-has) from the
> gateway/firewall for the network, with no answer, and an ICMP echo
> request from the VM with no response from the gateway/firewall.
>
> So the firewall can't see the VM, and the VM can't get responses from
> the firewall.
>
> When I try to ping my workstation from the VM, I don't see packets
> entering or leaving the em1 interface.
>
> I do, however, see "ARP, Request who-has 172.16.168.184 tell
> 172.16.168.154" on the vnet0 interface. .154 is the virtual machine,
> .184 is my workstation.
>
> So it's sending ARP requests out the vnet interface, but other machines
> are ARPing to the em1 interface.
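>
> For reference, the captures were roughly along these lines (run on the
> host, with -e to show which MAC is ARPing on each side):
>
> tcpdump -eni em1 arp or icmp
> tcpdump -eni vnet0 arp or icmp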
>
> Any clues how I fix this?
>
> Is this a problem with Open vSwitch or an error created by OpenNebula?
>
> On Wed, May 7, 2014 2:26 am, Leszek Master wrote:
>> Do you see any traffic from the VM using tcpdump on em1?
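>> Something like this, with your VM's address filled in (placeholder, not
>> a real value):
>>
>> tcpdump -ni em1 host <vm-ip>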
>>
>>
>> 2014-05-07 1:21 GMT+02:00 Neil Schneider <neil at ifxonline.com>:
>>
>>> I've been trying to work through this problem for two days and haven't
>>> found a solution. I'm using OpenNebula to create virtual networks with
>>> Open vSwitch.
>>>
>>> [root at cloud1 ~]# ovs-vsctl show
>>> c6def17d-2cc6-499e-a461-af4fe9aab78a
>>>      Bridge management
>>>          Port management
>>>              Interface management
>>>                  type: internal
>>>          Port "vlan10"
>>>              tag: 10
>>>              Interface "vlan10"
>>>                  type: internal
>>>      Bridge public
>>>          Port "em1"
>>>              Interface "em1"
>>>          Port "vnet0"
>>>              Interface "vnet0"
>>>          Port public
>>>              Interface public
>>>                  type: internal
>>>      Bridge storage
>>>          Port storage
>>>              Interface storage
>>>                  type: internal
>>>          Port "vlan20"
>>>              tag: 20
>>>              Interface "vlan20"
>>>                  type: internal
>>>      ovs_version: "2.1.0"
>>>
>>> From the OpenNebula server I can see this.
>>>
>>> onevnet list
>>>   ID USER      GROUP     NAME        CLUSTER     TYPE BRIDGE    LEASES
>>>    0 oneadmin  oneadmin  management  ifx-produc  R    manageme        0
>>>    1 oneadmin  oneadmin  storage     ifx-produc  R    storage         0
>>>    6 oneadmin  oneadmin  public      ifx-produc  R    public          1
>>>
>>> I've followed the instructions for configuring the hosting server so
>>> that oneadmin has rights to access /var/lib/one on the hosting server,
>>> as well as sudo access to the scripts needed to create networks.
>>>
>>>
>>> I have made all the changes recommended to allow oneadmin to execute
>>> commands through SSH to cloud1, the hosting server.
>>>
>>> oneadmin ALL=(ALL)      NOPASSWD: /usr/sbin/tgtadm, /sbin/lvcreate,
>>> /sbin/lvremove, /bin/dd, /usr/bin/ovs-vsctl, /usr/bin/ovs-ofctl,
>>> /usr/bin/ovs-dpctl, /sbin/iptables, /sbin/ebtables
>>>
>>> I can instantiate hosts from templates and everything works as
>>> expected. When I bring up a virtual host, it gets an IP from the DHCP
>>> server running on the physical network, not from the virtual network.
>>> Sorry, I can't cut and paste that part, since the only way I can access
>>> the virtual machine is through either VNC in Sunstone or virt-manager.
>>>
>>> I have another server running Open vSwitch that works fine. The main
>>> difference is that I used virt-manager to create the hosts instead of
>>> OpenNebula. Those five virtual servers connect fine.
>>>
>>> [root at cloud2 ~]# ovs-vsctl show
>>> aa56747f-d5a2-41b0-a998-48add3c62562
>>>      Bridge public
>>>          Port "vnet4"
>>>              Interface "vnet4"
>>>          Port "vnet0"
>>>              Interface "vnet0"
>>>          Port "vnet3"
>>>              Interface "vnet3"
>>>          Port public
>>>              Interface public
>>>                  type: internal
>>>          Port "em1"
>>>              Interface "em1"
>>>          Port "vnet1"
>>>              Interface "vnet1"
>>>          Port "vnet2"
>>>              Interface "vnet2"
>>>      ovs_version: "2.1.0"
>>>
>>>
>>> On cloud1, after the host gets its IP address from the DHCP server
>>> running on our network, it can no longer connect to anything. I've
>>> checked the iptables rules and flushed them for testing, just to make
>>> sure. Everything seems right, but the network isn't working.
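>>>
>>> For testing, the flush was along these lines, so no filter rule should
>>> be dropping anything:
>>>
>>> iptables -P INPUT ACCEPT
>>> iptables -P FORWARD ACCEPT
>>> iptables -P OUTPUT ACCEPT
>>> iptables -F
>>> iptables -t nat -F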
>>>
>>> Sure would like to buy a clue. I've been searching the web for an
>>> answer, or an idea of how to diagnose this. I suspect what's happening
>>> is that OpenNebula/Sunstone is not creating the interface properly. As
>>> I understand it, the IP should be assigned to the bridge, not the
>>> virtual interface.
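>>>
>>> Roughly what I mean, as a sketch (the address below is only an example
>>> from our /24, not the real host IP):
>>>
>>> ip addr show em1      # once enslaved to the bridge, em1 should hold no IP
>>> ip addr show public   # the host address should sit on the bridge port
>>> # if em1 still owns the address, move it over:
>>> ip addr del 172.16.168.10/24 dev em1
>>> ip addr add 172.16.168.10/24 dev public
>>> ip link set public up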
>>>
>>> Sure could use some help. Even a pointer to a web site with the right
>>> answer would be appreciated. I haven't been able to find it myself.
>>>
>>> Sorry for cross-posting, but I couldn't decide which list to post to,
>>> so I did both.
>>>
>>> --
>>> Neil Schneider                          pacneil_at_linuxgeek_dot_net
>>>
>>> “This is your life. Do what you love, and do it often. If you don’t
>>> like something, change it. If you don’t like your job, quit. If you
>>> don’t have enough time, stop watching TV. If you are looking for the
>>> love of your life, stop; they will be waiting for you when you start
>>> doing things you love.”
>>>
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
