[one-users] virtual network cannot get out

Jaime Melis jmelis at opennebula.org
Wed Jun 4 01:47:44 PDT 2014


It does in fact look like a problem with the DHCP-assigned IP.

Could you try with the standard ttylinux image that we provide with
contextualization? If you get it to work with that, we might be able to
narrow things down to the DHCP problem.
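
If you don't have it registered yet, the image can be imported with
oneimage create; for example (the URL and datastore name here are
placeholders, adjust them to your setup):

  oneimage create --name ttylinux --path <url-to-ttylinux-image> -d default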


On Fri, May 16, 2014 at 8:46 PM, Neil Schneider <neil at ifxonline.com> wrote:

>
>
> On Tue, May 13, 2014 12:01 am, Valentin Bud wrote:
> > Hello Neil,
> >
> > Please see inline for answers and additional questions that will help
> > us nail down the problem.
> >
> > On Mon, May 12, 2014 at 9:42 PM, Neil Schneider <neil at ifxonline.com>
> > wrote:
> >
> >> More data points.
> >>
> >> I ran tcpdump on the host through the entire instantiation and boot-up
> >> of a VM. Although the VM somehow appeared to get an address from the
> >> DHCP server on the network, I saw no DHCP data. I could see ARP
> >> conversations for other hosts on the network, so I would expect to see
> >> everything that passed out of the host interface. The only outside
> >> connection through the bridge should be through em1 on the host.
> >>
> >>   tcpdump -nnvvXXSs 1514 -w cloud1.pcap
> >>
> >
> > I think the above would listen and capture packets on the first
> > Ethernet interface *only*. I have tested it on my PC, a Debian Wheezy
> > machine with two Ethernet cards, and that's what it does. I don't know
> > what OS you are using, so your mileage may vary.
>
> Using CentOS release 6.5. I reran tcpdump and specified the interface as
> the bridge device. Although I gathered 6.5 GB of data, the result was the
> same: I only saw the DHCP client/server exchange, no further networking.
> I did manage to capture the DHCP exchange, and I was able to see the
> leases on the DHCP server; I may have just missed them on the first pass.
> There are only a few relevant packets out of all that data, and I had to
> split the tcpdump file to analyze it with wireshark.
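>
> For reference, a read filter should pull just the DHCP traffic out of a
> big capture without splitting it, something along the lines of:
>
>   tcpdump -r cloud1.pcap -w cloud1-dhcp.pcap udp port 67 or udp port 68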
>
>
>
> >> I used wireshark to look through the file, but did not see any DHCP
> >> conversation; in fact I saw no network traffic whatsoever from the
> >> virtual host. I'm beginning to think either I have the virtual network
> >> misconfigured or OpenNebula is doing something different than I expect.
> >>
> >
> > See my comment above about tcpdump. What bridge do you expect your
> > VM to connect to? Pass -i <bridge-name> to tcpdump and recheck.
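> >
> > For example, assuming the VM is expected on the bridge named public:
> >
> >   tcpdump -i public -nnvvXXSs 1514 -w cloud1-bridge.pcap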
>
> Did.
>
> >>
> >> onevnet list
> >>   ID USER         GROUP        NAME            CLUSTER      TYPE BRIDGE   LEASES
> >>    0 oneadmin     oneadmin     management      ifx-produc      R manageme      0
> >>    1 oneadmin     oneadmin     storage         ifx-produc      R storage       0
> >>    6 oneadmin     oneadmin     public          ifx-produc      R public        0
> >>
> >>  onevnet show 0
> >> VIRTUAL NETWORK 0 INFORMATION
> >> ID             : 0
> >> NAME           : management
> >> USER           : oneadmin
> >> GROUP          : oneadmin
> >> CLUSTER        : ifx-production
> >> TYPE           : RANGED
> >> BRIDGE         : management
> >> VLAN           : Yes
> >> VLAN ID        : 10
> >> USED LEASES    : 0
> >>
> >> PERMISSIONS
> >> OWNER          : uma
> >> GROUP          : uma
> >> OTHER          : u--
> >>
> >> VIRTUAL NETWORK TEMPLATE
> >> NETWORK_ADDRESS="10.1.4.0"
> >> NETWORK_MASK="255.255.255.0"
> >>
> >> RANGE
> >> IP_START       : 10.1.4.30
> >> IP_END         : 10.1.4.40
> >>
> >> VIRTUAL MACHINES
> >>
> >>  onevnet show 1
> >> VIRTUAL NETWORK 1 INFORMATION
> >> ID             : 1
> >> NAME           : storage
> >> USER           : oneadmin
> >> GROUP          : oneadmin
> >> CLUSTER        : ifx-production
> >> TYPE           : RANGED
> >> BRIDGE         : storage
> >> VLAN           : Yes
> >> VLAN ID        : 20
> >> USED LEASES    : 0
> >>
> >> PERMISSIONS
> >> OWNER          : uma
> >> GROUP          : uma
> >> OTHER          : u--
> >>
> >> VIRTUAL NETWORK TEMPLATE
> >> NETWORK_ADDRESS="10.1.2.0"
> >> NETWORK_MASK="255.255.255.0"
> >>
> >> RANGE
> >> IP_START       : 10.1.2.30
> >> IP_END         : 10.1.2.40
> >>
> >> VIRTUAL MACHINES
> >>
> >>  onevnet show 6
> >> VIRTUAL NETWORK 6 INFORMATION
> >> ID             : 6
> >> NAME           : public
> >> USER           : oneadmin
> >> GROUP          : oneadmin
> >> CLUSTER        : ifx-production
> >> TYPE           : RANGED
> >> BRIDGE         : public
> >> VLAN           : No
> >> USED LEASES    : 0
> >>
> >> PERMISSIONS
> >> OWNER          : uma
> >> GROUP          : um-
> >> OTHER          : u--
> >>
> >> VIRTUAL NETWORK TEMPLATE
> >> DNS="172.16.168.1"
> >> GATEWAY="172.16.168.1"
> >> NETWORK_ADDRESS="172.16.168.0"
> >> NETWORK_MASK="255.255.255.0"
> >>
> >> RANGE
> >> IP_START       : 172.16.168.30
> >> IP_END         : 172.16.168.49
> >>
> >> VIRTUAL MACHINES
> >>
> >
> > As I can see from the above onevnet output, none of your virtual
> > networks has a VM connected to it. Connected VMs should appear under
> > the VIRTUAL MACHINES section of the corresponding vnet. Have you
> > skipped that output, or was no VM running at the time you printed it?
>
> No VM was running. When one is running, I see this:
>
>  ovs-vsctl list-ports public
> em1
> vnet0
>  ovs-vsctl iface-to-br vnet0
> public
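>
> ovs-vsctl show would also list any VLAN tags on those ports; worth a
> check since two of the vnets here are tagged:
>
>  ovs-vsctl show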
>
>
>
> > A couple more questions regarding your setup follow.
> >
> > 1. Have you created the host on which you want to deploy VMs using
> > onehost create with the proper network driver, ovswitch or
> > ovswitch_brcompat? The output of onehost list and onehost show <id>
> > would help.
>
> onehost list
>   ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
>    0 cloud1          ifx-produ   1    100 / 1200 (8%)    2G / 31.3G (6%) on
>
> onehost show 0
> HOST 0 INFORMATION
> ID                    : 0
> NAME                  : cloud1
> CLUSTER               : ifx-production
> STATE                 : MONITORED
> IM_MAD                : kvm
> VM_MAD                : kvm
> VN_MAD                : ovswitch
> LAST MONITORING TIME  : 05/16 10:57:34
>
> HOST SHARES
> TOTAL MEM             : 31.3G
> USED MEM (REAL)       : 2.1G
> USED MEM (ALLOCATED)  : 2G
> TOTAL CPU             : 1200
> USED CPU (REAL)       : 9
> USED CPU (ALLOCATED)  : 100
> RUNNING VMS           : 1
>
> LOCAL SYSTEM DATASTORE #102 CAPACITY
> TOTAL:                : 38.4G
> USED:                 : 10G
> FREE:                 : 6.5G
>
> MONITORING INFORMATION
> ARCH="x86_64"
> CPUSPEED="1899"
> HOSTNAME="cloud1"
> HYPERVISOR="kvm"
> MODELNAME="Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz"
> NETRX="0"
> NETTX="0"
> VERSION="4.4.1"
>
> VIRTUAL MACHINES
>
>     ID USER     GROUP    NAME            STAT UCPU    UMEM HOST        TIME
>     59 pacneil  oneadmin ifx3ws1         runn    1      2G cloud1  2d 23h52
>
> > 2. Do you want/need to use DHCP in the VM? How are you going to keep
> > the MAC-IP pair from OpenNebula in sync with the one from your DHCP
> > server?
>
> I'm still setting up and testing, trying to make sure everything is
> working before I put the systems in the data center. When the systems are
> in the rack we will use fixed addresses. For now I'm using DHCP because
> it's there and I know what addresses the VMs will get.
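>
> For the fixed addressing later on, I'm thinking of a FIXED vnet along
> these lines (just a sketch; the name and addresses are placeholders):
>
>   NAME   = "public-fixed"
>   TYPE   = FIXED
>   BRIDGE = public
>   LEASES = [ IP="172.16.168.50" ]
>   LEASES = [ IP="172.16.168.51" ]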
>
> > 3. What OS are you running in the hosts, what about the VMS?
>
> Everything is CentOS 6.5.
>
> > 4. What do the VM logs on the frontend say? You can find them at
> > /var/log/one/<VMID>.log.
>
> This is interesting. I have OpenNebula configured to assign addresses on
> the bridge.
>
> onevnet show 6
> VIRTUAL NETWORK 6 INFORMATION
> ID             : 6
> NAME           : public
> USER           : oneadmin
> GROUP          : oneadmin
> CLUSTER        : ifx-production
> TYPE           : RANGED
> BRIDGE         : public
> VLAN           : No
> USED LEASES    : 1
>
> PERMISSIONS
> OWNER          : uma
> GROUP          : um-
> OTHER          : u--
>
> VIRTUAL NETWORK TEMPLATE
> DNS="172.16.168.1"
> GATEWAY="172.16.168.1"
> NETWORK_ADDRESS="172.16.168.0"
> NETWORK_MASK="255.255.255.0"
>
> RANGE
> IP_START       : 172.16.168.30
> IP_END         : 172.16.168.49
>
> USED LEASES
> LEASE=[ MAC="02:00:ac:10:a8:20", IP="172.16.168.32", IP6_LINK="fe80::400:acff:fe10:a820", USED="1", VID="59" ]
>
> VIRTUAL MACHINES
>
>     ID USER     GROUP    NAME            STAT UCPU    UMEM HOST        TIME
>     59 pacneil  oneadmin ifx3ws1         runn    1      2G cloud1  3d 00h04
>
> And the logs show that it's getting its address from the bridge, as it
> should.
>
> Tue May 13 11:07:35 2014 [VMM][I]: Successfully execute network driver operation: pre.
> Tue May 13 11:07:39 2014 [VMM][I]: ExitCode: 0
> Tue May 13 11:07:39 2014 [VMM][I]: Successfully execute virtualization driver operation: deploy.
> Tue May 13 11:07:40 2014 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow public in_port=14,arp,dl_src=02:00:ac:10:a8:20,priority=45000,actions=drop".
> Tue May 13 11:07:40 2014 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow public in_port=14,arp,dl_src=02:00:ac:10:a8:20,nw_src=172.16.168.32,priority=46000,actions=normal".
> Tue May 13 11:07:40 2014 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow public in_port=14,dl_src=02:00:ac:10:a8:20,priority=40000,actions=normal".
> Tue May 13 11:07:40 2014 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow public in_port=14,priority=39000,actions=drop".
> Tue May 13 11:07:40 2014 [VMM][I]: ExitCode: 0
> Tue May 13 11:07:40 2014 [VMM][I]: Successfully execute network driver operation: post.
> Tue May 13 11:07:40 2014 [LCM][I]: New VM state is RUNNING
>
> However, that's not what ifconfig on the virtual host shows. I can't cut
> and paste, because the only access I have to the VM is through
> virt-manager; even VNC doesn't work in Sunstone for this host.
>
> inet addr:172.16.168.154 Bcast:172.16.168.255 Mask:255.255.255.0
>
> So OpenNebula thinks the VM is getting an address from the public bridge,
> while it's actually getting one from the DHCP server on the network where
> the host lives.
>
> > 5. What do the Open vSwitch logs say on the host your VM gets deployed
> > onto?
>
> 2014-05-13T18:07:37.150Z|00190|bridge|INFO|bridge public: added interface vnet0 on port 14
> 2014-05-13T18:07:49.990Z|00191|ofproto|INFO|public: 4 flow_mods 10 s ago (4 adds)
>
> It appears that ONE thinks the VM is getting its address from ONE, while
> it's actually getting another from the external DHCP server. I'm not sure
> how this affects them internally, or how Open vSwitch will try to route
> the traffic. I checked the routing table in the VM and there was no
> default route; I manually added one, but it made no difference, and the
> VM still can't ping any other hosts on the network.
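>
> Looking back at the flow rules in the log above, that mismatch may be the
> whole story: the priority-46000 rule only passes ARP from the VM when
> nw_src is 172.16.168.32, and the priority-45000 rule drops any other ARP
> from that MAC, so ARP from 172.16.168.154 would never leave the port. The
> flows actually installed can be inspected on the host with:
>
>  sudo ovs-ofctl dump-flows public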
>
> I have instantiated VMs on an identical host, cloud2, also running Open
> vSwitch, but using virsh or virt-manager, and they start and run fine.
> They connect to the network and route traffic without trouble. I have six
> VMs running on cloud2, all tested and working; I can ssh to them. When I
> instantiate a VM using ONE on cloud1, the host I'm working with,
> networking fails.
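>
> For what it's worth, the libvirt interface definitions of a working VM on
> cloud2 and a ONE-deployed VM on cloud1 can be compared with something
> like:
>
>  virsh dumpxml <domain-name> | grep -A 8 '<interface'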
>
> --
> Neil Schneider
> Systems Administrator
>
> IFX
> 12750 High Bluff Drive, Suite 460
> San Diego, CA 92130
>
> Phone 858-724-1024 | Fax 858-724-1043 | http://www.ifxonline.com
>



-- 
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jmelis at opennebula.org