[one-users] Strange Networking Problems
Markus Hubig
mhubig at imko.de
Thu Nov 15 05:45:25 PST 2012
Yes, it doesn't matter if I ping the IP or the hostname.
puppet is an alias of cc00, and cc00 is the cloud controller (192.168.1.60).
jenkins is a VM (192.168.1.70).
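
Just to be thorough, this is how I'd double-check the name resolution and the
direct IP path from the jenkins VM (assuming getent and ping are available
there, and .imko is the search domain):

| # confirm what the names resolve to
| getent hosts puppet.imko cc00.imko jenkins.imko
| # ping the cloud controller by IP, bypassing DNS entirely
| ping -c 5 192.168.1.60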
Cheers, Markus
On Thu, Nov 15, 2012 at 2:31 PM, Campbell, Bill
<bcampbell at axcess-financial.com> wrote:
> Does this happen when pinging the IP addresses directly?
>
> To help troubleshoot, what do the puppet, jenkins, and cc00 hostnames
> point to? (cloud controller, hypervisor, etc.)
>
> ----- Original Message -----
> From: "Markus Hubig" <mhubig at imko.de>
> To: "OpenNebula Mailing List" <users at lists.opennebula.org>
> Sent: Thursday, November 15, 2012 7:44:30 AM
> Subject: [one-users] Strange Networking Problems
>
> Hi @all,
>
> I'm experiencing some strange networking behavior here. Every time I log
> in to a VM from my workstation, my SSH session freezes regularly for a
> couple of seconds, then it starts working again. And if I try to ping
> something, I see huge packet loss.
>
> | # ping puppet.imko
> | PING puppet.imko (192.168.1.60) 56(84) bytes of data.
> | From jenkins.imko (192.168.1.70) icmp_seq=1 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=2 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=3 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=4 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=5 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=6 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=7 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=8 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=9 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=10 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=11 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=12 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=13 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=14 Destination Host Unreachable
> | From jenkins.imko (192.168.1.70) icmp_seq=15 Destination Host Unreachable
> | 64 bytes from cc00.imko (192.168.1.60): icmp_req=16 ttl=64 time=1.26 ms
> | 64 bytes from cc00.imko (192.168.1.60): icmp_req=17 ttl=64 time=0.504 ms
> | 64 bytes from cc00.imko (192.168.1.60): icmp_req=18 ttl=64 time=0.487 ms
> | 64 bytes from cc00.imko (192.168.1.60): icmp_req=19 ttl=64 time=0.464 ms
> | 64 bytes from cc00.imko (192.168.1.60): icmp_req=20 ttl=64 time=0.541 ms
> | 64 bytes from cc00.imko (192.168.1.60): icmp_req=21 ttl=64 time=0.498 ms
> | 64 bytes from cc00.imko (192.168.1.60): icmp_req=22 ttl=64 time=0.440 ms
> | 64 bytes from cc00.imko (192.168.1.60): icmp_req=23 ttl=64 time=0.534 ms
>
> I suspect some kind of network misconfiguration, but I can't find the
> problem ... ;-(
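>
> One thing I still want to check on the jenkins VM while the drops happen is
> whether the ARP entry for 192.168.1.60 keeps flapping (just a sketch,
> assuming iproute2 and iputils-arping are installed in the VM):
>
> | # neighbour table entry for the cloud controller
> | ip neigh show | grep 192.168.1.60
> | # probe ARP resolution directly from the VM's interface
> | arping -I eth0 -c 5 192.168.1.60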
>
> I set up a cloud controller running ONE with two network connections,
> eth0 and eth1. eth0 (192.168.1.60) connects to the company LAN, and eth1
> (172.16.1.10) is on the infrastructure network that connects to the
> compute nodes and the storage servers (GlusterFS). The compute nodes and
> the GlusterFS nodes only have IPs in the 172.16.1.0/24 network, to keep
> them separated from the company LAN (192.168.1.0/24).
>
> The compute nodes have 4 network ports, which I bond together in pairs to
> increase the throughput (see the diagram below). bond0 is added to the
> bridge br0 and has no IP address; bond1 has an address from the
> 172.16.1.0/24 network.
>
> The ifconfig output on node01 shows that only bond1 has an IP address,
> and the routing table looks like this:
>
> | Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> | 0.0.0.0         172.16.1.10     0.0.0.0         UG    100    0        0 bond1
> | 172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 bond1
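>
> For reference, this is how I plan to double-check the bonding state on
> node01 (assuming the bonding driver exposes its usual /proc interface):
>
> | # bonding mode, slave status and link state as the kernel sees them
> | cat /proc/net/bonding/bond0
> | cat /proc/net/bonding/bond1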
>
> The /etc/network/interfaces on node01 looks like this:
>
> | # The loopback network interface
> | auto lo
> | iface lo inet loopback
> |
> | # The eth0 interface
> | auto eth0
> | iface eth0 inet manual
> |     bond-master bond0
> |
> | # The eth1 interface
> | auto eth1
> | iface eth1 inet manual
> |     bond-master bond0
> |
> | # The eth2 interface
> | auto eth2
> | iface eth2 inet manual
> |     bond-master bond1
> |
> | # The eth3 interface
> | auto eth3
> | iface eth3 inet manual
> |     bond-master bond1
> |
> | # The public network interface
> | auto bond0
> | iface bond0 inet manual
> |     # Slave interfaces are bound via bond-master above
> |     bond-slaves none
> |     # Balance-RR configuration
> |     bond_mode balance-rr
> |     bond_miimon 100
> |     bond_updelay 200
> |     bond_downdelay 200
> |
> | # The storage network interface
> | auto bond1
> | iface bond1 inet static
> |     # Statically assign the IP, netmask and default gateway
> |     address 172.16.1.21
> |     netmask 255.255.255.0
> |     gateway 172.16.1.10
> |     # Slave interfaces are bound via bond-master above
> |     bond-slaves none
> |     # Balance-RR configuration
> |     bond_mode balance-rr
> |     bond_miimon 100
> |     bond_updelay 200
> |     bond_downdelay 200
> |
> | # The public bridge interface
> | auto br0
> | iface br0 inet static
> |     # The bridge itself carries no IP address
> |     address 0.0.0.0
> |     # Bind one or more interfaces to the bridge
> |     bridge_ports bond0
> |     # Tune the bridge for a single interface
> |     bridge_stp off
> |     bridge_fd 0
> |     bridge_maxwait 0
>
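> To rule out a problem on the bridge itself, I also want to look at br0 on
> node01 with brctl (just a sketch, assuming bridge-utils is installed on
> the node):
>
> | # bridge members and STP state
> | brctl show
> | brctl showstp br0
> | # learned MAC addresses; entries that keep moving would point at the bonding
> | brctl showmacs br0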
>
>
> +------------------------------+
> |       Cloud Controller       |
> |                              |
> |     eth0           eth1      |
> | 192.168.1.60    172.16.1.10  |
> +-------+------------+---------+
>         |            |
>  +------+------------+--------------------+      +---------------+
>  |                 Switch                 +------+  Workstation  |
>  +--+----------+-----------+----------+---+      | 192.168.1.101 |
>     |          |           |          |          +---------------+
>     |          |           |          |
>     |          |           |          |
> +---+----------+-----------+----------+------+
> | eth0       eth1        eth2       eth3     |
> |   |          |           |          |      |
> |   |          |           |          |      |
> | +-+----------+-+       +-+----------+-+    |
> | |    bond0     |       |    bond1     |    |
> | |   (no IP)    |       | (172.16.1.20)|    |
> | +------+-------+       +--------------+    |
> |        |                                   |
> | +------+-------------+                     |
> | |      |             |    +--------------+ |
> | |  bond0      vnet0--+----+  eth0 (vm0)  | |
> | |      |             |    | 192.168.1.20 | |
> | |      |             |    +--------------+ |
> | |      |             |                     |
> | |    br0             |    +--------------+ |
> | |  (no IP)    vnet1--+----+  eth0 (vm1)  | |
> | |      |             |    | 192.168.1.21 | |
> | |      |             |    +--------------+ |
> | +--------------------+                     |
> |                   node01                   |
> +--------------------------------------------+
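>
> If nothing obvious turns up, my next step would be to capture ARP and ICMP
> on both ends while pinging from the VM (a rough sketch, assuming tcpdump is
> available on node01 and on the cloud controller):
>
> | # on node01: do the VM's ARP requests make it onto the bridge and the bond?
> | tcpdump -ni br0 arp or icmp
> | tcpdump -ni bond0 arp or icmp
> | # on the cloud controller: do they arrive on eth0, and do the replies go back?
> | tcpdump -ni eth0 arp or icmp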
>
> As far as I know the nodes don't need a special route for the VM network.
> Maybe the problem comes from the bonding configuration, or maybe I need to
> enable STP on the bridge ... but I don't understand the problem yet.
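>
> In case the balance-rr bonding really is the culprit (with balance-rr the
> VMs' frames leave node01 on alternating slaves, which can make the switch's
> MAC table flap), I could temporarily switch bond0 to active-backup and see
> whether the drops go away. This is only a sketch of the change on node01,
> not something I have applied yet:
>
> | # /etc/network/interfaces on node01 (test configuration for bond0)
> | auto bond0
> | iface bond0 inet manual
> |     bond-slaves none
> |     # active-backup transmits on a single slave at a time, so the
> |     # switch always sees the VM MAC addresses on one port only
> |     bond_mode active-backup
> |     bond_primary eth0
> |     bond_miimon 100
> |     bond_updelay 200
> |     bond_downdelay 200
>
> If that makes the drops disappear, a proper 802.3ad (LACP) trunk on the
> switch would probably be the cleaner long-term way to get the throughput.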
>
> I could really use some help here!
>
> Cheers, Markus
>
--
_______________________________________________________
IMKO Micromodultechnik GmbH
Markus Hubig
Development & Research
Im Stoeck 2
D-76275 Ettlingen / GERMANY
HR: HRB 360936 Amtsgericht Mannheim
President: Dipl.-Ing. (FH) Kurt Koehler
Tel: 0049-(0)7243-5921-26
Fax: 0049-(0)7243-5921-40
e-mail: mhubig at imko.de
internet: www.imko.de
_______________________________________________________