[one-users] FlatDHCP networking problem

Javier Alvarez javier.alvarez at bsc.es
Thu Apr 18 08:44:40 PDT 2013


Hello Steven,

That's what happens when you work with both cloud solutions and do not
pay enough attention to which list you are posting to...

Sorry for the inconvenience; please just pretend it never happened :)

Javier


On 18/04/13 17:05, Steven Timm wrote:
> You're using the wrong cloud... You should be using OpenNebula
> and not OpenStack, and then you would not be having these OpenStack
> problems. It's probably better to get help with OpenStack on the
> OpenStack list.
>
> Steve Timm
>
>
> On Thu, 18 Apr 2013, Javier Alvarez wrote:
>
>> Hello all,
>>
>> Here is my situation:
>>
>> I am trying to install Essex on a small cluster (3 nodes) running
>> Debian. There is a front-end node with a public IP and two compute
>> nodes on a LAN. I cannot run nova-network on the front-end node
>> because it overwrites the iptables rules there and some other services
>> start to misbehave, so I am trying a multi-host setup with nova-network
>> running on each compute node.
>>
>> The nova.conf I'm using on both compute nodes is the following:
>>
>> [DEFAULT]
>> logdir=/var/log/nova
>> state_path=/var/lib/nova
>> lock_path=/var/lock/nova
>> root_helper=sudo nova-rootwrap
>> auth_strategy=keystone
>> iscsi_helper=tgtadm
>> sql_connection=mysql://nova-common:password@172.16.8.1/nova
>> connection_type=libvirt
>> libvirt_type=kvm
>> my_ip=172.16.8.22
>> rabbit_host=172.16.8.1
>> glance_host=172.16.8.1
>> image_service=nova.image.glance.GlanceImageService
>> network_manager=nova.network.manager.FlatDHCPManager
>> fixed_range=192.168.100.0/24
>> flat_interface=eth1
>> public_interface=eth0
>> flat_network_bridge=br100
>> flat_network_dhcp_start=192.168.100.2
>> network_size=256
>> dhcpbridge_flagfile=/etc/nova/nova.conf
>> dhcpbridge=/usr/bin/nova-dhcpbridge
>> multi_host=True
>> send_arp_for_ha=true
>>
>> I have created a network with:
>>
>> nova-manage network create private --fixed_range_v4=192.168.100.0/24
>> --multi_host=T --bridge_interface=br100
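>>
>> To double-check that the network was created with the bridge and range
>> I expect, I list it afterwards (just a sanity check; the exact columns
>> shown may vary between releases):
>>
>> # show the networks nova knows about, including bridge and fixed range
>> nova-manage network list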
>>
>> And I have set up eth1 with no IP, running in promiscuous mode. When I
>> launch an instance, ifconfig outputs the following:
>>
>>
>> br100     Link encap:Ethernet  HWaddr 68:b5:99:c2:7b:a7
>>           inet addr:192.168.100.3  Bcast:192.168.100.255 
>> Mask:255.255.255.0
>>           inet6 addr: fe80::7033:eeff:fe29:81ae/64 Scope:Link
>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:0
>>           RX bytes:0 (0.0 B)  TX bytes:90 (90.0 B)
>>
>> eth0      Link encap:Ethernet  HWaddr 68:b5:99:c2:7b:a6
>>           inet addr:172.16.8.22  Bcast:172.16.8.255 Mask:255.255.255.0
>>           inet6 addr: fe80::6ab5:99ff:fec2:7ba6/64 Scope:Link
>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>           RX packets:4432580 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:4484811 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:1000
>>           RX bytes:457880509 (436.6 MiB)  TX bytes:398588034 (380.1 MiB)
>>           Memory:fe860000-fe880000
>>
>> eth1      Link encap:Ethernet  HWaddr 68:b5:99:c2:7b:a7
>>           UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1
>>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:1000
>>           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>>           Memory:fe8e0000-fe900000
>>
>> lo        Link encap:Local Loopback
>>           inet addr:127.0.0.1  Mask:255.0.0.0
>>           inet6 addr: ::1/128 Scope:Host
>>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
>>           RX packets:52577 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:52577 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:0
>>           RX bytes:2737820 (2.6 MiB)  TX bytes:2737820 (2.6 MiB)
>>
>> vnet0     Link encap:Ethernet  HWaddr fe:16:3e:2d:40:3b
>>           inet6 addr: fe80::fc16:3eff:fe2d:403b/64 Scope:Link
>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:500
>>           RX bytes:0 (0.0 B)  TX bytes:370 (370.0 B)
>>
>> And brctl show:
>>
>> bridge name     bridge id               STP enabled     interfaces
>> br100           8000.68b599c27ba7       no              eth1
>>                                                         vnet0
>>
>> Which looks fine to me. However, the VM console log shows that it is
>> unable to get an IP through DHCP, even though dnsmasq is running (see
>> the quick checks sketched after the log):
>>
>> Starting network...
>> udhcpc (v1.18.5) started
>> Sending discover...
>> Sending discover...
>> Sending discover...
>> No lease, failing
>> WARN: /etc/rc3.d/S40-network failed
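>>
>> A quick way to narrow down where the DHCP traffic stops (assuming
>> tcpdump is installed on the compute node) is to check how dnsmasq was
>> started and watch the bridge for DHCP packets while the instance boots:
>>
>> # dnsmasq should be listening on the bridge with the fixed-range gateway
>> ps aux | grep dnsmasq
>> # watch for DHCP discover/offer traffic on the bridge during VM boot
>> tcpdump -ni br100 port 67 or port 68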
>>
>> What am I doing wrong? Any help would be very much appreciated.
>>
>> Thanks in advance,
>>
>> Javier
>>
>> -- 
>> Javier Álvarez Cid-Fuentes
>> Grid Computing and Clusters Group
>> Barcelona Supercomputing Center (BSC-CNS)
>> Tel. (+34) 93 413 72 46
>>
>>
>
> ------------------------------------------------------------------
> Steven C. Timm, Ph.D  (630) 840-8525
> timm at fnal.gov  http://home.fnal.gov/~timm/
> Fermilab Computing Division, Scientific Computing Facilities,
> Grid Facilities Department, FermiGrid Services Group, Group Leader.
> Lead of FermiCloud project.


-- 
Javier Álvarez Cid-Fuentes
Grid Computing and Clusters Group
Barcelona Supercomputing Center (BSC-CNS)
Tel. (+34) 93 413 72 46


WARNING / LEGAL TEXT: This message is intended only for the use of the
individual or entity to which it is addressed and may contain
information which is privileged, confidential, proprietary, or exempt
from disclosure under applicable law. If you are not the intended
recipient or the person responsible for delivering the message to the
intended recipient, you are strictly prohibited from disclosing,
distributing, copying, or in any way using this message. If you have
received this communication in error, please notify the sender and
destroy and delete any copies you may have received.

http://www.bsc.es/disclaimer


