<div dir="ltr">It does in fact look like a problem with the DHCP-assigned IP.<div><br></div><div>Could you try with the standard ttylinux image that we provide with contextualization? If you get it to work with that, we might be able to narrow the issue down to the DHCP problem.</div>
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, May 16, 2014 at 8:46 PM, Neil Schneider <span dir="ltr"><<a href="mailto:neil@ifxonline.com" target="_blank">neil@ifxonline.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class=""><br>
<br>
On Tue, May 13, 2014 12:01 am, Valentin Bud wrote:<br>
> Hello Neil,<br>
><br>
> Please see inline for answers and some additional questions that should<br>
> help us nail down the problem.<br>
><br>
> On Mon, May 12, 2014 at 9:42 PM, Neil Schneider <<a href="mailto:neil@ifxonline.com">neil@ifxonline.com</a>><br>
> wrote:<br>
><br>
>> More data points.<br>
>><br>
>> I ran tcpdump on the host through the entire instantiation and boot-up<br>
>> of a VM. Although it somehow appeared to get an address from the DHCP<br>
>> server on the network, I saw no DHCP data. I could see ARP conversations<br>
>> for other hosts on the network, so I would expect to see everything that<br>
>> passed out of the host interface. The only outside connection through<br>
>> the bridge should be through em1 on the host.<br>
>><br>
>> tcpdump -nnvvXXSs 1514 -w cloud1.pcap<br>
>><br>
><br>
> I think the above would listen and capture packets on the first Ethernet<br>
> interface *only*. I have tested it on my PC, a Debian Wheezy machine with<br>
> 2 Ethernet cards, and that's what it does. I don't know what OS you are<br>
> using, so your mileage may vary.<br>
<br>
</div>Using CentOS release 6.5. I reran tcpdump and specified the interface as<br>
the bridge device. Although I gathered 6.5 GB of data, the result was the<br>
same: I only saw the DHCP client/server exchange, no further networking. I<br>
did manage to capture the DHCP exchange, and I was able to see the leases<br>
in the DHCP server; I may have just missed them on the first pass. There<br>
are only a few packets out of all that data. I had to split the tcpdump<br>
file to analyze it with wireshark.<br>
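Since the thread is narrowing in on DHCP, a couple of tcpdump variants may keep the captures manageable (a sketch only; "public" is assumed to be the bridge in question, per the onevnet output below):

```shell
# Listen on the bridge and keep only DHCP traffic (BOOTP ports 67/68),
# so the capture contains just the exchange under investigation.
tcpdump -i public -nnvv -s 1514 'port 67 or port 68' -w dhcp-only.pcap

# Alternatively, rotate the full capture into ~100 MB files (-C takes
# millions of bytes) so wireshark can open each chunk directly instead
# of splitting the file afterwards.
tcpdump -i public -nn -s 1514 -C 100 -w cloud1.pcap
```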
<div class=""><br>
<br>
<br>
>> I used wireshark to look through the file, but did not see any DHCP<br>
>> conversation; in fact, I saw no network traffic whatsoever from the<br>
>> virtual host. I'm beginning to think either I have the virtual network<br>
>> misconfigured or OpenNebula is doing something different than I expect.<br>
>><br>
><br>
> See my comment above about tcpdump. What bridge do you expect your<br>
> VM to connect to? Pass -i <bridge-name> to tcpdump and recheck.<br>
<br>
</div>Did.<br>
<div><div class="h5"><br>
>><br>
>> onevnet list<br>
>> ID USER GROUP NAME CLUSTER TYPE BRIDGE LEASES<br>
>> 0 oneadmin oneadmin management ifx-produc R manageme 0<br>
>> 1 oneadmin oneadmin storage ifx-produc R storage 0<br>
>> 6 oneadmin oneadmin public ifx-produc R public 0<br>
>><br>
>> onevnet show 0<br>
>> VIRTUAL NETWORK 0 INFORMATION<br>
>> ID : 0<br>
>> NAME : management<br>
>> USER : oneadmin<br>
>> GROUP : oneadmin<br>
>> CLUSTER : ifx-production<br>
>> TYPE : RANGED<br>
>> BRIDGE : management<br>
>> VLAN : Yes<br>
>> VLAN ID : 10<br>
>> USED LEASES : 0<br>
>><br>
>> PERMISSIONS<br>
>> OWNER : uma<br>
>> GROUP : uma<br>
>> OTHER : u--<br>
>><br>
>> VIRTUAL NETWORK TEMPLATE<br>
>> NETWORK_ADDRESS="10.1.4.0"<br>
>> NETWORK_MASK="255.255.255.0"<br>
>><br>
>> RANGE<br>
>> IP_START : 10.1.4.30<br>
>> IP_END : 10.1.4.40<br>
>><br>
>> VIRTUAL MACHINES<br>
>><br>
>> onevnet show 1<br>
>> VIRTUAL NETWORK 1 INFORMATION<br>
>> ID : 1<br>
>> NAME : storage<br>
>> USER : oneadmin<br>
>> GROUP : oneadmin<br>
>> CLUSTER : ifx-production<br>
>> TYPE : RANGED<br>
>> BRIDGE : storage<br>
>> VLAN : Yes<br>
>> VLAN ID : 20<br>
>> USED LEASES : 0<br>
>><br>
>> PERMISSIONS<br>
>> OWNER : uma<br>
>> GROUP : uma<br>
>> OTHER : u--<br>
>><br>
>> VIRTUAL NETWORK TEMPLATE<br>
>> NETWORK_ADDRESS="10.1.2.0"<br>
>> NETWORK_MASK="255.255.255.0"<br>
>><br>
>> RANGE<br>
>> IP_START : 10.1.2.30<br>
>> IP_END : 10.1.2.40<br>
>><br>
>> VIRTUAL MACHINES<br>
>><br>
>> onevnet show 6<br>
>> VIRTUAL NETWORK 6 INFORMATION<br>
>> ID : 6<br>
>> NAME : public<br>
>> USER : oneadmin<br>
>> GROUP : oneadmin<br>
>> CLUSTER : ifx-production<br>
>> TYPE : RANGED<br>
>> BRIDGE : public<br>
>> VLAN : No<br>
>> USED LEASES : 0<br>
>><br>
>> PERMISSIONS<br>
>> OWNER : uma<br>
>> GROUP : um-<br>
>> OTHER : u--<br>
>><br>
>> VIRTUAL NETWORK TEMPLATE<br>
>> DNS="172.16.168.1"<br>
>> GATEWAY="172.16.168.1"<br>
>> NETWORK_ADDRESS="172.16.168.0"<br>
>> NETWORK_MASK="255.255.255.0"<br>
>><br>
>> RANGE<br>
>> IP_START : 172.16.168.30<br>
>> IP_END : 172.16.168.49<br>
>><br>
>> VIRTUAL MACHINES<br>
>><br>
><br>
> As I can see in the above onevnet output, none of your virtual networks<br>
> has a VM connected to it. It should appear under the VIRTUAL MACHINES<br>
> section in one of your vnets. Have you skipped that output, or was no VM<br>
> running at the time you printed it?<br>
<br>
</div></div>No VM was running. When one is running, I see this:<br>
<br>
ovs-vsctl list-ports public<br>
em1<br>
vnet0<br>
ovs-vsctl iface-to-br vnet0<br>
public<br>
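To see exactly which flow rules apply to vnet0, its OpenFlow port number can be cross-checked against the in_port values in the installed rules (a sketch; "public" is the bridge shown above):

```shell
# Show the bridge's ports with their OpenFlow port numbers — this maps
# vnet0 to the in_port value the ovswitch driver's rules match on.
ovs-ofctl show public

# List the flows currently installed on the bridge.
ovs-ofctl dump-flows public
```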
<div class=""><br>
<br>
<br>
> A couple more questions regarding your setup follow.<br>
><br>
> 1. Have you created the host on which you want to deploy VMs using<br>
> onehost create with the proper network driver, ovswitch or<br>
> ovswitch_brcompat? The output of onehost list and onehost show <id><br>
> would help.<br>
<br>
</div>onehost list<br>
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT<br>
0 cloud1 ifx-produ 1 100 / 1200 (8%) 2G / 31.3G (6%) on<br>
<br>
onehost show 0<br>
HOST 0 INFORMATION<br>
ID : 0<br>
NAME : cloud1<br>
CLUSTER : ifx-production<br>
STATE : MONITORED<br>
IM_MAD : kvm<br>
VM_MAD : kvm<br>
VN_MAD : ovswitch<br>
LAST MONITORING TIME : 05/16 10:57:34<br>
<br>
HOST SHARES<br>
TOTAL MEM : 31.3G<br>
USED MEM (REAL) : 2.1G<br>
USED MEM (ALLOCATED) : 2G<br>
TOTAL CPU : 1200<br>
USED CPU (REAL) : 9<br>
USED CPU (ALLOCATED) : 100<br>
RUNNING VMS : 1<br>
<br>
LOCAL SYSTEM DATASTORE #102 CAPACITY<br>
TOTAL: : 38.4G<br>
USED: : 10G<br>
FREE: : 6.5G<br>
<br>
MONITORING INFORMATION<br>
ARCH="x86_64"<br>
CPUSPEED="1899"<br>
HOSTNAME="cloud1"<br>
HYPERVISOR="kvm"<br>
MODELNAME="Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz"<br>
NETRX="0"<br>
NETTX="0"<br>
VERSION="4.4.1"<br>
<br>
VIRTUAL MACHINES<br>
<br>
ID USER GROUP NAME STAT UCPU UMEM HOST TIME<br>
59 pacneil oneadmin ifx3ws1 runn 1 2G cloud1 2d 23h52<br>
<div class=""><br>
> 2. Do you want/need to use DHCP in the VM? How are you going to keep the<br>
> MAC-IP pair from OpenNebula in sync with the one from your DHCP server?<br>
<br>
</div>I'm still setting up and testing, trying to make sure everything is<br>
working before I put the systems in the data center. When the systems are<br>
in the rack we will use fixed addresses. For now I'm using DHCP because<br>
it's there and I know what addresses the VMs will get.<br>
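For the interim DHCP setup, one way to keep the DHCP server and OpenNebula agreeing on addresses is a static reservation keyed to the MAC that ONE generates. A sketch for ISC dhcpd, using the MAC and IP from the lease shown later in this thread (the host name is illustrative):

```
# /etc/dhcp/dhcpd.conf — reserve the lease for the MAC OpenNebula
# assigned, so both sides hand the VM the same address.
host ifx3ws1 {
  hardware ethernet 02:00:ac:10:a8:20;
  fixed-address 172.16.168.32;
}
```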
<br>
> 3. What OS are you running in the hosts, what about the VMS?<br>
<br>
Everything is CentOS 6.5.<br>
<div class=""><br>
> 4. What are the logs of the VM, on the frontend, saying? You can find them<br>
> at /var/log/one/<VMID>.log.<br>
<br>
</div>This is interesting. I have OpenNebula configured to assign addresses on<br>
the bridge.<br>
<div class=""><br>
onevnet show 6<br>
VIRTUAL NETWORK 6 INFORMATION<br>
ID : 6<br>
NAME : public<br>
USER : oneadmin<br>
GROUP : oneadmin<br>
CLUSTER : ifx-production<br>
TYPE : RANGED<br>
BRIDGE : public<br>
VLAN : No<br>
USED LEASES : 1<br>
<br>
PERMISSIONS<br>
OWNER : uma<br>
GROUP : um-<br>
OTHER : u--<br>
<br>
VIRTUAL NETWORK TEMPLATE<br>
DNS="172.16.168.1"<br>
GATEWAY="172.16.168.1"<br>
NETWORK_ADDRESS="172.16.168.0"<br>
NETWORK_MASK="255.255.255.0"<br>
<br>
RANGE<br>
IP_START : 172.16.168.30<br>
IP_END : 172.16.168.49<br>
<br>
</div>USED LEASES<br>
LEASE=[ MAC="02:00:ac:10:a8:20", IP="172.16.168.32", IP6_LINK="fe80::400:acff:fe10:a820", USED="1", VID="59" ]<br>
<br>
VIRTUAL MACHINES<br>
<br>
ID USER GROUP NAME STAT UCPU UMEM HOST TIME<br>
59 pacneil oneadmin ifx3ws1 runn 1 2G cloud1 3d 00h04<br>
<br>
And the logs show that it's getting its address, as it should, from the<br>
bridge:<br>
<br>
Tue May 13 11:07:35 2014 [VMM][I]: Successfully execute network driver<br>
operation: pre.<br>
Tue May 13 11:07:39 2014 [VMM][I]: ExitCode: 0<br>
Tue May 13 11:07:39 2014 [VMM][I]: Successfully execute virtualization<br>
driver operation: deploy.<br>
Tue May 13 11:07:40 2014 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow<br>
public<br>
in_port=14,arp,dl_src=02:00:ac:10:a8:20,priority=45000,actions=drop".<br>
Tue May 13 11:07:40 2014 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow<br>
public<br>
in_port=14,arp,dl_src=02:00:ac:10:a8:20,nw_src=172.16.168.32,priority=46000,actions=normal".<br>
Tue May 13 11:07:40 2014 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow<br>
public in_port=14,dl_src=02:00:ac:10:a8:20,priority=40000,actions=normal".<br>
Tue May 13 11:07:40 2014 [VMM][I]: post: Executed "sudo ovs-ofctl add-flow<br>
public in_port=14,priority=39000,actions=drop".<br>
Tue May 13 11:07:40 2014 [VMM][I]: ExitCode: 0<br>
Tue May 13 11:07:40 2014 [VMM][I]: Successfully execute network driver<br>
operation: post.<br>
Tue May 13 11:07:40 2014 [LCM][I]: New VM state is RUNNING<br>
<br>
However, that's not what ifconfig in the VM shows. I can't cut and paste,<br>
because the only access I have to the VM is through virt-manager; even VNC<br>
doesn't work in Sunstone for this host.<br>
<br>
inet addr:172.16.168.154 Bcast:172.16.168.255 Mask:255.255.255.0<br>
<br>
So ONE thinks the VM is getting an address from the public bridge, while<br>
it's actually getting one from the network DHCP server on the segment<br>
where the host lives.<br>
<div class=""><br>
> 5. What are the Open vSwitch logs saying on the host your VM gets<br>
> deployed onto?<br>
<br>
</div>2014-05-13T18:07:37.150Z|00190|bridge|INFO|bridge public: added interface<br>
vnet0 on port 14<br>
2014-05-13T18:07:49.990Z|00191|ofproto|INFO|public: 4 flow_mods 10 s ago<br>
(4 adds)<br>
<br>
It appears that ONE thinks the VM is getting an address from ONE, while<br>
it's actually getting another from the external DHCP server. I'm not sure<br>
how this affects them internally, or how Open vSwitch will try to route<br>
them. I checked the routing table and there was no default route. I<br>
manually added one, but it made no difference; the VM still can't ping any<br>
other hosts on the network.<br>
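One possible explanation, judging from the flow rules quoted in the VM log above: those rules only pass ARP from this MAC when nw_src is the ONE-assigned 172.16.168.32, so with the externally assigned 172.16.168.154 the VM's ARP requests would fall through to the priority-45000 drop rule, which would explain the failed pings. The flow counters should show whether that's happening:

```shell
# If the address-mismatch theory is right, the ARP drop rule
# (priority=45000) should be accumulating n_packets, while the
# priority=46000 allow rule (nw_src=172.16.168.32) stays at zero.
ovs-ofctl dump-flows public | grep 'in_port=14'
```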
<br>
I have instantiated VMs on an identical host, cloud2, running Open<br>
vSwitch, but using virsh or virt-manager, and they start and run fine.<br>
They connect to the network and route traffic without problems. I have 6<br>
VMs running on cloud2 and have tested them all; I can ssh to them and they<br>
network fine. When I instantiate a VM using ONE on cloud1, the host I'm<br>
working with, networking fails.<br>
<span class="HOEnZb"><font color="#888888"><br>
--<br>
Neil Schneider<br>
Systems Administrator<br>
<br>
IFX<br>
12750 High Bluff Drive, Suite 460<br>
San Diego, CA 92130<br>
<br>
Phone <a href="tel:858-724-1024" value="+18587241024">858-724-1024</a> | Fax <a href="tel:858-724-1043" value="+18587241043">858-724-1043</a> | <a href="http://www.ifxonline.com" target="_blank">http://www.ifxonline.com</a><br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
</div></div></blockquote></div><br></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div>Jaime Melis<br>Project Engineer<br>OpenNebula - Flexible Enterprise Cloud Made Simple<br><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:jmelis@opennebula.org" target="_blank">jmelis@opennebula.org</a></div>
</div>