[one-users] newbie how-to configure network to connect to the vm

Nikolaj Majorov nikolaj at majorov.biz
Thu Nov 20 09:23:11 PST 2014


Hi,
Many thanks, Valentin! Thanks, Thomas!

Valentin, you are right! Changing the prefix allowed me to connect to the guest VM… Cool!

regards,

Nikolaj
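
For anyone who hits the same issue: the addresses in the AR have to fall inside the bridge's own prefix. A quick sanity check with Python's standard ipaddress module (using the addresses from the thread below):

```python
import ipaddress

# virbr0's prefix, as reported by `ip a` on the host
virbr0_net = ipaddress.ip_network("192.168.122.0/24")

# The AR started at 192.168.0.100, which is outside virbr0's prefix,
# so the host had no usable route to the guests through the bridge.
print(ipaddress.ip_address("192.168.0.100") in virbr0_net)    # False
print(ipaddress.ip_address("192.168.122.100") in virbr0_net)  # True
```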

> On 20.11.2014, at 17:56, Valentin Bud <valentin.bud at gmail.com> wrote:
> 
> Hello Nikolaj,
> 
> I think the problem is that your AR uses the wrong network prefix. virbr0 is configured
> with the 192.168.122.0/24 prefix, but your AR definition has IP = 192.168.0.100.
> 
> Best,
> Valentin
> 
> On Thu, Nov 20, 2014 at 6:20 PM, Nikolaj Majorov <nikolaj at majorov.biz> wrote:
> Hi Thomas,
> thanks for the help.
> 
> 
> On the OpenNebula host server:
> 
> ——————————————-
> [root@CentOS-70-64-minimal ~]# ip a
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP qlen 1000
>     link/ether 6c:62:6d:d9:09:05 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::6e62:6dff:fed9:905/64 scope link
>        valid_lft forever preferred_lft forever
> 3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>     link/ether 6c:62:6d:d9:09:05 brd ff:ff:ff:ff:ff:ff
>     inet 46.4.99.35 peer 46.4.99.33/32 brd 46.4.99.35 scope global br0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::6e62:6dff:fed9:905/64 scope link
>        valid_lft forever preferred_lft forever
> 4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>     link/ether fe:00:c0:a8:00:64 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>        valid_lft forever preferred_lft forever
> 9: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 500
>     link/ether fe:00:c0:a8:00:64 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::fc00:c0ff:fea8:64/64 scope link
>        valid_lft forever preferred_lft forever
> 10: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 500
>     link/ether fe:00:c0:a8:00:65 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::fc00:c0ff:fea8:65/64 scope link
>        valid_lft forever preferred_lft forever
> 11: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN qlen 500
>     link/ether fe:00:c0:a8:00:66 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::fc00:c0ff:fea8:66/64 scope link
>        valid_lft forever preferred_lft forever
> 
> 
> [root@CentOS-70-64-minimal ~]# ip r
> default via 46.4.99.33 dev br0
> 46.4.99.33 dev br0  proto kernel  scope link  src 46.4.99.35
> 169.254.0.0/16 dev br0  scope link  metric 1003
> 192.168.0.102 dev virbr0  scope link
> 192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1
> 
> [root@CentOS-70-64-minimal ~]# ip neigh show
> 192.168.0.102 dev virbr0 lladdr 02:00:c0:a8:00:66 STALE
> 192.168.0.101 dev br0 lladdr 02:00:c0:a8:00:65 STALE
> 192.168.122.56 dev virbr0 lladdr 02:00:c0:a8:00:64 STALE
> 192.168.0.100 dev virbr0  FAILED
> 192.168.0.100 dev br0  FAILED
> 46.4.99.33 dev br0 lladdr 00:26:88:75:e6:88 REACHABLE
> 
> [root@CentOS-70-64-minimal ~]# iptables -nvL
> Chain INPUT (policy ACCEPT 767K packets, 2018M bytes)
>  pkts bytes target     prot opt in     out     source               destination
>     0     0 ACCEPT     udp  --  virbr0 *       0.0.0.0/0            0.0.0.0/0            udp dpt:53
>     0     0 ACCEPT     tcp  --  virbr0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:53
>    17  5576 ACCEPT     udp  --  virbr0 *       0.0.0.0/0            0.0.0.0/0            udp dpt:67
>     0     0 ACCEPT     tcp  --  virbr0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:67
> 
> Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>  pkts bytes target     prot opt in     out     source               destination
>     0     0 ACCEPT     all  --  *      virbr0  0.0.0.0/0            192.168.122.0/24     ctstate RELATED,ESTABLISHED
>     0     0 ACCEPT     all  --  virbr0 *       192.168.122.0/24     0.0.0.0/0
>     0     0 ACCEPT     all  --  virbr0 virbr0  0.0.0.0/0            0.0.0.0/0
>     0     0 REJECT     all  --  *      virbr0  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
>     0     0 REJECT     all  --  virbr0 *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
> 
> Chain OUTPUT (policy ACCEPT 685K packets, 270M bytes)
>  pkts bytes target     prot opt in     out     source               destination
> 
> but iptables seems not to be running:
> 
> [root@CentOS-70-64-minimal ~]# service iptables status
> Redirecting to /bin/systemctl status  iptables.service
> iptables.service - IPv4 firewall with iptables
>    Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)
>    Active: inactive (dead)
> 
> Nov 20 16:38:39 CentOS-70-64-minimal systemd[1]: Stopped IPv4 firewall with iptables.
> 
> 
> ————————
> 
> 
> 
> from within the VM (I can connect to only one of them via VNC, so I can’t simply copy and paste):
> 
> +++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> #ifconfig -a
> eth0: Link…
>          inet addr:192.168.0.102 Bcast:192.168.0.255 Mask: 255.255.255.0
> 
> 
> # route -n
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> 192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
> 0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
> 
> iptables is not running…
> 
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> 
> 
> 
>> On 20.11.2014, at 16:50, Thomas Stein <himbeere at meine-oma.de> wrote:
>> 
>> 
>> Please post the output of 
>> 
>> ip a
>> ip r
>> ip neigh show
>> 
>> iptables -nvL
>> 
>> from the hardware node the VM is running on and from within the VM.
>> 
>> cheers
>> t.
>> 
>> 
>> On 20. November 2014 16:44:29 MEZ, Nikolaj Majorov <nikolaj at majorov.biz> wrote:
>>> Hi,
>>> unfortunately I can’t connect… :-(
>>> 
>>> Here is my new configuration:
>>> 
>>> $cat mynetwork_virbr0.one
>>> NAME = "2private"
>>> 
>>> BRIDGE = virbr0
>>> 
>>> AR = [
>>>   TYPE = IP4,
>>>   IP = 192.168.0.100,
>>>   SIZE = 3
>>> ]
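
For comparison, an AR whose addresses sit inside virbr0's 192.168.122.0/24 prefix might look like the sketch below; the exact start address and SIZE are assumptions and should not collide with libvirt's own DHCP range on that bridge.

```
NAME = "2private"

BRIDGE = virbr0

AR = [
  TYPE = IP4,
  IP = 192.168.122.100,
  SIZE = 3
]
```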
>>> 
>>> 
>>> 
>>> $ tracepath -n 192.168.0.101
>>> 1:  46.4.99.35                                            0.052ms pmtu 1500
>>> 1:  46.4.99.33                                            0.901ms
>>> 1:  46.4.99.33                                            0.956ms
>>> 2:  213.239.224.193                                       0.305ms
>>> 3:  213.239.245.125                                       0.453ms
>>> 4:  213.239.245.221                                       3.034ms
>>> 5:  no reply
>>> 6:  no reply
>>> 7:  no reply
>>> 8:  no reply
>>> 
>>> 
>>>> On 20.11.2014, at 15:00, Thomas Stein <himbeere at meine-oma.de> wrote:
>>>> 
>>>> On Thursday 20 November 2014 11:57:17 Nikolaj Majorov wrote:
>>>>> Hello,
>>>>> 
>>>>> 
>>>>> I installed OpenNebula 4.8 on CentOS 7 as described in the quickstart
>>>>> guide and configured my private network as shown in the example:
>>>>> 
>>>>> NAME = "private"
>>>>> 
>>>>> BRIDGE = br0
>>>>> 
>>>>> AR = [
>>>>>   TYPE = IP4,
>>>>>   IP = 192.168.0.100,
>>>>>   SIZE = 3
>>>>> ]
>>>> 
>>>> I suppose you are using virbr0 as the bridge device. Can you try this?
>>>> 
>>>> t.
>>>> 
>>>>> Starting the VM, I got IP 192.168.0.100, but I can’t connect to the VM
>>>>> over ssh or even ping it. What should I do with the network to access
>>>>> the VM (add a route/gateway)? I’m really a newbie in networking, so
>>>>> please give me some hints.
>>>>> 
>>>>> 
>>>>> So my network configuration is :
>>>>> 
>>>>> ifconfig -a
>>>>> 
>>>>> br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>>>>>       inet 46.4.99.35  netmask 255.255.255.255  broadcast 46.4.99.35
>>>>>       inet6 fe80::6e62:6dff:fed9:905  prefixlen 64  scopeid 0x20<link>
>>>>>       ether 6c:62:6d:d9:09:05  txqueuelen 0  (Ethernet)
>>>>>       RX packets 522518  bytes 1880517877 (1.7 GiB)
>>>>>       RX errors 0  dropped 0  overruns 0  frame 0
>>>>>       TX packets 444071  bytes 51737672 (49.3 MiB)
>>>>>       TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>>>>> 
>>>>> eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>>>>>       inet6 fe80::6e62:6dff:fed9:905  prefixlen 64  scopeid 0x20<link>
>>>>>       ether 6c:62:6d:d9:09:05  txqueuelen 1000  (Ethernet)
>>>>>       RX packets 1498963  bytes 1952763267 (1.8 GiB)
>>>>>       RX errors 0  dropped 0  overruns 0  frame 0
>>>>>       TX packets 444309  bytes 51763060 (49.3 MiB)
>>>>>       TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>>>>> 
>>>>> lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
>>>>>       inet 127.0.0.1  netmask 255.0.0.0
>>>>>       inet6 ::1  prefixlen 128  scopeid 0x10<host>
>>>>>       loop  txqueuelen 0  (Local Loopback)
>>>>>       RX packets 149945  bytes 166430929 (158.7 MiB)
>>>>>       RX errors 0  dropped 0  overruns 0  frame 0
>>>>>       TX packets 149945  bytes 166430929 (158.7 MiB)
>>>>>       TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>>>>> 
>>>>> virbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>>>>>       inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
>>>>>       ether fe:00:c0:a8:7a:03  txqueuelen 0  (Ethernet)
>>>>>       RX packets 18  bytes 3711 (3.6 KiB)
>>>>>       RX errors 0  dropped 0  overruns 0  frame 0
>>>>>       TX packets 69  bytes 3818 (3.7 KiB)
>>>>>       TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>>>>> 
>>>>> vnet2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>>>>>       inet6 fe80::fc00:c0ff:fea8:7a03  prefixlen 64  scopeid 0x20<link>
>>>>>       ether fe:00:c0:a8:7a:03  txqueuelen 500  (Ethernet)
>>>>>       RX packets 25  bytes 4805 (4.6 KiB)
>>>>>       RX errors 0  dropped 0  overruns 0  frame 0
>>>>>       TX packets 1303  bytes 68138 (66.5 KiB)
>>>>>       TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>>>>> 
>>>>> 
>>>>> and routing looks like this:
>>>>> 
>>>>> $ route -n
>>>>> Kernel IP routing table
>>>>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>>>>> 0.0.0.0         46.4.99.33      0.0.0.0         UG    0      0        0 br0
>>>>> 46.4.99.33      0.0.0.0         255.255.255.255 UH    0      0        0 br0
>>>>> 169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 br0
>>>>> 192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
>>>>> 
>>>>> Many thanks for any hint !
>>>>> 
>>>>> 
>>>>> Kind regards ,
>>>>> 
>>>>> Nikolaj Majorov
>>>>> nikolaj at majorov.biz
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users at lists.opennebula.org
>>>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>> 
>>> 
>>> 
> 
> 
> 
