[one-users] Cannot connect to VM

Christophe Hamerling - Petals Link christophe.hamerling at petalslink.com
Wed Dec 15 07:47:50 PST 2010


I am wondering whether the problem also comes from the virtual network
configuration.
Here is my configuration:

I am at home; my LAN is 192.168.2.0/255.255.255.0 and the gateway is
192.168.2.1.
My front-end address is 192.168.2.60 and the node's is 192.168.2.61.
I tried to set up a bridge on the node, and it seems to work (see my
previous mails).
I tried to create a VM and to reach it at 192.168.2.71, but now I am wondering
whether it is possible to access a VM's network from my laptop, for example. How
can I tell OpenNebula that I want to 'publish' VMs on my LAN? Is PUBLIC = YES
enough in the network configuration file? Do I need to set the front-end as a
gateway somewhere, or another host?
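For what it's worth, in OpenNebula 2.x PUBLIC = YES only makes the network usable by other OpenNebula users; it does not affect reachability from the LAN. With a bridged setup, a VM attached to br0 is reachable from any machine on the LAN as long as its lease is inside the LAN subnet and the VM itself uses the LAN router (192.168.2.1) as its default gateway; the front-end does not need to act as a gateway. A minimal fixed-network template for this scenario might look like the following (the name is illustrative):

```
NAME   = "Home LAN"
TYPE   = FIXED
BRIDGE = br0
PUBLIC = YES
LEASES = [ IP = 192.168.2.71 ]
```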

Thanks a lot
Christophe


On Wed, Dec 15, 2010 at 3:59 PM, Christophe Hamerling - Petals Link <
christophe.hamerling at petalslink.com> wrote:

> Is there a link between the MAC address defined in the VM and the ones
> listed by ifconfig on the node? I do not see any.
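There is a relationship, though not to the node's own interfaces: by default OpenNebula derives each VM's MAC address from its lease IP, prepending the MAC_PREFIX from oned.conf (02:00 by default) to the IP's four octets written in hex. A quick sketch of that rule (the function name is illustrative):

```python
def one_mac_from_ip(ip, prefix="02:00"):
    """Mimic OpenNebula's default rule: the VM MAC is MAC_PREFIX
    (02:00 unless changed in oned.conf) followed by the lease
    IP's four octets in hex."""
    octets = [int(part) for part in ip.split(".")]
    return prefix + ":" + ":".join("%02x" % o for o in octets)

print(one_mac_from_ip("192.168.2.71"))  # 02:00:c0:a8:02:47
```

So the MAC you see in the VM template should match the lease IP, not any MAC on the node.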
>
>
> On Wed, Dec 15, 2010 at 3:46 PM, Christophe Hamerling - Petals Link <
> christophe.hamerling at petalslink.com> wrote:
>
>> Is there a web page where the network configuration of the front-end and
>> nodes is described? The only thing I can find is at
>> http://marianmi.comp.nus.edu.sg/2010/08/opennebula-installation-and-configuration-guide.php
>>
>> I really think that it is a network configuration problem. Here is what I
>> did:
>>
>> Front-end: eth0 = 192.168.2.60
>>
>> On the node: br0 = 192.168.2.61
>> The gateway is set to 192.168.2.60; I am not sure about that. Using my
>> Internet box as the gateway does not work either.
>> I defined a LEASES address outside my DHCP range.
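On a bridged node, the gateway should normally be the LAN router (192.168.2.1 here), not the front-end. A typical bridge configuration on a Debian/Ubuntu-style node looks like this sketch (file path and interface names assume that distribution family; adjust as needed):

```
# /etc/network/interfaces on the node
auto br0
iface br0 inet static
    address 192.168.2.61
    netmask 255.255.255.0
    gateway 192.168.2.1      # the LAN router, not the front-end
    bridge_ports eth0
```

With this, eth0 carries no IP of its own and br0 holds the node's address, which matches the ifconfig output below.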
>>
>> When the VM starts, I can see a new network interface on the node
>> named vnet0 with the same MAC address as my bridge. Is that OK?
>> Before the VM starts, the br0 MAC address is the same as eth0's... Is this
>> normal behaviour?
>>
>> Here is my ifconfig output:
>> br0       Link encap:Ethernet  HWaddr 00:ff:da:72:02:21
>>           inet addr:192.168.2.61  Bcast:192.168.2.255  Mask:255.255.255.0
>>           inet6 addr: fe80::a00:27ff:fed7:f0fd/64 Scope:Link
>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>           RX packets:787 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:500 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:0
>>           RX bytes:126978 (124.0 KiB)  TX bytes:81984 (80.0 KiB)
>>
>> eth0      Link encap:Ethernet  HWaddr 08:00:27:d7:f0:fd
>>           inet6 addr: fe80::a00:27ff:fed7:f0fd/64 Scope:Link
>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>           RX packets:1268 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:506 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:1000
>>           RX bytes:188739 (184.3 KiB)  TX bytes:82452 (80.5 KiB)
>>
>> lo        Link encap:Local Loopback
>>           inet addr:127.0.0.1  Mask:255.0.0.0
>>           inet6 addr: ::1/128 Scope:Host
>>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
>>           RX packets:28 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:0
>>           RX bytes:2156 (2.1 KiB)  TX bytes:2156 (2.1 KiB)
>>
>> vnet0     Link encap:Ethernet  HWaddr 00:ff:da:72:02:21
>>           inet6 addr: fe80::2ff:daff:fe72:221/64 Scope:Link
>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:226 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:500
>>           RX bytes:0 (0.0 B)  TX bytes:38791 (37.8 KiB)
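Both observations in that log are normal: by default a Linux bridge adopts the numerically lowest MAC address among its attached ports. With only eth0 attached, br0 shows eth0's MAC; once the VM starts, the vnet0 tap device is attached and its lower address (00:ff:da:72:02:21) wins, so br0 changes. A small sketch of that selection rule (illustrative Python, not kernel code):

```python
def bridge_mac(port_macs):
    """Default Linux bridge behaviour: the bridge takes the
    numerically lowest MAC address among its attached ports."""
    return min(port_macs, key=lambda mac: int(mac.replace(":", ""), 16))

# With only eth0 attached, br0 carries eth0's address.
print(bridge_mac(["08:00:27:d7:f0:fd"]))
# Once the VM's vnet0 tap joins the bridge, its lower address wins.
print(bridge_mac(["08:00:27:d7:f0:fd", "00:ff:da:72:02:21"]))
```

The MAC change is cosmetic as far as connectivity goes, though it can briefly confuse ARP caches on the LAN.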
>>
>>
>>
>>
>> On Wed, Dec 15, 2010 at 11:37 AM, Gian Uberto Lauri <saint at eng.it> wrote:
>>
>>> "CH" == Christophe Hamerling - Petals Link
>>> <christophe.hamerling at petalslink.com> writes:
>>> CH> OK, so I will check that the generated IP is the one expected from
>>> CH> the leases list, based on this script's algorithm. Is the VM image
>>> CH> unpacked somewhere at startup? If yes, where?
>>>
>>> No, it is not.
>>>
>>> A _new_, temporary "cd image" is created on the fly and mounted at the
>>> first boot, AFAIK.
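That on-the-fly ISO is OpenNebula's contextualization CD: it is generated at deployment time from the CONTEXT section of the VM template and attached to the VM as a CD-ROM, from which the guest can mount and read the variables. A minimal example of such a section (the file path is illustrative):

```
CONTEXT = [
  HOSTNAME = "$NAME",
  FILES    = "/srv/one/init.sh"
]
```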
>>>
>>>
>>> --
>>> ing. Gian Uberto Lauri
>>> Ricercatore / Researcher
>>> Divisione Ricerca ed Innovazione / Research & Innovation Division
>>> GianUberto.Lauri at eng.it
>>>
>>> Engineering Ingegneria Informatica spa
>>> Corso Stati Uniti 23/C, 35127 Padova (PD)
>>>
>>> Tel. +39-049.8283.538             | main(){printf(&unix["\021%six\012\0"],
>>> Fax  +39-049.8283.569             |    (unix)["have"]+"fun"-0x60);}
>>> Skype: gian.uberto.lauri          | David Korn, AT&T Bell Labs
>>> http://www.eng.it                 | IOCCC Best One-Liner, 1987
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>
>>
>>
>>
>
>


-- 
Christophe Hamerling
R&D Engineer & Project Leader
Petals Link - SOA open-source company
OW2 PEtALS SOA Suite Committer
Skype : christophe.hamerling
Jabber : chamerling at jabber.org
Blog : http://chamerling.org

