[one-users] Need some explanation on vnets and network setup

Zaina AFOULKI zaina.afoulki at ensi-bourges.fr
Wed Apr 6 11:51:06 PDT 2011


Hi Jaime,

Thank you so much for the explanation; you've cleared up a lot of my
confusion. I think the scenario you described should be added to the
networking guide [1] to better illustrate how network isolation and the
management of MACs and IPs work.

In my setup the hook is adding the ebtables rules as expected, so I'll
be running some more tests and will probably come back with a couple
more questions.

Thanks!

[1] http://opennebula.org/documentation:archives:rel2.0:nm

--
Zaina

On 04/06/2011 01:02 PM, Jaime Melis wrote:
> Hi Zaina,
> 
> Virtual network devices (tun/tap devices) are created by the hypervisor
> (libvirt-KVM, Xen...) before executing a VM. These tun/tap devices are
> virtual interfaces connected both to the VM and to a bridge in the
> host machine. It doesn't really matter what name they have, since the
> hypervisor keeps track of that, and it's perfectly normal for VMs on
> different hosts to end up with tun/tap devices of the same name.
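> 
> A quick way to see this on a KVM host (the bridge name "br0" and the
> libvirt domain name "one-19" below are just examples; substitute your own
> bridge and VM) is:
> 
>    # show which tap devices are currently plugged into the bridge
>    brctl show br0
>    # show the interface section of the VM's definition (tap name, bridge, MAC)
>    virsh dumpxml one-19 | grep -A 3 '<interface'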
> 
> Regarding the network isolation, consider the following scenario:
> 
> * net1: 192.168.1.0/24
> * net2: 192.168.2.0/24
> * Host1 - vm1_1 in net1
> * Host1 - vm1_2 in net2
> * Host2 - vm2_1 in net1
> * Host2 - vm2_2 in net2
> 
> Due to OpenNebula's management of IPs and MACs [1] we know beforehand that
> the MACs of the VMs will be:
> * vm1_1 and vm2_1: network MAC address = 02:00:C0:A8:01:*
> * vm1_2 and vm2_2: network MAC address = 02:00:C0:A8:02:*
> 
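> In other words, the MAC is just the 02:00 prefix followed by the four
> octets of the lease's IP address in hexadecimal. A quick way to compute
> the expected MAC for a given lease by hand (192.168.1.5 is just an example
> address here):
> 
>    printf '02:00:%02X:%02X:%02X:%02X\n' 192 168 1 5
>    # -> 02:00:C0:A8:01:05
> 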
> If we apply the ebtables hook [2], we are actually enforcing these rules for
> *each* tap device on that host:
> 
> in_rule="FORWARD -s ! #{net_mac}/ff:ff:ff:ff:ff:00 -o #{tap} -j DROP"
> out_rule="FORWARD -s ! #{iface_mac} -i #{tap} -j DROP"
> 
> The in_rule drops packets coming from a MAC address that doesn't match
> the network's MAC prefix, whereas the out_rule prevents MAC spoofing.
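> 
> For example (a sketch; the lease 192.168.1.5 and the tap name vnet0 are
> assumptions), for vm1_1 on Host1 the hook would install rules equivalent
> to:
> 
>    # in_rule: only frames carrying net1's MAC prefix may be delivered to vm1_1
>    ebtables -A FORWARD -s ! 02:00:C0:A8:01:00/ff:ff:ff:ff:ff:00 -o vnet0 -j DROP
>    # out_rule: vm1_1 may only send frames with its own MAC
>    ebtables -A FORWARD -s ! 02:00:C0:A8:01:05 -i vnet0 -j DROP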
> 
> To illustrate this, suppose that vm1_1 tries to ping vm1_2 or vm2_2:
> both vm1_2 and vm2_2 will only accept packets coming from a MAC that
> matches 02:00:C0:A8:02:*, but vm1_1 has a MAC of 02:00:C0:A8:01:*, so
> the packets will be dropped. If vm1_1 is aware of this rule and spoofs its
> MAC to become 02:00:C0:A8:02:*, the out_rule will stop all of its packets
> from getting out of the tun/tap interface.
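> 
> If you want to check the anti-spoofing part yourself, a rough test is to
> change the MAC inside vm1_1 and watch its traffic stop leaving the host
> (the interface name eth0 and the addresses below are hypothetical):
> 
>    # inside vm1_1: spoof a net2-style MAC
>    ip link set dev eth0 down
>    ip link set dev eth0 address 02:00:C0:A8:02:05
>    ip link set dev eth0 up
>    # this ping should now get no reply: the out_rule on the host drops every
>    # frame leaving the tap with a source MAC other than vm1_1's original one
>    ping 192.168.2.5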
> 
> You may have noticed that it doesn't really matter what name the tun/tap
> interfaces have.
> 
> If your ebtables hook is not working as expected, try executing it manually
> and see whether it prints anything to stderr. Also, send us your network
> templates and /var/lib/one/config.
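> 
> It also helps to check what is actually installed on each host (as root)
> and compare it against the MAC prefixes of your networks:
> 
>    ebtables -L FORWARD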
> 
> [1] http://opennebula.org/documentation:rel2.2:vgg#fixed_virtual_networks
> [2] http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/share/hooks/ebtables-kvm
> 
> Cheers,
> Jaime
> 
> On Tue, Apr 5, 2011 at 1:16 PM, Zaina AFOULKI <zaina.afoulki at ensi-bourges.fr> wrote:
> 
>> Hi,
>>
>> I ran a couple more tests, desperately trying to understand how VNets
>> (and especially their isolation) work in OpenNebula (much of what I
>> mentioned in my earlier post is not accurate).
>>
>> If I start from an initial state where there are no VNets on any of the
>> nodes and then request two VMs on two different VNets, OpenNebula adds the
>> same "vnet0" interface to each node and starts the VMs on it, even though
>> the VNets are different. I think it would make more sense if the interfaces
>> had different names, because I'm basing the firewall rules on these names.
>>
>> I am using the script provided in [2] that is supposed to isolate the
>> VNets using ebtables. I noticed that 2 VMs on different VNets are still
>> able to ping each other. (This should not be allowed).
>>
>> I would very much appreciate any ideas/thoughts on this.
>>
>> Zaina
>>
>>> Hi,
>>>
>>> I'm having some trouble understanding the networking setup of OpenNebula.
>>>
>>> I have two nodes connected by a bridge interface br0. I enabled
>>> contextualization using the vm-context script as explained in [1]. This is
>>> the output of onevnet list:
>>>   ID USER     NAME        TYPE  BRIDGE P #LEASES
>>>   19 user1    network1    Fixed    br0 N       1
>>>   20 user2    network2    Fixed    br0 Y       1
>>>
>>> I noticed that whenever I launch a VM, OpenNebula adds a virtual network
>>> named vnet0, vnet1, etc. to the list of interfaces on the node. Why are
>>> they named vnet0, vnet1, etc. when they could keep the same name as
>>> already defined by the OpenNebula user?
>>>
>>> Why is there a need to add interfaces anyway? Why not let the VMs connect
>>> to br0 directly?
>>> Is it necessary to create a different bridge for every VNet defined with
>>> the onevnet command?
>>>
>>> The vnet is created only on the node that the VM was launched on, and not
>>> on the other nodes or the frontend. Why is this the case? Why not create it
>>> on all nodes? I'm asking because I am using the script provided in [2] to
>>> isolate the VNets using ebtables: I don't understand why two VMs on different
>>> VNets are unable to ping each other when they are on the same node, whereas
>>> it is possible when they are on different nodes.
>>>
>>> These are the ebtables rules created when a VM is launched on node1:
>>>    -s ! 2:0:ac:1e:8:0/ff:ff:ff:ff:ff:0 -o vnet0 -j DROP
>>>    -s ! 2:0:ac:1e:8:b -i vnet0 -j DROP
>>> Why are they based on MAC addresses and not IP addresses?
>>>
>>> Many thanks.
>>>
>>> Zaina
>>>
>>> [1] http://opennebula.org/documentation:rel2.0:cong
>>> [2] http://opennebula.org/documentation:rel2.0:nm
>>

