[one-users] OpenvSwitch integration

Shek Mohd Fahmi Abdul Latip fahmi.latip at mimos.my
Fri Sep 27 01:59:42 PDT 2013


Hi Bill,

Thanks for your response. I’ve verified the speed as advised, and yes, it’s close to 1GB-FD performance. The 10MB-FD reading must be a cosmetic issue in how OpenvSwitch reports the virtio net driver’s link speed.

Thanks again for sharing your experience.

Best regards,
.fahmie

From: Campbell, Bill [mailto:bcampbell at axcess-financial.com]
Sent: Thursday, September 26, 2013 9:02 PM
To: Shek Mohd Fahmi Abdul Latip
Cc: users at lists.opennebula.org; Hadi Noira Omar; Shahrin Mohamad Fuzi
Subject: Re: [one-users] OpenvSwitch integration

If you are using virtio as the network adapter model for your VMs, then you will see the interface speed reported as 10MB. Virtio has no inherent speed limit; the value is simply reported incorrectly by most tools.
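One quick way to see that this is only a cosmetic value is to query the link settings from inside the guest. This is a sketch; the guest interface name `eth0` is an assumption, and depending on the virtio-net driver version the reported speed may be a fixed placeholder or "Unknown!" rather than anything tied to real throughput:

```shell
# Inside the VM: show the link settings of the virtio NIC.
# The "Speed:" field here is nominal and does not constrain
# actual throughput on a virtio interface.
ethtool eth0 | grep -i speed
```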

You can run a bandwidth test between two VMs (or from a VM to another system in your infrastructure) using a tool like iperf to validate. Here's a good blog post with a brief overview of how to use it:

http://linux.cloudibee.com/2009/02/iperf-network-throughput-measurement-tool/

If using CentOS/RHEL, EPEL has this tool.  I believe it is in the Ubuntu/Debian default repositories.
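The iperf test described above can be sketched as follows. The IP address is a placeholder for the server-side VM's address; the flags shown are standard iperf (version 2) options:

```shell
# On VM 1 (server side): listen for incoming test connections.
iperf -s

# On VM 2 (client side): run a 10-second TCP throughput test
# against VM 1. Replace 10.0.0.1 with VM 1's actual address.
iperf -c 10.0.0.1 -t 10

# Optional: report results in Mbits/sec and use two parallel streams.
iperf -c 10.0.0.1 -f m -P 2
```

If the measured throughput approaches line rate, the 10MB-FD figure shown by OVS for the vnet ports can be ignored.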

________________________________
From: "Shek Mohd Fahmi Abdul Latip" <fahmi.latip at mimos.my>
To: users at lists.opennebula.org
Cc: "Hadi Noira Omar" <hadi.omar at mimos.my>, "Shahrin Mohamad Fuzi" <shahrin.fuzi at mimos.my>
Sent: Thursday, September 26, 2013 6:52:15 AM
Subject: [one-users] OpenvSwitch integration

Hi experts,

I’m integrating OpenvSwitch with OpenNebula. The good news is that they work beautifully together ☺. The bad news is that very little documentation is available for OpenvSwitch, so I have a doubt about my new setup. When I deploy a VM through OpenNebula, it creates a vnet as expected. However, the vnet’s speed is reported as 10MB-FD, compared with the 1GB-FD I get on the physical hardware. Could someone please advise me how to make the vnet speed match the physical network card? Here is my current configuration:

# ovs-vsctl show
8167368c-a8f8-41dd-beab-7984a3e63ddb
    Bridge "ovsbr0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "vnet1"
            tag: 407
            Interface "vnet1"
        Port "ovsbond0"
            Interface "eth1"
            Interface "eth0"
        Port "vnet0"
            tag: 264
            Interface "vnet0"
    ovs_version: "1.10.0"
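For reference, a bridge and bond matching the output above could be created with commands along these lines. This is a sketch using the names from the output; `lacp=active` is an assumption based on the `lacp_status: negotiated` line in the bond/show output further down:

```shell
# Create the bridge, then attach both physical NICs as an
# active-backup bond on it (mirrors the ovs-vsctl show output).
ovs-vsctl add-br ovsbr0
ovs-vsctl add-bond ovsbr0 ovsbond0 eth0 eth1 \
    bond_mode=active-backup lacp=active
```

OpenNebula's Open vSwitch network driver then adds the per-VM vnet ports (with their VLAN tags) to this bridge automatically at deployment time.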

# ovs-appctl bond/show
---- ovsbond0 ----
bond_mode: active-backup
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: negotiated

slave eth0: enabled
        may_enable: true

slave eth1: enabled
        active slave
        may_enable: true

# ovs-ofctl show ovsbr0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000e8393524b2b6
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
1(eth0): addr:e8:39:35:24:b2:b6
     config:     0
     state:      0
     current:    1GB-FD COPPER AUTO_NEG
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD AUTO_NEG
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG
     speed: 1000 Mbps now, 1000 Mbps max
2(eth1): addr:e8:39:xx:xx:xx:xx
     config:     0
     state:      0
     current:    1GB-FD COPPER AUTO_NEG
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD AUTO_NEG
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG
     speed: 1000 Mbps now, 1000 Mbps max
11(vnet0): addr:fe:00:xx:xx:xx:xx
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
12(vnet1): addr:fe:00:xx:xx:xx:xx
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
LOCAL(ovsbr0): addr:e8:39:xx:xx:xx:xx
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0


Best regards,
.fahmie






_______________________________________________
Users mailing list
Users at lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




