OVS patch ports vs. Linux veth pairs: a transmission performance comparison

May 13, 2015

Source: http://www.opencloudblog.com/?p=386

The Openvswitch installed on a Linux system supports configuring multiple bridges. Multiple OVS bridges behave like independent local switches; this is how OVS provides switch virtualization.

It is possible to chain multiple OVS bridges on one system. Openstack Neutron chains bridges in its default networking setup.

If Neutron is configured to use VXLAN or GRE tunnels, the integration bridge br-int is connected to the tunnel bridge br-tun using two Openvswitch patch ports. If Neutron is configured to use flat or VLAN networking, br-int is connected with a Linux veth pair to the bridge that has the physical NIC attached (br-eth1 is used here as an example). This setup is shown in the following drawing.

[Figure: Openstack-Node bridge layout]
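
For reference, the wiring that Neutron sets up can be reproduced by hand with commands along the following lines. The port names (patch-tun/patch-int for the tunnel case, int-br-eth1/phy-br-eth1 for the veth case) follow the usual Neutron naming convention and are used here only as an illustration, not as Neutron's literal implementation:

# VXLAN/GRE case: br-int and br-tun connected by OVS patch ports
ovs-vsctl add-port br-int patch-tun -- set Interface patch-tun type=patch options:peer=patch-int
ovs-vsctl add-port br-tun patch-int -- set Interface patch-int type=patch options:peer=patch-tun

# Flat/VLAN case: br-int and br-eth1 connected by a Linux veth pair
ip link add int-br-eth1 type veth peer name phy-br-eth1
ovs-vsctl add-port br-int int-br-eth1
ovs-vsctl add-port br-eth1 phy-br-eth1
ip link set dev int-br-eth1 up
ip link set dev phy-br-eth1 up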

One question remains: what is the performance loss if multiple OVS bridges are chained using OVS patch ports or Linux veth pairs?

Testing method

In a previous article (Switching Performance – Connecting Linux Network Namespaces) I showed the performance of different methods to interconnect Linux namespaces. The same procedure will be used to check the performance of chained Openvswitch bridges.

Two Linux network namespaces with attached IP interfaces will be used as the source and destination of traffic. The test tool is iperf, run with a varying number of threads. TSO and the other offloading features of the virtual NICs will be switched on and off. A setup with three OVS bridges is shown in the drawing below.

[Figure: OVS-Chaining test setup]

The number of chained bridges ranges from 1 to 9.
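
The exact test harness is described in the referenced article; a minimal sketch of how the two namespace endpoints could be attached to the ends of the chain is shown here. The bridge names ovsbr-1/ovsbr-9, the internal port names and the 10.0.0.0/24 addresses are assumptions for illustration (a chain of nine bridges created with the functions below, using the prefix ovsbr):

# attach one namespace endpoint to the first and one to the last bridge of the chain
ip netns add ns1
ip netns add ns2
ovs-vsctl add-port ovsbr-1 ns1p -- set Interface ns1p type=internal
ovs-vsctl add-port ovsbr-9 ns2p -- set Interface ns2p type=internal
ip link set ns1p netns ns1
ip link set ns2p netns ns2
ip netns exec ns1 ip addr add 10.0.0.1/24 dev ns1p
ip netns exec ns2 ip addr add 10.0.0.2/24 dev ns2p
ip netns exec ns1 ip link set dev ns1p up
ip netns exec ns2 ip link set dev ns2p up

# run iperf between the namespaces, e.g. with four parallel threads
ip netns exec ns2 iperf -s &
ip netns exec ns1 iperf -c 10.0.0.2 -P 4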

The OVS patch ports between the bridges are created using the following commands:

OVS bridge connection using OVS patch ports

function f_create_ovs_chain_patch {
    local BRIDGEPREFIX=$1
    local NUMOFOVS=$2
    #
    # create the switches
    for I in $(seq 1 $NUMOFOVS)
    do
      BNAMEI="$BRIDGEPREFIX-$I"
      ovs-vsctl add-br $BNAMEI
      if [ $I -gt 1 ]
      then
        let K=I-1
        BNAMEK="$BRIDGEPREFIX-$K"
        PNAMEI="patch-CONNECT-$I$K"
        PNAMEK="patch-CONNECT-$K$I"
        # need to connect this bridge to the previous bridge
        ovs-vsctl add-port $BNAMEI $PNAMEI -- set Interface $PNAMEI type=patch options:peer=$PNAMEK
        ovs-vsctl add-port $BNAMEK $PNAMEK -- set Interface $PNAMEK type=patch options:peer=$PNAMEI
      fi
    done
}
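
A chain can then be created and inspected like this (the prefix ovsbr and the chain length of three are arbitrary example values):

f_create_ovs_chain_patch ovsbr 3
ovs-vsctl show    # lists ovsbr-1 to ovsbr-3 and the patch ports with their peer options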

 

The Linux veth pairs between the bridges are created using the following commands:

OVS bridge connection using veth pairs

function f_create_ovs_chain_veth {
    local BRIDGEPREFIX=$1
    local NUMOFOVS=$2
    local VETHPREFIX="ovschain"
    echo "*** Creating interfaces"
#
# create the switches
    local I
    for I in $(seq 1 $NUMOFOVS)
    do
      BNAMEI="$BRIDGEPREFIX-$I"
      ovs-vsctl add-br $BNAMEI
      if [ $I -gt 1 ]
      then
        let K=I-1
        BNAMEK="$BRIDGEPREFIX-$K"
        PNAMEI="$VETHPREFIX-$I$K"
        PNAMEK="$VETHPREFIX-$K$I"
        # need to connect this bridge to the previous bridge
        ip link add $PNAMEI type veth peer name $PNAMEK
        ovs-vsctl add-port $BNAMEI $PNAMEI
        ovs-vsctl add-port $BNAMEK $PNAMEK
        ip link set dev $PNAMEI up
        ip link set dev $PNAMEK up
      fi
    done
}

 

It must be ensured that no iptables rules are active when running tests with veth pairs.
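
Before a veth run, the firewall state and the offload settings can be checked along these lines (ns1/ns1p refer to the example endpoint names used above; whether an offload flag can actually be changed depends on the virtual device):

# verify that no filter or NAT rules are active
iptables -L -n
iptables -t nat -L -n

# switch TSO and the other offloading features off (or on) for a test interface
ip netns exec ns1 ethtool -K ns1p tso off gso off gro off
ip netns exec ns1 ethtool -k ns1p | egrep 'segmentation|receive-offload'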

Results

The chart below shows the test results on my Linux desktop (i5-2550 CPU @ 3.30 GHz, 32 GB RAM, kernel 3.13, OVS 2.0.1, Ubuntu 14.04). The values for one OVS bridge are used as the baseline for comparison.

[Figure: OVS-perf-1c-4t test results]

The chart shows that chaining via OVS patch ports behaves well. With one iperf thread running in each namespace, the test consumes 1.8 CPU cores: 1.0 on the sender side and 0.8 on the receiver side. With four or more threads, all four CPU cores run at 100% usage. The throughput is independent (within measurement precision) of the number of chained OVS bridges, at least up to the eight tested.

Connecting OVS bridges with veth pairs behaves quite differently. With two bridges, the numbers show a performance loss of 10% when running one or two iperf threads. With more threads, the veth pairs perform very badly. This might be a result of not having enough CPU cores, but compared to the OVS patch ports these numbers are very poor.

I'm looking for servers with more CPU cores and sockets to run the tests on single- and dual-socket Xeons (16 or 20 cores per socket including hyperthreading), but I have no access to such systems.

Conclusion

My conclusions from the test are:

  • use OVS patch ports (in Openstack Juno the connection between br-int and the bridges for VLANs no longer uses veths, which is very good)
  • do not use Linux veth pairs

Openstack Neutron Juno still uses the veth method for the hybrid firewall driver.
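
For comparison, the hybrid driver places a per-port Linux bridge between the VM tap device and br-int and connects the two with exactly such a veth pair; schematically it looks like this (the qbr/qvb/qvo/tap names follow the usual Neutron convention, XXX stands for the truncated port ID):

# the VM tap device is attached to a per-port Linux bridge
brctl addbr qbrXXX
brctl addif qbrXXX tapXXX
# the Linux bridge is connected to br-int with a veth pair
ip link add qvbXXX type veth peer name qvoXXX
brctl addif qbrXXX qvbXXX
ovs-vsctl add-port br-int qvoXXX
ip link set dev qvbXXX up
ip link set dev qvoXXX up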
