[Repost] Calico, Flannel, Weave and Docker Overlay Network

Published: December 29, 2018

http://chunqi.li/2015/11/15/Battlefield-Calico-Flannel-Weave-and-Docker-Overlay-Network/

In previous posts, I analysed four different Docker multi-host network solutions: Calico, Flannel, Weave and Docker Overlay Network. You can find more details on how to install, configure and tune each of them there:

  • Calico: A Solution of Multi-host Network For Docker
  • Flannel for Docker Overlay Network
  • Weave: Network Management for Docker
  • Docker Multi-host Overlay Networking with Etcd

This post provides a battlefield for these four Docker multi-host network solutions, comparing both features and performance.

If you want to see the results directly, jump straight to the Conclusion chapter.

Docker Multi-host Networking Introduction

Docker kicked off with simple single-host networking from the very beginning. Unfortunately, this prevented Docker clusters from scaling out to multiple hosts. A number of projects focused on this problem, such as Calico, Flannel and Weave, and since Nov. 2015 Docker has also supported multi-host overlay networking itself.

What these projects have in common is that they take control of the container's network configuration in order to capture and inject network packets. Consequently, containers located on different hosts can get IPs in the same subnet and communicate with each other as if they were connected to the same L2 switch. In this way, containers can spread out over multiple hosts, even over multiple data centers.

There are, however, many differences among them in technical model, network topology and features. This post focuses on those differences between Calico, Flannel, Weave and Docker Overlay Network, so that you can choose the solution which best fits your requirements.

Battlefield Overview

According to the features these Big Four support, I will compare them in the following aspects:

  • Network Model - What kind of network model is used to support multi-host networking.
  • Application Isolation - What level and kind of application isolation is supported between containers.
  • Name Service - DNS lookup with a simple hostname or DNS rules.
  • Distributed Storage Requirements - Whether an external distributed storage is required, e.g. etcd or consul.
  • Encryption Channel - Whether data and control traffic can be put in an encrypted channel.
  • Partially Connected Network Support - Whether the system can run on a partially connected host network.
  • Separate vNIC for Container - Whether a separate NIC is created for each container.
  • IP Overlap Support - Whether the same IP can be allocated to different containers.
  • Container Subnet Restriction - Whether the container subnet is restricted from overlapping with the host's.
  • Protocol Support - What kinds of Layer-3 or Layer-4 protocols are supported.

Now let’s see more details of these aspects on Calico, Flannel, Weave and Docker Overlay Network.

Network Model

Multi-host networking means aggregating containers on different hosts into the same virtual network, and the networking providers themselves (Calico, etc.) are also organized as a clustered network. This cluster organization is what I call the network model in this post. Technically, these four solutions use different network models to organize their own network topologies.

Calico implements a pure Layer-3 approach to achieve simpler, more scalable, better-performing and more efficient multi-host networking, so Calico cannot be treated as an overlay network. The pure Layer-3 approach avoids the packet encapsulation associated with Layer-2 solutions, which simplifies diagnostics, reduces transport overhead and improves performance. Calico also implements the BGP protocol for routing, combined with a pure IP network, thus allowing Internet-scale virtual networks.

Flannel has two different network models to choose from. One is called the UDP backend, a simple IP-over-IP solution that uses a TUN device to encapsulate every IP fragment in a UDP packet, thus forming an overlay network; the other is the VxLAN backend, which is the same as Docker Overlay Network. I have run a simple test of these two models: VxLAN is much faster than the UDP backend. The reason, I suggest, is that VxLAN is well supported by the Linux kernel, while the UDP backend implements a pure software-layer encapsulation. Flannel requires an etcd cluster to store the network configuration, allocated subnets and auxiliary data (such as hosts' IPs), and packet routing also requires the cooperation of the etcd cluster. Besides, Flannel runs a separate process, flanneld, in the host environment to support packet switching. Apart from Docker, Flannel can also be used for traditional VMs.
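As a concrete illustration, the network configuration Flannel reads from etcd is a small JSON document. Here is a minimal sketch in Python that builds one (Network, SubnetLen and Backend.Type are Flannel's documented config fields; the subnet values are purely illustrative):

```python
import json

# Minimal Flannel network config, as stored under Flannel's etcd config key.
# The addresses below are illustrative, not recommendations.
config = {
    "Network": "10.1.0.0/16",      # pool Flannel allocates per-host subnets from
    "SubnetLen": 24,               # each host gets a /24 out of that pool
    "Backend": {"Type": "vxlan"},  # the faster, kernel-supported backend
}

print(json.dumps(config, indent=2))
```

Switching `"Type"` to `"udp"` would select the slower userspace backend discussed above.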

Weave also has two different connection modes. One is called sleeve, which implements a UDP channel to transport IP packets from containers. The main difference between Weave's sleeve mode and Flannel's UDP backend is that Weave merges multiple containers' packets into one packet before transferring it via the UDP channel, so technically Weave's sleeve mode will be a bit faster than Flannel's UDP backend in most cases. The other connection mode of Weave is called fastdp, which also implements a VxLAN solution. Though there are no official documents clarifying the VxLAN usage, we can still find it in Weave's code. Weave runs a Docker container performing the same role as flanneld.

Docker Overlay Network implements a VxLAN-based solution with the help of libnetwork and libkv, and, of course, is integrated into Docker itself without any separate process or container.

So a brief conclusion of network model is in the following table:

| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| Network Model | Pure Layer-3 Solution | VxLAN or UDP Channel | VxLAN or UDP Channel | VxLAN |

Application Isolation

Since containers are connected to each other, we need a method to put containers into different groups and isolate containers in different group.

Flannel, Weave and Docker Overlay Network use the same application isolation schema - traditional CIDR isolation. CIDR isolation uses the netmask to identify different subnets, and machines in different subnets cannot talk to each other. For example, w1/w2/w3 have IPs 192.168.0.2/24, 192.168.0.3/24 and 192.168.1.2/24 respectively. w1 and w2 can talk to each other since they are in the same subnet 192.168.0.0/24, but w3 cannot talk to w1 or w2.
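The w1/w2/w3 example can be checked directly with Python's ipaddress module, which implements exactly this CIDR logic:

```python
from ipaddress import ip_interface

# The three workloads from the example above, with their CIDR netmasks.
w1 = ip_interface("192.168.0.2/24")
w2 = ip_interface("192.168.0.3/24")
w3 = ip_interface("192.168.1.2/24")

def same_subnet(a, b):
    """Under CIDR isolation, two endpoints can talk iff they share a network."""
    return a.network == b.network

print(same_subnet(w1, w2))  # True  - both in 192.168.0.0/24
print(same_subnet(w1, w3))  # False - w3 is in 192.168.1.0/24
```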

Calico implements another type of application isolation schema - the profile. You can create a batch of profiles and assign containers on the Calico network to different profiles. Only containers in the same profile can talk to each other; containers in different profiles cannot access each other even if they are in the same CIDR subnet.

Brief conclusion:

| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| Application Isolation | Profile Schema | CIDR Schema | CIDR Schema | CIDR Schema |

Protocol Support

Since Calico is a pure Layer-3 solution, not all Layer-3 or Layer-4 protocols are supported. On the official GitHub issue tracker, Calico's developers state that only TCP, UDP, ICMP and ICMPv6 are supported. It does make sense that supporting other protocols is a bit harder in such a Layer-3 solution.

The other solutions support all protocols. This is easy for them to achieve because both UDP encapsulation and VxLAN carry entire L2 packets over L3, so it doesn't matter what kind of protocol the inner packet holds.
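One practical consequence of carrying L2 frames over L3 is a fixed per-packet overhead. A quick sketch of the arithmetic for VxLAN over IPv4, using the standard header sizes (no VLAN tag on the outer frame):

```python
# Per-packet overhead added by VxLAN encapsulation over IPv4.
OUTER_ETHERNET = 14  # outer MAC header
OUTER_IPV4 = 20      # outer IP header
OUTER_UDP = 8        # outer UDP header (VxLAN uses destination port 4789)
VXLAN_HEADER = 8     # VxLAN header carrying the 24-bit VNI

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
print(overhead)           # → 50
# With a standard 1500-byte MTU on the host NIC, the inner frame must fit in:
print(1500 - overhead)    # → 1450
```

This is why overlay deployments typically lower the container MTU (or raise the host MTU) to avoid fragmentation.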

Brief conclusion:

| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| Protocol Support | TCP, UDP, ICMP & ICMPv6 | ALL | ALL | ALL |

Name Service

Weave supports a name service between containers. When you create a container, Weave puts it into a DNS name service with the format {hostname}.weave.local. Thus you can access any container via {hostname}.weave.local or simply {hostname}. The suffix (weave.local) can be changed to another string, and the DNS lookup service can also be turned off.
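A toy model of how this naming behaves (weave.local is Weave's documented default suffix; the registry, hostnames and IPs here are purely illustrative, not Weave's implementation):

```python
# Toy model of weaveDNS naming: each container registers {hostname}.weave.local.
DOMAIN = "weave.local"  # default suffix; Weave allows changing it

registry = {}

def register(hostname, ip):
    """Record a container under its fully qualified weaveDNS name."""
    registry[f"{hostname}.{DOMAIN}"] = ip

def lookup(name):
    # Bare hostnames resolve by appending the search domain, as a
    # container's resolv.conf search path would.
    if "." not in name:
        name = f"{name}.{DOMAIN}"
    return registry.get(name)

register("web1", "10.32.0.2")
print(lookup("web1.weave.local"))  # → 10.32.0.2
print(lookup("web1"))              # → 10.32.0.2
```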

The others don't have such a feature.

Brief conclusion:

| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| Name Service | No | No | Yes | No |

Distributed Storage Requirements

For Calico, Flannel and Docker Overlay Network, a distributed storage such as etcd or Consul is required to exchange routing and host information. Docker Overlay Network can also cooperate with Docker Swarm's discovery service to build a cluster.

Weave, however, doesn't need a distributed storage, because Weave itself has a node discovery service based on a rumor (gossip) protocol. This design removes the dependency on a separate distributed storage system, but introduces complexity and consistency concerns around IP allocation, as well as IPAM performance concerns as the cluster grows larger.

Brief conclusion:

| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| Distributed Storage Requirements | Yes | Yes | No | Yes |

Encryption Channel

Flannel supports a TLS-encrypted channel between Flannel and etcd, as well as on the data path between Flannel peers. You can see more details via flanneld --help, under the -etcd-certfile and -remote-certfile parameters.

Weave can be configured to encrypt both control data passing over TCP connections and the payloads of UDP packets sent between peers. This is accomplished with the NaCl crypto libraries employing Curve25519, XSalsa20 and Poly1305 to encrypt and authenticate messages. Weave protects against injection and replay attacks for traffic forwarded between peers.

Calico and Docker Overlay Network don't support any kind of encryption, neither on the Calico-etcd channel nor on the data path between Calico peers. But Calico achieves the best performance among these four solutions, so it is a better fit for an internal environment, or for cases where data safety is not a concern.

Brief conclusion:

| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| Encryption Channel | No | TLS | NaCl Library | No |

Partially Connected Network Support

Weave can be deployed in a partially connected network; a brief example is as follows:

There are four peers, with peers 1-3 connected to each other and peer 4 connected only to peer 3. Weave can be deployed on all four peers, and any traffic from containers on peer 1 to containers on peer 4 will be forwarded via peer 3.
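The forwarding behaviour in this example can be sketched as a shortest-path search over the peer topology. This is a simplified model of mesh routing, not Weave's actual implementation:

```python
from collections import deque

# Topology from the example: peers 1-3 are fully meshed; peer 4 only reaches peer 3.
links = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}

def route(src, dst):
    """Return the shortest hop path between two peers via BFS."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # unreachable

print(route(1, 4))  # → [1, 3, 4]: traffic from peer 1 reaches peer 4 via peer 3
```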

This feature allows Weave to connect hosts separated by a firewall, and thus to connect hosts with internal IP addresses across different data centers.

The others don't have such a feature.

Brief conclusion:

| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| Partially Connected Network Support | No | No | Yes | No |

Separate vNIC for Container

Since Weave and Docker Overlay Network create a bridge device and a veth pair inside containers, they create a separate vNIC for each container. The container's routing table is also changed so that all packets for the cluster network go to this newly created NIC, while other connections, such as to google.com, are routed to the original vNIC.
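The routing-table split described here is ordinary longest-prefix matching; a minimal sketch (the subnets and device names are illustrative, not taken from any particular deployment):

```python
from ipaddress import ip_address, ip_network

# Illustrative container routing table: cluster traffic goes out the overlay
# vNIC, everything else takes the default route via the original vNIC.
routes = [
    (ip_network("10.32.0.0/12"), "ethwe"),  # cluster network -> overlay vNIC
    (ip_network("0.0.0.0/0"), "eth0"),      # default route -> original vNIC
]

def pick_nic(dst):
    """Select the outgoing NIC by longest-prefix match over the table."""
    matches = [(net, dev) for net, dev in routes if ip_address(dst) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(pick_nic("10.40.0.7"))    # → ethwe (inside the cluster subnet)
print(pick_nic("142.250.0.1"))  # → eth0  (e.g. traffic to google.com)
```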

Calico can use a unified vNIC for the container because it is a pure Layer-3 solution. Calico can configure NAT for outgoing requests and forward subnet packets to other Calico peers. Calico can also use the Docker bridge NIC for outgoing requests with some manual configuration inside containers; in that case, you need to add the --cap-add=NET_ADMIN parameter when executing docker run.

Flannel directly uses Docker's local bridge docker0 to handle all transport, so containers on a Flannel network will only see one vNIC inside.

Brief conclusion:

| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| Separate vNIC for Container | No | No | Yes | Yes |

IP Overlap Support

Technically, for VxLAN-based solutions, tenant networks can have overlapping internal IP addresses, though the IP addresses assigned to hosts must be unique. According to the VxLAN specification, Weave, Flannel and Docker Overlay Network should be able to support IP overlap for containers. But in my testing environment, I could not configure any of these three to support IP overlap, so I can only say they have the potential to support it.

Calico cannot technically support IP overlap, but Calico's official documents emphasize that overlapping IPv4 containers' packets can be carried over an IPv6 network. Although this is an alternative for IPv4 networks, I prefer to treat Calico as not supporting IP overlap.

Brief conclusion:

| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| IP Overlap Support | No | Maybe | Maybe | Maybe |

Container Subnet Restriction

This section focuses on whether the container subnet can overlap with the host network.

Flannel creates a real bridge device on the host with the subnet address, and uses the host's Linux routing table to forward container packets to this bridge device. So a Flannel container subnet cannot overlap with the host network, or the host's routing table will be confused.

Calico is a pure Layer-3 implementation, and packets from containers to the outside world traverse the NAT table. So Calico has the same restriction: the container subnet cannot overlap with the host network.

Weave doesn't use the host routing table to differentiate packets from containers; instead it uses pcap to deliver packets to the right place. So Weave doesn't need to obey the subnet restriction, and it is free to allocate a container the same IP address as a host. Besides, you can also change the IP configuration inside a container, and the container can then be reached via the new IP.

Docker Overlay Network allows containers and hosts in the same subnet while still achieving isolation between them. But Docker Overlay Network relies on etcd to record routing information, so changing a container's IP address manually will confuse the routing process and can leave the container unreachable.
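For Flannel and Calico, the restriction amounts to an overlap check between the proposed container subnet and the host network, which is easy to express with Python's ipaddress module (the subnets here are illustrative):

```python
from ipaddress import ip_network

host_net = ip_network("192.168.10.0/24")  # illustrative host network

def violates_restriction(container_subnet):
    """Flannel/Calico rule: the container subnet must not overlap the host's."""
    return ip_network(container_subnet).overlaps(host_net)

print(violates_restriction("192.168.10.0/25"))  # → True  (would confuse host routing)
print(violates_restriction("10.244.0.0/16"))    # → False (safe choice)
```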

Brief conclusion:

| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| Host Subnet Overlap Allowed | No | No | Yes, configurable after start | Yes, not configurable after start |

Conclusion

So let's put the final conclusions on all the aspects into one table. This table is one of the best references for choosing the right multi-host networking solution.

| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| Network Model | Pure Layer-3 Solution | VxLAN or UDP Channel | VxLAN or UDP Channel | VxLAN |
| Application Isolation | Profile Schema | CIDR Schema | CIDR Schema | CIDR Schema |
| Protocol Support | TCP, UDP, ICMP & ICMPv6 | ALL | ALL | ALL |
| Name Service | No | No | Yes | No |
| Distributed Storage Requirements | Yes | Yes | No | Yes |
| Encryption Channel | No | TLS | NaCl Library | No |
| Partially Connected Network Support | No | No | Yes | No |
| Separate vNIC for Container | No | No | Yes | Yes |
| IP Overlap Support | No | Maybe | Maybe | Maybe |
| Host Subnet Overlap Allowed | No | No | Yes, configurable after start | Yes, not configurable after start |

My future plan is to test the performance of these four multi-host network solutions. Since this post is already long, I will create a new post showing the details of the performance tests.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Tags: calico flannel k8s weave
Last updated: December 29, 2018

Posted by 纳米 (linjing.io)
