Solution overview:
The BIG-IP builds a VXLAN network with the Kubernetes nodes, so that the F5 can communicate directly with the pod network.
Prerequisites:
1. The BIG-IP must have an interface in the same layer-2 network as the k8s nodes, and the VXLAN is built on top of that layer-2 segment. This is not a BIG-IP limitation but a design requirement of the flannel VXLAN network model.
2. The BIG-IP must build the VXLAN against the InternalIP of the k8s nodes; that is, flannel's VXLAN VTEP interface must be the interface the k8s cluster itself uses. If it is not the same interface, apply the patch from https://github.com/myf5/k8s-bigip-ctlr; a pre-built Docker image is available as myf5/k8s-bigip-ctlr:1.7.1.
Configuration steps:
- Configure a VXLAN profile on the F5
- Configure the VXLAN tunnel
- Configure a self IP for the VXLAN tunnel (pick a subnet managed by flannel that does not collide with any node's subnet; this tunnel self IP plays the same role as the flannel.1 interface address on a node and generally uses a /16 mask). The BIG-IP does not add a specific route for each node's flannel.1 /24 subnet, so the /16 mask makes the system generate a route entry such as:
```
10.244.0.0/16 dev flannel_vxlan proto kernel scope link src 10.244.10.10
```
  Using a /24 mask instead would leave the BIG-IP with no route toward each node's flannel.1 interface.
- Use kubectl to add the BIG-IP to the k8s cluster as a dummy Node, using annotations on the Node to supply the configuration flannel needs
- Deploy the bigip-ctlr (CC)
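Why the /16 mask matters can be sanity-checked with a small shell sketch (the helper names `ip2int` and `in_net` are made up for illustration): a pod IP on another node falls inside the 10.244.0.0/16 on-link route, but outside the BIG-IP's own 10.244.244.0/24 subnet, so a /24 self IP would leave it unreachable.

```shell
#!/usr/bin/env bash
# Illustrative helpers: convert dotted-quad to integer, test subnet membership.
ip2int() { IFS=. read -r a b c d <<<"$1"; echo $(( (a<<24)|(b<<16)|(c<<8)|d )); }
in_net() { local ip=$(ip2int "$1") net=$(ip2int "$2") bits=$3
           local mask=$(( (0xffffffff << (32-bits)) & 0xffffffff ))
           [ $(( ip & mask )) -eq $(( net & mask )) ]; }

# A pod IP on another node (10.244.0.63, from the ARP table later in this post)
in_net 10.244.0.63 10.244.0.0 16   && echo "covered by the /16 on-link route"
in_net 10.244.0.63 10.244.244.0 24 || echo "not covered by a /24 self IP"
```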
Other notes:
- When adding the BIG-IP as a k8s Node, you must specify the tunnel's MAC address and the tunnel's VTEP address
- The CC configuration must specify the VXLAN tunnel name
- The CC automatically adds static FDB entries for each k8s node's VTEP, and static ARP entries for each pod (when publishing in cluster-IP mode)
- In HA (N+M) scenarios, each BIG-IP needs its own dedicated CC: each BIG-IP is updated independently, without relying on BIG-IP's own config sync. An HA example will be covered in a separate post; this article only considers the standalone case
Detailed configuration:
```
myf5@(v13-common)(cfg-sync Not All Devices Synced)(Active)(/Common)(tmos)# list net self
net self float_self_flannel_vxlan {
    address 10.244.244.3/16
    allow-service {
        default
    }
    floating enabled
    traffic-group traffic-group-1
    unit 1
    vlan flannel_vxlan
}
net self self_flannel_vxlan {
    address 10.244.244.2/16
    allow-service {
        default
    }
    traffic-group traffic-group-local-only
    vlan flannel_vxlan
}
net self cc_vtep {
    address 172.16.10.203/24
    allow-service {
        default
    }
    traffic-group traffic-group-local-only
    vlan bigip_cc_vlan
}
myf5@(v13-common)(cfg-sync Not All Devices Synced)(Active)(/Common)(tmos)# list net tunnels vxlan fl-vxlan
net tunnels vxlan fl-vxlan {
    app-service none
    flooding-type none
    port otv
}
myf5@(v13-common)(cfg-sync Not All Devices Synced)(Active)(/Common)(tmos)# list net tunnels tunnel flannel_vxlan
net tunnels tunnel flannel_vxlan {
    if-index 240
    key 1
    local-address 172.16.10.203
    profile fl-vxlan
}
```
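The objects listed above can be created with tmsh commands along these lines (a sketch, not taken from the original post; exact syntax can vary between TMOS versions — note that tmsh displays UDP port 8472 by its service name `otv`):

```shell
# 1. VXLAN profile; flannel's kernel VXLAN uses UDP 8472 (displayed by tmsh as "otv")
tmsh create net tunnels vxlan fl-vxlan flooding-type none port 8472

# 2. Tunnel bound to the profile; local-address is the BIG-IP's VTEP self IP,
#    key 1 matches flannel's default VNI
tmsh create net tunnels tunnel flannel_vxlan profile fl-vxlan key 1 local-address 172.16.10.203

# 3. Non-floating self IP on the tunnel, /16 mask covering all node subnets
tmsh create net self self_flannel_vxlan address 10.244.244.2/16 vlan flannel_vxlan allow-service default
```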
```
[root@k8s-master f5-k8s]# cat dummy-flannel-node-bigip.yaml
apiVersion: v1
kind: Node
metadata:
  name: bigip
  annotations:
    # Provide the MAC address of the BIG-IP VXLAN tunnel
    flannel.alpha.coreos.com/backend-data: '{"VtepMAC":"00:50:56:b3:2e:29"}'
    flannel.alpha.coreos.com/backend-type: "vxlan"
    flannel.alpha.coreos.com/kube-subnet-manager: "true"
    # Provide the IP address you assigned as the BIG-IP VTEP
    flannel.alpha.coreos.com/public-ip: 172.16.10.203
spec:
  # Define the flannel subnet you want to assign to the BIG-IP device.
  # Be sure this subnet does not collide with any other Nodes' subnets.
  podCIDR: 10.244.244.0/24

[root@k8s-master f5-k8s]# cat bigip-ctlr-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
spec:
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  template:
    metadata:
      name: k8s-bigip-ctlr
      labels:
        app: k8s-bigip-ctlr
    spec:
      # Name of the Service Account bound to a Cluster Role with the required
      # permissions
      serviceAccountName: bigip-ctlr
      containers:
        - name: k8s-bigip-ctlr
          image: "f5networks/k8s-bigip-ctlr"
          env:
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-login
                  key: username
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-login
                  key: password
          command: ["/app/bin/k8s-bigip-ctlr"]
          args: [
            # See the k8s-bigip-ctlr documentation for information about
            # all config options
            # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
            "--bigip-username=$(BIGIP_USERNAME)",
            "--bigip-password=$(BIGIP_PASSWORD)",
            "--bigip-url=172.16.40.202",
            "--bigip-partition=k8s",
            "--pool-member-type=cluster",
            "--flannel-name=/Common/flannel_vxlan",
            ]
      imagePullSecrets:
        # Secret containing the BIG-IP system login credentials
        - name: bigip-login
```
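The Deployment above references a `bigip-login` Secret and a `bigip-ctlr` ServiceAccount, which must exist before the controller starts. A minimal sketch of creating the Secret and applying the manifests (the `admin`/`<password>` credentials are placeholders):

```shell
# Create the Secret holding the BIG-IP login credentials referenced by the
# Deployment's secretKeyRef and imagePullSecrets entries.
kubectl create secret generic bigip-login \
    --namespace kube-system \
    --from-literal=username=admin \
    --from-literal=password=<password>

# Create the dummy BIG-IP Node, then the controller Deployment
kubectl apply -f dummy-flannel-node-bigip.yaml
kubectl apply -f bigip-ctlr-deployment.yaml
```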
Deploy an F5 ConfigMap to inject the service into the BIG-IP:
```
[root@k8s-master f5-k8s]# kubectl get svc
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes         ClusterIP   10.250.0.1     <none>        443/TCP   134d
nginx-deploy-svc   ClusterIP   10.250.0.75    <none>        80/TCP    3d
nginx-stateful     ClusterIP   10.250.0.188   <none>        80/TCP    3d

[root@k8s-master f5-k8s]# cat f5-vs.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-deploy-svc.vs
  labels:
    f5type: virtual-server
data:
  # See the f5-schema table for schema-controller compatibility
  # https://clouddocs.f5.com/containers/latest/releases_and_versioning.html#f5-schema
  schema: "f5schemadb://bigip-virtual-server_v0.1.7.json"
  data: |
    {
      "virtualServer": {
        "backend": {
          "servicePort": 80,
          "serviceName": "nginx-deploy-svc",
          "healthMonitors": [{
            "interval": 15,
            "protocol": "http",
            "send": "GET / HTTP/1.1\r\nConnection: close\r\nHost: 1.1.1.1\r\n\r\n",
            "recv": "cka",
            "timeout": 30
          }]
        },
        "frontend": {
          "virtualAddress": {
            "port": 80,
            "bindAddr": "172.16.40.80"
          },
          "partition": "k8s",
          "balance": "least-connections-member",
          "mode": "http"
        }
      }
    }
```
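Applying the ConfigMap and then requesting the VIP from a client verifies the whole path (a sketch; 172.16.40.80 is the `bindAddr` defined above, and the client is assumed to be able to reach that network):

```shell
# Create the virtual-server ConfigMap; the CC watches f5type=virtual-server
# ConfigMaps and pushes the resulting virtual server to the BIG-IP.
kubectl apply -f f5-vs.yaml

# The health monitor expects "cka" in the response body; a request through
# the VIP should be forwarded to one of the nginx pods via the VXLAN tunnel.
curl -v http://172.16.40.80/
```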
After deployment, the related ARP and FDB entries on the F5:
```
myf5@(v13-common)(cfg-sync Not All Devices Synced)(Active)(/Common)(tmos)# show net fdb
-------------------------------------------------------------------
Net::FDB
Tunnel          Mac Address        Member                    Dynamic
-------------------------------------------------------------------
flannel_vxlan   b2:81:75:6f:1e:1e  endpoint:172.16.10.201%0  no
flannel_vxlan   e6:5e:0c:16:38:42  endpoint:172.16.10.202%0  no

myf5@(v13-common)(cfg-sync Not All Devices Synced)(Active)(/Common)(tmos)# show net arp
---------------------------------------------------------------------------------------------------------
Net::Arp
Name                     Address        HWaddress          Vlan                   Expire-in-sec  Status
---------------------------------------------------------------------------------------------------------
/Common/k8s-10.244.0.63  10.244.0.63    b2:81:75:6f:1e:1e  -                      -              static
/Common/k8s-10.244.1.11  10.244.1.11    e6:5e:0c:16:38:42  -                      -              static
172.16.10.201            172.16.10.201  00:50:56:b3:46:85  /Common/bigip_cc_vlan  214            resolved
172.16.10.202            172.16.10.202  00:50:56:b3:0c:71  /Common/bigip_cc_vlan  222            resolved
172.16.30.203            172.16.30.203  00:50:56:b3:03:ff  /Common/ext_vlan       240            resolved
172.16.40.198            172.16.40.198  00:50:56:b3:1b:9b  /Common/int_vlan       220            resolved
```
The configuration injected into the F5:
```
ltm monitor http cfgmap_default_nginx-deploy-svc.vs_nginx-deploy-svc_0_http {
    adaptive disabled
    defaults-from /Common/http
    destination *:*
    interval 15
    ip-dscp 0
    partition k8s
    recv cka
    recv-disable none
    send "GET / HTTP/1.1\r\nConnection: close\r\nHost: 1.1.1.1\r\n\r\n"
    time-until-up 0
    timeout 30
}
ltm node 10.244.0.63%0 {
    address 10.244.0.63
    partition k8s
}
ltm node 10.244.1.11%0 {
    address 10.244.1.11
    partition k8s
}
ltm pool cfgmap_default_nginx-deploy-svc.vs_nginx-deploy-svc {
    load-balancing-mode least-connections-member
    members {
        10.244.0.63%0:http {
            address 10.244.0.63
            session monitor-enabled
            state up
        }
        10.244.1.11%0:http {
            address 10.244.1.11
            session monitor-enabled
            state up
        }
    }
    metadata {
        user_agent {
            value k8s-bigip-ctlr-1.7.1-n1279-465125010
        }
    }
    monitor cfgmap_default_nginx-deploy-svc.vs_nginx-deploy-svc_0_http
    partition k8s
}
ltm virtual default_nginx-deploy-svc.vs {
    destination 172.16.40.80%0:http
    ip-protocol tcp
    mask 255.255.255.255
    metadata {
        user_agent {
            value k8s-bigip-ctlr-1.7.1-n1279-465125010
        }
    }
    partition k8s
    pool cfgmap_default_nginx-deploy-svc.vs_nginx-deploy-svc
    profiles {
        /Common/http { }
        /Common/tcp { }
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
    vs-index 29
}
```
The pool's health status is up, which confirms that the VXLAN network is working and the F5 can communicate directly with the pods:
```
myf5@(v13-common)(cfg-sync Not All Devices Synced)(Active)(/k8s)(tmos)# show ltm pool cfgmap_default_nginx-deploy-svc.vs_nginx-deploy-svc
---------------------------------------------------------------------
Ltm::Pool: cfgmap_default_nginx-deploy-svc.vs_nginx-deploy-svc
---------------------------------------------------------------------
Status
  Availability : available
  State        : enabled
  Reason       : The pool is available
  Monitor      : cfgmap_default_nginx-deploy-svc.vs_nginx-deploy-svc_0_http

Minimum Active Members : 0
Current Active Members : 2
Available Members      : 2
Total Members          : 2
Total Requests         : 6
Current Sessions       : 0
```