How to build NGINX Plus as a k8s Ingress controller and run it together with F5 CIS

November 21, 2019
  1. Get nginx-repo.crt and nginx-repo.key and put them into the root of kubernetes-ingress/.
  2. Log in to your Docker Hub with docker login, or use a private repo.
  3. Follow https://github.com/nginxinc/kubernetes-ingress/blob/master/build/README.md and build the Docker image. In my lab it is:
    make PREFIX=myf5/nginx-plus-ingress-opentracing DOCKERFILE=DockerfileWithOpentracingForPlus GENERATE_DEFAULT_CERT_AND_KEY=1
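    The image must be pullable from the node that will run the controller. If your make target did not already push it, push manually; the edge tag here is an assumption matching the image referenced in the deployment YAML later in this post:

    docker push myf5/nginx-plus-ingress-opentracing:edge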

  4. Follow https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/installation.md to create the namespace, service account, and default SSL cert/key for the Ingress controller; a command sketch follows the listing below.
    [root@k8s-master-v1-16 common]# pwd
    /root/selab/kubernetes-ingress/deployments/common
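    From this directory, the installation doc's steps are roughly as follows (a sketch; the file names follow that doc):

    kubectl apply -f ns-and-sa.yaml
    kubectl apply -f default-server-secret.yaml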

  5. Create a default, blank ConfigMap for the NGINX configuration; this means there is no NGINX customization yet. Apply it as shown after the listing below.

[root@k8s-master-v1-16 common]# cat nginx-config.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
 

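Apply the blank ConfigMap (file name per the listing above):

kubectl apply -f nginx-config.yaml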
  6. Create the CRD resources for NGINX extended usage. Note: to use the NGINX CRDs, you must have the option “-enable-custom-resources” enabled when you deploy the NGINX controller; we will highlight this in a later step. A minimal VirtualServer sketch follows the command output below.
    NOTE: the NGINX CRDs only support Kubernetes v1.11+.

[root@k8s-master-v1-16 common]# kubectl create -f custom-resource-definitions.yaml
customresourcedefinition.apiextensions.k8s.io/virtualservers.k8s.nginx.org created
customresourcedefinition.apiextensions.k8s.io/virtualserverroutes.k8s.nginx.org created
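For a taste of what these CRDs enable, here is a minimal VirtualServer sketch. The schema is release-dependent (controller releases around this time used k8s.nginx.org/v1alpha1 with an upstream field per route; later releases moved to v1 with an action field), so treat this as an assumption to adapt to your build:

apiVersion: k8s.nginx.org/v1alpha1
kind: VirtualServer
metadata:
  name: cafe
  namespace: default
spec:
  host: cafe.example.com
  upstreams:
  - name: tea
    service: tea-svc   # the tea service from the cafe example used later in this post
    port: 80
  routes:
  - path: /tea
    upstream: tea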
 

  7. Create RBAC for the Ingress controller:

[root@k8s-master-v1-16 common]# pwd
/root/selab/kubernetes-ingress/deployments/common
[root@k8s-master-v1-16 common]# kubectl create -f ../rbac/rbac.yaml
clusterrole.rbac.authorization.k8s.io/nginx-ingress created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress created
 

  8. Now we start deploying the NGINX controller pods. You can deploy them as a DaemonSet, which means there is an NGINX controller pod on each worker node and every node can serve as an ingress traffic entry point; a sketch follows this item.
    If you choose a Deployment instead, you can decide how many NGINX controller replicas to run and on which nodes, for example on specific nodes that only act as traffic entry points.
    These dedicated nodes serve as the ingress EDGE.
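    If you prefer the DaemonSet model, the repo ships a manifest for it; a sketch, assuming the stock file layout (you would still point it at your own image):

    kubectl apply -f ../daemon-set/nginx-plus-ingress.yaml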

  9. I will deploy the Ingress controller on node 1, so first label node 1 as the ingress edge:
    [root@k8s-master-v1-16 deployment]# kubectl label node k8s-node1-v1-16.lab.f5se.io Ingressedege=entry
    node/k8s-node1-v1-16.lab.f5se.io labeled

[root@k8s-master-v1-16 deployment]# kubectl get node --show-labels
NAME                           STATUS     ROLES    AGE   VERSION   LABELS
bigip                          NotReady   <none>   45h             <none>
k8s-master-v1-16.lab.f5se.io   Ready      master   21d   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-v1-16.lab.f5se.io,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1-v1-16.lab.f5se.io    Ready      <none>   21d   v1.16.2   Ingressedege=entry,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1-v1-16.lab.f5se.io,kubernetes.io/os=linux
k8s-node2-v1-16.lab.f5se.io    Ready      <none>   21d   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2-v1-16.lab.f5se.io,kubernetes.io/os=linux
 

  10. Edit the deployment YAML. I enable the custom-resources option and the Prometheus-related settings, and I set node affinity to make sure the NGINX controller is deployed only on node1.

[root@k8s-master-v1-16 deployment]# kubectl create -f myf5-nginx-plus-ingress.yaml
deployment.apps/nginx-ingress created
[root@k8s-master-v1-16 deployment]# kubectl get pods -o wide -n nginx-ingress
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE                          NOMINATED NODE   READINESS GATES
nginx-ingress-7dc76bb47-4j82w   1/1     Running   0          81s   10.244.1.19   k8s-node1-v1-16.lab.f5se.io   <none>           <none>
 
 

[root@k8s-master-v1-16 deployment]# cat myf5-nginx-plus-ingress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: Ingressedege
                operator: In
                values:
                - entry
      containers:
      - image: myf5/nginx-plus-ingress-opentracing:edge
        imagePullPolicy: IfNotPresent
        name: nginx-plus-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: prometheus
          containerPort: 9113
        - name: apiport
          containerPort: 8888
          hostPort: 8888
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-plus
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
          - -nginx-status
          - -nginx-status-allow-cidrs=172.16.0.0/16,192.168.1.0/24
          - -nginx-status-port=8888
          #- -v=3 # Enables extensive logging. Useful for troubleshooting.
          #- -report-ingress-status
          #- -external-service=nginx-ingress
          #- -enable-leader-election
          - -enable-prometheus-metrics
          - -enable-custom-resources
 
 

Now you can access node1's host IP on port 8888 and see the NGINX Plus dashboard page.
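To verify from the command line, something like the following should work; <node1-host-ip> is a placeholder, the dashboard path is the standard NGINX Plus one, and the client must fall inside the CIDRs allowed by -nginx-status-allow-cidrs:

curl http://<node1-host-ip>:8888/dashboard.html
curl http://<node1-host-ip>:8888/api/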

  11. Deploy the classic cafe application, following https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/complete-example/README.md, then access the application at cafe.example.com/coffee; a command sketch follows.

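The walkthrough boils down to a few applies plus a curl; a sketch, assuming the stock file names from that README and node1's host IP as the traffic entry point:

kubectl apply -f cafe.yaml
kubectl apply -f cafe-secret.yaml
kubectl apply -f cafe-ingress.yaml
curl --resolve cafe.example.com:443:<node1-host-ip> https://cafe.example.com/coffee --insecure

The coffee request returns output similar to: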
Server address: 10.244.2.11:80
Server name: coffee-8c8ff9b4f-kgtwh
Date: 21/Nov/2019:05:13:37 +0000
URI: /coffee
Request ID: f21201a6e57e043429c7f37715874d8c
 

  12. Check the Ingress status:

[root@k8s-master-v1-16 complete-example]# kubectl describe ingress
Name:             cafe-ingress
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<none>)
TLS:
  cafe-secret terminates cafe.example.com
Rules:
  Host              Path  Backends
  ----              ----  --------
  cafe.example.com  
                    /tea      tea-svc:80 (10.244.2.12:80)
                    /coffee   coffee-svc:80 (10.244.2.11:80)
Annotations:
Events:
  Type    Reason              Age                    From                      Message
  ----    ------              ----                   ----                      -------
  Normal  AddedOrUpdated      9m35s (x3 over 9m46s)  nginx-ingress-controller  Configuration for default/cafe-ingress was added or updated
  Normal  EndpointsNotFound   9m35s                  k8s-bigip-ctlr            Endpoints for service 'default/coffee-svc' not found!
  Normal  ResourceConfigured  9s (x11 over 9m35s)    k8s-bigip-ctlr            Created a ResourceConfig 'ingress__80' for the Ingress.
  Normal  ResourceConfigured  9s (x9 over 9m35s)     k8s-bigip-ctlr            Created a ResourceConfig 'ingress__443' for the Ingress.
 

At this point I already have F5 CIS in the cluster. Since I have not added any F5 CIS Ingress annotations to the cafe-ingress resource, CIS creates the pools/LTM policy without a virtual server; these are just unattached pools.
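You can confirm the unattached pools from the BIG-IP CLI before adding any annotations; a sketch, using the k8s partition configured for CIS:

tmsh -c "cd /k8s; list ltm pool"
tmsh -c "cd /k8s; list ltm virtual"   # pools exist, but no virtual server references them yet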

  13. Add the F5 CIS Ingress annotations to cafe-ingress; we will try a shared VS IP for all Ingress resources.
    To use a shared VS IP for Ingresses on the BIG-IP, you first need to change the f5-bigip-ctlr (CC) deployment settings and add the following:

            "--default-ingress-ip=172.16.100.196",
 
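For context, this flag goes into the k8s-bigip-ctlr container's args in its Deployment; a minimal sketch (the surrounding flags and their values are assumptions, not my exact deployment):

args:
  - "--bigip-url=<bigip-mgmt-address>"
  - "--bigip-partition=k8s"
  - "--pool-member-type=cluster"
  - "--default-ingress-ip=172.16.100.196"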

Then add the annotations to the cafe-ingress resource. In the health check section, make sure each monitor path matches the corresponding Ingress rule path; otherwise f5-bigip-ctlr will not create the monitor. In my test, f5-bigip-ctlr could not create an SSL profile based on the tls secret setting; I don't know why.

[root@k8s-master-v1-16 complete-example]# cat f5-cafe-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    virtual-server.f5.com/ip: "controller-default"
    virtual-server.f5.com/http-port: "443"
    virtual-server.f5.com/partition: "k8s"
    virtual-server.f5.com/balance: "round-robin"
    virtual-server.f5.com/health: |
      [
        {
          "path":     "cafe.example.com/coffee",
          "send":     "GET /coffee HTTP/1.0",
          "recv":     "coffee",
          "interval": 5,
          "timeout":  10
        }, {
          "path":     "cafe.example.com/tea",
          "send":     "GET /tea HTTP/1.0",
          "recv":     "tea",
          "interval": 5,
          "timeout":  10
        }
      ]
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/allow-http: "false"
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
 
 
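Apply the updated Ingress (file name per the listing above):

kubectl apply -f f5-cafe-ingress.yaml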

Since we made a shared-VS setting, if we create a new Ingress resource, its rules are added to the LTM policy on the same VS.
Create a new Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: f5demoapp-ingress
  annotations:
    virtual-server.f5.com/ip: "controller-default"
    virtual-server.f5.com/http-port: "443"
    virtual-server.f5.com/partition: "k8s"
    virtual-server.f5.com/balance: "round-robin"
spec:
  rules:
  - host: f5demoapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: f5demoapp-svc
          servicePort: 80  
 
Check the BIG-IP; you will see:
root@(v14-1)(cfg-sync Standalone)(Active)(/k8s)(tmos)# list ltm policy ingress_172-16-100-196_443
ltm policy ingress_172-16-100-196_443 {
    controls { forwarding }
    last-modified 2019-11-21:16:29:45
    partition k8s
    requires { http }
    rules {
        ingress_cafe.example.com_coffee_ingress_default_coffee-svc {
            actions {
                0 {
                    forward
                    select
                    pool ingress_default_coffee-svc
                }
            }
            conditions {
                0 {
                    http-host
                    host
                    values { cafe.example.com }
                }
                1 {
                    http-uri
                    path-segment
                    index 1
                    values { coffee }
                }
            }
            ordinal 1
        }
        ingress_cafe.example.com_tea_ingress_default_tea-svc {
            actions {
                0 {
                    forward
                    select
                    pool ingress_default_tea-svc
                }
            }
            conditions {
                0 {
                    http-host
                    host
                    values { cafe.example.com }
                }
                1 {
                    http-uri
                    path-segment
                    index 1
                    values { tea }
                }
            }
        }
        ingress_f5demoapp.com__ingress_default_f5demoapp-svc {
            actions {
                0 {
                    forward
                    select
                    pool ingress_default_f5demoapp-svc
                }
            }
            conditions {
                0 {
                    http-host
                    host
                    values { f5demoapp.com }
                }
            }
            ordinal 2
        }
    }
    status legacy
    strategy /Common/first-match
}
 
 

Check the Ingress resources:

[root@k8s-master-v1-16 complete-example]# kubectl describe ingress
Name:             cafe-ingress
Namespace:        default
Address:          172.16.100.196
Default backend:  default-http-backend:80 (<none>)
TLS:
  cafe-secret terminates cafe.example.com
Rules:
  Host              Path  Backends
  ----              ----  --------
  cafe.example.com  
                    /tea      tea-svc:80 (10.244.2.12:80)
                    /coffee   coffee-svc:80 (10.244.2.11:80)
Annotations:
  virtual-server.f5.com/partition:                   k8s
  ingress.kubernetes.io/allow-http:                  false
  ingress.kubernetes.io/ssl-redirect:                true
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/allow-http":"false","ingress.kubernetes.io/ssl-redirect":"true","virtual-server.f5.com/balance":"round-robin","virtual-server.f5.com/health":"[\n  {\n    \"path\":     \"cafe.example.com/coffee\",\n    \"send\":     \"GET /coffee HTTP/1.0\",\n    \"recv\":     \"coffee\",\n    \"interval\": 5,\n    \"timeout\":  10\n  }, {\n    \"path\":     \"cafe.example.com/tea\",\n    \"send\":     \"GET /tea HTTP/1.0\",\n    \"recv\":     \"tea\",\n    \"interval\": 5,\n    \"timeout\":  10\n  }\n]\n","virtual-server.f5.com/http-port":"443","virtual-server.f5.com/ip":"controller-default","virtual-server.f5.com/partition":"k8s"},"name":"cafe-ingress","namespace":"default"},"spec":{"rules":[{"host":"cafe.example.com","http":{"paths":[{"backend":{"serviceName":"tea-svc","servicePort":80},"path":"/tea"},{"backend":{"serviceName":"coffee-svc","servicePort":80},"path":"/coffee"}]}}],"tls":[{"hosts":["cafe.example.com"],"secretName":"cafe-secret"}]}}
 
  virtual-server.f5.com/balance:  round-robin
  virtual-server.f5.com/health:   [
  {
    "path":     "cafe.example.com/coffee",
    "send":     "GET /coffee HTTP/1.0",
    "recv":     "coffee",
    "interval": 5,
    "timeout":  10
  }, {
    "path":     "cafe.example.com/tea",
    "send":     "GET /tea HTTP/1.0",
    "recv":     "tea",
    "interval": 5,
    "timeout":  10
  }
]
 
  virtual-server.f5.com/http-port:  443
  virtual-server.f5.com/ip:         controller-default
Events:
  Type    Reason              Age                   From                      Message
  ----    ------              ----                  ----                      -------
  Normal  MonitorRuleNotUsed  60m (x287 over 67m)   k8s-bigip-ctlr            Health Monitor path '' does not match any Ingress paths.
  Normal  AddedOrUpdated      38m (x13 over 3h30m)  nginx-ingress-controller  Configuration for default/cafe-ingress was added or updated
  Normal  ResourceConfigured  6s (x580 over 67m)    k8s-bigip-ctlr            Created a ResourceConfig 'ingress_172-16-100-196_443' for the Ingress.
 
 
Name:             f5demoapp-ingress
Namespace:        default
Address:          172.16.100.196
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host           Path  Backends
  ----           ----  --------
  f5demoapp.com  
                 /   f5demoapp-svc:80 (10.244.2.6:80)
Annotations:
  virtual-server.f5.com/balance:    round-robin
  virtual-server.f5.com/http-port:  443
  virtual-server.f5.com/ip:         controller-default
  virtual-server.f5.com/partition:  k8s
Events:
  Type    Reason          Age   From                      Message
  ----    ------          ----  ----                      -------
  Normal  AddedOrUpdated  10m   nginx-ingress-controller  Configuration for default/f5demoapp-ingress was added or updated
 
 

  14. Next, please follow https://github.com/nginxinc/kubernetes-ingress/tree/master/examples to understand all the examples and the more advanced CRD examples.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Tags: cis IC ingress k8s kic nginx
Last updated: November 21, 2019
