How to build NGINX Plus as a k8s Ingress controller and run it together with F5 CIS

November 21, 2019
  1. Get nginx-repo.crt and nginx-repo.key and put them into the root of kubernetes-ingress/.
  2. Log in to Docker Hub with docker login, or use a private registry.
  3. Follow https://github.com/nginxinc/kubernetes-ingress/blob/master/build/README.md and build the Docker image. In my lab, the build command is:
    make PREFIX=myf5/nginx-plus-ingress-opentracing DOCKERFILE=DockerfileWithOpentracingForPlus GENERATE_DEFAULT_CERT_AND_KEY=1
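    If your worker nodes cannot pull the locally built image, push it to your registry. A sketch, assuming the build tagged the image as myf5/nginx-plus-ingress-opentracing:edge (the tag the deployment below references):

    docker push myf5/nginx-plus-ingress-opentracing:edge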

  4. Follow https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/installation.md to create the namespace, service account, and default SSL cert/key for the Ingress controller:
    [root@k8s-master-v1-16 common]# pwd
    /root/selab/kubernetes-ingress/deployments/common
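    A sketch of the corresponding commands, run from this directory (file names as in the kubernetes-ingress repo at the time of writing; check your checkout):

    kubectl apply -f ns-and-sa.yaml             # namespace and service account
    kubectl apply -f default-server-secret.yaml # default SSL cert/key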

  5. Create a default blank ConfigMap for NGINX configuration. This means there is no NGINX customization yet.

[root@k8s-master-v1-16 common]# cat nginx-config.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
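
Load it with a plain kubectl create, run from the same directory:

kubectl create -f nginx-config.yaml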
 

  6. Create the CRD resources for NGINX's extended (custom resource) usage. Note: to use the NGINX CRDs, the option "-enable-custom-resources" must be enabled when you deploy the NGINX controller; we highlight this in a later step.
    NOTE: The NGINX CRDs require Kubernetes v1.11 or later.

[root@k8s-master-v1-16 common]# kubectl create -f custom-resource-definitions.yaml
customresourcedefinition.apiextensions.k8s.io/virtualservers.k8s.nginx.org created
customresourcedefinition.apiextensions.k8s.io/virtualserverroutes.k8s.nginx.org created
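
You can verify that the CRDs registered, for example:

kubectl get crd virtualservers.k8s.nginx.org virtualserverroutes.k8s.nginx.org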
 

  7. Create the RBAC resources for the Ingress controller:

[root@k8s-master-v1-16 common]# pwd
/root/selab/kubernetes-ingress/deployments/common
[root@k8s-master-v1-16 common]# kubectl create -f ../rbac/rbac.yaml
clusterrole.rbac.authorization.k8s.io/nginx-ingress created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress created
 

  8. Now we start deploying the NGINX controller pods. You can deploy them as a DaemonSet, which places an NGINX controller pod on every worker node so that each node can serve as an ingress traffic entry point.
    If you choose a Deployment instead, you decide how many NGINX controller replicas to run and on which nodes; for example, you can deploy onto specific nodes that work only as traffic entry points.
    These dedicated nodes act as the ingress edge.

  9. I will deploy the Ingress controller on node 1, so first label node 1 as the ingress edge (the label key Ingressedege is spelled as used throughout this lab):
    [root@k8s-master-v1-16 deployment]# kubectl label node k8s-node1-v1-16.lab.f5se.io Ingressedege=entry
    node/k8s-node1-v1-16.lab.f5se.io labeled

[root@k8s-master-v1-16 deployment]# kubectl get node --show-labels
NAME                           STATUS     ROLES    AGE   VERSION   LABELS
bigip                          NotReady   <none>   45h             <none>
k8s-master-v1-16.lab.f5se.io   Ready      master   21d   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-v1-16.lab.f5se.io,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1-v1-16.lab.f5se.io    Ready      <none>   21d   v1.16.2   Ingressedege=entry,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1-v1-16.lab.f5se.io,kubernetes.io/os=linux
k8s-node2-v1-16.lab.f5se.io    Ready      <none>   21d   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2-v1-16.lab.f5se.io,kubernetes.io/os=linux
 

  10. Edit the deployment YAML. I enable the custom-resources option and the Prometheus-related settings, and I set node affinity to ensure the NGINX controller is deployed only on node 1. The full YAML is listed after the output below.

[root@k8s-master-v1-16 deployment]# kubectl create -f myf5-nginx-plus-ingress.yaml
deployment.apps/nginx-ingress created
[root@k8s-master-v1-16 deployment]# kubectl get pods -o wide -n nginx-ingress
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE                          NOMINATED NODE   READINESS GATES
nginx-ingress-7dc76bb47-4j82w   1/1     Running   0          81s   10.244.1.19   k8s-node1-v1-16.lab.f5se.io   <none>           <none>
 
 

[root@k8s-master-v1-16 deployment]# cat myf5-nginx-plus-ingress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: Ingressedege
                operator: In
                values:
                - entry
      containers:
      - image: myf5/nginx-plus-ingress-opentracing:edge
        imagePullPolicy: IfNotPresent
        name: nginx-plus-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: prometheus
          containerPort: 9113
        - name: apiport
          containerPort: 8888
          hostPort: 8888
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-plus
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
          - -nginx-status
          - -nginx-status-allow-cidrs=172.16.0.0/16,192.168.1.0/24
          - -nginx-status-port=8888
         #- -v=3 # Enables extensive logging. Useful for troubleshooting.
         #- -report-ingress-status
         #- -external-service=nginx-ingress
         #- -enable-leader-election
          - -enable-prometheus-metrics
          - -enable-custom-resources
 
 

Now you can access node1's host IP on port 8888 and see the NGINX Plus dashboard page.
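
For example, a quick check from a host inside the -nginx-status-allow-cidrs ranges (the dashboard path is the NGINX Plus live activity monitoring page; substitute your node1 address):

curl http://k8s-node1-v1-16.lab.f5se.io:8888/dashboard.html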

  11. Deploy the classic cafe application, following https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/complete-example/README.md.
    Access the application at http://cafe.example.com/coffee and you will get output similar to the following (a curl sketch follows the output):

Server address: 10.244.2.11:80
Server name: coffee-8c8ff9b4f-kgtwh
Date: 21/Nov/2019:05:13:37 +0000
URI: /coffee
Request ID: f21201a6e57e043429c7f37715874d8c
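
If cafe.example.com does not resolve in your lab, a curl sketch along these lines exercises the Ingress through node1's host IP (the address is a placeholder; -k skips verification of the example's self-signed certificate):

curl -k --resolve cafe.example.com:443:<node1-host-ip> https://cafe.example.com/coffee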
 

  12. Check the Ingress status:

[root@k8s-master-v1-16 complete-example]# kubectl describe ingress
Name:             cafe-ingress
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<none>)
TLS:
  cafe-secret terminates cafe.example.com
Rules:
  Host              Path  Backends
  ----              ----  --------
  cafe.example.com  
                    /tea      tea-svc:80 (10.244.2.12:80)
                    /coffee   coffee-svc:80 (10.244.2.11:80)
Annotations:
Events:
  Type    Reason              Age                    From                      Message
  ----    ------              ----                   ----                      -------
  Normal  AddedOrUpdated      9m35s (x3 over 9m46s)  nginx-ingress-controller  Configuration for default/cafe-ingress was added or updated
  Normal  EndpointsNotFound   9m35s                  k8s-bigip-ctlr            Endpoints for service 'default/coffee-svc' not found!
  Normal  ResourceConfigured  9s (x11 over 9m35s)    k8s-bigip-ctlr            Created a ResourceConfig 'ingress__80' for the Ingress.
  Normal  ResourceConfigured  9s (x9 over 9m35s)     k8s-bigip-ctlr            Created a ResourceConfig 'ingress__443' for the Ingress.
 

At this time, I already have F5 CIS in the cluster. Since I did not add any F5 CIS Ingress annotations to the cafe-ingress resource, CIS creates the pools/LTM policy without a virtual server; these are just unattached pools.
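
You can inspect such an unattached pool from tmsh in the partition CIS manages (the pool name matches the naming shown later in this post; adjust the partition to your CIS setting):

list ltm pool ingress_default_coffee-svc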

  13. Add the F5 CIS Ingress annotations to cafe-ingress; we will use a shared virtual-server IP for all Ingress resources.
    To use a shared VS IP for Ingress on BIG-IP, first change the f5-bigip-ctlr (CC) deployment settings by adding the line below (a sketch of where it sits follows the snippet):

            "--default-ingress-ip=172.16.100.196",
 
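For context, a sketch of where that flag lives in the f5-bigip-ctlr container spec; your existing options stay unchanged (elided here):

args:
  # ... existing f5-bigip-ctlr options ...
  - "--default-ingress-ip=172.16.100.196"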

Then, add the annotations to the cafe-ingress resource. In the health check part, make sure each path matches the corresponding Ingress rule's path, otherwise f5-bigip-ctlr will not create the monitor. In my test, f5-bigip-ctlr could not create an SSL profile based on the tls secret setting; I don't know why.

[root@k8s-master-v1-16 complete-example]# cat f5-cafe-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    virtual-server.f5.com/ip: "controller-default"
    virtual-server.f5.com/http-port: "443"
    virtual-server.f5.com/partition: "k8s"
    virtual-server.f5.com/balance: "round-robin"
    virtual-server.f5.com/health: |
      [
        {
          "path":     "cafe.example.com/coffee",
          "send":     "GET /coffee HTTP/1.0",
          "recv":     "coffee",
          "interval": 5,
          "timeout":  10
        }, {
          "path":     "cafe.example.com/tea",
          "send":     "GET /tea HTTP/1.0",
          "recv":     "tea",
          "interval": 5,
          "timeout":  10
        }
      ]
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/allow-http: "false"
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
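
Apply it the usual way:

kubectl apply -f f5-cafe-ingress.yaml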
 
 

Since we made a shared-VS setting, when we create a new Ingress resource the new Ingress rules become new LTM policy rules on the same VS.
Create a new Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: f5demoapp-ingress
  annotations:
    virtual-server.f5.com/ip: "controller-default"
    virtual-server.f5.com/http-port: "443"
    virtual-server.f5.com/partition: "k8s"
    virtual-server.f5.com/balance: "round-robin"
spec:
  rules:
  - host: f5demoapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: f5demoapp-svc
          servicePort: 80  
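
Save it to a file and apply it likewise (the file name here is hypothetical):

kubectl apply -f f5demoapp-ingress.yaml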
 
Check the BIG-IP and you will see:
root@(v14-1)(cfg-sync Standalone)(Active)(/k8s)(tmos)# list ltm policy ingress_172-16-100-196_443
ltm policy ingress_172-16-100-196_443 {
    controls { forwarding }
    last-modified 2019-11-21:16:29:45
    partition k8s
    requires { http }
    rules {
        ingress_cafe.example.com_coffee_ingress_default_coffee-svc {
            actions {
                0 {
                    forward
                    select
                    pool ingress_default_coffee-svc
                }
            }
            conditions {
                0 {
                    http-host
                    host
                    values { cafe.example.com }
                }
                1 {
                    http-uri
                    path-segment
                    index 1
                    values { coffee }
                }
            }
            ordinal 1
        }
        ingress_cafe.example.com_tea_ingress_default_tea-svc {
            actions {
                0 {
                    forward
                    select
                    pool ingress_default_tea-svc
                }
            }
            conditions {
                0 {
                    http-host
                    host
                    values { cafe.example.com }
                }
                1 {
                    http-uri
                    path-segment
                    index 1
                    values { tea }
                }
            }
        }
        ingress_f5demoapp.com__ingress_default_f5demoapp-svc {
            actions {
                0 {
                    forward
                    select
                    pool ingress_default_f5demoapp-svc
                }
            }
            conditions {
                0 {
                    http-host
                    host
                    values { f5demoapp.com }
                }
            }
            ordinal 2
        }
    }
    status legacy
    strategy /Common/first-match
}
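
The shared virtual server itself can be listed the same way; assuming CIS names the VS after the same ip_port pattern as the policy, something along these lines:

list ltm virtual ingress_172-16-100-196_443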
 
 

Check the Ingress resources:

[root@k8s-master-v1-16 complete-example]# kubectl describe ingress
Name:             cafe-ingress
Namespace:        default
Address:          172.16.100.196
Default backend:  default-http-backend:80 (<none>)
TLS:
  cafe-secret terminates cafe.example.com
Rules:
  Host              Path  Backends
  ----              ----  --------
  cafe.example.com  
                    /tea      tea-svc:80 (10.244.2.12:80)
                    /coffee   coffee-svc:80 (10.244.2.11:80)
Annotations:
  virtual-server.f5.com/partition:                   k8s
  ingress.kubernetes.io/allow-http:                  false
  ingress.kubernetes.io/ssl-redirect:                true
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/allow-http":"false","ingress.kubernetes.io/ssl-redirect":"true","virtual-server.f5.com/balance":"round-robin","virtual-server.f5.com/health":"[\n  {\n    \"path\":     \"cafe.example.com/coffee\",\n    \"send\":     \"GET /coffee HTTP/1.0\",\n    \"recv\":     \"coffee\",\n    \"interval\": 5,\n    \"timeout\":  10\n  }, {\n    \"path\":     \"cafe.example.com/tea\",\n    \"send\":     \"GET /tea HTTP/1.0\",\n    \"recv\":     \"tea\",\n    \"interval\": 5,\n    \"timeout\":  10\n  }\n]\n","virtual-server.f5.com/http-port":"443","virtual-server.f5.com/ip":"controller-default","virtual-server.f5.com/partition":"k8s"},"name":"cafe-ingress","namespace":"default"},"spec":{"rules":[{"host":"cafe.example.com","http":{"paths":[{"backend":{"serviceName":"tea-svc","servicePort":80},"path":"/tea"},{"backend":{"serviceName":"coffee-svc","servicePort":80},"path":"/coffee"}]}}],"tls":[{"hosts":["cafe.example.com"],"secretName":"cafe-secret"}]}}
 
  virtual-server.f5.com/balance:  round-robin
  virtual-server.f5.com/health:   [
  {
    "path":     "cafe.example.com/coffee",
    "send":     "GET /coffee HTTP/1.0",
    "recv":     "coffee",
    "interval": 5,
    "timeout":  10
  }, {
    "path":     "cafe.example.com/tea",
    "send":     "GET /tea HTTP/1.0",
    "recv":     "tea",
    "interval": 5,
    "timeout":  10
  }
]
 
  virtual-server.f5.com/http-port:  443
  virtual-server.f5.com/ip:         controller-default
Events:
  Type    Reason              Age                   From                      Message
  ----    ------              ----                  ----                      -------
  Normal  MonitorRuleNotUsed  60m (x287 over 67m)   k8s-bigip-ctlr            Health Monitor path '' does not match any Ingress paths.
  Normal  AddedOrUpdated      38m (x13 over 3h30m)  nginx-ingress-controller  Configuration for default/cafe-ingress was added or updated
  Normal  ResourceConfigured  6s (x580 over 67m)    k8s-bigip-ctlr            Created a ResourceConfig 'ingress_172-16-100-196_443' for the Ingress.
 
 
Name:             f5demoapp-ingress
Namespace:        default
Address:          172.16.100.196
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host           Path  Backends
  ----           ----  --------
  f5demoapp.com  
                 /   f5demoapp-svc:80 (10.244.2.6:80)
Annotations:
  virtual-server.f5.com/balance:    round-robin
  virtual-server.f5.com/http-port:  443
  virtual-server.f5.com/ip:         controller-default
  virtual-server.f5.com/partition:  k8s
Events:
  Type    Reason          Age   From                      Message
  ----    ------          ----  ----                      -------
  Normal  AddedOrUpdated  10m   nginx-ingress-controller  Configuration for default/f5demoapp-ingress was added or updated
 
 

  14. Next, please follow https://github.com/nginxinc/kubernetes-ingress/tree/master/examples to work through all the examples and the advanced CRD examples.
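
    As a taste of the CRDs enabled earlier with -enable-custom-resources, here is a minimal VirtualServer sketch for the cafe app. It follows the v1alpha1 schema current at the time of writing; field names vary across controller versions, so treat it as an illustration only:

    apiVersion: k8s.nginx.org/v1alpha1
    kind: VirtualServer
    metadata:
      name: cafe
    spec:
      host: cafe.example.com
      tls:
        secret: cafe-secret
      upstreams:
      - name: tea
        service: tea-svc
        port: 80
      - name: coffee
        service: coffee-svc
        port: 80
      routes:
      - path: /tea
        upstream: tea
      - path: /coffee
        upstream: coffee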
