- Get nginx-repo.crt and nginx-repo.key and put them into the root of kubernetes-ingress/.
- Log in to Docker Hub with `docker login`, or use a private registry.
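For example (a sketch; registry.example.com is a placeholder for a private registry hostname):
```
# Docker Hub
docker login
# or a private registry (hypothetical hostname)
docker login registry.example.com
```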
- Follow https://github.com/nginxinc/kubernetes-ingress/blob/master/build/README.md and build the Docker image. In my lab, it is:
```
make PREFIX=myf5/nginx-plus-ingress-opentracing DOCKERFILE=DockerfileWithOpentracingForPlus GENERATE_DEFAULT_CERT_AND_KEY=1
```
- Follow https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/installation.md to create the namespace, service account, and default SSL cert/key for the Ingress controller.
```
[root@k8s-master-v1-16 common]# pwd
/root/selab/kubernetes-ingress/deployments/common
```
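Per installation.md, this typically amounts to applying the manifests in this directory (a sketch; the file names follow the repo layout at the time of writing):
```
kubectl apply -f ns-and-sa.yaml
kubectl apply -f default-server-secret.yaml
```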
- Create a default blank ConfigMap for the NGINX configuration. This means there is no NGINX customization yet.
```
[root@k8s-master-v1-16 common]# cat nginx-config.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
```
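Apply it (a minimal sketch):
```
kubectl apply -f nginx-config.yaml
```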
- Create the CRD resources for NGINX extended usage. Note: to use the NGINX CRDs, the option `-enable-custom-resources` must be enabled when you deploy the NGINX controller. We will highlight this in a later step.
NOTE: the NGINX CRDs only support Kubernetes v1.11+.
```
[root@k8s-master-v1-16 common]# kubectl create -f custom-resource-definitions.yaml
customresourcedefinition.apiextensions.k8s.io/virtualservers.k8s.nginx.org created
customresourcedefinition.apiextensions.k8s.io/virtualserverroutes.k8s.nginx.org created
```
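You can optionally verify that the CRDs are registered (a quick sketch):
```
kubectl get crd | grep k8s.nginx.org
```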
- Create the RBAC resources for the Ingress controller.
```
[root@k8s-master-v1-16 common]# pwd
/root/selab/kubernetes-ingress/deployments/common
[root@k8s-master-v1-16 common]# kubectl create -f ../rbac/rbac.yaml
clusterrole.rbac.authorization.k8s.io/nginx-ingress created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress created
```
- Now we start deploying the NGINX controller pods. You can deploy them as a DaemonSet, which runs an NGINX controller pod on every worker node so that each node can serve as an ingress traffic entry point (see the sketch just below). If you choose a Deployment instead, you decide how many NGINX controller replicas to run and which nodes they land on; for example, you can deploy them only onto specific nodes that act purely as traffic entry points. These dedicated nodes serve as the ingress EDGE.
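For reference, the DaemonSet variant would be applied from the repo's daemon-set manifests instead of the deployment ones (a sketch; the relative path follows the repo layout at the time of writing, and the image in that manifest would need to be changed to the lab build):
```
kubectl apply -f ../daemon-set/nginx-plus-ingress.yaml
```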
- I will deploy the Ingress controller on node 1, so first label node 1 as the ingress edge.
```
[root@k8s-master-v1-16 deployment]# kubectl label node k8s-node1-v1-16.lab.f5se.io Ingressedege=entry
node/k8s-node1-v1-16.lab.f5se.io labeled
```
```
[root@k8s-master-v1-16 deployment]# kubectl get node --show-labels
NAME                           STATUS     ROLES    AGE   VERSION   LABELS
bigip                          NotReady   <none>   45h             <none>
k8s-master-v1-16.lab.f5se.io   Ready      master   21d   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master-v1-16.lab.f5se.io,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1-v1-16.lab.f5se.io    Ready      <none>   21d   v1.16.2   Ingressedege=entry,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1-v1-16.lab.f5se.io,kubernetes.io/os=linux
k8s-node2-v1-16.lab.f5se.io    Ready      <none>   21d   v1.16.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2-v1-16.lab.f5se.io,kubernetes.io/os=linux
```
- Edit the deployment YAML. I enable the custom-resources option and the Prometheus-related settings, and I also set node affinity to make sure the NGINX controller is deployed only on node 1.
```
[root@k8s-master-v1-16 deployment]# kubectl create -f myf5-nginx-plus-ingress.yaml
deployment.apps/nginx-ingress created
[root@k8s-master-v1-16 deployment]# kubectl get pods -o wide -n nginx-ingress
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE                          NOMINATED NODE   READINESS GATES
nginx-ingress-7dc76bb47-4j82w   1/1     Running   0          81s   10.244.1.19   k8s-node1-v1-16.lab.f5se.io   <none>           <none>
```
```
[root@k8s-master-v1-16 deployment]# cat myf5-nginx-plus-ingress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: Ingressedege
                operator: In
                values:
                - entry
      containers:
      - image: myf5/nginx-plus-ingress-opentracing:edge
        imagePullPolicy: IfNotPresent
        name: nginx-plus-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: prometheus
          containerPort: 9113
        - name: apiport
          containerPort: 8888
          hostPort: 8888
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 #nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
          - -nginx-plus
          - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
          - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
          - -nginx-status
          - -nginx-status-allow-cidrs=172.16.0.0/16,192.168.1.0/24
          - -nginx-status-port=8888
          #- -v=3 # Enables extensive logging. Useful for troubleshooting.
          #- -report-ingress-status
          #- -external-service=nginx-ingress
          #- -enable-leader-election
          - -enable-prometheus-metrics
          - -enable-custom-resources
```
Now you can access node1's host IP on port 8888 and see the NGINX Plus dashboard page.
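For example (a sketch; `<node1-host-ip>` is a placeholder, and 8888 is the -nginx-status-port configured above):
```
curl http://<node1-host-ip>:8888/dashboard.html
```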
- Deploy the classic cafe application, following https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/complete-example/README.md.
Access the application at http://cafe.example.com/coffee; you will get output similar to the following.
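One way to send the request (a sketch; it assumes cafe.example.com maps to node1's host IP, here via curl's --resolve with `<node1-host-ip>` as a placeholder, and uses --insecure because the example certificate is self-signed):
```
curl --resolve cafe.example.com:443:<node1-host-ip> https://cafe.example.com/coffee --insecure
```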
```
Server address: 10.244.2.11:80
Server name: coffee-8c8ff9b4f-kgtwh
Date: 21/Nov/2019:05:13:37 +0000
URI: /coffee
Request ID: f21201a6e57e043429c7f37715874d8c
```
- Check the Ingress status:
```
[root@k8s-master-v1-16 complete-example]# kubectl describe ingress
Name:             cafe-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
TLS:
  cafe-secret terminates cafe.example.com
Rules:
  Host              Path     Backends
  ----              ----     --------
  cafe.example.com
                    /tea     tea-svc:80 (10.244.2.12:80)
                    /coffee  coffee-svc:80 (10.244.2.11:80)
Annotations:
Events:
  Type    Reason              Age                    From                      Message
  ----    ------              ----                   ----                      -------
  Normal  AddedOrUpdated      9m35s (x3 over 9m46s)  nginx-ingress-controller  Configuration for default/cafe-ingress was added or updated
  Normal  EndpointsNotFound   9m35s                  k8s-bigip-ctlr            Endpoints for service 'default/coffee-svc' not found!
  Normal  ResourceConfigured  9s (x11 over 9m35s)    k8s-bigip-ctlr            Created a ResourceConfig 'ingress__80' for the Ingress.
  Normal  ResourceConfigured  9s (x9 over 9m35s)     k8s-bigip-ctlr            Created a ResourceConfig 'ingress__443' for the Ingress.
```
At this point I have F5 CIS in the cluster. Since I did not add any F5 CIS Ingress annotations to the cafe-ingress resource, CIS creates the pools/LTM policy without a virtual server; these are just unattached pools.
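You can confirm this on the BIG-IP from the tmsh shell in the k8s partition (a sketch; the prompt is illustrative):
```
root@(v14-1)(cfg-sync Standalone)(Active)(/k8s)(tmos)# list ltm pool
```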
- Add the F5 CIS Ingress annotations to the cafe-ingress; we will try a shared VS IP for all Ingress resources.
To use a shared VS IP for Ingress on BIG-IP, first change the f5-bigip-ctlr (CC) deployment settings and add the following:
```
"--default-ingress-ip=172.16.100.196",
```
Then add the annotations to the cafe-ingress resource. In the health-check part, you must make sure the path is the same as the Ingress rule's path; otherwise f5-bigip-ctlr will not create the monitor. In my test, f5-bigip-ctlr could not create the SSL profile based on the tls secret setting; I don't know why.
```
[root@k8s-master-v1-16 complete-example]# cat f5-cafe-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    virtual-server.f5.com/ip: "controller-default"
    virtual-server.f5.com/http-port: "443"
    virtual-server.f5.com/partition: "k8s"
    virtual-server.f5.com/balance: "round-robin"
    virtual-server.f5.com/health: |
      [
        {
          "path": "cafe.example.com/coffee",
          "send": "GET /coffee HTTP/1.0",
          "recv": "coffee",
          "interval": 5,
          "timeout": 10
        }, {
          "path": "cafe.example.com/tea",
          "send": "GET /tea HTTP/1.0",
          "recv": "tea",
          "interval": 5,
          "timeout": 10
        }
      ]
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/allow-http: "false"
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
```
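Then update the resource (a sketch):
```
kubectl apply -f f5-cafe-ingress.yaml
```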
Since we made a shared-VS setting, if we create a new Ingress resource, the new Ingress rule becomes a new LTM policy rule on the same VS.
Create a new Ingress:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: f5demoapp-ingress
  annotations:
    virtual-server.f5.com/ip: "controller-default"
    virtual-server.f5.com/http-port: "443"
    virtual-server.f5.com/partition: "k8s"
    virtual-server.f5.com/balance: "round-robin"
spec:
  rules:
  - host: f5demoapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: f5demoapp-svc
          servicePort: 80
```
Check BIG-IP; you will get:
```
root@(v14-1)(cfg-sync Standalone)(Active)(/k8s)(tmos)# list ltm policy ingress_172-16-100-196_443
ltm policy ingress_172-16-100-196_443 {
    controls { forwarding }
    last-modified 2019-11-21:16:29:45
    partition k8s
    requires { http }
    rules {
        ingress_cafe.example.com_coffee_ingress_default_coffee-svc {
            actions {
                0 {
                    forward
                    select
                    pool ingress_default_coffee-svc
                }
            }
            conditions {
                0 {
                    http-host
                    host
                    values { cafe.example.com }
                }
                1 {
                    http-uri
                    path-segment
                    index 1
                    values { coffee }
                }
            }
            ordinal 1
        }
        ingress_cafe.example.com_tea_ingress_default_tea-svc {
            actions {
                0 {
                    forward
                    select
                    pool ingress_default_tea-svc
                }
            }
            conditions {
                0 {
                    http-host
                    host
                    values { cafe.example.com }
                }
                1 {
                    http-uri
                    path-segment
                    index 1
                    values { tea }
                }
            }
        }
        ingress_f5demoapp.com__ingress_default_f5demoapp-svc {
            actions {
                0 {
                    forward
                    select
                    pool ingress_default_f5demoapp-svc
                }
            }
            conditions {
                0 {
                    http-host
                    host
                    values { f5demoapp.com }
                }
            }
            ordinal 2
        }
    }
    status legacy
    strategy /Common/first-match
}
```
Check the Ingress resources:
```
[root@k8s-master-v1-16 complete-example]# kubectl describe ingress
Name:             cafe-ingress
Namespace:        default
Address:          172.16.100.196
Default backend:  default-http-backend:80 (<none>)
TLS:
  cafe-secret terminates cafe.example.com
Rules:
  Host              Path     Backends
  ----              ----     --------
  cafe.example.com
                    /tea     tea-svc:80 (10.244.2.12:80)
                    /coffee  coffee-svc:80 (10.244.2.11:80)
Annotations:
  virtual-server.f5.com/partition:     k8s
  ingress.kubernetes.io/allow-http:    false
  ingress.kubernetes.io/ssl-redirect:  true
  kubectl.kubernetes.io/last-applied-configuration:
    {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/allow-http":"false","ingress.kubernetes.io/ssl-redirect":"true","virtual-server.f5.com/balance":"round-robin","virtual-server.f5.com/health":"[\n {\n \"path\": \"cafe.example.com/coffee\",\n \"send\": \"GET /coffee HTTP/1.0\",\n \"recv\": \"coffee\",\n \"interval\": 5,\n \"timeout\": 10\n }, {\n \"path\": \"cafe.example.com/tea\",\n \"send\": \"GET /tea HTTP/1.0\",\n \"recv\": \"tea\",\n \"interval\": 5,\n \"timeout\": 10\n }\n]\n","virtual-server.f5.com/http-port":"443","virtual-server.f5.com/ip":"controller-default","virtual-server.f5.com/partition":"k8s"},"name":"cafe-ingress","namespace":"default"},"spec":{"rules":[{"host":"cafe.example.com","http":{"paths":[{"backend":{"serviceName":"tea-svc","servicePort":80},"path":"/tea"},{"backend":{"serviceName":"coffee-svc","servicePort":80},"path":"/coffee"}]}}],"tls":[{"hosts":["cafe.example.com"],"secretName":"cafe-secret"}]}}
  virtual-server.f5.com/balance:  round-robin
  virtual-server.f5.com/health:
    [
      {
        "path": "cafe.example.com/coffee",
        "send": "GET /coffee HTTP/1.0",
        "recv": "coffee",
        "interval": 5,
        "timeout": 10
      }, {
        "path": "cafe.example.com/tea",
        "send": "GET /tea HTTP/1.0",
        "recv": "tea",
        "interval": 5,
        "timeout": 10
      }
    ]
  virtual-server.f5.com/http-port:  443
  virtual-server.f5.com/ip:         controller-default
Events:
  Type    Reason              Age                   From                      Message
  ----    ------              ----                  ----                      -------
  Normal  MonitorRuleNotUsed  60m (x287 over 67m)   k8s-bigip-ctlr            Health Monitor path '' does not match any Ingress paths.
  Normal  AddedOrUpdated      38m (x13 over 3h30m)  nginx-ingress-controller  Configuration for default/cafe-ingress was added or updated
  Normal  ResourceConfigured  6s (x580 over 67m)    k8s-bigip-ctlr            Created a ResourceConfig 'ingress_172-16-100-196_443' for the Ingress.

Name:             f5demoapp-ingress
Namespace:        default
Address:          172.16.100.196
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host           Path  Backends
  ----           ----  --------
  f5demoapp.com
                 /     f5demoapp-svc:80 (10.244.2.6:80)
Annotations:
  virtual-server.f5.com/balance:    round-robin
  virtual-server.f5.com/http-port:  443
  virtual-server.f5.com/ip:         controller-default
  virtual-server.f5.com/partition:  k8s
Events:
  Type    Reason          Age  From                      Message
  ----    ------          ---- ----                      -------
  Normal  AddedOrUpdated  10m  nginx-ingress-controller  Configuration for default/f5demoapp-ingress was added or updated
```
- Next, please follow https://github.com/nginxinc/kubernetes-ingress/tree/master/examples to work through all the examples and the advanced CRD examples.
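As a taste of the CRDs we registered earlier, a VirtualServer for the cafe services might look like this (a sketch only; the schema shown is the k8s.nginx.org/v1 form from the docs, and older controller versions may require v1alpha1 with a slightly different route syntax):
```
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea
  - path: /coffee
    action:
      pass: coffee
```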