[Repost] Automated Service Discovery with Consul and Dynamic F5 Configuration Updates


Update, July 20th, 2015: I gave a tech talk at the Airbnb office related to this topic; you can find the presentation on my GitHub.

These days, as microservices become big, it is evident that traditional architectural approaches must change. As the demand for agile software feature deployment grows, application-affine services such as load balancing need to be distributed more automatically. Traditional, manual configuration of load balancing services is becoming less viable, since it is a major roadblock for organizations trying to deploy new application services or bring new features to today's rapidly changing software market.

In my quest to develop a solution to automate Virtual IP deployment on F5 BIG-IP platforms, I have been exercising the iControl REST API (available since the 11.5.1 code) heavily, and I am really happy and pleased with it. It is powerful and allows safe and almost complete interaction with the chassis configuration; I have tested it against the LTM and AFM modules, with a Ruby Sinatra API as the user interface.

On the other hand, the approach with HashiCorp tools brings auto-scaling configuration management much closer to the DevOps world, extending all the benefits of this hardware-based load balancer in the same way one would with ha-proxy or keepalived software solutions. BIG-IP has proven stable and robust, with good support and nice features such as ASIC-based routing, which performs very well with FastL4 profiles (no TCP proxying).

One of the things I love about BIG-IP platforms is not only that they are based on Linux, but also that they do not do anything silly to lock the admin out of the system shell. Another nice feature is the concept of a Partition, which lets you create namespaces with their own objects, roles, and, last but not least, configuration files!

But the real BIG WEAPON we are using today is consul-template, which queries a Consul instance and updates a local config file on the filesystem based on Go templates, and can also execute arbitrary commands whenever the template output changes.

Consul provides service discovery, health checking, and a key/value registry for datacenter nodes, and can be queried via a REST API or DNS. There is a good tutorial on the HashiCorp Consul website about installing a Vagrant Consul cluster if you are not familiar with it and want to poke around.
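As an aside on the DNS interface, here is a minimal sketch of a lookup against a local agent, assuming Consul's default DNS port (8600) and the "web" service registered later in this post:

# Ask the local Consul agent for healthy instances of the "web" service via DNS SRV records
dig @127.0.0.1 -p 8600 web.service.consul SRV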

Enough chit-chat, let's get hacking!

First, create a separate partition on the BIG-IP; I am calling my new partition consul in my BIG-IP lab. In my case, the load balancer config file for that partition will be stored at /config/partitions/consul/bigip.conf.
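For reference, a minimal sketch of creating that partition from the TMOS shell; the only assumption here is the partition name used throughout this post:

# Create the "consul" partition on the BIG-IP
tmsh create auth partition consul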

Consider a simple scenario where your datacenter has a Consul server and a pool of application servers running Consul agents, each publishing a web app on TCP port 80 along with a health check URL for the app:

{
  "service": {
    "name": "web",
    "id": "web",
    "tags": [
      "production"
    ],
    "port": 80,
    "check": {
      "id": "api",
      "name": "HTTP API on port 80",
      "http": "http://localhost:80/health",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
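A minimal sketch of how an application node might load this service definition, assuming it is saved as /etc/consul.d/service_web.json and that the cluster member used later in this post (consul1.nethero.org) is reachable:

# Start the Consul agent on an app server and register the web service
consul agent -data-dir=/var/lib/consul -config-dir=/etc/consul.d -retry-join=consul1.nethero.org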

At this point Consul can provide reliable information about our application pool's health. If a node fails or the health check times out, Consul removes the service instance from the healthy pool. Here is an example of a raw query against my Consul server:
curl http://localhost:8500/v1/catalog/service/web

[
  {
    "Node": "n1.nethero.org",
    "Address": "172.20.20.10",
    "ServiceID": "web",
    "ServiceName": "web",
    "ServiceTags": [
      "production"
    ],
    "ServiceAddress": "",
    "ServicePort": 80
  },
  {
    "Node": "n2.nethero.org",
    "Address": "172.20.20.11",
    "ServiceID": "web",
    "ServiceName": "web",
    "ServiceTags": [
      "production"
    ],
    "ServiceAddress": "",
    "ServicePort": 80
  }
]
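Note that the /v1/catalog endpoint above lists all registered instances; if you only want instances that are currently passing their health checks, the health endpoint can be filtered instead, as in this sketch:

# Return only instances of "web" whose health checks are passing
curl http://localhost:8500/v1/health/service/web?passing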

This output provides all the information we need to generate BIG-IP node and pool objects, so we are ready to set up consul-template to update and reload our BIG-IP consul partition config on every event Consul detects: a node going down, a service failing its health check, or new servers (or containers) being brought into rotation in the Consul service pool.

Next, download the Consul agent and consul-template binaries for the Linux AMD64 platform and place both in the F5 BIG-IP /sbin folder (see the download sketch after the config example below). Make sure you set up /etc/consul.conf.json on the BIG-IP so the agent joins the Consul cluster, as in the following example:

{
  "datacenter": "netheroDC",
  "data_dir": "/var/lib/consul",
  "statsd_addr": "127.0.0.1",
  "bind_addr": "0.0.0.0",
  "disable_remote_exec": true,
  "dns_config": {
    "enable_truncate": false
  }
}
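As a rough sketch of fetching the binaries mentioned above from the HashiCorp release site; the version numbers are placeholders, substitute whatever release matches your environment:

# Example only: the versions below are placeholders
CONSUL_VER=1.0.0
CT_VER=0.19.4
curl -LO https://releases.hashicorp.com/consul/${CONSUL_VER}/consul_${CONSUL_VER}_linux_amd64.zip
curl -LO https://releases.hashicorp.com/consul-template/${CT_VER}/consul-template_${CT_VER}_linux_amd64.zip
unzip consul_${CONSUL_VER}_linux_amd64.zip -d /sbin
unzip consul-template_${CT_VER}_linux_amd64.zip -d /sbin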

Also create the /etc/consul.d and /etc/consul-templates directories, and place the following template file at /etc/consul-templates/bigip.ctmpl:

{{range service "web"}}
ltm node /consul/{{.Node}} {
    address {{.Address}}
}
{{end}}
ltm pool /consul/dynamic-test {
    description "Last change by consul-template => {{timestamp}}"
    members {
        {{range service "web"}}
        /consul/{{.Node}}:{{.Port}} {
            address {{.Address}}
        }
        {{end}}
    }
    monitor /Common/tcp
}
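Before wiring the template into the startup sequence, you can check that it renders the way you expect; a minimal sketch, assuming the local agent is already joined to the cluster:

# Render the template once to stdout without writing the destination file
consul-template -consul 127.0.0.1:8500 -template "/etc/consul-templates/bigip.ctmpl" -dry -once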

Now create a /sbin/reload-consul.sh file with this content:

#!/bin/bash
echo "$(date) - config sync started"
tmsh load sys config partitions { consul }
echo "$(date) - config sync completed"
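One detail worth making explicit: the script has to be executable so that consul-template can invoke it after each render:

# Make the reload hook executable
chmod +x /sbin/reload-consul.sh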

Finally, add these lines to your /config/startup:

#Consul agent
echo "Starting consul-agent..."
/sbin/consul agent -config-file=/etc/consul.conf.json -config-dir=/etc/consul.d -pid-file=/var/run/consul.pid -retry-join=consul1.nethero.org -retry-join=consul2.nethero.org -retry-max=0 >>/var/log/consul-agent.log &
sleep 5
#Consul template
echo "Starting consul-template..." ;
/sbin/consul-template -consul 127.0.0.1:8500 -template "/etc/consul-templates/bigip.ctmpl:/config/partitions/consul/bigip.conf:/sbin/reload-consul.sh" 2>&1 >>/var/log/consul-template.log &
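Once the box reboots (or you run the same commands by hand), a couple of quick checks can confirm everything is wired up; a sketch, with nothing specific here beyond the paths used above:

# Confirm the BIG-IP's agent has joined the Consul cluster
consul members
# Inspect the config file that consul-template renders for the consul partition
cat /config/partitions/consul/bigip.conf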

And the magic happens when consul-template starts.


Yay! Our F5 BIG-IP is being auto-configured based on the Consul service catalog. As soon as you bring up more app servers and their Consul agents publish their services, they will be automatically added to our load balancer pool. Beautiful, isn't it?

The good news is that BIG-IP parses the config before applying it, so if a change breaks your template, BIG-IP won't overwrite the running config with the faulty new one until you provide a valid config. Safety is priceless! Here is the BIG-IP config file generated by consul-template:

ltm node /consul/n1.nethero.org {
    address 172.20.20.10
}
ltm node /consul/n2.nethero.org {
    address 172.20.20.11
}
ltm pool /consul/dynamic-test {
    description "Last change by consul-template => 2015-05-02T21:11:57Z"
    members {
        /consul/n1.nethero.org:80 {
            address 172.20.20.10
        }
        /consul/n2.nethero.org:80 {
            address 172.20.20.11
        }
    }
    monitor /Common/tcp
}
Original source: http://www.nethero.org/
This work is licensed under a Creative Commons Attribution 4.0 International License.
Tags: consul, service discovery, automated configuration
Last updated: October 9, 2017
