
Cloud Controller Manager

Cloud Controller Manager (CCM) integrates Kubernetes with Alibaba Cloud infrastructure products such as CLB (formerly SLB), NLB, and VPC.

Create a user and grant permissions

CCM calls Alibaba Cloud APIs, so you need to create a user and grant it the required permissions; the complete permission policy is referenced here.

After granting the permissions, record the user's AccessKey ID and AccessKey Secret (AK/SK); they are needed later in the CCM configuration file.

Create the ConfigMap used by CCM

Save the AccessKey of your Alibaba Cloud account to environment variables:

export ACCESS_KEY_ID=LTAI********************
export ACCESS_KEY_SECRET=HAeS**************************

Run the following script to create the ConfigMap. Replace the region value in the script with your actual region first:

configmap-ccm.sh
#!/bin/bash

## create ConfigMap kube-system/cloud-config for CCM.
accessKeyIDBase64=`echo -n "$ACCESS_KEY_ID" |base64 -w 0`
accessKeySecretBase64=`echo -n "$ACCESS_KEY_SECRET"|base64 -w 0`

cat <<EOF >cloud-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-config
  namespace: kube-system
data:
  cloud-config.conf: |-
    {
        "Global": {
            "accessKeyID": "$accessKeyIDBase64",
            "accessKeySecret": "$accessKeySecretBase64",
            "region": "cn-hangzhou"
        }
    }
EOF

kubectl create -f cloud-config.yaml
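Because the script base64-encodes the keys with no visual feedback, a quick local round-trip check (a sketch using a dummy key value, independent of the cluster) can confirm the encoded value decodes back to the original:

```shell
# Sanity check: the encoded value must decode back to the original key.
# ACCESS_KEY_ID here is a dummy example value, not a real key.
ACCESS_KEY_ID="LTAIexample"
stored=$(echo -n "$ACCESS_KEY_ID" | base64 -w 0)
decoded=$(echo "$stored" | base64 -d)
[ "$decoded" = "$ACCESS_KEY_ID" ] && echo "round-trip OK"
```

You can also confirm the object landed in the cluster with `kubectl -n kube-system get configmap cloud-config`.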

Deploy CCM

Create the deployment. The complete deployment file is located at cloud-controller-manager.yml.

The file content is shown below. You need to replace ${ImageVersion} and ${ClusterCIDR}:

  • Get the ImageVersion from the CCM changelog. For details, see Cloud Controller Manager.
  • Get the ClusterCIDR by running kubectl cluster-info dump | grep -m1 cluster-cidr.
ccm.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:cloud-controller-manager
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - get
      - list
      - update
      - create
  - apiGroups:
      - ""
    resources:
      - persistentvolumes
      - services
      - secrets
      - endpoints
      - configmaps
      - serviceaccounts
      - pods
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
      - delete
      - patch
      - update
  - apiGroups:
      - ""
    resources:
      - services/status
    verbs:
      - update
      - patch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
      - update
  - apiGroups:
      - ""
    resources:
      - events
      - endpoints
    verbs:
      - create
      - patch
      - update
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - get
      - list
      - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
  - kind: ServiceAccount
    name: cloud-controller-manager
    namespace: kube-system
#---
#kind: ClusterRoleBinding
#apiVersion: rbac.authorization.k8s.io/v1
#metadata:
#  name: system:shared-informers
#roleRef:
#  apiGroup: rbac.authorization.k8s.io
#  kind: ClusterRole
#  name: system:cloud-controller-manager
#subjects:
#  - kind: ServiceAccount
#    name: shared-informers
#    namespace: kube-system
#---
#kind: ClusterRoleBinding
#apiVersion: rbac.authorization.k8s.io/v1
#metadata:
#  name: system:cloud-node-controller
#roleRef:
#  apiGroup: rbac.authorization.k8s.io
#  kind: ClusterRole
#  name: system:cloud-controller-manager
#subjects:
#  - kind: ServiceAccount
#    name: cloud-node-controller
#    namespace: kube-system
#---
#kind: ClusterRoleBinding
#apiVersion: rbac.authorization.k8s.io/v1
#metadata:
#  name: system:pvl-controller
#roleRef:
#  apiGroup: rbac.authorization.k8s.io
#  kind: ClusterRole
#  name: system:cloud-controller-manager
#subjects:
#  - kind: ServiceAccount
#    name: pvl-controller
#    namespace: kube-system
#---
#kind: ClusterRoleBinding
#apiVersion: rbac.authorization.k8s.io/v1
#metadata:
#  name: system:route-controller
#roleRef:
#  apiGroup: rbac.authorization.k8s.io
#  kind: ClusterRole
#  name: system:cloud-controller-manager
#subjects:
#  - kind: ServiceAccount
#    name: route-controller
#    namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: cloud-controller-manager
    tier: control-plane
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cloud-controller-manager
      tier: control-plane
  template:
    metadata:
      labels:
        app: cloud-controller-manager
        tier: control-plane
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: cloud-controller-manager
      tolerations:
        - effect: NoSchedule
          operator: Exists
          key: node-role.kubernetes.io/master
        - effect: NoSchedule
          operator: Exists
          key: node.cloudprovider.kubernetes.io/uninitialized
      # nodeSelector:
      #   node-role.kubernetes.io/master: ""
      containers:
        - command:
            - /cloud-controller-manager
            - --leader-elect=true
            - --cloud-provider=alicloud
            - --use-service-account-credentials=true
            - --cloud-config=/etc/kubernetes/config/cloud-config.conf
            - --configure-cloud-routes=true
            - --route-reconciliation-period=3m
            - --leader-elect-resource-lock=leases
            # replace ${ClusterCIDR} with your own cluster cidr
            # example: 172.16.0.0/16
            - --cluster-cidr=${ClusterCIDR}
          # replace ${ImageVersion} with the latest release version
          # example: v2.1.0
          image: registry.cn-hangzhou.aliyuncs.com/acs/cloud-controller-manager-amd64:${ImageVersion}
          livenessProbe:
            failureThreshold: 8
            httpGet:
              host: 127.0.0.1
              path: /healthz
              port: 10258
              scheme: HTTP
            initialDelaySeconds: 15
            timeoutSeconds: 15
          name: cloud-controller-manager
          resources:
            requests:
              cpu: 200m
          volumeMounts:
            - mountPath: /etc/kubernetes/
              name: k8s
            - mountPath: /etc/ssl/certs
              name: certs
            - mountPath: /etc/pki
              name: pki
            - mountPath: /etc/kubernetes/config
              name: cloud-config
      hostNetwork: true
      volumes:
        - hostPath:
            path: /etc/kubernetes
          name: k8s
        - hostPath:
            path: /etc/ssl/certs
          name: certs
        - hostPath:
            path: /etc/pki
          name: pki
        - configMap:
            defaultMode: 420
            items:
              - key: cloud-config.conf
                path: cloud-config.conf
            name: cloud-config
          name: cloud-config
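As a sketch of the substitution step, the two placeholders can be filled in with sed before applying. The sample lines and both values below are examples only; on a real cluster, take ClusterCIDR from the kubectl cluster-info dump command above and ImageVersion from the CCM changelog:

```shell
IMAGE_VERSION="v2.1.0"        # example; pick the latest release version
CLUSTER_CIDR="172.16.0.0/16"  # example; use your cluster's actual CIDR

# Two sample lines stand in for the full ccm.yaml here.
printf '%s\n' \
  '        - --cluster-cidr=${ClusterCIDR}' \
  '        image: registry.cn-hangzhou.aliyuncs.com/acs/cloud-controller-manager-amd64:${ImageVersion}' \
  > ccm.yaml

sed -e "s|\${ImageVersion}|${IMAGE_VERSION}|g" \
    -e "s|\${ClusterCIDR}|${CLUSTER_CIDR}|g" \
    ccm.yaml > ccm-rendered.yaml
cat ccm-rendered.yaml
# then: kubectl apply -f ccm-rendered.yaml
```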

Add a providerID to each node

For nodes to be added automatically as load balancer backends, each node needs a providerID. The following commands derive the current node's name from the instance metadata service and patch it in as the providerID:

NODE_NAME=$(echo `curl -s http://100.100.100.200/latest/meta-data/region-id`.`curl -s http://100.100.100.200/latest/meta-data/instance-id`)
kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"providerID\": \"${NODE_NAME}\"}}"
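The providerID is simply <region-id>.<instance-id>. A local sketch with dummy values (no metadata service needed) shows how the name and the JSON patch payload are composed; note the double quotes around the payload so the shell expands the variable:

```shell
REGION_ID="cn-hangzhou"     # example; really from .../meta-data/region-id
INSTANCE_ID="i-bp1example"  # example; really from .../meta-data/instance-id
NODE_NAME="${REGION_ID}.${INSTANCE_ID}"
PATCH="{\"spec\":{\"providerID\": \"${NODE_NAME}\"}}"
echo "$PATCH"
# kubectl patch node "${NODE_NAME}" -p "$PATCH"
```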

Verify the result

Create a test pod and declare a Service of type LoadBalancer, then check whether a load balancer is created and assigned automatically. For more options, see Configure CLB (classic load balancer) via annotations.

test-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2

Check

Watch the CCM logs to confirm that the load balancer is created automatically; CCM also adds the backend nodes, listeners, and so on for it.

The EXTERNAL-IP below is the IP address of the load balancer; you can access it to verify that the load balancer works.

~ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      172.16.0.1      <none>          443/TCP        13d
nginx        LoadBalancer   172.31.92.246   10.92.162.233   80:31670/TCP   34s
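To probe the listener, the EXTERNAL-IP can be extracted programmatically. The sketch below parses the sample output line shown above; on a live cluster, `kubectl get svc nginx --no-headers` produces the same columns:

```shell
# SVC_LINE is the sample `kubectl get svc` row from the output above.
SVC_LINE='nginx   LoadBalancer   172.31.92.246   10.92.162.233   80:31670/TCP   34s'
EXTERNAL_IP=$(echo "$SVC_LINE" | awk '{print $4}')  # 4th column is EXTERNAL-IP
echo "$EXTERNAL_IP"
# curl -sI "http://${EXTERNAL_IP}"   # expect nginx response headers
```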
