
Deploying OpenYurt on Alibaba Cloud

Install kubelet, kubeadm, and kubectl

Add the k8s yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl-1.22.15 kubeadm-1.22.15 kubelet-1.22.15

systemctl enable kubelet

Note that docker and kubelet must be configured with the same cgroup driver, or kubelet will not start. Choose either systemd or cgroupfs as appropriate, but the two must match.

# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# /etc/sysconfig/kubelet (RHEL/CentOS) or /etc/default/kubelet (Debian/Ubuntu)
# must use the same driver as docker, systemd in this example:
echo 'KUBELET_EXTRA_ARGS=--cgroup-driver=systemd' > /etc/sysconfig/kubelet
echo 'KUBELET_EXTRA_ARGS=--cgroup-driver=systemd' > /etc/default/kubelet

After making them consistent, reload:

systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
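To confirm the two drivers really do match, a small helper can extract the configured driver from either file. This is a sketch; the paths in the usage comment are the defaults mentioned above, so adjust them if yours differ.

```shell
# Extract the cgroup driver name from a docker daemon.json or a kubelet
# sysconfig file. Matches both "native.cgroupdriver=..." and "--cgroup-driver=...".
get_driver() {
  grep -oh 'cgroup.\?driver=[a-z]*' "$1" | head -n1 | cut -d= -f2
}

# Usage:
#   [ "$(get_driver /etc/docker/daemon.json)" = "$(get_driver /etc/sysconfig/kubelet)" ] \
#     && echo "cgroup drivers match"
```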

Initialize the k8s cluster

Because the internal NIC of an Alibaba Cloud server has no corresponding public IP address, create a virtual NIC for the public IP (x.x.x.x below):

ifconfig eth0:0 x.x.x.x up
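Note that an ifconfig alias like this does not survive a reboot. On CentOS 7 (assumed here from the el7 yum repo used above), one way to persist it is a network-scripts ifcfg file; this is a sketch, with x.x.x.x the same public IP:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0:0  (assumption: CentOS 7 network-scripts)
DEVICE=eth0:0
IPADDR=x.x.x.x
NETMASK=255.255.255.255
ONBOOT=yes
```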

Adjust kernel parameters

cat <<EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
sysctl --system

Initialize

kubeadm init \
--apiserver-advertise-address=a.a.a.a \
--apiserver-cert-extra-sans=127.0.0.1 \
--apiserver-cert-extra-sans=x.x.x.x \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.22.15 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

Deploy a network plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Check whether the deployment succeeded
kubectl get pods -n kube-system

Deploy Kuboard

kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml

# Check whether the deployment succeeded
kubectl get pods -n kuboard

Label the cloud node

kubectl taint node ai-01 node-role.kubernetes.io/master:NoSchedule-

kubectl label node ai-01 openyurt.io/is-edge-worker=false

Adjust Kube-Controller-Manager

Edit /etc/kubernetes/manifests/kube-controller-manager.yaml and add the -nodelifecycle argument:


- --controllers=*,bootstrapsigner,tokencleaner
# change to
- --controllers=-nodelifecycle,*,bootstrapsigner,tokencleaner
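The same edit can be applied non-interactively. This is a hedged sketch of a helper doing the substitution with sed; the path in the usage comment is the default static-pod manifest location named above.

```shell
# Disable the nodelifecycle controller by rewriting the --controllers flag
# in place. Safe to re-run: once rewritten, the pattern no longer matches.
disable_nodelifecycle() {  # usage: disable_nodelifecycle <manifest-path>
  sed -i 's/--controllers=\*,/--controllers=-nodelifecycle,*,/' "$1"
}

# disable_nodelifecycle /etc/kubernetes/manifests/kube-controller-manager.yaml
```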

Deploy the dedicated DNS for Yurt-Tunnel

git clone https://github.com/openyurtio/openyurt.git
cd openyurt
git checkout v1.1.0
kubectl apply -f config/setup/yurt-tunnel-dns.yaml

# Check whether the deployment succeeded
kubectl get svc -n kube-system | grep yurt-tunnel-dns

Adjust Kube-apiserver

Edit /etc/kubernetes/manifests/kube-apiserver.yaml:

  1. Set dnsPolicy to "None".
  2. Add a dnsConfig section, with nameservers set to the clusterIP of the yurt-tunnel-dns service.
  3. Change the startup flag to --kubelet-preferred-address-types=Hostname,InternalIP,ExternalIP so that kube-apiserver prefers the Hostname when reaching a kubelet.
apiVersion: v1
kind: Pod
...
spec:
  dnsPolicy: "None" # 1. set dnsPolicy to None
  dnsConfig: # 2. add a dnsConfig section
    nameservers:
    - 10.99.13.133 # replace with the clusterIP of the yurt-tunnel-dns service
    searches:
    - kube-system.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    options:
    - name: ndots
      value: "5"
  containers:
  - command:
    - kube-apiserver
    ...
    - --kubelet-preferred-address-types=Hostname,InternalIP,ExternalIP # 3. put Hostname first
    ...
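The real clusterIP can be looked up and substituted for the sample address in one step. This is a sketch of a hypothetical helper; the sed target is the sample IP 10.99.13.133 shown above.

```shell
# Replace the sample nameserver in a manifest with the real clusterIP,
# which the caller can look up with:
#   kubectl get svc yurt-tunnel-dns -n kube-system -o jsonpath='{.spec.clusterIP}'
set_tunnel_dns_ip() {  # usage: set_tunnel_dns_ip <manifest-path> <clusterIP>
  sed -i "s/10\.99\.13\.133/$2/" "$1"
}

# set_tunnel_dns_ip /etc/kubernetes/manifests/kube-apiserver.yaml "$dns_ip"
```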

Adjust CoreDNS

Add an annotation so that Yurthub's edge data-filtering mechanism in OpenYurt provides service topology: DNS queries from a node are only sent to the CoreDNS instance within the same node pool.

# Use OpenYurt's endpoint filtering
kubectl annotate svc kube-dns -n kube-system openyurt.io/topologyKeys='openyurt.io/nodepool'

Deploy CoreDNS as a DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      containers:
      - args:
        - -conf
        - /etc/coredns/Corefile
        image: registry.aliyuncs.com/google_containers/coredns:v1.8.4
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /etc/coredns
          name: config-volume
          readOnly: true
      dnsPolicy: Default
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - operator: Exists
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          name: coredns
        name: config-volume

Scale down the CoreDNS Deployment replicas

kubectl scale --replicas=0 deployment/coredns -n kube-system

Adjust Kube-Proxy

Comment out clientConnection.kubeconfig in the config.conf file; after the change it looks like this:

kubectl edit cm -n kube-system kube-proxy

apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      #kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s

Install Helm

wget https://get.helm.sh/helm-v3.12.2-linux-amd64.tar.gz
tar -zxvf helm-v3.12.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin

Deploy yurt-app-manager

git clone https://github.com/openyurtio/openyurt-helm.git
cd openyurt-helm
git checkout openyurt-1.1.0
helm install yurt-app-manager -n kube-system ./charts/yurt-app-manager/

# Make sure the yurt-app-manager pod and service have been created successfully
kubectl get pod,svc -n kube-system | grep yurt-app-manager

Create a node pool

cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: master
spec:
  type: Cloud
EOF

Add the cloud node to the node pool

kubectl label node ai-01 apps.openyurt.io/desired-nodepool=master

Deploy the OpenYurt components

Because the cloud node and the edge nodes are not on the same network plane, the tunnel-related parameters in openyurt-helm/charts/openyurt/values.yaml must be set manually:
yurtTunnelAgent.parameters.tunnelserverAddr="x.x.x.x:31008"
yurtTunnelServer.parameters.certIps="x.x.x.x"
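For reference, the two settings correspond to an excerpt of the chart's values.yaml along these lines (the exact layout is an assumption; x.x.x.x is the cloud node's public IP and 31008 the tunnel server NodePort):

```yaml
# Assumed shape of openyurt-helm/charts/openyurt/values.yaml after the edit
yurtTunnelAgent:
  parameters:
    tunnelserverAddr: "x.x.x.x:31008"
yurtTunnelServer:
  parameters:
    certIps: "x.x.x.x"
```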

helm install openyurt ./charts/openyurt/ -n kube-system

# Confirm the installation succeeded
helm list -A

Join edge nodes

Install yurtadm

wget https://github.com/openyurtio/openyurt/releases/download/v1.1.0/yurtadm
chmod +x yurtadm
mv yurtadm /usr/local/bin/

Join
Watch out for this pitfall: Kubernetes does not allow underscores in node hostnames, and the join will fail otherwise, so change the hostname first.

yurtadm join x.x.x.x:6443 --token=aeqnan.hpwqepdey7rc43s4 --node-type=edge --discovery-token-unsafe-skip-ca-verification --v=5
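The hostname pitfall above can be checked up front. This is a sketch of a small pre-flight helper; it only checks for underscores, not the full DNS-label rules.

```shell
# Returns success if the name can be used as a k8s node name (no underscores).
valid_node_name() {
  case "$1" in
    *_*) return 1 ;;   # underscore present: invalid
    *)   return 0 ;;
  esac
}

# valid_node_name "$(hostname)" || echo "rename this host before running yurtadm join"
```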

Manage edge nodes with node pools

Edge nodes that can reach one another over the network can be added to the same node pool.

# Create the node pool
cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: <nodepool-name>
spec:
  type: Edge
EOF

# Add a node to the pool
kubectl label node <node-name> apps.openyurt.io/desired-nodepool=<nodepool-name>