Deploy a highly available Kubernetes v1.19.0 cluster on arm64 with a single command

A Kubernetes HA installer that can only be described as silky smooth: one command, offline installation with all dependencies included, in-kernel load balancing with no haproxy or keepalived, written in pure golang, 99-year certificates, and support for v1.16, v1.17, v1.18 and v1.19.

Kubernetes
By Louis, published December 6, 2020

[TOC]

One binary tool plus one resource package, with no dependency on heavyweight tools such as haproxy, keepalived or ansible: a single command builds a highly available Kubernetes cluster. Single node or cluster, single master or multi-master, production or test, all are well supported. Simple does not mean stripped down; the full range of kubeadm configuration is still supported. Get sealos and the arm64 package now.

sealos features and advantages:

  • arm support
  • Offline installation; the tool and the resource package (binaries, config files, images, yaml files, etc.) are separate, so switching versions only means swapping in a different offline package
  • 100-year certificates
  • Simple to use
  • Custom configuration supported
  • In-kernel load balancing, extremely stable; because it is simple, troubleshooting is extremely simple too
  • No dependency on ansible, haproxy or keepalived: a single binary with zero dependencies
  • Resource packages are hosted on Alibaba Cloud OSS, so download speed is no longer a worry
  • Apps such as dashboard, ingress and prometheus are packaged offline as well and installed with a single command (see the sketch below)
  • One-command etcd backup (using the native etcd API), with support for uploading to OSS for off-site backup; users do not need to worry about the details
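For reference, installing an app package and backing up etcd look roughly like the following. This is only a hedged sketch: the dashboard package path and the exact sealos install / sealos etcd save flags are assumptions based on sealos 3.x and may differ in your version, so check sealos install -h and sealos etcd save -h first.

# Install an offline app package (the package path here is a placeholder)
$ sealos install --pkg-url /root/dashboard.tar

# Back up etcd via the built-in subcommand (flags vary by version; see sealos etcd save -h)
$ sealos etcd save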

Quick start


Notes

  1. The time on all servers must be synchronized (see the snippet below)
  2. Hostnames must be unique across all servers
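A minimal way to satisfy both requirements on CentOS, assuming chrony is available in your repositories; adjust the NTP tooling and hostnames to your own environment:

# On every server: set a unique hostname (master0 / master1 / master2 / node0 in this tutorial)
$ hostnamectl set-hostname master0

# On every server: install and enable chrony to keep the clocks in sync
$ yum install -y chrony
$ systemctl enable --now chronyd
$ chronyc sources   # verify that a time source is reachable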

Recommendations

Supported systems: CentOS 7.6 or later, Ubuntu 16.04 or later; kernel 4.14 or later is recommended

Recommended configuration: CentOS 7.8
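You can quickly check whether a server meets these requirements with:

$ cat /etc/os-release   # distribution and version
$ uname -r              # kernel version, 4.14 or later is recommended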

Environment

Hostname   IP address
master0    192.168.0.2
master1    192.168.0.3
master2    192.168.0.4
node0      192.168.0.5

Server password: 123456
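sealos logs in to every machine over SSH as root using this password. sealos 3.x can also authenticate with an SSH private key instead of a password; the --user and --pk flags below are assumptions from that version, so verify them with sealos init -h before relying on them.

# Assumed alternative to --passwd: key-based SSH authentication
$ sealos init --user root --pk /root/.ssh/id_rsa \
    --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 \
    --node 192.168.0.5 \
    --pkg-url /root/kube1.19.0-arm64.tar.gz \
    --version v1.19.0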

Kubernetes HA installation tutorial (arm64)

Just prepare your arm64 servers and run the following commands on any one of them.

# Download and install sealos. sealos is a golang binary: just download it and copy it into the bin directory. It can also be downloaded from the release page.
wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/latest/sealos-arm64 && \
    chmod +x sealos-arm64 && mv sealos-arm64 /usr/bin/sealos

# Download the offline resource package. The arm64 version is not free; please purchase it yourself.

# Install a Kubernetes cluster with three masters
$ sealos init --passwd 123456 \
    --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 \
    --node 192.168.0.5 \
    --pkg-url /root/kube1.19.0-arm64.tar.gz \
    --version v1.19.0

Parameter reference

Parameter   Meaning                                                                   Example
passwd      server password                                                           123456
master      IP address of a k8s master node                                           192.168.0.2
node        IP address of a k8s worker node                                           192.168.0.3
pkg-url     location of the offline resource package; a local path or a remote URL    /root/kube1.19.0-arm64.tar.gz
version     kubernetes version matching the resource package                          v1.19.0
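Since --pkg-url accepts either a local path or a remote address, the package does not have to be copied to the server first; the URL below is just a placeholder for wherever you host the tar.gz:

$ sealos init --passwd 123456 \
    --master 192.168.0.2 --master 192.168.0.3 --master 192.168.0.4 \
    --node 192.168.0.5 \
    --pkg-url https://example.com/kube1.19.0-arm64.tar.gz \
    --version v1.19.0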

Adding masters

$ sealos join --master 192.168.0.6 --master 192.168.0.7
$ sealos join --master 192.168.0.6-192.168.0.9  # or a range of consecutive IPs

Adding nodes

$ sealos join --node 192.168.0.6 --node 192.168.0.7
$ sealos join --node 192.168.0.6-192.168.0.9  # or a range of consecutive IPs
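Removing nodes or tearing down the cluster is handled by the sealos clean subcommand in sealos 3.x; the flags below are assumptions from that version, so double-check them with sealos clean -h:

# Remove a single node or master (assumed flags)
$ sealos clean --node 192.168.0.6
$ sealos clean --master 192.168.0.7

# Tear down the entire cluster (assumed flag)
$ sealos clean --all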

init example

Deploy 3 masters and 1 node on arm64, using CentOS 7.8 as the base image.

$ sealos init --master 192.168.0.208 --master 192.168.0.143 --master 192.168.0.223     --node 192.168.0.233 --passwd 123456 --version v1.19.0 --pkg-url /tmp/kube1.19.0-arm64.tar.gz
...
20:18:26 [INFO] [ssh.go:51] [192.168.0.223:22] [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
20:18:26 [INFO] [ssh.go:51] [192.168.0.143:22] [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
20:18:27 [INFO] [ssh.go:51] [192.168.0.143:22] [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
20:18:27 [INFO] [ssh.go:51] [192.168.0.143:22] [control-plane] Creating static Pod manifest for "kube-controller-manager"
20:18:27 [INFO] [ssh.go:51] [192.168.0.143:22] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
20:18:27 [INFO] [ssh.go:51] [192.168.0.143:22] [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
20:18:37 [INFO] [ssh.go:51] [192.168.0.223:22] [etcd] Announced new etcd member joining to the existing etcd cluster
20:18:37 [INFO] [ssh.go:51] [192.168.0.223:22] [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
20:18:49 [INFO] [ssh.go:51] [192.168.0.223:22] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
20:18:56 [INFO] [ssh.go:51] [192.168.0.143:22] [etcd] Announced new etcd member joining to the existing etcd cluster
20:18:56 [INFO] [ssh.go:51] [192.168.0.143:22] [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
20:18:56 [INFO] [ssh.go:51] [192.168.0.223:22] [mark-control-plane] Marking the node sealos-0002 as control-plane by adding the label "node-role.kubernetes.io/master=''"
20:18:56 [INFO] [ssh.go:51] [192.168.0.223:22] 
20:18:56 [INFO] [ssh.go:58] [ssh][192.168.0.223:22] sed "s/192.168.0.208 apiserver.cluster.local/192.168.0.223 apiserver.cluster.local/g" -i /etc/hosts
20:18:57 [INFO] [ssh.go:58] [ssh][192.168.0.223:22] mkdir -p /root/.kube && cp -i /etc/kubernetes/admin.conf /root/.kube/config
20:18:57 [INFO] [ssh.go:58] [ssh][192.168.0.223:22] rm -rf /root/kube || :
20:18:58 [INFO] [ssh.go:51] [192.168.0.143:22] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
20:18:58 [INFO] [ssh.go:51] [192.168.0.143:22] [mark-control-plane] Marking the node sealos-0001 as control-plane by adding the label "node-role.kubernetes.io/master=''"
20:18:59 [INFO] [ssh.go:51] [192.168.0.143:22] 
20:18:59 [INFO] [ssh.go:58] [ssh][192.168.0.143:22] sed "s/192.168.0.208 apiserver.cluster.local/192.168.0.143 apiserver.cluster.local/g" -i /etc/hosts
20:18:59 [INFO] [ssh.go:58] [ssh][192.168.0.143:22] mkdir -p /root/.kube && cp -i /etc/kubernetes/admin.conf /root/.kube/config
20:18:59 [INFO] [ssh.go:58] [ssh][192.168.0.143:22] rm -rf /root/kube || :
20:19:00 [DEBG] [print.go:20] ==>SendPackage==>KubeadmConfigInstall==>InstallMaster0==>JoinMasters
20:19:00 [INFO] [ssh.go:13] [ssh][192.168.0.233:22] sealos route --host 192.168.0.233
20:19:00 [DEBG] [ssh.go:25] [ssh][192.168.0.233:22]command result is: ok

20:19:00 [INFO] [ssh.go:51] [192.168.0.233:22] 20:19:00 [WARN] [service.go:119] IsVirtualServerAvailable warn: virtual server is empty.
20:19:00 [INFO] [ssh.go:58] [ssh][192.168.0.233:22] kubeadm join 10.103.97.2:6443 --token dj481d.x6uuozqh8n5a9ppx --discovery-token-ca-cert-hash sha256:2777e62c68c5907e480b98be5d5668cc5a89a08eb47f70a7e04e2c2fddd5a9b2 -v 0
20:19:01 [INFO] [ssh.go:51] [192.168.0.233:22] [preflight] Running pre-flight checks
20:19:01 [INFO] [ssh.go:51] [192.168.0.233:22] 	[WARNING FileExisting-socat]: socat not found in system path
20:19:11 [INFO] [ssh.go:51] [192.168.0.233:22] [preflight] Reading configuration from the cluster...
20:19:11 [INFO] [ssh.go:51] [192.168.0.233:22] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
20:19:11 [INFO] [ssh.go:51] [192.168.0.233:22] [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
20:19:34 [INFO] [ssh.go:13] [ssh][192.168.0.233:22] mkdir -p /etc/kubernetes/manifests
20:19:34 [DEBG] [ssh.go:25] [ssh][192.168.0.233:22]command result is: 
20:19:34 [INFO] [scp.go:158] [ssh][192.168.0.233:22]transfer total size is: 0MB
20:19:34 [INFO] [ssh.go:58] [ssh][192.168.0.233:22] rm -rf /root/kube
20:19:35 [DEBG] [print.go:20] ==>SendPackage==>KubeadmConfigInstall==>InstallMaster0==>JoinMasters==>JoinNodes

After the installation completes, the pods are running normally.

[root@sealos-0001 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-84445dd79f-8sqcf   1/1     Running   0          2m32s
kube-system   calico-node-2g98f                          1/1     Running   0          2m19s
kube-system   calico-node-k44r8                          1/1     Running   0          82s
kube-system   calico-node-n2f4w                          1/1     Running   0          90s
kube-system   calico-node-r4vff                          1/1     Running   0          2m32s
kube-system   coredns-66bff467f8-nnng4                   1/1     Running   0          2m32s
kube-system   coredns-66bff467f8-xgk55                   1/1     Running   0          2m32s
kube-system   etcd-sealos                                1/1     Running   0          2m39s
kube-system   etcd-sealos-0001                           1/1     Running   0          2m
kube-system   etcd-sealos-0002                           1/1     Running   0          2m
kube-system   kube-apiserver-sealos                      1/1     Running   0          2m39s
kube-system   kube-apiserver-sealos-0001                 1/1     Running   1          115s
kube-system   kube-apiserver-sealos-0002                 1/1     Running   0          2m19s
kube-system   kube-controller-manager-sealos             1/1     Running   1          2m39s
kube-system   kube-controller-manager-sealos-0001        1/1     Running   0          56s
kube-system   kube-controller-manager-sealos-0002        1/1     Running   0          2m18s
kube-system   kube-proxy-8bbmq                           1/1     Running   0          90s
kube-system   kube-proxy-c4zvv                           1/1     Running   0          2m19s
kube-system   kube-proxy-ch52f                           1/1     Running   0          2m32s
kube-system   kube-proxy-s5rp4                           1/1     Running   0          82s
kube-system   kube-scheduler-sealos                      1/1     Running   1          2m39s
kube-system   kube-scheduler-sealos-0001                 1/1     Running   0          62s
kube-system   kube-scheduler-sealos-0002                 1/1     Running   0          2m18s
kube-system   kube-sealyun-lvscare-sealos-0003           1/1     Running   0          10s
[root@sealos-0001 ~]# kubectl get nodes 
NAME          STATUS   ROLES    AGE     VERSION
sealos        Ready    master   3m11s   v1.19.0
sealos-0001   Ready    master   2m38s   v1.19.0
sealos-0002   Ready    master   2m39s   v1.19.0
sealos-0003   Ready    <none>   102s    v1.19.0
[root@sealos-0001 ~]# kubectl get nodes -owide
NAME          STATUS   ROLES    AGE     VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                   KERNEL-VERSION              CONTAINER-RUNTIME
sealos        Ready    master   3m17s   v1.19.0   192.168.0.208    <none>        CentOS Linux 7 (AltArch)   4.18.0-80.7.2.el7.aarch64   docker://19.3.12
sealos-0001   Ready    master   2m44s   v1.19.0   192.168.0.143   <none>        CentOS Linux 7 (AltArch)   4.18.0-80.7.2.el7.aarch64   docker://19.3.12
sealos-0002   Ready    master   2m45s   v1.19.0   192.168.0.223   <none>        CentOS Linux 7 (AltArch)   4.18.0-80.7.2.el7.aarch64   docker://19.3.12
sealos-0003   Ready    <none>   108s    v1.19.0   192.168.0.233   <none>        CentOS Linux 7 (AltArch)   4.18.0-80.7.2.el7.aarch64   docker://19.3.12
[root@sealos-0001 ~]# arch
aarch64
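The kube-sealyun-lvscare pod above is the in-kernel load balancer. The join log shows the worker talking to apiserver.cluster.local via the virtual IP 10.103.97.2:6443, while the sed commands point each master at its own apiserver; lvscare maintains IPVS rules that forward the virtual IP to the real masters. On a worker node you can inspect this with standard tools (ipvsadm may need to be installed first):

# The apiserver address this node uses (on workers this should be the 10.103.97.2 virtual IP)
$ grep apiserver.cluster.local /etc/hosts

# IPVS rules maintained by lvscare, forwarding the virtual IP to the masters
$ ipvsadm -ln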