Using the sealos etcd and exec subcommands

sealos backs up a Kubernetes cluster's etcd through the etcd API, and calls the Kubernetes API to resolve host IPs and execute commands on them.

By Louis, published September 20, 2020

The sealos etcd command

Using sealos etcd save

The backup is written to disk on the host where sealos runs; likewise, restore recovers onto the host where sealos runs. The save logic calls etcd clientv3 to produce a snapshot backup file.
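
For reference, a minimal sketch of that kind of snapshot call with the clientv3 SDK (illustrative only; TLS setup from the ~/.sealos/pki/etcd certs is elided, and this is not sealos's actual code):

package main

import (
	"context"
	"io"
	"log"
	"os"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://192.168.0.31:2379"},
		DialTimeout: 5 * time.Second,
		// TLS: build a tls.Config from the etcd cacert/cert/key here.
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Maintenance.Snapshot streams the backend database from the server.
	rc, err := cli.Snapshot(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	defer rc.Close()

	f, err := os.Create("/opt/sealos/ectd-backup/snapshot")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if _, err := io.Copy(f, rc); err != nil {
		log.Fatal(err)
	}
}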

$ sealos etcd save -h
Flags:
      --aliId string        aliyun accessKeyId to save snapshot
      --aliKey string       aliyun accessKeySecrets to save snapshot
      --backupPath string   Specify snapshot backup dir (default "/opt/sealos/ectd-backup")
      --bucket string       oss bucketName to save snapshot
      --docker              snapshot your kubernets etcd in container, will add unix timestamp to snapshot name
      --ep string           aliyun endpoints to save snapshot
  -h, --help                help for save
      --name string         Specify snapshot name (default "snapshot")
      --objectPath string   aliyun oss objectPath to save snapshot, like: /sealos/snapshots/

Flag descriptions:

  • --aliId: Aliyun accessKeyId
  • --aliKey: Aliyun accessKeySecrets
  • --backupPath: backup directory on the host running sealos and on the masters. Defaults to /opt/sealos/ectd-backup
  • --bucket: Aliyun OSS bucketName
  • --ep: Aliyun OSS endpoint, e.g. oss-cn-hangzhou.aliyuncs.com
  • --name: name of the snapshot file; defaults to snapshot
  • --objectPath: Aliyun OSS objectPath, e.g. /sealos/snapshots/

To upload to Aliyun OSS, either pass the flags on the command line or set them in the config file.

$ sealos etcd save --docker \
    --aliId youraliyunkeyid \
    --aliKey youraliyunkeysecrets \
    --ep oss-cn-hangzhou.aliyuncs.com  \
    --bucket etcdbackup  \
    --objectPath /sealos/

Alternatively, edit the .sealos/config.yaml file (with vim or any editor); either way works:

$ cat .sealos/config.yaml
masters:
- 192.168.0.31:22
nodes:
- 192.168.0.30:22
- 192.168.0.88:22
- 192.168.0.65:22
dnsdomain: cluster.local
apiservercertsans:
- 127.0.0.1
- apiserver.cluster.local
- 192.168.0.31
- 10.103.97.2
user: root
passwd: ""
privatekey: /root/.ssh/id_rsa
pkpassword: ""
apiserverdomian: apiserver.cluster.local
vip: 10.103.97.2
pkgurl: /root/kube1.18.0.tar.gz
version: v1.18.0
repo: k8s.gcr.io
podcidr: 100.64.0.0/10
svccidr: 10.96.0.0/12
certpath: /root/.sealos/pki
certetcdpath: /root/.sealos/pki/etcd
lvscarename: fanux/lvscare
lvscaretag: latest
alioss:
  ossendpoint: oss-cn-hangzhou.aliyuncs.com
  accesskeyid: *****
  accesskeysecrets: ****
  bucketname: etcdbackup
  objectpath: /sealos/
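
Under the hood, the upload presumably boils down to a call like the following; this sketch uses the Aliyun OSS Go SDK (github.com/aliyun/aliyun-oss-go-sdk) with placeholder credentials and is not sealos's actual code:

package main

import (
	"log"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	// endpoint, key id/secret, bucket and object path map to
	// --ep, --aliId, --aliKey, --bucket and --objectPath.
	client, err := oss.New("oss-cn-hangzhou.aliyuncs.com", "youraliyunkeyid", "youraliyunkeysecrets")
	if err != nil {
		log.Fatal(err)
	}
	bucket, err := client.Bucket("etcdbackup")
	if err != nil {
		log.Fatal(err)
	}
	// object keys must not start with "/"
	if err := bucket.PutObjectFromFile("sealos/snapshot-1598792407", "/opt/sealos/ectd-backup/snapshot-1598792407"); err != nil {
		log.Fatal(err)
	}
}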

Backing up with a Kubernetes CronJob

This depends on ~/.sealos/config.yaml and on the etcd cacert, cert and key; the files currently read are the certificates under ~/.sealos/pki/etcd/.

First, mount ~/.sealos/ from the host into the container at ~/.sealos/. Only four files in that directory are needed, and creating a Secret for each of them felt bloated, so a directory mount is used instead. We assume master-01 is the host where you ran sealos init.

If you ran init outside the cluster, copy the directory over with scp first, e.g. scp -r ~/.sealos master-01:/root/.

$ kubectl label nodes master-01  name=sealos --overwrite

If you SSH to the masters with key-based authentication, you also need a Secret holding the private key. For safety, I create it in kube-system:

$ kubectl create secret generic pk-sealos --from-file=/root/.ssh/id_rsa  -n kube-system

Now edit cronjob.yaml:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: backup-etcd
  namespace: kube-system   ## you can change to any namespace
spec: # when no pod selector is given, pods are created from the labels in the pod template
  schedule: "0 21 * * *"  # back up every day at 21:00
  successfulJobsHistoryLimit: 3
  jobTemplate: # template for the Job resources this CronJob creates
    spec:
      template: # pod template used by the Jobs
        spec:
          restartPolicy: OnFailure  # a Job cannot use Always as its restart policy
          nodeSelector:
            name: sealos    ## the machine where you ran sealos init, so hostPath works
          volumes:
          - hostPath:
              path: /root/.sealos
              type: DirectoryOrCreate
            name: sealos-home
          # if you authenticate with a password, just remove this secret;
          # before using the secret, you must create it (see above).
          - secret:
              defaultMode: 420
              items:
                - key: id_rsa
                  path: id_rsa
              secretName: pk-sealos
            name: pk-sealos
          containers:
          - name: sealos
            image: louisehong/sealos:3.3.10
            args:
              - etcd
              - save
              - --docker
            volumeMounts:
              - mountPath: /root/.sealos
                name: sealos-home
              # if you authenticate with a password, just remove this volumeMount
              - mountPath: /root/.ssh/id_rsa
                name: pk-sealos
                subPath: id_rsa

Then apply it:

$ kubectl apply -f cronjob.yaml
$ kubectl get cronjobs.batch -n kube-system
NAME          SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
backup-etcd   0 21 * * *   False     0        20h             9d

After a backup runs, you can inspect the logs:

$ kubectl get pods -n kube-system | grep backup
backup-etcd-1598662800-69sbh               0/1     Completed   0          2d8h
backup-etcd-1598706000-vwhpz               0/1     Completed   0          44h
backup-etcd-1598792400-m5z5z               0/1     Completed   0          20h
$ kubectl logs -f backup-etcd-1598792400-m5z5z -n kube-system
...
13:00:12 [INFO] [etcd_save.go:120] Finished saving/uploading snapshot [snapshot-1598792407] on aliyun oss [etcdbackup] bucket
13:00:12 [INFO] [etcd.go:111] Finished saving/uploading snapshot [snapshot-1598792407]
13:00:12 [INFO] [etcd_save.go:259] health check for etcd: [{192.168.0.31:2379 true 18.571896ms }]

Using sealos etcd health

This is just a simple call into the etcd clientv3 SDK to perform a health check. Every save runs one as well.

$ sealos etcd health
17:14:06 [INFO] [etcd_save.go:255] health check for etcd: [{192.168.0.31:2379 true 10.493436ms }]
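
A minimal sketch of such a check via clientv3's Status call (illustrative; the client would be built with the certificates under ~/.sealos/pki/etcd):

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

// healthCheck mirrors the [{endpoint healthy took}] triple printed above.
func healthCheck(cli *clientv3.Client, endpoint string) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	start := time.Now()
	_, err := cli.Status(ctx, endpoint)
	fmt.Printf("health check for etcd: [{%s %v %v}]\n", endpoint, err == nil, time.Since(start))
}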

Using sealos etcd restore

The restore logic works like this (if you have a better recovery scheme, please contact me directly):

  1. Ask for interactive confirmation; -f or --force skips it.
  2. Restore from the saved snapshot file. A directory named restorePath + hostname is created on the host running sealos etcd restore (a sketch of this step follows the list).
  3. Stop kube-apiserver, kube-controller-manager, kube-scheduler and etcd, and back up /var/lib/etcd/.
  4. Pack the directory from step 2 into a tar archive, copy it to every etcd node (usually the masters), and unpack it to /var/lib/etcd on each node.
  5. Start kube-apiserver, kube-controller-manager, kube-scheduler and etcd.
  6. Run a final health check (60s).
  7. If any error occurs along the way, roll back to the backup taken in step 3.
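
Step 2 relies on etcd's snapshot manager; the snapshot/v3_snapshot.go lines in the restore log further below come from that package. A hedged sketch of the call (all values illustrative, not sealos's exact code):

import (
	"go.etcd.io/etcd/clientv3/snapshot"
	"go.uber.org/zap"
)

// restoreSnapshot rebuilds an etcd data dir from the snapshot file into
// restorePath + hostname, as described in step 2.
func restoreSnapshot(hostname string) error {
	sp := snapshot.NewV3(zap.NewExample())
	return sp.Restore(snapshot.RestoreConfig{
		SnapshotPath:        "/opt/sealos/ectd-backup/snapshot",
		Name:                hostname,
		OutputDataDir:       "/opt/sealos/ectd-restore-" + hostname,
		PeerURLs:            []string{"https://192.168.160.243:2380"},
		InitialCluster:      hostname + "=https://192.168.160.243:2380",
		InitialClusterToken: "etcd-cluster",
		SkipHashCheck:       false,
	})
}
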
$ sealos etcd restore -h
Restores an etcd member snapshot to an etcd directory

Usage:
  sealos etcd restore [flags]

Flags:
      --backupPath string    Specify snapshot backup dir (default "/opt/sealos/ectd-backup")
  -f, --force                restore need interactive to confirm
  -h, --help                 help for restore
      --name string          Specify snapshot name (default "snapshot")
      --restorePath string   Specify snapshot restore dir (default "/opt/sealos/ectd-restore")

Global Flags:
      --config string   config file (default is $HOME/.sealos/config.yaml)

This is the inverse of sealos etcd save, but it is a dangerous operation, so an interactive confirmation step was added.

Flag descriptions:

  • --backupPath: directory containing the backup. Defaults to the same path as save, /opt/sealos/ectd-backup

  • --restorePath: restore directory. Defaults to /opt/sealos/ectd-restore; the hostname is appended so that multiple masters don't collide on the same directory.

  • -f, --force: skip the interactive confirmation before restoring.

  • --name: name of the backup file. Defaults to the same name as save, snapshot

Testing restore on a single machine

It looks like this:

[root@dev-k8s-master ~]# ./sealos etcd restore
restore cmd will stop your kubernetes cluster immediately and restore etcd from your backup snapshot file  (y/n)?y
17:34:17 [INFO] [ssh.go:12] [ssh][192.168.160.243] hostname
17:34:17 [DEBG] [ssh.go:24] [ssh][192.168.160.243]command result is: dev-k8s-master

17:34:17 [INFO] [ssh.go:105] [ssh][192.168.160.243] cd /tmp && rm -rf /opt/sealos/ectd-restore-dev-k8s-master
17:34:17 [INFO] [ssh.go:12] [ssh][192.168.160.243] hostname
17:34:18 [DEBG] [ssh.go:24] [ssh][192.168.160.243]command result is: dev-k8s-master

{"level":"info","ts":1598866458.160008,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"/opt/sealos/ectd-backup/snapshot","wal-dir":"/opt/sealos/ectd-restore-dev-k8s-master/member/wal","data-dir":"/opt/sealos/ectd-restore-dev-k8s-master","snap-dir":"/opt/sealos/ectd-restore-dev-k8s-master/member/snap"}
{"level":"info","ts":1598866458.1982617,"caller":"mvcc/kvstore.go:380","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":970469}
{"level":"info","ts":1598866458.2281547,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"d12074ddc55c9483","local-member-id":"0","added-peer-id":"5dfe17d3cf203a7e","added-peer-peer-urls":["https://192.168.160.243:2380"]}
{"level":"info","ts":1598866458.235216,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"/opt/sealos/ectd-backup/snapshot","wal-dir":"/opt/sealos/ectd-restore-dev-k8s-master/member/wal","data-dir":"/opt/sealos/ectd-restore-dev-k8s-master","snap-dir":"/opt/sealos/ectd-restore-dev-k8s-master/member/snap"}
17:34:28 [INFO] [ssh.go:105] [ssh][192.168.160.243] cd /tmp && mv /etc/kubernetes/manifests /etc/kubernetes/manifestslezSCljV
17:34:28 [INFO] [ssh.go:105] [ssh][192.168.160.243] cd /tmp && mv /var/lib/etcd /var/lib/etcdlezSCljV
17:34:38 [INFO] [etcd.go:136] send restore file to etcd master node and start etcd
17:34:38 [INFO] [ssh.go:12] [ssh][192.168.160.243] hostname
17:34:38 [DEBG] [ssh.go:24] [ssh][192.168.160.243]command result is: dev-k8s-master

17:34:39 [INFO] [etcd_restore.go:140] compress file
17:34:39 [INFO] [ssh.go:57] [ssh][192.168.160.243] mkdir -p /var/lib || true
17:34:39 [DEBG] [download.go:29] [192.168.160.243]please wait for mkDstDir
17:34:39 [INFO] [ssh.go:12] [ssh][192.168.160.243] ls -l /var/lib/ectd-restore-dev-k8s-master.tar 2>/dev/null |wc -l
17:34:39 [DEBG] [ssh.go:24] [ssh][192.168.160.243]command result is: 0

17:34:39 [DEBG] [scp.go:24] [ssh]source file md5 value is bc76f9bb1aea210fb815a43aed27aa29
17:34:40 [ALRT] [scp.go:98] [ssh][192.168.160.243]transfer total size is: 1244.01KB ;speed is 1MB
17:34:40 [INFO] [ssh.go:12] [ssh][192.168.160.243] md5sum /var/lib/ectd-restore-dev-k8s-master.tar | cut -d" " -f1
17:34:40 [DEBG] [ssh.go:24] [ssh][192.168.160.243]command result is: bc76f9bb1aea210fb815a43aed27aa29

17:34:40 [DEBG] [scp.go:27] [ssh]host: 192.168.160.243 , remote md5: bc76f9bb1aea210fb815a43aed27aa29
17:34:40 [INFO] [scp.go:31] [ssh]md5 validate true
17:34:40 [INFO] [download.go:38] [192.168.160.243]copy file md5 validate success
17:34:40 [DEBG] [download.go:44] [192.168.160.243]please wait for after hook
17:34:40 [INFO] [ssh.go:57] [ssh][192.168.160.243] tar xf /var/lib/ectd-restore-dev-k8s-master.tar -C /var/lib/  && mv /var/lib/ectd-restore-dev-k8s-master  /var/lib/etcd && rm -rf /var/lib/ectd-restore-dev-k8s-master.tar
17:34:41 [INFO] [etcd.go:145] Start kube-apiserver kube-controller-manager kube-scheduler
17:34:41 [INFO] [ssh.go:105] [ssh][192.168.160.243] cd /tmp && mv /etc/kubernetes/manifestslezSCljV /etc/kubernetes/manifests
17:34:41 [INFO] [etcd.go:148] Wait 60s to health check for etcd
17:35:41 [INFO] [etcd_save.go:259] health check for etcd: [{192.168.160.243:2379 true 6.206351ms }]
17:35:41 [INFO] [etcd.go:151] restore kubernetes yourself glad~

The sealos exec command

It can run a custom command on specified nodes, or copy a file to a set of specified nodes.

For example, to create a directory on all master nodes:

sealos exec --cmd "mkdir /data" --label node-role.kubernetes.io/master=""
sealos exec --cmd "mkdir /data" --node x.x.x.x
sealos exec --cmd "mkdir /data" --node dev-k8s-mater

To copy a file to specified nodes:

sealos exec --src /data/foo --dst /root/foo --label node-role.kubernetes.io/master=""
sealos exec --src /data/foo --dst /root/foo --node x.x.x.x
sealos exec --src /data/foo --dst /root/foo --node dev-k8s-master

Implementation

Resolving the list of target IPs

Every target referenced by --label or --node (hostnames included) is first resolved to an IP; the copy or cmd is then executed against those IPs.

Using ListOptions with a LabelSelector locates the matching NodeList directly. If the label is empty, an empty result is returned; otherwise the resolved IPs are appended to the list of target IPs.

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// GetNodeListByLabel lists the nodes matching a label selector.
func GetNodeListByLabel(k8sClient *kubernetes.Clientset, label string) (*v1.NodeList, error) {
	listOption := &metav1.ListOptions{LabelSelector: label}
	return k8sClient.CoreV1().Nodes().List(context.TODO(), *listOption)
}

// GetNodeIpByLabel returns the InternalIPs of all nodes matching the label.
func GetNodeIpByLabel(k8sClient *kubernetes.Clientset, label string) ([]string, error) {
	var ips []string
	if label == "" {
		return ips, nil
	}
	nodes, err := GetNodeListByLabel(k8sClient, label)
	if err != nil {
		return nil, err
	}
	for _, node := range nodes.Items {
		for _, v := range node.Status.Addresses {
			if v.Type == v1.NodeInternalIP {
				ips = append(ips, v.Address)
			}
		}
	}
	if len(ips) != 0 {
		return ips, nil
	}
	return nil, fmt.Errorf("label %s is not found in kubernetes nodes", label)
}

--node values are split by kind: an IP goes straight into the target list, while a hostname goes into a hostname list. The ClientSet's Get method fetches the Node object by name, then a for loop over its addresses finds the matching IP, which is appended to the target IP list.

node, err := k8sClient.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})

for _, node := range resHost {
	ip, err := GetNodeIpByName(k8sClient, node)
	if err == nil {
		ips = append(ips, ip)
	}
}
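
GetNodeIpByName itself is not shown above; a plausible implementation (the actual sealos code may differ) mirrors GetNodeIpByLabel:

func GetNodeIpByName(k8sClient *kubernetes.Clientset, nodeName string) (string, error) {
	node, err := k8sClient.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	for _, addr := range node.Status.Addresses {
		if addr.Type == v1.NodeInternalIP {
			return addr.Address, nil
		}
	}
	return "", fmt.Errorf("node %s has no InternalIP", nodeName)
}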

The IPs are not filtered beyond that: anything that parses as IPv4 is added to the target list, with no comparison against the cluster's master/node IPs.
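
The IPv4 gate can be as simple as the following (a hypothetical helper, not the sealos source):

import "net"

func isIPv4(s string) bool {
	ip := net.ParseIP(s)
	return ip != nil && ip.To4() != nil
}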

// the cluster
k8s-master    192.168.0.31
huohua-test   192.168.0.30
server65      192.168.0.65
server88-new  192.168.0.88

// sealos exec --cmd "hostname"  --node 192.168.0.21
// 192.168.0.21 is not in this kubernetes cluster, but if SSH can reach it, the command will still run on 192.168.0.21.

Command execution logic

The methods below decide whether to run copy or cmd. If both apply, copy runs first, then cmd.

type ExecFlag struct {
	Dst      string
	Src      string
	Cmd      string
	Label    string
	ExecNode []string
	SealConfig
}

// IsUseCopy: copy mode applies when --src exists locally and --dst is set
func (e *ExecFlag) IsUseCopy() bool {
	return FileExist(e.Src) && e.Dst != ""
}

// IsUseCmd: cmd mode applies when --cmd is non-empty
func (e *ExecFlag) IsUseCmd() bool {
	return e.Cmd != ""
}
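
FileExist is not part of the excerpt; a typical implementation would be:

import "os"

func FileExist(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}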

Remote command and copy implementation

The scp-style copy copies a single file at a time and recurses for directories; see the source for details. If --dst already exists on the target machine, the copy is skipped outright.

// TODO: should we add a flag such as --force / -f to overwrite directly, or delete first and then copy?

// CopyLocalToRemote copies a file or directory to remotePath
func (ss *SSH) CopyLocalToRemote(host, localPath, remotePath string) {

}

// ssh sessions are the tricky part; reuse the ssh connection
func (ss *SSH) copyLocalDirToRemote(sftpClient *sftp.Client, localPath, remotePath string) {

}

// solves the session problem
func (ss *SSH) copyLocalFileToRemote(sftpClient *sftp.Client, localPath, remotePath string) {

}
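
As a rough illustration of that recursion, here is a sketch assuming github.com/pkg/sftp; it is not the actual sealos implementation and omits the md5 validation seen in the logs:

import (
	"io"
	"os"
	"path"
	"path/filepath"

	"github.com/pkg/sftp"
)

// copyFile copies a single local file over an established sftp client.
func copyFile(c *sftp.Client, localPath, remotePath string) error {
	src, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := c.Create(remotePath)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

// copyLocalToRemote copies a file, or walks a directory recursively.
func copyLocalToRemote(c *sftp.Client, localPath, remotePath string) error {
	info, err := os.Stat(localPath)
	if err != nil {
		return err
	}
	if !info.IsDir() {
		return copyFile(c, localPath, remotePath)
	}
	if err := c.MkdirAll(remotePath); err != nil {
		return err
	}
	entries, err := os.ReadDir(localPath)
	if err != nil {
		return err
	}
	for _, e := range entries {
		// remote paths always use "/" separators
		if err := copyLocalToRemote(c, filepath.Join(localPath, e.Name()), path.Join(remotePath, e.Name())); err != nil {
			return err
		}
	}
	return nil
}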

Executing the command itself is done over SSH; I won't go into the details here.
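
For completeness, remote execution boils down to something like this sketch with golang.org/x/crypto/ssh (sealos wraps it with its own logging and connection reuse):

import "golang.org/x/crypto/ssh"

// runCmd opens a session on addr (host:22) and runs cmd, returning combined output.
func runCmd(addr string, config *ssh.ClientConfig, cmd string) (string, error) {
	client, err := ssh.Dial("tcp", addr, config)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}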

Usage

$ ./sealos exec -h
support exec cmd or copy file by Label/nodes

Usage:
  sealos exec [flags]

Examples:

	# exec cmd by label or nodes.  when --label and --node is Exist, get Union of both.
	sealos exec --cmd "mkdir /data" --label node-role.kubernetes.io/master= --node 192.168.0.2
	sealos exec --cmd "mkdir /data" --node 192.168.0.2 --nodes dev-k8s-mater

	# exec copy src file to dst by label or nodes. when --label and --node is Exist, get Union of both.
	sealos exec --src /data/foo --dst /root/foo --label node-role.kubernetes.io/master=""
	sealos exec --src /data/foo --dst /root/foo --node 192.168.0.2


Flags:
      --cmd string     exec command string
      --dst string     dest file location
  -h, --help           help for exec
      --label string   kubernetes labels like node-role.kubernetes.io/master=
      --node strings   node ip or hostname in kubernetes
      --src string     source file location

Global Flags:
      --config string   config file (default is $HOME/.sealos/config.yaml)

Flag descriptions:

  • --cmd: the command to execute.
  • --src: local path; may be a file or a directory; used together with --dst.
  • --dst: destination path on the target nodes; used together with --src.
  • --label: a Kubernetes label selector. Label expressions such as kubernetes.io/arch!=amd64 are supported.
  • --node: a node IP or hostname in the cluster.

Tests

The cluster looks like this:

$ kubectl get nodes --show-labels
NAME           STATUS   ROLES    AGE   VERSION   LABELS
huohua-test    Ready    <none>   93d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=huohua-test,kubernetes.io/os=linux,name=huohua
k8s-master     Ready    master   93d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
server65       Ready    <none>   89d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server65,kubernetes.io/os=linux
server88-new   Ready    <none>   90d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server88-new,kubernetes.io/os=linux,name=front

Running a cmd with only --label

$ ./sealos exec --cmd "hostname "  --label beta.kubernetes.io/arch=amd64
19:09:35 [INFO] [ssh.go:57] [ssh][192.168.0.88] cd /tmp && hostname
19:09:35 [INFO] [ssh.go:57] [ssh][192.168.0.31] cd /tmp && hostname
19:09:35 [INFO] [ssh.go:57] [ssh][192.168.0.30] cd /tmp && hostname
19:09:35 [INFO] [ssh.go:57] [ssh][192.168.0.65] cd /tmp && hostname
19:09:35 [INFO] [ssh.go:50] [192.168.0.88] server88-new
19:09:35 [INFO] [ssh.go:50] [192.168.0.30] huohua-test
19:09:35 [INFO] [ssh.go:50] [192.168.0.65] server65
19:09:36 [INFO] [ssh.go:50] [192.168.0.31] k8s-master

Running a cmd with only --node

$ ./sealos exec --cmd "hostname -i"  --node huohua-test --node 192.168.0.65
19:20:47 [INFO] [ssh.go:57] [ssh][192.168.0.30] cd /tmp && hostname -i
19:20:47 [INFO] [ssh.go:57] [ssh][192.168.0.65] cd /tmp && hostname -i
19:20:47 [INFO] [ssh.go:50] [192.168.0.30] 192.168.0.30
19:20:47 [INFO] [ssh.go:50] [192.168.0.65] 192.168.0.65

Running a cmd with --label and --node

$ ./sealos exec --cmd "hostname -i"  --node huohua-test  --label node-role.kubernetes.io/master=
19:21:44 [INFO] [ssh.go:57] [ssh][192.168.0.30] cd /tmp && hostname -i
19:21:44 [INFO] [ssh.go:57] [ssh][192.168.0.31] cd /tmp && hostname -i
19:21:44 [INFO] [ssh.go:50] [192.168.0.30] 192.168.0.30
19:21:45 [INFO] [ssh.go:50] [192.168.0.31] 192.168.0.31

Running a cmd and copying a file with --label and --node

Use –node & –label to exec cmd & copy files

$ ./sealos exec --cmd "ls -lh /data/01.txt" --src /root/01.txt --dst /data/01.txt  --node huohua-test  --label node-role.kubernetes.io/master=
19:23:01 [INFO] [ssh.go:12] [ssh][192.168.0.30] ls -l /data/01.txt 2>/dev/null |wc -l
19:23:01 [INFO] [ssh.go:12] [ssh][192.168.0.31] ls -l /data/01.txt 2>/dev/null |wc -l
19:23:01 [DEBG] [ssh.go:24] [ssh][192.168.0.30]command result is: 0
19:23:02 [INFO] [scp.go:328] [ssh]transfer [/root/01.txt] total size is: 2.11MB ;speed is 2MB
19:23:02 [DEBG] [ssh.go:24] [ssh][192.168.0.31]command result is: 0
19:23:02 [INFO] [scp.go:328] [ssh]transfer [/root/01.txt] total size is: 2.11MB ;speed is 2MB
19:23:02 [INFO] [ssh.go:57] [ssh][192.168.0.30] cd /tmp && ls -lh /data/01.txt
19:23:02 [INFO] [ssh.go:57] [ssh][192.168.0.31] cd /tmp && ls -lh /data/01.txt
19:23:02 [INFO] [ssh.go:50] [192.168.0.30] -rw-r--r-- 1 root root 2.2M 9月   4 19:23 /data/01.txt
19:23:03 [INFO] [ssh.go:50] [192.168.0.31] -rw-r--r--. 1 root root 2.2M Sep  4 19:23 /data/01.txt

Running a cmd and copying a directory with --label and --node

Use –node & –label to exec cmd & copy dir

$ ./sealos exec --cmd "ls -lh /data/test" --src /root/test --dst /data/test  --node huohua-test  --label node-role.kubernetes.io/master=
19:24:24 [INFO] [ssh.go:12] [ssh][192.168.0.30] ls -l /data/test 2>/dev/null |wc -l
19:24:24 [INFO] [ssh.go:12] [ssh][192.168.0.31] ls -l /data/test 2>/dev/null |wc -l
19:24:24 [DEBG] [ssh.go:24] [ssh][192.168.0.30]command result is: 0
19:24:24 [INFO] [scp.go:328] [ssh]transfer [/root/test/crontab.yaml] total size is: 1.19KB ;speed is 1KB
19:24:24 [INFO] [scp.go:328] [ssh]transfer [/root/test/crontab.yaml.bak] total size is: 2.23KB ;speed is 2KB
19:24:24 [DEBG] [ssh.go:24] [ssh][192.168.0.31]command result is: 0
19:24:25 [INFO] [scp.go:328] [ssh]transfer [/root/test/crontab.yaml] total size is: 1.19KB ;speed is 1KB
19:24:25 [INFO] [scp.go:328] [ssh]transfer [/root/test/crontab.yaml.bak] total size is: 2.23KB ;speed is 2KB
19:24:25 [INFO] [ssh.go:57] [ssh][192.168.0.30] cd /tmp && ls -lh /data/test
19:24:25 [INFO] [ssh.go:57] [ssh][192.168.0.31] cd /tmp && ls -lh /data/test
19:24:25 [INFO] [ssh.go:50] [192.168.0.30] 总用量 8.0K
19:24:25 [INFO] [ssh.go:50] [192.168.0.30] -rw-r--r-- 1 root root 1.2K 9月   4 19:24 crontab.yaml
19:24:25 [INFO] [ssh.go:50] [192.168.0.31] total 8.0K

Using --label when the label does not exist

$ ./sealos exec --cmd "hostname -i"  --node huohua-test  --label node-role.kubernete.
12:48:25 [EROR] [exec.go:53] get ips err:  unable to parse requirement: invalid label key "node-role.kubernete.": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName',  or 'my.name',  or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')
$ ./sealos exec --cmd "hostname -i"  --node huohua-test  --label node-role.kubernete
12:48:42 [EROR] [exec.go:53] get ips err:  label node-role.kubernete is not fount in kubernetes nodes

Using --node when the node does not exist

## this will output nothing
$ ./sealos exec --cmd "hostname -i"  --node huohua-test031
## only executes on nodes that exist
$ ./sealos exec --cmd "hostname -i"  --node huohua-test031 --node 192.168.0.65
12:51:28 [INFO] [ssh.go:57] [ssh][192.168.0.65] cd /tmp && hostname -i
12:51:29 [INFO] [ssh.go:50] [192.168.0.65] 192.168.0.65
## when the node IP is well-formed but not part of the kubernetes cluster, the ssh session fails with a timeout
$ ./sealos exec --cmd "hostname -i"  --node huohua-test031 --node 192.168.9.65
12:52:14 [INFO] [ssh.go:57] [ssh][192.168.9.65] cd /tmp && hostname -i
12:53:14 [EROR] [ssh.go:60] [ssh][192.168.9.65]Error create ssh session failed,dial tcp 192.168.9.65:22: i/o timeout

Using --label with a label expression such as kubernetes.io/arch!=amd64

$ kubectl get nodes --show-labels
NAME           STATUS   ROLES    AGE   VERSION   LABELS
huohua-test    Ready    <none>   93d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=huohua-test,kubernetes.io/os=linux,name=huohua
k8s-master     Ready    master   93d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
server65       Ready    <none>   89d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server65,kubernetes.io/os=linux
server88-new   Ready    <none>   89d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server88-new,kubernetes.io/os=linux,name=front
$ ./sealos exec --cmd  "hostname -i"  --label name!=front
09:17:21 [INFO] [ssh.go:57] [ssh][192.168.0.65] cd /tmp && hostname -i
09:17:21 [INFO] [ssh.go:57] [ssh][192.168.0.30] cd /tmp && hostname -i
09:17:21 [INFO] [ssh.go:57] [ssh][192.168.0.31] cd /tmp && hostname -i
09:17:21 [INFO] [ssh.go:50] [192.168.0.30] 192.168.0.30
09:17:21 [INFO] [ssh.go:50] [192.168.0.65] 192.168.0.65
09:17:21 [INFO] [ssh.go:50] [192.168.0.31] 192.168.0.31