Core components

|Name|Description|
|-------|-------|
|kube-apiserver|Exposes the Kubernetes API and handles API requests. It is the single entry point for all resource operations (authentication, authorization, admission control, API registration, etc.) and the front end of the K8s control plane.|
|etcd|A consistent and highly available key-value store used as the backing database for all Kubernetes cluster data; it holds the entire state of the K8s cluster.|
|kube-scheduler|Responsible for resource scheduling: it watches for newly created Pods with no assigned node and selects a node for them to run on.|
|kube-controller-manager|Runs the controller processes. Logically each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in one process. It maintains cluster state: failure detection, automatic scaling, rolling updates, etc.|
|kubelet|Runs on every worker node in the K8s cluster and makes sure containers are running in Pods.|
|kube-proxy|A network proxy that runs on every worker node in the K8s cluster. It maintains network rules on the node that allow network sessions from inside or outside the cluster to communicate with Pods.|
|Container Runtime|The software responsible for running containers.|

I have prepared three CentOS 7 machines for this setup.

Part One: Single-master node deployment

|Role|IP|Components|
|-------|-------|-------|
|k8s-master|192.168.0.135|kube-apiserver, kube-controller-manager, kube-scheduler, etcd|
|k8s-node1|192.168.0.136|kubelet, kube-proxy, docker, etcd|
|k8s-node2|192.168.0.137|kubelet, kube-proxy, docker, etcd|

2. Operating system initialization (required on all nodes)

```

#switch every machine to a domestic (China) yum mirror, then refresh/update; see https://zmzycc.top/archives/centos%E6%8D%A2%E6%BA%90?token=36b8be6f33d7402fbee5354b46dff588

yum update

#disable the system firewall

systemctl stop firewalld

systemctl disable firewalld

#disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary (current boot only)

#disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent

#set hostnames according to the plan (run the matching command on each machine)

hostnamectl set-hostname k8s-master1

hostnamectl set-hostname k8s-node1

hostnamectl set-hostname k8s-node2

#check the date/time, time zone and NTP status (these must match on all nodes)

timedatectl

#change the time zone

timedatectl set-timezone Asia/Shanghai

#add hosts entries

cat >> /etc/hosts << EOF

192.168.0.135 k8s-master1

192.168.0.136 k8s-node1

192.168.0.137 k8s-node2

EOF

#system limits: append to /etc/security/limits.conf
cat >> /etc/security/limits.conf << EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

#upgrade the kernel
# the stock 3.10 kernel is unstable in large clusters, so upgrade it (all machines must be upgraded to the same version)
# the requirement is kernel 4.18+; CentOS 8 does not need a kernel upgrade

#import the ELRepo signing key

$ rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

#install the latest ELRepo release

$ yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

#list the available kernel packages

$ yum list available --disablerepo=* --enablerepo=elrepo-kernel

#install the chosen kernel version (based on the listing above; the lt long-term-support branch is used here)

$ yum install -y kernel-lt-5.4.238-1.el7.elrepo --enablerepo=elrepo-kernel # the official download can be slow; a pre-downloaded copy can be installed instead:

$ yum install -y https://alist.zmzycc.com/d/%E9%98%BF%E9%87%8C%E4%BA%91%E7%9B%98/centos7%E5%86%85%E6%A0%B8/kernel-lt-5.4.238-1.el7.elrepo.x86_64.rpm

#list the kernels available on the system

$ cat /boot/grub2/grub.cfg | grep menuentry

#make the new kernel the default boot entry

$ grub2-set-default 'CentOS Linux (5.4.238-1.el7.elrepo.x86_64) 7 (Core)'

#check the default boot entry

$ grub2-editenv list

saved_entry=CentOS Linux (5.4.238-1.el7.elrepo.x86_64) 7 (Core)

#reboot so the new kernel takes effect:

$ reboot

#after the reboot, confirm the kernel version was updated:

$ uname -r

5.4.238-1.el7.elrepo.x86_64

#install ipvs
# Kubernetes services support two proxy modes, iptables and ipvs. ipvs outperforms iptables, but using it requires loading the ipvs kernel modules manually.

yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp

cat > /etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/bash

ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"

for kernel_module in \${ipvs_modules}; do

/sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1

if [ $? -eq 0 ]; then

/sbin/modprobe \${kernel_module}

fi

done

EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

#verify the modules loaded:

lsmod | grep ip_vs

#kernel parameter tuning

cat > /etc/sysctl.d/k8s.conf << EOF

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

fs.may_detach_mounts = 1

vm.overcommit_memory=1

vm.panic_on_oom=0

fs.inotify.max_user_watches=89100

fs.file-max=52706963

fs.nr_open=52706963

net.ipv4.tcp_keepalive_time = 600

net.ipv4.tcp_keepalive_probes = 3

net.ipv4.tcp_keepalive_intvl = 15

net.ipv4.tcp_max_tw_buckets = 36000

net.ipv4.tcp_tw_reuse = 1

net.ipv4.tcp_max_orphans = 327680

net.ipv4.tcp_orphan_retries = 3

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 16384

net.netfilter.nf_conntrack_max = 65536

net.ipv4.tcp_timestamps = 0

net.core.somaxconn = 16384

EOF

# apply immediately

sysctl --system

#time sync: use chrony or ntp
#use the Aliyun time server
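# a minimal example using chrony with the Aliyun NTP server (assumes internet access to ntp.aliyun.com):
yum install -y chrony
echo "server ntp.aliyun.com iburst" >> /etc/chrony.conf
systemctl enable --now chronyd
chronyc sources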

```

3. Preparing the cfssl certificate tool

About cfssl:

cfssl is an open-source certificate management tool. It generates certificates from JSON files and is more convenient to use than openssl.

Run this on any one server; the Master1 node is used here.

```

#create a directory for the cfssl tools

mkdir -p /opt/cfssl

#download the tools

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -P /opt/cfssl/

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -P /opt/cfssl/

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -P /opt/cfssl/

chmod +x /opt/cfssl/*

cp /opt/cfssl/cfssl_linux-amd64 /usr/local/bin/cfssl

cp /opt/cfssl/cfssljson_linux-amd64 /usr/local/bin/cfssljson

cp /opt/cfssl/cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

```

3.4 Self-signed certificate authority (CA)

3.4.1 Create the working directory

```

mkdir -p ~/TLS/{etcd,k8s}

cd ~/TLS/etcd/

```

3.4.2 Generate the self-signed CA configuration

```

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "chengdu",
      "ST": "chengdu"
    }
  ]
}
EOF

```

3.4.3 Generate the self-signed CA certificate

```

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

```

![image-1676449255338](https://zmzycc.top/upload/2023/02/image-1676449255338.png)

This generates ca.pem and ca-key.pem in the current directory.

![image-1676449290838](https://zmzycc.top/upload/2023/02/image-1676449290838.png)
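The cfssl-certinfo tool installed earlier can be used to inspect the freshly generated CA certificate:

```
cfssl-certinfo -cert ca.pem
```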

3.5 Issue the etcd HTTPS certificate with the self-signed CA

```

cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.0.135",
    "192.168.0.136",
    "192.168.0.137"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "chengdu",
      "ST": "chengdu"
    }
  ]
}
EOF

```

Note:

The IPs in the hosts field above are the internal communication IPs of all etcd nodes; none may be omitted. To make later scaling easier, you can also list a few spare IPs in advance.

3.5.2 Generate the certificate

```

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

```

Success looks like this:

![2691680096389_.pic](https://zmzycc.top/upload/2023/03/2691680096389_.pic.jpg)

Note:

This generates server.pem and server-key.pem in the current directory.

3.6 Download the etcd binaries

```

wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

```

3.7 Deploy the etcd cluster

Perform the following on master1. To simplify things, all files generated on master1 will be copied to the other nodes afterwards.

3.7.1 Create the working directory and unpack the binaries

```

mkdir /opt/etcd/{bin,cfg,ssl} -p

tar -xf etcd-v3.4.9-linux-amd64.tar.gz

mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

```

3.8 Create the etcd configuration file

```

cat > /opt/etcd/cfg/etcd.conf << EOF

#[Member]

ETCD_NAME="etcd-1"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.0.135:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.0.135:2379"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.135:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.135:2379"

ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.135:2380,etcd-2=https://192.168.0.136:2380,etcd-3=https://192.168.0.137:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

EOF

```

Configuration notes:

ETCD_NAME: node name, unique within the cluster

ETCD_DATA_DIR: data directory

ETCD_LISTEN_PEER_URLS: peer (cluster) listen address

ETCD_LISTEN_CLIENT_URLS: client listen address

ETCD_INITIAL_CLUSTER: addresses of all cluster nodes

ETCD_INITIAL_CLUSTER_TOKEN: cluster token

ETCD_INITIAL_CLUSTER_STATE: join state; new for a new cluster, existing to join an existing one

3.9 Manage etcd with systemd

```

cat > /etc/systemd/system/etcd.service << EOF

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

[Service]

Type=notify

EnvironmentFile=/opt/etcd/cfg/etcd.conf

ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

```

Copy the certificates generated above to the target directory:

```

cp /root/TLS/etcd/ca.pem /root/TLS/etcd/server.pem /root/TLS/etcd/server-key.pem /opt/etcd/ssl/

```

3.10 Copy all files generated on master1 to node 2 and node 3

```

for i in {6..7}
do
scp -r /opt/etcd/ root@192.168.0.13$i:/opt/
scp /etc/systemd/system/etcd.service root@192.168.0.13$i:/etc/systemd/system/
done

```

3.11 On node 2 and node 3, change the node name and the local server IP in etcd.conf:

vi /opt/etcd/cfg/etcd.conf

```

#[Member]

ETCD_NAME="etcd-2" #节点2修改为: etcd-2 节点3修改为: etcd-3

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.0.136:2380" #修改为对应节点IP

ETCD_LISTEN_CLIENT_URLS="https://192.168.0.136:2379" #修改为对应节点IP

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.136:2380" #修改为对应节点IP

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.136:2379" #修改为对应节点IP

ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.135:2380,etcd-2=https://192.168.0.136:2380,etcd-3=https://192.168.0.137:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

```

![image-1676452885318](https://zmzycc.top/upload/2023/02/image-1676452885318.png)
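Rather than editing by hand, the per-node changes can be scripted. A hedged sed sketch for node 2 (use etcd-3 and 192.168.0.137 on node 3); it deliberately leaves the ETCD_INITIAL_CLUSTER line untouched:

```
sed -i \
  -e 's#^ETCD_NAME=.*#ETCD_NAME="etcd-2"#' \
  -e '/^ETCD_LISTEN\|^ETCD_ADVERTISE\|^ETCD_INITIAL_ADVERTISE/s#192.168.0.135#192.168.0.136#' \
  /opt/etcd/cfg/etcd.conf
```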

Note:

etcd must be started on several nodes at roughly the same time; otherwise `systemctl start etcd` hangs in the foreground waiting to reach the other members. Start etcd on all nodes at once with a batch management tool or a script, for example the sketch below.
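A minimal sketch that starts etcd on all three members in parallel over SSH (assumes passwordless root SSH to each node):

```
for host in 192.168.0.135 192.168.0.136 192.168.0.137; do
  ssh root@${host} "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd" &
done
wait
```

Alternatively, run the following on each node by hand: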

```

systemctl daemon-reload

systemctl start etcd

systemctl enable etcd

```

3.13 Check the etcd cluster status

```

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.135:2379,https://192.168.0.136:2379,https://192.168.0.137:2379" endpoint health --write-out=table

```

![image-1676453053424](https://zmzycc.top/upload/2023/02/image-1676453053424.png)

If the status looks like the above, the deployment is healthy.
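You can additionally list the members to confirm all three joined (same TLS flags; member list is a standard etcdctl subcommand):

```
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.135:2379" member list --write-out=table
```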

4. Install Docker (all nodes)

Docker is used as the container engine here; you can substitute another runtime such as containerd. Note that Kubernetes deprecated the built-in Docker support (dockershim) in v1.20 and removed it in v1.24.
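The Docker installation itself is not shown here; a minimal sketch using the Aliyun docker-ce repository (the mirror URL is an assumption, substitute your preferred repo):

```
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
```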

4.2 Configure a Docker registry mirror

mkdir -p /etc/docker

vim /etc/docker/daemon.json

```

{
  "registry-mirrors": ["https://giuamyjr.mirror.aliyuncs.com"]
}

```

systemctl daemon-reload

systemctl restart docker

systemctl enable docker

Part Two: Deploy the Master components (kube-apiserver)

Perform these steps on the master1 node.

5.1.1 Self-signed certificate authority (CA)

```

cd ~/TLS/k8s

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "chengdu",
      "ST": "chengdu",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

```

Generate the certificates:

```

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

```

![image-1676456034582](https://zmzycc.top/upload/2023/02/image-1676456034582.png)

This generates ca.pem and ca-key.pem in the directory.

5.1.2 Issue the kube-apiserver HTTPS certificate with the self-signed CA

Create the certificate signing request file:

```

cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.0.135",
    "192.168.0.136",
    "192.168.0.137",
    "192.168.0.138",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "chengdu",
      "ST": "chengdu",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

```

Note:

The IPs in the hosts field above must include every Master/LB/VIP IP; none may be omitted. To make later scaling easier, you can also list a few spare IPs in advance.

Generate the certificate:

```

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

```

This generates server.pem and server-key.pem in the current directory.
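To double-check that every required IP and DNS name made it into the certificate, you can inspect the SAN list with standard openssl:

```
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
```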

Download the kubernetes-server package:

```

wget https://dl.k8s.io/v1.20.0/kubernetes-server-linux-amd64.tar.gz

```

5.3 Unpack the binaries

```

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

tar zxvf kubernetes-server-linux-amd64.tar.gz

cd kubernetes/server/bin

cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin

cp kubectl /usr/bin/

```

5.4 Deploy kube-apiserver

5.4.1 Create the configuration file

```

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF

KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.0.135:2379,https://192.168.0.136:2379,https://192.168.0.137:2379 \\
--bind-address=192.168.0.135 \\
--secure-port=6443 \\
--advertise-address=192.168.0.135 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

EOF

```

```

--logtostderr: log to stderr (false = write log files)
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to the rest of the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default allocation range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: apiserver HTTPS certificate
Required since v1.20: --service-account-issuer, --service-account-signing-key-file
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit logging

```

5.4.2 Copy the certificates just generated under the k8s folder

```

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

```

5.4.3 Enable the TLS bootstrapping mechanism

TLS Bootstrapping: once the apiserver enables TLS authentication, the kubelet and kube-proxy on each node must present valid CA-signed certificates to communicate with kube-apiserver. With many nodes, issuing these client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping: the kubelet automatically requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on the nodes; it is currently used for the kubelet, while kube-proxy still gets a certificate we issue centrally.

Create the token file referenced by the configuration above:

```

# format: token,username,UID,user group
cat > /opt/kubernetes/cfg/token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

```

You can also generate a token yourself and substitute it:

```

head -c 16 /dev/urandom | od -An -t x | tr -d ' '

```

5.4.4 Manage the apiserver with systemd

```

cat > /etc/systemd/system/kube-apiserver.service << EOF

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf

ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

EOF

```

5.4.5 Start it and enable it at boot

systemctl daemon-reload

systemctl start kube-apiserver

systemctl enable kube-apiserver

```

# check the logs for kube-apiserver startup errors

cat /var/log/messages|grep kube-apiserver|grep -i error

```

5.5 Deploy kube-controller-manager

5.5.1 Create the configuration file

```

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"

EOF

```

--kubeconfig: kubeconfig used to connect to the apiserver

--leader-elect: enable leader election when multiple replicas run (HA)

--cluster-signing-cert-file: CA used to automatically issue kubelet certificates; must match the apiserver's

--cluster-signing-key-file: CA key used to automatically issue kubelet certificates; must match the apiserver's

Generate the kube-controller-manager certificate. Switch to the working directory and create the certificate request file:

cd ~/TLS/k8s

```

cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "chengdu",
      "ST": "chengdu",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

```

Generate the certificate:

```

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

```

Generate the kubeconfig file (these are shell commands; run them directly in a terminal):

```

KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"

KUBE_APISERVER="https://192.168.0.135:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}

kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

```

5.5.3 Manage controller-manager with systemd

```

cat > /etc/systemd/system/kube-controller-manager.service << EOF

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf

ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

EOF

```

5.5.4 Start it and enable it at boot

systemctl daemon-reload

systemctl start kube-controller-manager

systemctl enable kube-controller-manager

5.6 Deploy kube-scheduler

5.6.1 Create the configuration file

```

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF

KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"

EOF

```

--kubeconfig: kubeconfig used to connect to the apiserver

--leader-elect: enable leader election when multiple replicas run (HA)

Generate the kube-scheduler certificate. Create the certificate request file:

```

# switch to the working directory

cd ~/TLS/k8s

cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "chengdu",
      "ST": "chengdu",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

```

Generate the certificate:

```

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

```

Generate the kubeconfig file:

```

KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"

KUBE_APISERVER="https://192.168.0.135:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}

kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

```

5.6.3 Manage the scheduler with systemd

```

cat > /etc/systemd/system/kube-scheduler.service << EOF

[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf

ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS

Restart=on-failure

[Install]

WantedBy=multi-user.target

EOF

```

5.6.4 Start it and enable it at boot

```

systemctl daemon-reload

systemctl start kube-scheduler

systemctl enable kube-scheduler

```

5.6.5 Check the cluster status

Generate the certificate for kubectl to connect to the cluster:

```

cd ~/TLS/k8s

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "chengdu",
      "ST": "chengdu",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

```

Generate the kubeconfig file:

```

mkdir /root/.kube

KUBE_CONFIG="/root/.kube/config"

KUBE_APISERVER="https://192.168.0.135:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}

kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

```

Use kubectl to check the status of the cluster components:

```

kubectl get cs

```

![image-1676462742442](https://zmzycc.top/upload/2023/02/image-1676462742442.png)

Output like the above means the Master components are running normally.

5.6.6 Authorize the kubelet-bootstrap user to request certificates

```

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

```

If you get an error:

![image-1680102363178](https://zmzycc.top/upload/2023/03/image-1680102363178.png)

Fix:

```

kubectl delete clusterrolebindings kubelet-bootstrap # delete the existing binding, then recreate it

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

```

6. Deploy the Worker Nodes

The steps below are still performed on the master node, which acts as both a Master and a Worker Node.

Note: create the working directory on all worker nodes:

```

# run on the worker nodes (already done on the master)

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

```

6.2 Deploy the kubelet (on the master)

6.2.1 Create the configuration file

```

cat > /opt/kubernetes/cfg/kubelet.conf << EOF

KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF

```

6.2.2 The parameters file

```

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

```

6.2.3 Generate the bootstrap kubeconfig for the kubelet's first join

```

KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"

KUBE_APISERVER="https://192.168.0.135:6443" # apiserver IP:PORT

TOKEN="b4a023e14175d86780a0642938f8220d" # 与token.csv里保持一致 /opt/kubernetes/cfg/token.csv

# generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}

kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

```

![image-1676467852585](https://zmzycc.top/upload/2023/02/image-1676467852585.png)

```

# copy the kubelet binary from the unpacked kubernetes-server package (run from the directory where the tarball was extracted)
cp kubernetes/server/bin/kubelet /opt/kubernetes/bin

```

6.2.4 Manage the kubelet with systemd

```

cat > /etc/systemd/system/kubelet.service << EOF

[Unit]

Description=Kubernetes Kubelet

After=docker.service

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf

ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

```

Start it:

```

systemctl daemon-reload

systemctl start kubelet

systemctl enable kubelet

journalctl -u kubelet # view error logs

```

Check the kubelet certificate request:

```

[root@k8s-master1 bin]# kubectl get csr

NAME AGE SIGNERNAME REQUESTOR CONDITION

node-csr-w8boABmvVMTFfK1coJyMRQR-HOyjr_7oQzdR2XIZ3uU 2m14s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending

#approve the kubelet node's request

[root@k8s-master1 bin]# kubectl certificate approve node-csr-w8boABmvVMTFfK1coJyMRQR-HOyjr_7oQzdR2XIZ3uU

#check the request again

[root@k8s-master1 bin]# kubectl get csr

NAME AGE SIGNERNAME REQUESTOR CONDITION

node-csr-w8boABmvVMTFfK1coJyMRQR-HOyjr_7oQzdR2XIZ3uU 2m55s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued

#list the nodes

[root@k8s-master1 opt]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s-master1 NotReady <none> 42d v1.20.15

k8s-node1 NotReady <none> 42d v1.20.15

k8s-node2 NotReady <none> 42d v1.20.15

```

![image-1676469570585](https://zmzycc.top/upload/2023/02/image-1676469570585.png)

Note:

The nodes show NotReady because the network plugin has not been deployed yet.

6.3 Deploy kube-proxy

6.3.1 Create the configuration file

```

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF

KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"

EOF

```

6.3.2 The parameters file

```

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.244.0.0/16
EOF

```

6.3.3 Generate the kube-proxy certificate files

```

# switch to the working directory
cd ~/TLS/k8s

# create the certificate request file

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "chengdu",
      "ST": "chengdu",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

```

6.3.4 Generate the kube-proxy.kubeconfig file

```

KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"

KUBE_APISERVER="https://192.168.0.135:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}

kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

# copy the kube-proxy binary from the unpacked kubernetes-server package (run from the directory where the tarball was extracted)
cp kubernetes/server/bin/kube-proxy /opt/kubernetes/bin

```

6.3.5 Manage kube-proxy with systemd

```

cat > /etc/systemd/system/kube-proxy.service << EOF

[Unit]

Description=Kubernetes Proxy

After=network.target

[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf

ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

```

Start it and enable it at boot:

```

systemctl daemon-reload

systemctl start kube-proxy

systemctl enable kube-proxy

```

6.4 Deploy the network component (Calico)

Calico is a pure layer-3 data center networking solution and currently the mainstream network choice for Kubernetes.

```

curl https://docs.projectcalico.org/archive/v3.20/manifests/calico.yaml -O

kubectl apply -f calico.yaml

kubectl get pods -n kube-system

```

![image-1676471014267](https://zmzycc.top/upload/2023/02/image-1676471014267.png)

The Calico version must match your Kubernetes version, or it will fail. Check the Calico/Kubernetes compatibility matrix at https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/kubernetes/requirements

6.5 Authorize the apiserver to access the kubelet

Use case: commands such as kubectl logs.

```

cat > apiserver-to-kubelet-rbac.yaml << EOF

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes

EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml

```

7. Add new Worker Nodes

7.1 Copy the deployed files to the new nodes

On the Master node, copy the Worker Node files to the new nodes 192.168.0.136 and 192.168.0.137:

```

scp -r /opt/kubernetes/ root@192.168.0.136:/opt/

scp -r /etc/systemd/system/{kubelet,kube-proxy}.service root@192.168.0.136:/etc/systemd/system/

scp -r /opt/kubernetes/ root@192.168.0.137:/opt/

scp -r /etc/systemd/system/{kubelet,kube-proxy}.service root@192.168.0.137:/etc/systemd/system/

```

7.2 Delete the kubelet certificate and kubeconfig files (on every worker node):

```

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig

rm -f /opt/kubernetes/ssl/kubelet*

```

Note:

These files are generated automatically when a certificate request is approved; they differ per node and must be deleted.

7.3 Change the node name (on every worker node, set a distinct name; they just must not clash)

```

vi /opt/kubernetes/cfg/kubelet.conf

--hostname-override=k8s-node1

vi /opt/kubernetes/cfg/kube-proxy-config.yml

hostnameOverride: k8s-node1

```
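Since k8s-master1 appears exactly once in each of the two files, the change can also be made with a single sed per node; an example for k8s-node1 (use k8s-node2 on the other node):

```
sed -i 's/k8s-master1/k8s-node1/' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml
```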

7.4 Start and enable at boot

```

systemctl daemon-reload

systemctl start kubelet kube-proxy

systemctl enable kubelet kube-proxy

```

7.5 On the Master, approve the new nodes' kubelet certificate requests

```

kubectl get csr

```

![image-1676473073079](https://zmzycc.top/upload/2023/02/image-1676473073079.png)

```

kubectl certificate approve node-csr-qy50-VsltxRW2y0IPA5nwD5vyXhkgWVHPcavhHQQx08

```
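If several CSRs are pending at once, they can be approved in one pass (standard kubectl; approve blindly like this only in a lab environment):

```
kubectl get csr -o name | xargs kubectl certificate approve
```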

![image-1676472527301](https://zmzycc.top/upload/2023/02/image-1676472527301.png)

7.6 Check node status (it takes a little while to become Ready, as some initialization images are pulled)

![image-1676473106410](https://zmzycc.top/upload/2023/02/image-1676473106410.png)

Note:

Repeat the same steps for any other nodes.

8. Deploy Dashboard and CoreDNS

```

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

```

Edit the default configuration:

vim recommended.yaml

```

spec:
  type: NodePort # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30007 # added
  selector:
    k8s-app: kubernetes-dashboard

```

Apply the manifest, then inspect the resources in the kubernetes-dashboard namespace:

```

kubectl apply -f recommended.yaml

```

![image-1676474786719](https://zmzycc.top/upload/2023/02/image-1676474786719.png)

Access URL: https://NodeIP:30007

The kubernetes-dashboard Pod runs on the k8s-master node (192.168.0.135) with NodePort 30007; browsing to https://192.168.0.135:30007/ shows the kubernetes-dashboard login page.

Next, create a service account bound to the default cluster-admin cluster role and obtain its token.

1) Create the account:

```

kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard

```

2) Grant permissions:

```

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

```

3) Get the account token:

```

kubectl get secrets -n kubernetes-dashboard | grep dashboard-admin

```

![image-1676475207640](https://zmzycc.top/upload/2023/02/image-1676475207640.png)

```

kubectl describe secrets dashboard-admin-token-kqhc7 -n kubernetes-dashboard

```

![image-1676475282166](https://zmzycc.top/upload/2023/02/image-1676475282166.png)
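The two lookups can also be combined into a one-liner that prints the token directly (standard kubectl/grep/awk; the secret name is resolved dynamically):

```
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}') | grep '^token'
```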

Enter the token above on the login page.

![image-1676475311329](https://zmzycc.top/upload/2023/02/image-1676475311329.png)

After logging in, you will see the following page:

![image-1676475385866](https://zmzycc.top/upload/2023/02/image-1676475385866.png)