
01 - Kubernetes: the steps kubeadm automates internally when building a cluster


Single control-plane cluster deployment

A single control-plane cluster runs one instance of each of the following on the control-plane node:

  • kube-apiserver, kube-controller-manager, kube-scheduler and etcd
  • kubeadm, kubelet, kube-proxy and docker

Each worker node runs one instance of each of the following:

  • kubelet, kube-proxy and docker

  • kubeadm

A Service load-balances traffic and forwards it to Pods using iptables rules. But when a cluster has a very large number of nodes and Pods, the number of iptables rules generated for the Services becomes huge as well, and the kernel struggles to match them. Fortunately there is ipvs, which simplifies this.
Every node in the cluster runs a kube-proxy process, which is responsible for generating the iptables or ipvs rules. It watches Services for changes; when a Service changes, the iptables or ipvs rules on the node change with it. Whether iptables or ipvs rules are generated (kubeadm defaults to iptables) is decided by kube-proxy's configuration file, which is stored in a ConfigMap:
	kubectl get cm -n kube-system
There is a kube-proxy ConfigMap in there; editing it switches kube-proxy to ipvs rules:
kubectl edit cm kube-proxy -n kube-system
Find the "mode" field, change it to "ipvs" and save. The change does not take effect immediately: delete all kube-proxy Pods, and the kube-proxy Pods that get recreated will use the new mode.

# delete the kube-proxy pods on every node
kubectl get pods --show-labels -n kube-system
kubectl delete pods -l k8s-app=kube-proxy -n kube-system

# pick a node and verify that the Services now use ipvs rules
apt install ipvsadm
ipvsadm -nL

The difference from iptables is that ipvs creates a dedicated (dummy) interface on every node and binds the corresponding Service addresses to it. You will also notice that there are now far fewer iptables rules.
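A quick way to see this on a node (a minimal sketch; in ipvs mode kube-proxy normally names the dummy interface kube-ipvs0):

ip addr show kube-ipvs0        # the ClusterIPs of the Services are bound here
iptables -t nat -S | wc -l     # compare the rule count with what it was in iptables mode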

Core logic

Pod controllers orchestrate applications that run as Pods;
A Service groups the replicas of one application into a logical set and exposes a unified entry point for them;
PV and PVC provide storage for applications;
ConfigMap and Secret configure containerized applications;
The Downward API gives an application a reflection mechanism for learning about its own environment (see the sketch below).
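A minimal Downward API sketch (names are illustrative; it injects the Pod's own metadata into environment variables via fieldRef):

apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
  namespace: default
spec:
  containers:
  - name: demo
    image: ikubernetes/demoapp:v1.0
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP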

kubeadm init workflow

Initializing the control plane with kubeadm consists of a number of steps:

Run Preflight Checks --> Kubelet Start --> Generate Certificates --> Generate Static Pod Manifests for the Control Plane --> Wait for the Control Plane to Be Healthy --> Upload kubeadm & kubelet Config to a ConfigMap --> Taint and Label the Master --> Generate a (by Default Random) Bootstrap Token --> Set Up the RBAC Authorization System --> Install DNS and Proxy Addons.
Phase name           Main function
preflight            Environment checks before initialization
kubelet-start        Generate the kubelet configuration and start (or restart) kubelet so the components can run as static Pods
certs                Create the digital certificates used by the cluster, for the ca, apiserver, front-proxy, etcd, etc.
kubeconfig           Generate kubeconfig files for the control-plane components and for the cluster administrator
control-plane        Generate static Pod manifests for the apiserver, controller-manager and scheduler
etcd                 Generate the static Pod manifest for the local etcd
upload-config        Store the kubeadm and kubelet configuration as ConfigMap objects in the cluster
upload-certs         Upload the certificates to kubeadm-certs
mark-control-plane   Mark the host as a control-plane node, i.e. a master
bootstrap-token      Generate the Bootstrap Token used to join nodes to the control plane
kubelet-finalize     Update kubelet-related configuration after TLS bootstrap
addon                Install the core add-ons coredns and kube-proxy for the cluster
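These phases can also be run individually with the kubeadm init phase subcommand (the exact phase list depends on the kubeadm version):

kubeadm init phase preflight           # run only the preflight checks
kubeadm init phase certs all           # generate only the certificates
kubeadm config images pull             # pre-pull the control-plane images before the real init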

Single control-plane cluster deployment demo

# Prerequisites
1. On every node: configure time synchronization (chrony) and hostname resolution, stop iptables or related services, and disable SELinux and the swap device;
2. On every node: install and start the docker container runtime and configure an image registry mirror (accelerator);
	the Aliyun container accelerator is recommended;
	the cgroup driver must be set to systemd;
	vim /etc/docker/daemon.json
	{
		"exec-opts":["native.cgroupdriver=systemd"],
		"log-driver":"json-file",
		"log-opts":{
			"max-size":"100m"
		},
		"storage-driver":"overlay2",
		"registry-mirrors":["https://mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn","https://registry.docker.com"]
	}

1. The first control-plane node

1. Install kubeadm, kubectl and kubelet;
	a specific version can be requested if needed, e.g. kubeadm-1.18.1-00;
    the Aliyun kubernetes package mirror is recommended;
    apt install kubeadm=1.18.1-00 kubelet=1.18.1-00 kubectl=1.18.1-00
2. Run kubeadm init to initialize the first control-plane node

kubeadm init --kubernetes-version v1.18.2 --control-plane-endpoint k8s-api.ilinux.io --token-ttl=0 --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address 172.29.9.1 --pod-network-cidr 10.244.0.0/16
	Set up kubectl
    Deploy the network plugin
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
spec:
  containers:
  - name: admin-box
    image: ikubernetes/admin-box:v1.0
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sleep 9999"]
kubectl delete pods pod-demo --force --grace-period=0 # Deleting a pod normally comes with a 30-second grace period; --force forces the deletion and --grace-period=0 sets the grace period to 0 seconds, i.e. delete immediately. Do not do this in production.
kubectl apply -f pod-demo.yaml 
kubectl exec -it pod-demo -- /bin/sh

kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0
kubectl get deploy # creates a Deployment object, which in turn automatically creates a ReplicaSet object
    NAME					READY		STATUS		RESTARTS	AGE
    demoapp-6c5d545684-h7jpw  1/1		  Running         0		   23s
    6c5d545684 is the ReplicaSet (pod-template) hash; h7jpw identifies the Pod

kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE                  NOMINATED NODE   READINESS GATES
demoapp-6c5d545684-h7jpw   1/1     Running   0          34s   10.244.1.4   k8s.node1.ilinux.io   <none>           <none>

curl 10.244.1.4
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-6c5d545684-h7jpw, ServerIP: 10.244.1.4!


kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 --dry-run=client -o yaml # dry-run on the client side only

kubectl get pods --show-labels

kubectl create service clusterip demoapp --tcp=80:80  # create a Service for the pods labelled app=demoapp; in --tcp=80:80 the first port is the Service port, the second the container (target) port
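To inspect what that command generates, it can be dry-run the same way as the deployment above; the resulting manifest looks roughly like the sketch in the comments:

kubectl create service clusterip demoapp --tcp=80:80 --dry-run=client -o yaml
# apiVersion: v1
# kind: Service
# metadata:
#   name: demoapp
# spec:
#   selector:
#     app: demoapp
#   ports:
#   - name: 80-80
#     port: 80
#     targetPort: 80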

kubectl get services  # a random CLUSTER-IP was assigned: 10.104.19.129

kubectl describe service demoapp

kubectl get pods 
kubectl exec -it pod-demo -- /bin/sh
# When k8s is deployed it runs a coredns service by default, which resolves names; every Service has a hostname, namely its service name (demoapp).

curl demoapp.default.svc.cluster.local.  # default: the namespace; svc.cluster.local.: fixed suffix; cluster.local.: the cluster domain, which defaults to cluster.local. when not specified
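The record can also be checked from inside a pod (a sketch; it assumes the pod's image ships nslookup, as the admin-box image used above does):

kubectl exec -it pod-demo -- nslookup demoapp.default.svc.cluster.local
# the answer should be the Service's CLUSTER-IP (10.104.19.129 in this demo)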

while true; do curl demoapp.default.svc.cluster.local.; sleep .5; done

kubectl scale deployment/demoapp --replicas=3  # scale the demoapp pod replicas up to 3
kubectl get pods

The curl requests above are now load-balanced onto the newly added replicas as well.

The IP of the Service we created is not bound to any network interface; it only shows up in iptables.
iptables -t nat -S # -S --> --list-rules, list all rules of the nat table. If no chain is given, it prints all rules in iptables-save format.


For a ClusterIP Service, this is how packets are forwarded through the Service:

For a NodePort Service, this is how packets are forwarded through the Service:

Differences between ClusterIP and NodePort Services:

Inbound: after passing through the Service, the destination address is rewritten once (DNAT).
Outbound: after passing through the Service, the source address is rewritten once (SNAT); a NodePort Service rewrites the source to the node's own IP, while a ClusterIP Service rewrites it to the cluster-internal Service address.
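For comparison, a NodePort Service for the same Deployment can be created with kubectl expose, which reuses the Deployment's selector (a sketch; the name demoapp-nodeport is illustrative and the node port is assigned from the 30000-32767 range unless set explicitly):

kubectl expose deployment demoapp --type=NodePort --port=80 --name=demoapp-nodeport
kubectl get svc demoapp-nodeport          # shows the assigned node port, e.g. 80:3xxxx/TCP
curl http://<any-node-ip>:<node-port>     # reachable from outside the cluster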

Example (changing the image version from the command line)

Upgrade and pause (canary release), verifying with the looping curl requests:
kubectl set image deployment demoapp demoapp="ikubernetes/demoapp:v1.1" && \
                 kubectl rollout pause deployments/demoapp

Check the paused rollout: # as in Figure 1 below, one pod has already been updated to v1.1, the others are still on v1.0
kubectl rollout status deployments/demoapp
Waiting for deployment "demoapp" rollout to finish: 1 out of 3 new replicas have been updated...

Resume the paused rollout: # as in Figure 2 below; this does not interrupt the service either
kubectl rollout resume deployments/demoapp
deployment.apps/demoapp resumed
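If the canary misbehaves, the rollout can be inspected and rolled back with the standard rollout subcommands (revision numbers are illustrative):

kubectl rollout history deployments/demoapp                 # list revisions
kubectl rollout undo deployments/demoapp                    # roll back to the previous revision
kubectl rollout undo deployments/demoapp --to-revision=1    # or to a specific revision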

Figure 1:

Figure 2:

Example 2: adding a readiness probe

apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-demo
  namespace: default
spec:
  containers:
  - name: demo
    image: ikubernetes/demoapp:v1.0
    imagePullPolicy: IfNotPresent
    readinessProbe:
      httpGet:
        path: '/readyz'
        port: 80
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 2
      periodSeconds: 2
      failureThreshold: 3
  restartPolicy: Always
Sometimes the application in a pod takes a long time to initialize and is not usable right after start. If the Service forwards traffic to such a pod, requests fail. A readiness probe prevents this: it makes sure the Service never schedules traffic to a pod that has not finished initializing. When the container starts, the probe defined here sends an HTTP request to /readyz; if the reply is OK, the container is considered ready. initialDelaySeconds: 15 means probing only begins 15 s after the container starts.

kubectl apply -f readiness-httpget-demo.yaml
kubectl get pods # the newly created pod is not Ready yet; that is the effect of the readiness probe's initialDelaySeconds: 15. After roughly 15 s the pod turns Ready.
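The transition can be watched live (plain kubectl; -w streams updates until interrupted, and the output shape below is illustrative):

kubectl get pods -w
# NAME                     READY   STATUS    RESTARTS   AGE
# readiness-httpget-demo   0/1     Running   0          5s
# readiness-httpget-demo   1/1     Running   0          20s   <- ready after ~15 s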

nginx-exporter

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
  namespace: default
data:
  default.conf: |
    server {
      listen 80;
      server_name localhost;
      location / {
        root /usr/share/nginx/html;
        index index.php index.html index.htm;
      }
      location /stub_status {
        stub_status on;
        access_log off;
        allow 127.0.0.0/8;
        deny all;
      }
      error_page 500 502 503 504 /50x.html;
      location = /50x.html {
        root /usr/share/nginx/html;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-with-exporter
  name: nginx-with-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-with-exporter
  template:
    metadata:
      labels:
        app: nginx-with-exporter
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
        prometheus.io/path: "/metrics"
    spec:
      containers:
      - image: nginx:alpine
        name: nginx-with-exporter
        volumeMounts:
        - name: ngxconfs
          mountPath: /etc/nginx/conf.d/
          readOnly: true
      - image: nginx/nginx-prometheus-exporter:latest
        name: exporter
        args: ["-nginx.scrape-uri=http://127.0.0.1/stub_status"]
      volumes:
      - name: ngxconfs
        configMap:
          name: nginx-conf
          optional: false
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-with-exporter
  name: nginx-svc
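Once the Deployment is running, the exporter can be checked quickly (a sketch; it assumes the exporter's default listen port 9113, which matches the scrape annotations above):

kubectl port-forward deploy/nginx-with-exporter 9113:9113 &
curl -s http://127.0.0.1:9113/metrics | head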

grafana.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: prom
data:
  prometheus.yaml: |-
    {
      "apiVersion": 1,
      "datasources": [
        {
          "access": "proxy",
          "editable": true,
          "name": "prometheus",
          "orgId": 1,
          "type": "prometheus",
          "url": "http://prometheus.prom.svc.cluster.local.:9090",
          "version": 1
        }
      ]
    }
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: prom
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "3000"
spec:
  selector:
    app: grafana
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 32000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: prom
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      name: grafana
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:latest # with the :latest tag the default pull policy is Always (pull from the registry every time); here IfNotPresent is set so a local copy is reused if it exists. In general, if you always deploy :latest, choose Always so you never run a stale or tampered local image.
        imagePullPolicy: IfNotPresent
        ports:
        - name: grafana
          containerPort: 3000
        volumeMounts:
        - mountPath: /etc/grafana/provisioning/datasources
          name: grafana-datasources
          readOnly: false
      volumes:
      - name: grafana-datasources
        configMap:
          defaultMode: 420
          name: grafana-datasources
kubectl label nodes k8s-node1.ilinux.io <key>=<value>  # label node1; a label is given as a key=value pair
kubectl get nodes --show-labels
kubectl drain k8s-node3.ilinux.io # drain node3: evict its pods and mark it unschedulable
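After maintenance the node can be put back into service; cordon is the lighter alternative to drain (standard kubectl subcommands):

kubectl uncordon k8s-node3.ilinux.io   # allow scheduling on node3 again
kubectl cordon k8s-node3.ilinux.io     # only mark the node unschedulable, without evicting existing pods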


docker-compose: running WordPress behind an nginx proxy

version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
    - ./db/data:/var/lib/mysql/
    env_file:
    - ./db/env.list
    networks:
      webnet:
        aliases:
        - 'mysql'
    expose:
    - '3306'

  wp:
    image: wordpress:5-php7.2
    env_file:
    - ./wp/env.list
    networks:
      webnet:
        aliases:
        - 'wordpress'
    expose:
    - '80'
    ports:
    - '8080:80'
    depends_on:
    - db

  nginx:
    image: nginx:alpine
    volumes:
    - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      webnet:
        aliases:
        - 'www'
    expose:
    - '80'
    ports:
    - "80:80"
    depends_on:
    - db
    - wp

networks:
  webnet: {}
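Bringing the stack up and checking the proxy path (this assumes the env.list files and nginx/default.conf referenced above exist next to the compose file):

docker-compose up -d
docker-compose ps
curl -I http://localhost        # wordpress through the nginx proxy
curl -I http://localhost:8080   # wordpress exposed directly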

Prometheus and grafana for a microservice (Envoy) mesh

version: '3'
services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.13.1
    volumes:
    - "./front_envoy/envoy-config.yaml:/etc/envoy/envoy.yaml"
    networks:
      envoymesh:
        aliases:
        - front-envoy
        - front
    ports:
    - 8080:80
    - 9901:9901
  
  service_a_envoy:
    image: envoyproxy/envoy-alpine:v1.13.1
    volumes:
    - "./service_a/envoy-config.yaml:/etc/envoy/envoy.yaml"
    networks:
      envoymesh:
        aliases:
        - service_a_envoy
        - service-a-envoy
    ports:
    - 8786:8786
    - 8788:8788
  
  service_a:
    build: service_a/
    networks:
      envoymesh:
        aliases:
        - service_a
    ports:
    - 8789:8789
  
  service_b:
    build: service_b/
    networks:
      envoymesh:
        aliases:
        - service_b
        - service-b
    ports:
    - 8082:8082
  
  service_c_envoy:
    image: envoyproxy/envoy-alpine:v1.13.1
    volumes:
    - "./service_c/envoy-config.yaml:/etc/envoy/envoy.yaml"
    networks:
      envoymesh:
        aliases:
        - service_c_envoy
        - service-c-envoy
    ports:
    - 8790:8790
  
  service_c:
    build: service_c/
    networks:
      envoymesh:
        aliases:
        - service_c
        - service-c
    ports:
    - 8083:8083
  
  statsd_exporter:
    image: prom/statsd-exporter:latest
    networks:
      envoymesh:
        aliases:
        - statsd_exporter
    ports:
    - 9125:9125
    - 9102:9102
  
  prometheus:
    image: prom/prometheus
    volumes:
    - "./prometheus/config.yaml:/etc/prometheus.yaml"
    networks:
      envoymesh:
        aliases:
        - prometheus
    ports:
    - 9090:9090
    command: "--config.file=/etc/prometheus.yaml"

  grafana:
    image: grafana/grafana
    volumes:
    - "./grafana/grafana.ini:/etc/grafana/grafana.ini"
    - "./grafana/datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yaml"
    - "./grafana/dashboard.yaml:/etc/grafana/provisioning/dashboards/dashboard.yaml"
    - "./grafana/dashboard.json:/etc/grafana/provisioning/dashboards/dashboard.json"
    networks:
      envoymesh:
        aliases:
        - grafana
    ports:
    - 3000:3000

networks:
  envoymesh: {}

Orchestrating EFK with docker-compose

version: '3'

services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.13.1
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks: 
      - envoymesh
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9091"
  
  service_blue:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - myservice
          - blue
    environment:
      - SERVICE_NAME=blue
    expose:
      - "80"

  service_green:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - myservice
          - green
    environment:
      - SERVICE_NAME=green
    expose:
    - "80"
    
  service_red:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - myservice
          - red
    environment:
      - SERVICE_NAME=red
    expose:
      - "80"
  elasticsearch:
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.4.1"
    environment:
    - "ES_JAVA_OPTS=-xms1g -Xmx1g"
    - "discovery.type=single-node"
    networks:
      envoymesh:
        aliases:
          - es
    ports:
    - "9200:9200"
    volumes:
    - elasticsearch_data:/usr/share/elasticsearch/data

  kibana:
    image: "docker.elastic.co/kibana/kibana:7.4.1"
    networks:
      envoymesh:
        aliases:
          - kibana
          - kib
    ports:
    - "5601:5601"
  
  filebeat:
    image: "docker.elastic.co/beats/filebeat:7.4.1"
    networks:
      envoymesh:
        aliases:
          - filebeat
          - fb
    user: root
    volumes:
    - ./filebeat/filebeat.yaml:/usr/share/filebeat/filebeat.yml:ro
    - /var/lib/docker:/var/lib/docker:ro
    - /var/run/docker.sock:/var/run/docker.sock

volumes:
  elasticsearch_data:

networks:
  envoymesh: {}

Problems encountered when installing the cluster with kubeadm

master01:
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo?spm=a2c6h.25603864.0.0.4d4a4ccaWJCB9w
mv docker-ce.repo\?spm\=a2c6h.25603864.0.0.4d4a4ccaWJCB9w docker-ce.repo
vim kubernetes.repo  # https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    [kubernetes]
    name=Kubernetes Repo
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
    enabled=1
yum install -y docker-ce kubeadm kubelet kubectl  # fails with: NO KEY
Workaround: set gpgcheck=0 in kubernetes.repo, or import the signing key:
wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
rpm --import yum-key.gpg
Then install again and it works. If it still fails, also download and import rpm-package-key.gpg, then install again:
	wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
	rpm --import rpm-package-key.gpg
kubeadm supports local images and private docker registries; a proxy can be configured by editing the docker unit file:
vim /usr/lib/systemd/system/docker.service
	Environment="HTTPS_PROXY=http://www.ik8s.io:10080" # if you have no proxy, comment this line out
	Environment="NO_PROXY=127.0.0.0/8,172.16.0.0/16"
	
vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://mirrors.aliyun.com"]
}
systemctl daemon-reload
systemctl start docker
systemctl enable docker
docker info # check whether HTTPS_PROXY and NO_PROXY show up

docker generates a large number of iptables rules, and the bridge nf-call switches must be on so that bridged traffic is handed to iptables. Make sure both values are 1:
cat /proc/sys/net/bridge/bridge-nf-call-iptables
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
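If they are 0 they can be enabled persistently with the standard sysctl mechanism (the file name k8s.conf is just a convention):

modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system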

kubelet configuration

rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/usr/bin/kubelet
/usr/lib/systemd/system/kubelet.service

cat /etc/sysconfig/kubelet
	KUBELET_EXTRA_ARGS=

systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet # fails; the startup log in /var/log/messages explains why, see the figure below

The reason is that the kubelet has not been initialized yet (kubeadm init has not run), so it cannot start.
vim /etc/sysconfig/kubelet

kubeadm initialization

kubeadm init --help
--apiserver-advertise-address # the address the API server advertises; defaults to 0.0.0.0
--apiserver-bind-port int32 # the port the API server listens on; defaults to 6443
--cert-dir string  # certificate directory; defaults to /etc/kubernetes/pki
--config string  # path to a kubeadm configuration file; alternatively every option can be passed on the command line (see the sketch below)
--ignore-preflight-errors strings # errors found during the preflight checks that may be ignored
--image-repository
--kubernetes-version  # kubernetes version; defaults to stable-1, currently 1.22.4-0
--node-name
--pod-network-cidr # the network used for pod-to-pod traffic; flannel uses 10.244.0.0/16 by default
--service-cidr # the network used for Service addresses; defaults to 10.96.0.0/12
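The same settings used on the command line below can instead go into a --config file; a sketch for kubeadm v1.22 (API version kubeadm.k8s.io/v1beta3):

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.4
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12

# then:
kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap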

# initialization (first attempt), Figure 1
kubeadm init --kubernetes-version=v1.22.4 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

Running the command above fails and complains about the following:
1. docker and kubelet must be enabled to start on boot
	systemctl enable docker && systemctl enable kubelet
2. the swap partition must be turned off
swapoff -a

Alternatively, the swap error can be ignored at init time by adjusting the kubelet configuration:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false" # if the swap partition is still enabled, ignore it instead of raising an error
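To make the swap change survive a reboot, the fstab entry is usually commented out as well (a common pattern, shown as a sketch):

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out the swap line(s) in /etc/fstab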

# initialization (second attempt), Figure 2
kubeadm init --kubernetes-version=v1.22.4 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --image-repository=registry.aliyuncs.com/google_containers --ignore-preflight-errors=Swap # pull the base images (etcd/apiserver/controller-manager/pause/scheduler) from the Aliyun registry mirror; pulling from the default registry is slow and tends to time out
      
      
Initialization fails again, complaining that the kubelet is not running. Earlier the kubelet could not start because its init-generated config file was missing; kubeadm has now generated it (even though init itself did not succeed), so the kubelet can be started, but it still fails as shown in Figure 3 below. The fix is to add "exec-opts": ["native.cgroupdriver=systemd"] to the docker configuration.
cat /etc/docker/daemon.json
{
 "exec-opts": ["native.cgroupdriver=systemd"],
 "registry-mirrors": ["https://mirrors.aliyun.com"],
 "log-driver": "json-file",
 "log-opts": {
   "max-size": "100m"
  },
 "storage-driver": "overlay2",
 "storage-opts": [
   "overlay2.override_kernel_check=true"
  ]
}
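After editing daemon.json, docker has to be restarted and the driver verified before retrying the init (standard systemctl and docker commands):

systemctl daemon-reload
systemctl restart docker
docker info | grep -i cgroup   # should now report: Cgroup Driver: systemd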

At this point the initialization has to be redone. How?
kubeadm reset # clears everything the previous init attempt created: the occupied ports, the generated directories and files
kubeadm init --kubernetes-version=v1.22.4 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --image-repository=registry.aliyuncs.com/google_containers --ignore-preflight-errors=Swap # initialization succeeds
'''
[init] Using Kubernetes version: v1.22.4
[preflight] Running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.96.0.1 192.168.1.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [192.168.1.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [192.168.1.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.004704 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 2uhyx4.fe0t17wj6df58k58
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: coredns
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.130:6443 --token 2uhyx4.fe0t17wj6df58k58 \
	--discovery-token-ca-cert-hash sha256:31613d739cda08acf6bc6f899390bc69e2110ebedf2357ee15c6b6f98aed3874 
'''

# After init completes, k8s runs these basic components as pods; check the docker containers/images, Figure 4 below
docker images


# Check the status of the kubelet process, Figure 5 below: it tells us that a CNI (Container Network Interface) plugin is needed.
Kubernetes itself does not ship this network plugin; we have to install one ourselves: flannel, calico, canal, and so on.

Before using the cluster, the following setup is needed so that kubectl can talk to the apiserver. If you are root (using root directly is not recommended), simply run:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

A regular user needs sudo rights: edit /etc/sudoers, or create a file under /etc/sudoers.d/ granting the user sudo, then run the commands above.

Check the status of the core components:
kubectl get cs # cs = ComponentStatus; the scheduler's liveness probe reports connection refused, and ss -lnt shows the port is not listening either, Figure 6 below.
The fix is to comment out the --port=0 line in /etc/kubernetes/manifests/kube-scheduler.yaml and kube-controller-manager.yaml (as sketched below).
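One way to comment the flag out (editing the manifests by hand works just as well; the kubelet recreates the static pods automatically once the files change):

sed -i 's/- --port=0/# &/' /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i 's/- --port=0/# &/' /etc/kubernetes/manifests/kube-controller-manager.yaml
kubectl get cs   # scheduler and controller-manager should report Healthy after a short wait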

kubectl get nodes # the master01 node is still NotReady: the network plugin is missing, Figure 7 below

Figure 1:

Figure 2:

Figure 3:

Figure 4:

Figure 5:

Figure 6:

Figure 7:

Deploying the flannel network plugin

The flannel plugin lives on GitHub: https://github.com/flannel-io/flannel
The documented deployment method is to apply the flannel manifest, which pulls the image and runs it as pods:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
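Once applied, the flannel pods and the node status can be checked (depending on the manifest version the pods land in kube-system or in a dedicated kube-flannel namespace):

kubectl get pods -A | grep flannel
kubectl get nodes   # master01 should switch from NotReady to Ready once flannel is up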

Joining worker01 and worker02 to the cluster

yum install -y docker-ce-19.03.3 kubeadm kubelet kubectl # fails, Figure 1 below
# download the package signing keys
wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
rpm --import yum-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import rpm-package-key.gpg
# install again
yum install -y docker-ce-19.03.3 kubeadm kubelet kubectl
scp /etc/docker/daemon.json worker01:/etc/docker/
scp /etc/docker/daemon.json worker02:/etc/docker/
scp /etc/sysconfig/kubelet worker02:/etc/sysconfig/
systemctl enable docker kubelet && systemctl start docker kubelet # kubelet does not run on the worker nodes; /var/log/messages shows it needs config.yaml, but that file is only generated when the worker joins the cluster, so ignore it, run the join command below, and kubelet recovers to the running state on its own.


kubeadm join 192.168.1.130:6443 --token 2uhyx4.fe0t17wj6df58k58 --discovery-token-ca-cert-hash sha256:31613d739cda08acf6bc6f899390bc69e2110ebedf2357ee15c6b6f98aed3874 --ignore-preflight-errors=Swap 
The flannel and kube-proxy base images are pulled and run as pods. # Figure 2 below

If kubeadm join fails, run kubeadm reset --force (-f) to reset the node, then join again.

Figure 1:

Figure 2:

