
K8s Cluster and Dashboard Deployment

What is Kubernetes?

Kubernetes is a portable and extensible open-source platform for managing containerized applications and services. It enables automated deployment and scaling of applications. In Kubernetes, the containers that make up an application are grouped into logical units so that they are easier to manage and discover.

The node roles in a Kubernetes cluster are:

Master Node:
1. The control node of the cluster: it schedules and manages the cluster, and accepts operation requests from users outside the cluster;
2. A Master Node consists of the API Server, Scheduler, ClusterState Store (the etcd database) and Controller Manager Server.

Worker Node:
1. A worker node of the cluster: it runs the user's application containers;
2. A Worker Node contains the kubelet, kube-proxy and a Container Runtime.

Kubernetes Cluster Component Functions

Master Components

1. API Server:

The single external interface of K8s. It exposes an HTTP/HTTPS RESTful API through which all requests must pass; it receives, validates and responds to all REST requests, persists the resulting state in etcd, and is the single entry point for creating, reading, updating and deleting every resource.

2. etcd:

Stores the cluster's configuration and the state of all resources. When data changes, etcd quickly notifies the relevant K8s components. etcd is an independent service component and is not itself part of the K8s cluster. In production, etcd should run as a cluster to ensure availability.

3. Controller Manager:

Manages the cluster's resources and keeps them in the desired state. The Controller Manager is composed of multiple controllers, including the replication controller, endpoints controller, namespace controller and serviceaccounts controller. The controllers mainly provide lifecycle functions and API business logic.

4. Scheduler:

Handles resource scheduling and decides which Node each Pod runs on. When scheduling, the Scheduler analyzes the cluster topology, the current load of each node, and the application's requirements for high availability, performance and so on.

 

Node Components

1. Kubelet

The kubelet is the node's agent. Once the Scheduler has decided to run a Pod on a given Node, it sends the Pod's concrete configuration (image, volumes, etc.) to that node's kubelet, which creates and runs the containers accordingly and reports their status back to the master.

2. Container Runtime

Every Node must provide a container runtime environment, which is responsible for pulling images and running containers.

3. Kube-proxy:

A service logically represents a group of backend Pods, and external clients access Pods through the service. It is kube-proxy that forwards requests received by a service on to the Pods. Every Node runs the kube-proxy service, which forwards TCP/UDP traffic addressed to a service to the backend containers.

What is a Pod?

Kubernetes does not run containers directly; they are wrapped in an abstract resource object called a Pod, which is the smallest schedulable unit in K8s. A Pod can encapsulate one or more containers. Containers in the same Pod share the network namespace and storage resources and can talk to each other directly over the local loopback interface, while remaining isolated from each other in the Mount, User and PID namespaces.
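As a quick illustration of the shared network namespace, here is a minimal sketch of a two-container Pod (the pod name, the public nginx/alpine images, and the heredoc invocation are illustrative, not from the original walkthrough); the second container can reach the first over 127.0.0.1:

# create a two-container pod; both containers share one network namespace
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web                      # serves HTTP on port 80
    image: nginx:alpine
  - name: probe                    # only sleeps, used to probe the web container
    image: alpine
    command: ["sh", "-c", "sleep 30000"]
EOF

# once both containers are running, nginx is reachable from "probe" via loopback
kubectl exec shared-netns-demo -c probe -- wget -qO- http://127.0.0.1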

 

Pod Creation and Scheduling Flow

1. First, the user creates a pod from a YAML file; the request goes to the apiserver, which writes the attributes from the YAML into etcd.

2. The watch mechanism is triggered and pod creation begins; the information is passed to the scheduler, which uses its scheduling algorithms to select a suitable node and reports the chosen node back to the apiserver, which writes the node binding into etcd.

3. Again through the watch mechanism, the kubelet on that node is called with the pod specification and triggers a docker run to create the containers. Once created, the result is reported back to the kubelet; the kubelet sends the pod's status to the apiserver, and the apiserver writes the pod status into etcd.
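This flow can be observed on a live cluster through the events the components record. A small sketch (the pod name net-test1 is borrowed from the test pods created later in this article):

# show the Scheduled / Pulling / Created / Started events for a pod
kubectl describe pod net-test1 | sed -n '/^Events:/,$p'
# or query the event objects for that pod directly
kubectl get events --field-selector involvedObject.name=net-test1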



Cluster Deployment

Environment:

OS: CentOS 7.6.1810 (Core)

K8s version: 1.21.x

Docker version: 19.03.15

The virtual servers are planned as follows (the original planning table was an image; this list is reconstructed from the hosts file and the commands used later in this article):

172.16.1.190  k8s-master01   master node
172.16.1.191  k8s-master02   master node
172.16.1.192  worker node
172.16.1.193  worker node
172.16.1.194  k8s-etcd01     etcd node
172.16.1.195  etcd node
172.16.1.196  etcd node
172.16.1.174  k8s-harbor     Harbor registry
172.16.1.97   load balancer (backup)
172.16.1.98   load balancer (master), VIP 172.16.1.96

Notes:

1. Configure /etc/hosts on every node according to the plan. The deployment tooling is reused on the master01 node, so run ssh-keygen on master01 and ssh-copy-id the key to all the other nodes (see the sketch after this list).

2. Also set up chrony time synchronization on all nodes.

3. Install docker on all master and node machines.
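A minimal sketch of notes 1 and 2, assuming root SSH access and the node IPs from the plan above:

# on master01: generate a key pair once, then push it to every other node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
NODES="172.16.1.191 172.16.1.192 172.16.1.193 172.16.1.194 172.16.1.195 172.16.1.196"
for ip in ${NODES}; do
    ssh-copy-id root@${ip}       # asks for the password once per node
done

# verify that chrony is actually synchronizing on each node
for ip in ${NODES}; do
    ssh root@${ip} 'chronyc tracking | head -3'
done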

1. Deploy Harbor (this installation uses HTTPS)

# unpack the installer
[root@k8s-harbor tools]# tar xzvf harbor-offline-installer-v2.3.2.tgz

[root@k8s-harbor ~]# mkdir -p /key/harbor/certs/
[root@k8s-harbor ~]# cd /key/harbor/certs/

# generate the key and issue a self-signed certificate
[root@k8s-harbor certs]# openssl genrsa -out harbor-ca.key
[root@k8s-harbor certs]# openssl req -x509 -new -nodes -key harbor-ca.key -subj "/CN=magedu.gfeng.net" -days 7120 -out harbor-ca.crt

[root@k8s-harbor certs]# ls
harbor-ca.crt  harbor-ca.key
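Before wiring the certificate into Harbor, it can be inspected with openssl (a routine sanity check, not part of the original walkthrough):

# print the subject CN and the validity window of the new certificate
openssl x509 -in harbor-ca.crt -noout -subject -dates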

Edit the configuration file:

[root@k8s-harbor tools]# vim harbor/harbor.yml

The configuration is as follows:

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for Nginx
  certificate: /key/harbor/certs/harbor-ca.crt
  private_key: /key/harbor/certs/harbor-ca.key

# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files on dir
#   dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember to change the admin password from UI after launching Harbor.
harbor_admin_password: 123456

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 100
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 900

# The default data volume
data_volume: /data

Install Harbor:

[root@k8s-harbor harbor]# ./install.sh --with-trivy

After the installation completes, visit https://172.16.1.174 to test.

2. Sync the certificate to the clients and verify

[root@k8s-master01 ~]# mkdir -p /etc/docker/certs.d/magedu.gfeng.net/
[root@k8s-harbor certs]# scp harbor-ca.crt [email protected]:/etc/docker/certs.d/magedu.gfeng.net/
[root@k8s-master01 magedu.gfeng.net]# ls
harbor-ca.crt

Restart docker and verify:

[root@k8s-master01 magedu.gfeng.net]# docker login magedu.gfeng.net

Repeat the same steps on master02 and on the node machines. This could of course also be scripted; a sketch follows.
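A minimal sketch of that distribution script, run on the harbor node where harbor-ca.crt lives (the node list is an assumption based on the plan above):

# push the harbor CA cert to every master and node, then restart docker there
CERT=/key/harbor/certs/harbor-ca.crt
for ip in 172.16.1.190 172.16.1.191 172.16.1.192 172.16.1.193; do
    ssh root@${ip} 'mkdir -p /etc/docker/certs.d/magedu.gfeng.net'
    scp ${CERT} root@${ip}:/etc/docker/certs.d/magedu.gfeng.net/
    ssh root@${ip} 'systemctl restart docker'
done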

3. Deploy the haproxy + keepalived high-availability load balancer (deployed in an earlier article, so not demonstrated here)

# configure haproxy
[root@lb ~]# vim /etc/haproxy/haproxy.cfg

# add the following
frontend main
    bind 172.16.1.96:6443
    default_backend k8s

backend k8s
    balance roundrobin
    server server1 172.16.1.190:6443 check
    server server2 172.16.1.191:6443 check

# after configuring, restart haproxy
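A quick validate-restart-verify sketch on the LB node (assuming haproxy runs this frontend in TCP mode for TLS passthrough, and that this keepalived node currently holds the VIP 172.16.1.96):

# validate the config, restart, and confirm haproxy listens on the apiserver VIP port
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl restart haproxy
ss -lnt | grep 6443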

K8s deployment:

1. On the master01 node

# install ansible
[root@k8s-master01 ~]# yum install ansible -y

# download the deployment tool and components
[root@k8s-master01 ~]# export release=3.1.0
[root@k8s-master01 ~]# curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
[root@k8s-master01 ~]# chmod a+x ezdown

# edit the download script
[root@k8s-master01 ~]# vim ezdown

# default settings, can be overridden by cmd line options, see usage
DOCKER_VER=19.03.15
KUBEASZ_VER=3.1.0
K8S_BIN_VER=v1.21.0

# download everything with the tool script
./ezdown -D

Once the script finishes successfully, all files (the kubeasz code, binaries and offline images) are laid out under /etc/kubeasz.
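A quick sanity check of the download (assuming ezdown 3.1.0 places binaries under bin and offline image tarballs under down, as its defaults suggest):

# the kubeasz playbooks, binaries and offline images should now be present
ls /etc/kubeasz
ls /etc/kubeasz/bin /etc/kubeasz/down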

2. Generate the ansible hosts file

[root@k8s-master01 ~]# cd /etc/kubeasz/
[root@k8s-master01 kubeasz]# ./ezctl new k8s-001

# edit the generated hosts file
[root@k8s-master01 kubeasz]# vim clusters/k8s-001/hosts

Its contents:

# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
172.16.1.194
172.16.1.195
172.16.1.196

# master node(s)
[kube_master]
172.16.1.190
172.16.1.191

# work node(s)
[kube_node]
172.16.1.192
172.16.1.193

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#172.16.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]
172.16.1.97 LB_ROLE=backup EX_APISERVER_VIP=172.16.1.96 EX_APISERVER_PORT=8443
172.16.1.98 LB_ROLE=master EX_APISERVER_VIP=172.16.1.96 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
#172.16.1.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"

# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"

# NodePort Range
NODE_PORT_RANGE="30000-32767"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="magedu.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
@H_404_0@ 

@H_404_0@#编辑生成config.yml文件

@H_404_0@[root@k8s-master01 kubeasz]# vim /etc/kubeasz/clusters/k8s-001/config.yml

@H_404_0@内容如下:

############################
# prepare
############################
# optionally install system packages offline (offline|online)
INSTALL_SOURCE: "online"

# optionally apply OS security hardening: github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false

# NTP servers [important: clocks across the cluster must be in sync]
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"

# networks allowed to sync time from the internal NTP, e.g. "10.0.0.0/8"; default allows all
local_network: "0.0.0.0/0"


############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"


############################
# role:etcd
############################
# a separate wal directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable registry mirrors
ENABLE_MIRROR_REGISTRY: true

# [containerd] base (pause) container image
SANDBOX_IMAGE: "easzlab/pause-amd64:3.4.1"

# [containerd] container persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker] enable the Restful API
ENABLE_REMOTE_API: false

# [docker] trusted HTTP registries
INSECURE_REG: '["127.0.0.1/8","172.16.1.174"]'


############################
# role:kube-master
############################
# certificate hosts for the k8s master nodes; extra IPs and domains can be added (e.g. a public IP and domain)
MASTER_CERT_HOSTS:
  - "10.1.1.1"
  - "k8s.test.io"
  #- "www.test.com"

# pod subnet mask length on each node (determines the maximum number of pod IPs per node)
# if flannel uses the --kube-subnet-mgr flag, it reads this setting to allocate each node's pod subnet
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# Kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# maximum number of pods per node
MAX_PODS: 210

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "yes"

# upstream k8s advises against enabling system-reserved casually, unless long-term monitoring
# has shown you the system's actual resource usage; the reservation should also grow with uptime,
# see templates/kubelet-config.yaml.j2 for the values
# the system reservation assumes a 4c/8g VM with a minimal OS install; increase it on high-end physical machines
# also, apiserver and friends briefly use a lot of resources during cluster install; reserve at least 1g of memory
SYS_RESERVED_ENABLED: "no"

# haproxy balance mode
BALANCE_ALG: "roundrobin"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.13.0-amd64"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"

# [flannel] offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"

# ------------------------------------------- calico
# [calico] setting CALICO_IPV4POOL_IPIP to "off" can improve network performance; see docs/setup/calico.md for the restrictions
CALICO_IPV4POOL_IPIP: "Always"

# [calico] host IP used by calico-node; bgp neighbors are established over this address; set manually or auto-detect
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico] network backend: brird, vxlan, none
CALICO_NETWORKING_BACKEND: "brird"

# [calico] supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.15.3"

# [calico] calico major version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# [calico] offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"

# ------------------------------------------- cilium
# [cilium] number of etcd nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1

# [cilium] image version
cilium_ver: "v1.4.1"

# [cilium] offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"

# ------------------------------------------- kube-ovn
# [kube-ovn] node running the OVN DB and OVN Control Plane; defaults to the first master node
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"

# ------------------------------------------- kube-router
# [kube-router] public clouds have restrictions and generally need ipinip always on; in your own environment this can be "subnet"
OVERLAY_TYPE: "full"

# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: "true"

# [kube-router] kube-router image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"

# [kube-router] kube-router offline image tarballs
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"


############################
# role:cluster-addon
############################
# coredns auto-install
dns_install: "no"
corednsVer: "1.8.0"
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.17.0"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# metric server auto-install
metricsserver_install: "no"
metricsVer: "v0.3.6"

# dashboard auto-install
dashboard_install: "no"
dashboardVer: "v2.2.0"
dashboardMetricsScraperVer: "v1.0.6"

# ingress auto-install
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "9.12.3"

# prometheus auto-install
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"

# nfs-provisioner auto-install
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.1"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

############################
# role:harbor
############################
# harbor version, full version number
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true

Note: all of the auto-install switches are set to "no", and ENABLE_LOCAL_DNS_CACHE is set to false.

3. Deploy the cluster

First:

[root@k8s-master01 kubeasz]# vim playbooks/01.prepare.yml    # disable the load-balancer initialization

# [optional] to synchronize system time of nodes with 'chrony'
- hosts:
  - kube_master
  - kube_node
  - etcd
  - ex_lb
  - chrony

Delete the "- ex_lb" and "- chrony" entries, leaving kube_master, kube_node and etcd.
@H_404_0@ 

@H_404_0@开始集群初始化安装:

@H_404_0@ 

@H_404_0@[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 01              初始化集群

@H_404_0@[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 02             部署etcd集群

Verify the etcd nodes by writing a small script:

[root@k8s-etcd01 server]# vim etcd.sh

#!/bin/sh
# check the health of every etcd member over TLS
export NODE_IPS="172.16.1.194 172.16.1.195 172.16.1.196"
for ip in ${NODE_IPS}; do
    ETCDCTL_API=3 /opt/kube/bin/etcdctl \
        --endpoints=https://${ip}:2379 \
        --cacert=/etc/kubernetes/ssl/ca.pem \
        --cert=/etc/kubernetes/ssl/etcd.pem \
        --key=/etc/kubernetes/ssl/etcd-key.pem \
        endpoint health
done

[root@k8s-etcd01 server]# chmod +x etcd.sh
[root@k8s-etcd01 server]# bash etcd.sh

If every endpoint reports "is healthy: successfully committed proposal", etcd is healthy; otherwise something is wrong.

[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 03    # deploy the container runtime
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 04    # deploy the master nodes

# after the masters are deployed, verify
[root@k8s-master01 kubeasz]# kubectl get node

[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 05    # deploy the worker nodes

# after the nodes are deployed, verify
[root@k8s-master01 kubeasz]# kubectl get node

[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 06    # deploy the network plugin

PLAY [kube_master,kube_node] ***************************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************
ok: [172.16.1.190]
ok: [172.16.1.191]
ok: [172.16.1.193]
ok: [172.16.1.192]

TASK [calico : create the required directories on the nodes] ******************************************************************************************
ok: [172.16.1.191] => (item=/etc/cni/net.d)
ok: [172.16.1.193] => (item=/etc/cni/net.d)
ok: [172.16.1.192] => (item=/etc/cni/net.d)
ok: [172.16.1.190] => (item=/etc/cni/net.d)
changed: [172.16.1.191] => (item=/etc/calico/ssl)
changed: [172.16.1.192] => (item=/etc/calico/ssl)
changed: [172.16.1.193] => (item=/etc/calico/ssl)
changed: [172.16.1.190] => (item=/etc/calico/ssl)
ok: [172.16.1.191] => (item=/opt/kube/images)
ok: [172.16.1.193] => (item=/opt/kube/images)
ok: [172.16.1.192] => (item=/opt/kube/images)
ok: [172.16.1.190] => (item=/opt/kube/images)

TASK [create the calico certificate signing request] ***************************************************************************************************
changed: [172.16.1.190]
ok: [172.16.1.191]
ok: [172.16.1.192]
ok: [172.16.1.193]

TASK [create the calico certificate and private key] ***************************************************************************************************
changed: [172.16.1.191]
changed: [172.16.1.190]
changed: [172.16.1.193]
changed: [172.16.1.192]

TASK [distribute the calico certificates] **************************************************************************************************************
changed: [172.16.1.191] => (item=ca.pem)
changed: [172.16.1.193] => (item=ca.pem)
changed: [172.16.1.192] => (item=ca.pem)
changed: [172.16.1.190] => (item=ca.pem)
changed: [172.16.1.191] => (item=calico.pem)
changed: [172.16.1.193] => (item=calico.pem)
changed: [172.16.1.192] => (item=calico.pem)
changed: [172.16.1.190] => (item=calico.pem)
changed: [172.16.1.191] => (item=calico-key.pem)
changed: [172.16.1.193] => (item=calico-key.pem)
changed: [172.16.1.192] => (item=calico-key.pem)
changed: [172.16.1.190] => (item=calico-key.pem)

TASK [get calico-etcd-secrets info] ********************************************************************************************************************
changed: [172.16.1.190]

TASK [create calico-etcd-secrets] **********************************************************************************************************************
changed: [172.16.1.190]

TASK [check whether the offline calico images are already downloaded] **********************************************************************************
changed: [172.16.1.190]

TASK [calico : try pushing the offline docker images (failures can be ignored)] ************************************************************************
changed: [172.16.1.191] => (item=pause.tar)
changed: [172.16.1.193] => (item=pause.tar)
changed: [172.16.1.190] => (item=pause.tar)
changed: [172.16.1.192] => (item=pause.tar)
changed: [172.16.1.193] => (item=calico_v3.15.3.tar)
changed: [172.16.1.190] => (item=calico_v3.15.3.tar)
changed: [172.16.1.191] => (item=calico_v3.15.3.tar)
changed: [172.16.1.192] => (item=calico_v3.15.3.tar)

TASK [get the push status of the offline calico images] ************************************************************************************************
changed: [172.16.1.191]
changed: [172.16.1.190]
changed: [172.16.1.192]
changed: [172.16.1.193]

TASK [import the offline calico images (failures can be ignored)] **************************************************************************************
changed: [172.16.1.190] => (item=pause.tar)
changed: [172.16.1.193] => (item=pause.tar)
changed: [172.16.1.192] => (item=pause.tar)
changed: [172.16.1.191] => (item=pause.tar)
changed: [172.16.1.190] => (item=calico_v3.15.3.tar)
changed: [172.16.1.193] => (item=calico_v3.15.3.tar)
changed: [172.16.1.191] => (item=calico_v3.15.3.tar)
changed: [172.16.1.192] => (item=calico_v3.15.3.tar)

TASK [configure the calico DaemonSet yaml file] ********************************************************************************************************
changed: [172.16.1.190]

TASK [run the calico network] **************************************************************************************************************************
changed: [172.16.1.190]

TASK [calico : remove the default cni config] **********************************************************************************************************
changed: [172.16.1.190]
changed: [172.16.1.191]
changed: [172.16.1.192]
changed: [172.16.1.193]

TASK [download the calicoctl client] *******************************************************************************************************************
changed: [172.16.1.193] => (item=calicoctl)
changed: [172.16.1.192] => (item=calicoctl)
changed: [172.16.1.191] => (item=calicoctl)
changed: [172.16.1.190] => (item=calicoctl)

TASK [prepare the calicoctl config file] ***************************************************************************************************************
changed: [172.16.1.192]
changed: [172.16.1.193]
changed: [172.16.1.191]
changed: [172.16.1.190]

TASK [poll and wait for calico-node to run; time depends on image download speed] **********************************************************************
changed: [172.16.1.190]
changed: [172.16.1.193]
changed: [172.16.1.192]
changed: [172.16.1.191]

PLAY RECAP *********************************************************************************************************************************************
172.16.1.190 : ok=17 changed=16 unreachable=0 failed=0 skipped=51 rescued=0 ignored=0
172.16.1.191 : ok=12 changed=10 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0
172.16.1.192 : ok=12 changed=10 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0
172.16.1.193 : ok=12 changed=10 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0

# verify calico
[root@k8s-master01 kubeasz]# calicoctl node status

4. Create containers to test network connectivity

[root@k8s-master01 kubeasz]# docker pull alpine
[root@k8s-master01 kubeasz]# docker tag alpine magedu.gfeng.net/magedu/alpine
[root@k8s-master01 kubeasz]# docker push magedu.gfeng.net/magedu/alpine

# create pods to test that the pod network works
[root@k8s-master01 kubeasz]# kubectl run net-test1 --image=magedu.gfeng.net/magedu/alpine:latest sleep 30000
[root@k8s-master01 kubeasz]# kubectl run net-test2 --image=magedu.gfeng.net/magedu/alpine:latest sleep 30000
[root@k8s-master01 kubeasz]# kubectl get pod -A -o wide
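With both pods Running, cross-node connectivity can be checked by pinging one pod's IP from the other. A sketch (10.200.36.65 is an example address from the CLUSTER_CIDR; substitute the real IP shown by kubectl get pod -o wide):

# ping net-test2's pod IP from inside net-test1
kubectl exec net-test1 -- ping -c 3 10.200.36.65
# check outbound connectivity from the pod network (by IP; names need coredns, deployed next)
kubectl exec net-test1 -- ping -c 3 223.6.6.6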

5. Deploy coredns

Upload kubernetes.tar.gz to the master01 node and unpack it:

[root@k8s-master01 kubeasz]# cd /server/kubernetes/cluster/addons/dns/coredns
[root@k8s-master01 kubeasz]# ls

[root@k8s-master01 kubeasz]# cp coredns.yaml.base /root/coredns-n56.yaml
[root@k8s-master01 kubeasz]# cd ~

# first pull coredns (version 1.8.0), then tag and push it to the registry
[root@k8s-master01 ~]# docker pull coredns/coredns:1.8.0
[root@k8s-master01 ~]# docker tag coredns/coredns:1.8.0 magedu.gfeng.net/magedu/coredns:1.8.0
[root@k8s-master01 ~]# docker push magedu.gfeng.net/magedu/coredns:1.8.0

# edit the configuration file
[root@k8s-master01 ~]# vim coredns-n56.yaml

Find and change the following items:

data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes magedu.local in-addr.arpa ip6.arpa {   # change to the CLUSTER_DNS_DOMAIN set in the hosts file
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . 223.6.6.6 {   # change to an external upstream DNS address
            max_concurrent 1000

In the Deployment:

      - name: coredns
        image: magedu.gfeng.net/magedu/coredns:1.8.0   # change the image to the one tagged and pushed to the registry

        resources:
          limits:
            memory: 256Mi   # test value used here; size it for your own environment

In the Service:

spec:
  type: NodePort   # add this option
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.2   # an address inside the SERVICE_CIDR (10.100.0.0/16) configured in the hosts file
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
    nodePort: 30009   # exposed port; the metrics endpoint is reached through it later

After making these changes, save the file and apply it:

[root@k8s-master01 ~]# kubectl apply -f coredns-n56.yaml

# verify that coredns is running
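A verification sketch (the kube-dns label and service name come from the upstream coredns manifest this file is based on):

# the coredns pod should be Running in kube-system
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
# the kube-dns service should hold clusterIP 10.100.0.2 and expose nodePort 30009
kubectl get svc -n kube-system kube-dns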

# verify that pods can resolve domain names:
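One way to check, reusing the net-test1 pod created earlier (the domain names are examples):

# internal service discovery via coredns
kubectl exec net-test1 -- nslookup kubernetes.default.svc.magedu.local
# external resolution through the configured forwarder
kubectl exec net-test1 -- ping -c 2 www.baidu.com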

# check the coredns metrics via the exposed NodePort
http://172.16.1.193:30009/metrics


6. Deploy the dashboard

Pull the images, tag them and push them to the registry:

[root@k8s-master01 ~]# docker pull kubernetesui/dashboard:v2.3.1
[root@k8s-master01 ~]# docker tag kubernetesui/dashboard:v2.3.1 magedu.gfeng.net/magedu/dashboard:v2.3.1
[root@k8s-master01 ~]# docker push magedu.gfeng.net/magedu/dashboard:v2.3.1
[root@k8s-master01 ~]# docker pull kubernetesui/metrics-scraper:v1.0.6
[root@k8s-master01 ~]# docker tag kubernetesui/metrics-scraper:v1.0.6 magedu.gfeng.net/magedu/metrics-scraper:v1.0.6
[root@k8s-master01 ~]# docker push magedu.gfeng.net/magedu/metrics-scraper:v1.0.6
[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

# edit the downloaded manifest
[root@k8s-master01 ~]# mv recommended.yaml dashboard-v2.3.1.yaml
[root@k8s-master01 ~]# vim dashboard-v2.3.1.yaml

Change and add the following:

In the kubernetes-dashboard Service:

spec:
  type: NodePort   # add this option
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30002   # exposed access port
  selector:

In the kubernetes-dashboard Deployment:

spec:
      containers:
        - name: kubernetes-dashboard
          image: magedu.gfeng.net/magedu/dashboard:v2.3.1   # change the image to the one pushed to the registry

In the dashboard-metrics-scraper Deployment:

spec:
      containers:
        - name: dashboard-metrics-scraper
          image: magedu.gfeng.net/magedu/metrics-scraper:v1.0.6   # change the image to the one pushed to the registry

@H_404_0@ 

@H_404_0@配置完成后,保存,然后执行

@H_404_0@[root@k8s-master01 ~]#  kubectl apply -f dashboard-v2.3.1.yaml

@H_404_0@
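A quick check that everything came up (the kubernetes-dashboard namespace is the one used by the upstream recommended.yaml):

# both dashboard pods should be Running, and the service should show nodePort 30002
kubectl get pods -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard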

Open a browser and visit:

https://172.16.1.192:30002

The login page asks for a token, so one more YAML manifest is needed to create it.

Upload admin-user.yml to the master node and apply it to generate the token, as sketched below.
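A sketch of a typical admin-user.yml, following the upstream dashboard documentation (an assumption; the author's exact file is only shown as a screenshot and may differ). It creates a ServiceAccount bound to the cluster-admin role and reads its login token:

# create the service account and binding, then print the bearer token
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# k8s 1.21 still auto-creates a token secret for the service account
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d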


# visit the web page again and log in with the token; the dashboard UI appears

That completes the deployment.

