1. Namespace

Ways to create resources

  • Command line
  • YAML

Namespaces are used to isolate resources.

Creating and deleting with commands

kubectl create ns hello
kubectl delete ns hello

Creating and deleting with YAML

apiVersion: v1
kind: Namespace
metadata:
  name: hello
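
Assuming the manifest above is saved as hello-ns.yaml (a filename chosen here just for illustration), it can be applied and removed like this:

kubectl apply -f hello-ns.yaml
kubectl delete -f hello-ns.yaml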

2. Pod

2.1 Single-container Pod

A Pod is a group of running containers and the smallest deployable unit of an application in Kubernetes. A Pod can contain multiple containers.

Creation

  • Command
kubectl run mynginx --image=nginx

Any machine in the cluster, and any application running in it, can reach this Pod through the IP assigned to the Pod.

  • YAML
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mynginx
  name: mynginx
  # namespace: default
spec:
  containers:
  - image: nginx
    name: mynginx

Some Pod commands

Very important troubleshooting commands

# View a Pod's logs
kubectl logs <pod-name>
# View a Pod's events
kubectl describe pod <pod-name>
# List Pods in the default namespace
kubectl get pod

# Delete a Pod
kubectl delete pod <pod-name>
# Deleting via the YAML file also deletes the Pod
kubectl delete -f pod.yaml

# Print more details: which node the Pod runs on, its IP address, etc.
# Kubernetes assigns every Pod its own IP.
kubectl get pod -owide

# Access the Pod's IP plus the port of the container running inside it
curl 192.168.169.136


Any machine inside the cluster (but not machines outside it) can access this Pod via the IP assigned to the Pod.

For example, we access node1's nginx from node2.


Why is the Pod's IP address 192.168.36.67?

Because kubeadm init was run with the following options:

kubeadm init \
--apiserver-advertise-address=172.31.0.4 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
  • pod-network-cidr specifies the Pod network range.
  • service-cidr specifies the Service network range.

2.2 Multi-container Pod

Create the manifest: vi multicontainer.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: tomcat:8.5.68
    name: tomcat


Within the same Pod, containers can reach each other simply via 127.0.0.1:<port>.

From other nodes, use the Pod IP as shown below.

# Access tomcat
curl 192.168.169.136:8080
# Access nginx
curl 192.168.169.136:80
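
To see the in-Pod localhost access directly, you can exec into one of the containers. A small sketch (the Pod and container names match the manifest above):

# Open a shell in the nginx container of the myapp Pod
kubectl exec -it myapp -c nginx -- /bin/bash
# From inside, the tomcat container is reachable on localhost
curl 127.0.0.1:8080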

Containers in the same Pod cannot use the same port.

For example, the following fails because both containers listen on port 80:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: nginx
    name: nginx2   # second nginx, also listening on port 80

3. Deployment

A Deployment controls Pods, giving them capabilities such as multiple replicas, self-healing, and scaling.

Clean up all existing Pods: list them first, then delete:

kubectl delete pod myapp mynginx -n default

Compare the effects of the following two commands:

kubectl run mynginx --image=nginx
kubectl create deployment mytomcat --image=tomcat:8.5.68

Delete the Pods they started: you will find that after deletion, the Deployment immediately starts a replacement Pod, while the bare Pod stays gone. A quick way to see this is sketched below.
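
A minimal sketch (the Pod created by the mytomcat Deployment has a generated suffix; substitute whatever name kubectl get pod shows for <hash>):

kubectl get pod
kubectl delete pod mynginx mytomcat-<hash>
# Watch: mynginx stays gone, while the Deployment recreates its Pod
kubectl get pod -w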

Delete the Deployment:

kubectl get deploy
kubectl delete deploy <name>

3.1 Multiple replicas

kubectl create deployment my-dep --image=nginx --replicas=3
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx


3.2 Scaling up and down

To scale up or down, just change the replicas value.

kubectl scale --replicas=5 deployment/my-dep


kubectl scale --replicas=3 deployment/my-dep


You can also change it by editing the YAML:

# Open the live YAML with this command
kubectl edit deployment/my-dep
# Then change the value of replicas

Self-healing and failover

Self-healing: the Pod is restarted.

Failover: if a node goes offline, an identical Pod is started on another node.
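
To watch self-healing in action, delete one replica and observe the Deployment bring up a new one. A sketch (the Pod name suffix is whatever kubectl get pod prints):

# Delete one replica of my-dep
kubectl delete pod my-dep-<hash>
# In another terminal, watch a replacement Pod being created automatically
kubectl get pod -w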

3.3 Rolling update

A new Pod is started and then one old Pod is taken offline, while the remaining old Pods keep serving; this repeats until the update is complete.

# Change the image version to trigger an update
kubectl set image deployment/my-dep nginx=nginx:1.16.1 --record

# In another window, run the command below to watch the changes
kubectl get pod -w
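
You can also check whether the rollout has finished (a small addition, not part of the original steps):

# Blocks until the rollout completes or fails
kubectl rollout status deployment/my-dep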

3.5 Version rollback

# View the update history
kubectl rollout history deployment/my-dep

# View the details of a specific revision
kubectl rollout history deployment/my-dep --revision=2

# Roll back (to the previous revision)
kubectl rollout undo deployment/my-dep

# Roll back (to a specific revision)
kubectl rollout undo deployment/my-dep --to-revision=2

More:

Besides Deployment, Kubernetes also has resource types such as StatefulSet, DaemonSet, and Job. Collectively these are called workloads.

Stateful applications are deployed with StatefulSet; stateless applications are deployed with Deployment.

https://kubernetes.io/zh/docs/concepts/workloads/controllers/


4. Service: service discovery and load balancing for Pods

A Service exposes a group of Pods behind a single IP for access.

Experiment:

Enter each nginx Pod and change its index page to 111, 222, and 333 respectively, for example as sketched below.
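
A minimal sketch (the Pod name suffixes are placeholders for the actual names from kubectl get pod; repeat for each replica with 111/222/333):

kubectl exec -it my-dep-<hash> -- /bin/bash
echo 111 > /usr/share/nginx/html/index.html
exit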


Accessing the three Pods:


4.1 ClusterIP

A ClusterIP Service can only be accessed from inside the cluster.

# Expose the Deployment: the command below exposes port 80 of my-dep's Pods as port 8000 of the Service
kubectl expose deployment my-dep --port=8000 --target-port=80

# View Services
kubectl get service
# Delete the Service
kubectl delete svc my-dep
# Select Pods by label
kubectl get pod -l app=my-dep


The Service address here falls within the --service-cidr range specified during kubeadm init.

Now access port 8000 of this Service:


There is another way to access it: the DNS name <service-name>.<namespace>.svc:8000, i.e. my-dep.default.svc:8000. This only works inside the cluster (from within Pods).
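
For example, from a throwaway Pod (a sketch; the busybox image and the Pod name are my own choices here):

# Start a temporary Pod and resolve the Service by its DNS name
kubectl run dns-test --image=busybox -it --rm -- sh
wget -qO- http://my-dep.default.svc:8000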

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  selector:
    app: my-dep
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80

4.2 NodePort

A NodePort Service can also be accessed from outside the cluster.

kubectl expose deployment my-dep --port=8000 --target-port=80 --type=NodePort
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-dep
  type: NodePort

NodePorts are allocated from the 30000-32767 range, so the security group must allow these ports.

The Service can now be reached at http://139.198.163.80:32080/.
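
The assigned node port can be read from the Service listing (a sketch; 32080 is simply the port this cluster happened to get):

# The PORT(S) column shows <port>:<nodePort>, e.g. 8000:32080/TCP
kubectl get svc my-dep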

5. Ingress

Official docs: https://kubernetes.github.io/ingress-nginx/

Ingress is the unified gateway entry point in front of Services.

5.1 Installation

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml

# Modify the image
vi deploy.yaml
# Change the value of image to:
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0

# Install
kubectl apply -f deploy.yaml

# Check the installation result
kubectl get pod,svc -n ingress-nginx


The ports exposed by the ingress Service must be opened in the security group.


5.2 Usage

The ingress controller itself is exposed through two NodePorts (shown by kubectl get svc -n ingress-nginx), one for HTTPS and one for HTTP:

https://139.198.163.80:32616/

http://139.198.163.80:30674/

# View Ingresses
kubectl get ing

Test

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000


1. Domain-based access

The design:

Requests to hello.tt.com:31405 are forwarded to hello-server.

Requests to demo.tt.com:31405 are forwarded to nginx-demo.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.tt.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.tt.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"  # The request is forwarded to the service below; that service must be able to handle this path, otherwise the result is a 404
        backend:
          service:
            name: nginx-demo  # e.g. for a Java backend, use path rewriting to strip the nginx prefix
            port:
              number: 8000


Add entries to your computer's hosts file mapping hello.tt.com and demo.tt.com to a node IP.


http://hello.tt.com:30674/

http://demo.tt.com:30674/

Question: why do path: "/nginx" and path: "/" behave differently?

With path: "/nginx", a request to http://demo.tt.com:30674/nginx is forwarded as-is to the backing service, so that service must be able to **handle this path** (i.e. the file /usr/share/nginx/html/nginx.html must exist in the nginx Pods); if it cannot, the result is a 404.
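
To test this from the command line without editing the hosts file, you can pass the Host header explicitly (a sketch using the node IP and HTTP node port from above):

# Routed by the Ingress to nginx-demo; returns 404 unless the backend can serve /nginx
curl -H "Host: demo.tt.com" http://139.198.163.80:30674/nginx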

Path rewriting

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.tt.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.tt.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx(/|$)(.*)"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
  • rewrite.bar.com/something rewrites to rewrite.bar.com/
  • rewrite.bar.com/something/ rewrites to rewrite.bar.com/
  • rewrite.bar.com/something/new rewrites to rewrite.bar.com/new

Traffic control (rate limiting)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "haha.tt.com"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

Exact means that only requests to exactly haha.tt.com/ are routed to nginx-demo; haha.tt.com/<anything-else> will not be.
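
With limit-rps: "1", requests arriving faster than roughly one per second are rejected (by default ingress-nginx answers them with HTTP 503). A quick sketch of how to observe this:

# Fire several requests in quick succession and print only the status codes
for i in $(seq 1 5); do
  curl -s -o /dev/null -w "%{http_code}\n" -H "Host: haha.tt.com" http://139.198.163.80:30674/
done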

6. Storage abstraction

6.1 Environment preparation

1. Install NFS on all nodes

# Install on all machines
yum install -y nfs-utils

2. Master node (NFS server)

# On the NFS master (server) node
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

mkdir -p /nfs/data
systemctl enable rpcbind --now
systemctl enable nfs-server --now
# Apply the export configuration
exportfs -r


3. Worker nodes (NFS clients)

# Check which directories the server at 172.31.0.2 exports
showmount -e 172.31.0.2

# Create the local mount point
mkdir -p /nfs/data

# Mount 172.31.0.2:/nfs/data onto the local /nfs/data so the two stay in sync
mount -t nfs 172.31.0.2:/nfs/data /nfs/data

# On the master node, write a test file
echo "hello nfs server" > /nfs/data/test.txt

The test file is now visible on node1, node2, and the master node.

4. Native data mounting (NFS volume declared directly in the workload)

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-pv-demo
  name: nginx-pv-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv-demo
  template:
    metadata:
      labels:
        app: nginx-pv-demo
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:  # mount the volume named html at /usr/share/nginx/html inside the container
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:  # define the volumes referenced above: the name must match (html); here the type is nfs, with the server and path to use
      - name: html
        nfs:
          server: 172.31.0.2
          path: /nfs/data/nginx-pv

Note: the nginx-pv directory must be created first, otherwise the Pods will fail to start. See the sketch below.
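
A minimal sketch of the preparation and apply steps (nginx-pv-demo.yaml is a filename assumed here for the manifest above):

# On the NFS server, create the exported subdirectory first
mkdir -p /nfs/data/nginx-pv
# Then create the Deployment
kubectl apply -f nginx-pv-demo.yaml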

6.2 PV & PVC

PV: Persistent Volume — saves the data an application needs to persist in a specified location.

PVC: Persistent Volume Claim — declares the specification of the persistent volume the application needs to use.


1. Create a PV pool

Static provisioning

# On the NFS master (server) node
mkdir -p /nfs/data/01
mkdir -p /nfs/data/02
mkdir -p /nfs/data/03

Create the PVs

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M  # size of this volume
  accessModes:  # readable and writable by many
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01  # directory location
    server: 172.31.0.2  # NFS master (server) node
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 172.31.0.2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 172.31.0.2
kubectl get pv


2. PVC creation and binding

Create the PVC

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs  # must match the storageClassName of the PVs above
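
After applying the claim, check which PV it was bound to (a sketch; with the pool above, the 200Mi request is expected to bind to the smallest PV large enough to satisfy it, i.e. the 1Gi one):

# STATUS should show Bound, and the VOLUME column names the chosen PV
kubectl get pvc
kubectl get pv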


Create a Pod (via a Deployment) that binds the PVC

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: nginx-pvc
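
To verify the mount, write a file into the directory backing the bound PV and read it from inside a Pod (a sketch; /nfs/data/02 assumes the claim was bound to pv02-1gi, and the Pod name suffix is a placeholder):

# On the NFS server
echo "hello pvc" > /nfs/data/02/index.html
# From inside one of the nginx-deploy-pvc Pods the same file should be visible
kubectl exec -it nginx-deploy-pvc-<hash> -- cat /usr/share/nginx/html/index.html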

6.3 ConfigMap

Extracts application configuration out of the image; the stored configuration can be updated automatically.

1. Redis example

vi redis.conf

Write the following into it:

appendonly yes

Create a ConfigMap from the configuration file:

# Create the ConfigMap; redis.conf is stored in Kubernetes' etcd
kubectl create cm redis-conf --from-file=redis.conf


The local redis.conf file can now be deleted:

rm -rf redis.conf

View the stored redis.conf content:

kubectl get cm redis-conf -oyaml
apiVersion: v1
data:            # data holds the actual configuration
  redis.conf: |  # file name: configuration content
    appendonly yes
kind: ConfigMap
metadata:
  creationTimestamp: "2022-05-13T07:50:13Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:redis.conf: {}
    manager: kubectl-create
    operation: Update
    time: "2022-05-13T07:50:13Z"
  name: redis-conf
  namespace: default
  resourceVersion: "33290"
  uid: 5eabe44f-c109-4d24-9fcb-c7482d391197

2. Use the ConfigMap for configuration

1. Create the Pod

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    command:  # command run when the container starts
    - redis-server
    - "/redis-master/redis.conf"  # path inside the redis container
    ports:
    - containerPort: 6379
    volumeMounts:  # both /redis-master and /data are mounted; the names refer to the volumes below
    - mountPath: /data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:  # define how each volume is provided
  - name: data
    emptyDir: {}
  - name: config
    configMap:  # this volume is backed by a ConfigMap
      name: redis-conf  # the API server looks up a ConfigMap named redis-conf in etcd
      items:  # which keys from the ConfigMap's data to project, and to what path
      - key: redis.conf
        path: redis.conf

2. Check the default configuration

Enter the redis container (here via docker on the node where it runs):

docker exec -it redis /bin/bash


kubectl exec -it redis -- redis-cli

127.0.0.1:6379> CONFIG GET appendonly

3. Modify the ConfigMap

Edit the ConfigMap (kubectl edit cm redis-conf) and add maxmemory and maxmemory-policy entries to the redis.conf data.

4. Check whether the configuration was updated

kubectl exec -it redis -- redis-cli

127.0.0.1:6379> CONFIG GET maxmemory
127.0.0.1:6379> CONFIG GET maxmemory-policy


The configured values have not changed, because the Pod needs to be restarted to pick up the updated values from the associated ConfigMap.

Reason: the middleware running in our Pod has no hot-reload capability of its own.
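
To pick up the new values, recreate the Pod (a sketch; redis-pod.yaml is a filename assumed here for the Pod manifest above):

kubectl delete pod redis
kubectl apply -f redis-pod.yaml
# Check again; the new values should now be visible
kubectl exec -it redis -- redis-cli CONFIG GET maxmemory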

6.4 Secret

The Secret object type is used to hold sensitive information such as passwords, OAuth tokens, and SSH keys. Putting this information in a Secret is safer and more flexible than putting it in a Pod definition or a container image.

kubectl create secret docker-registry leifengyang-docker \
  --docker-username=leifengyang \
  --docker-password=Lfy123456 \
  --docker-email=534096094@qq.com

## Command format
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
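
To confirm the Secret was stored (a small sketch; for this Secret type the credentials live under a .dockerconfigjson data key, base64-encoded):

kubectl get secret leifengyang-docker
kubectl get secret leifengyang-docker -o yaml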
apiVersion: v1
kind: Pod
metadata:
  name: private-nginx
spec:
  containers:
  - name: private-nginx
    image: leifengyang/guignginx:v1.0
  imagePullSecrets:
  - name: leifengyang-docker