1、Namespace
Ways to create resources

Namespaces are used to isolate resources.

Create and delete with commands:

```shell
kubectl create ns hello
kubectl delete ns hello
```

Create and delete with YAML:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: hello
```
2、Pod
2.1 Single-container Pod
A Pod is a group of running containers and the smallest deployable unit of an application in Kubernetes. A Pod can contain multiple containers.
Create:

```shell
kubectl run mynginx --image=nginx
```

Any machine and any application in the cluster can reach this Pod through the IP assigned to it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mynginx
  name: mynginx
spec:
  containers:
  - image: nginx
    name: mynginx
```
Some useful Pod commands
Very important troubleshooting commands:

```shell
# view the container logs
kubectl logs <pod-name>
# show detailed status and events
kubectl describe pod <pod-name>
```

```shell
kubectl get pod
kubectl delete pod <pod-name>
kubectl delete -f pod.yaml
# show extra columns, including the Pod IP
kubectl get pod -owide
# access the Pod by its IP
curl 192.168.169.136
```
Any machine inside the cluster (but not machines outside it) can reach the Pod through its assigned IP.
Here we access node1's nginx from node2.
Why is the Pod's IP 192.168.36.67?
Because kubeadm init was run with the following options:

```shell
kubeadm init \
  --apiserver-advertise-address=172.31.0.4 \
  --control-plane-endpoint=cluster-endpoint \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
```

- `--pod-network-cidr` defines the Pod network range.
- `--service-cidr` defines the Service network range.
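On an existing kubeadm cluster, both ranges can be read back from the `kubeadm-config` ConfigMap (a quick check, assuming the default kubeadm setup; not part of the original notes):

```shell
# podSubnet corresponds to --pod-network-cidr, serviceSubnet to --service-cidr
kubectl -n kube-system get cm kubeadm-config -oyaml | grep -E 'podSubnet|serviceSubnet'
```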
2.2 Multi-container Pod
Create the manifest: vi multicontainer.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: tomcat:8.5.68
    name: tomcat
```

Inside the same Pod, containers reach each other at 127.0.0.1:<port>.
From other nodes, use the Pod IP instead:

```shell
# tomcat
curl 192.168.169.136:8080
# nginx
curl 192.168.169.136:80
```

Containers in one Pod cannot use the same port.
For example, the following fails because both containers listen on port 80:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: nginx
    name: nginx
```
3、Deployment
Controls Pods, giving them capabilities such as multiple replicas, self-healing, and scaling.
Clean up all existing Pods: list them first, then delete:

```shell
kubectl delete pod myapp mynginx -n default
```

Compare the effects of the following two commands:

```shell
kubectl run mynginx --image=nginx
kubectl create deployment mytomcat --image=tomcat:8.5.68
```

Delete the Pods each one started: you will find that after deletion, the Deployment immediately starts a replacement Pod.
To delete the Deployment itself:

```shell
kubectl get deploy
kubectl delete deploy <deployment-name>
```
3.1、Multiple replicas

```shell
kubectl create deployment my-dep --image=nginx --replicas=3
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx
```
3.2 Scaling up and down
To scale out or in, just set the replicas value:

```shell
# scale out to 5 replicas
kubectl scale --replicas=5 deployment/my-dep
# scale back in to 3 replicas
kubectl scale --replicas=3 deployment/my-dep
```

You can also edit the YAML directly:

```shell
kubectl edit deployment/my-dep
# change spec.replicas, then save and exit
```
Self-healing and failover
Self-healing: a crashed Pod is restarted.
Failover: if a node goes offline, an identical Pod is started on another node.
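Self-healing is easy to observe: delete one Pod managed by the Deployment and watch a replacement appear (the Pod name below is a placeholder; use one from your own cluster):

```shell
# list the Pods managed by the Deployment
kubectl get pod -l app=my-dep
# delete one of them (placeholder name) ...
kubectl delete pod my-dep-5b7868d854-xxxxx
# ... and watch the Deployment recreate it
kubectl get pod -l app=my-dep -w
```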
3.3 Rolling update
One new Pod comes up and one old Pod is taken down at a time; meanwhile, the other old Pods keep serving, until the update is complete.

```shell
# update the image and record the change in the rollout history
kubectl set image deployment/my-dep nginx=nginx:1.16.1 --record
# watch the Pods being replaced one by one
kubectl get pod -w
```
3.5 Rollback

```shell
# view the rollout history
kubectl rollout history deployment/my-dep
# view the details of a specific revision
kubectl rollout history deployment/my-dep --revision=2
# roll back to the previous version
kubectl rollout undo deployment/my-dep
# roll back to a specific revision
kubectl rollout undo deployment/my-dep --to-revision=2
```
More:
Besides Deployment, Kubernetes also offers StatefulSet, DaemonSet, Job, and other resource types, collectively called workloads.
Deploy stateful applications with StatefulSet and stateless applications with Deployment.
https://kubernetes.io/zh/docs/concepts/workloads/controllers/
4、Service: service discovery and load balancing for Pods
Exposes a group of Pods behind a single IP for external access.
Experiment:
Exec into each of the three nginx Pods and change their index pages to 111, 222, and 333 respectively.
Access the three Pods:
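The steps above can be sketched as follows (Pod names and IPs are placeholders; read the real ones from the -owide output first):

```shell
# list the Pods and their IPs
kubectl get pod -l app=my-dep -owide
# write a distinct index page into each Pod (names are placeholders)
kubectl exec my-dep-xxxx-1 -- sh -c 'echo 111 > /usr/share/nginx/html/index.html'
kubectl exec my-dep-xxxx-2 -- sh -c 'echo 222 > /usr/share/nginx/html/index.html'
kubectl exec my-dep-xxxx-3 -- sh -c 'echo 333 > /usr/share/nginx/html/index.html'
# curl each Pod IP; every Pod should answer with its own number
curl 192.168.x.x
```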
4.1 ClusterIP
A ClusterIP Service is only reachable from inside the cluster.

```shell
# expose the Deployment (ClusterIP is the default type)
kubectl expose deployment my-dep --port=8000 --target-port=80
kubectl get service
# delete the Service
kubectl delete svc my-dep
# list the Pods the Service selects by label
kubectl get pod -l app=my-dep
```

The Service IP is allocated from the range given by --service-cidr at kubeadm init time.
Now access port 8000 of this Service.
There is another way to access it: <service-name>.<namespace>.svc:8000, i.e. my-dep.default.svc:8000 — again, only from inside the cluster.
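The DNS name resolves inside Pods; to try it out, a throwaway busybox Pod can be used (a sketch, not from the original notes):

```shell
# start a temporary Pod with an interactive shell; it is removed on exit
kubectl run tmp-shell --rm -it --image=busybox -- sh
# inside the Pod, fetch the page through the Service's DNS name
wget -qO- my-dep.default.svc:8000
```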
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  selector:
    app: my-dep
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
```
4.2、NodePort
A NodePort Service is also reachable from outside the cluster.

```shell
kubectl expose deployment my-dep --port=8000 --target-port=80 --type=NodePort
```

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: my-dep
  type: NodePort
```

Node ports are allocated in the 30000-32767 range, so that range must be opened in the security group.
The Service can then be reached at http://139.198.163.80:32080/.
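The randomly assigned node port shows up in the PORT(S) column (output sketched; the 32080 value is only an example):

```shell
kubectl get svc my-dep
# NAME     TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
# my-dep   NodePort   10.96.x.x     <none>        8000:32080/TCP   1m
```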
5、Ingress
Official docs: https://kubernetes.github.io/ingress-nginx/
Ingress is the unified gateway entry point in front of Services.
5.1 Installation

```shell
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml

# edit the manifest and swap the controller image for a reachable mirror:
vi deploy.yaml
# registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0

# apply it, then check the controller Pod and Services
kubectl apply -f deploy.yaml
kubectl get pod,svc -n ingress-nginx
```

Remember to open the ports exposed by the ingress Service in the security group.
5.2 Usage
https://139.198.163.80:32616/
http://139.198.163.80:30674/
Test by deploying two applications and their Services:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000
```
1. Domain-based routing
We now want:
- requests to hello.tt.com:31405 handled by hello-server
- requests to demo.tt.com:31405 handled by nginx-demo
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.tt.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.tt.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
```
Add the two domains to your machine's hosts file, then visit:
http://hello.tt.com:30674/
http://demo.tt.com:30674/
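The hosts entries map the test domains to a node's public IP (the example address from earlier). On Linux/macOS this can be done with (requires root):

```shell
# map both test domains to the node's public IP
echo "139.198.163.80 hello.tt.com demo.tt.com" >> /etc/hosts
```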
Question: why do path: "/nginx" and path: "/" behave differently?
With path: "/nginx", a request to http://demo.tt.com:30674/nginx is forwarded to the backend service with the path unchanged, so the backend must be able to **handle that path** (e.g. the file /usr/share/nginx/html/nginx.html must exist); if it cannot, it returns 404.
2. Path rewriting
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.tt.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.tt.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx(/|$)(.*)"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
```
- rewrite.bar.com/something rewrites to rewrite.bar.com/
- rewrite.bar.com/something/ rewrites to rewrite.bar.com/
- rewrite.bar.com/something/new rewrites to rewrite.bar.com/new
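The effect of the capture groups can be simulated locally: `(/|$)(.*)` captures everything after the `/nginx` prefix into `$2`, which `rewrite-target: /$2` then forwards. A rough imitation with sed (illustrative only; not how the controller works internally):

```shell
# mimic path "/nginx(/|$)(.*)" combined with rewrite-target "/$2"
rewrite() { printf '%s\n' "$1" | sed -E 's#^/nginx(/|$)(.*)#/\2#'; }
rewrite /nginx/new   # -> /new
rewrite /nginx/      # -> /
rewrite /nginx       # -> /
```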
3. Rate limiting
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "haha.tt.com"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
```
Exact means only a request to exactly haha.tt.com/ is routed to nginx-demo; haha.tt.com/<anything-else> is not.
6、Storage abstraction
6.1 Environment setup
1. Install NFS on all nodes

```shell
yum install -y nfs-utils
```
2. On the master node

```shell
# export /nfs/data to all clients
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
mkdir -p /nfs/data
systemctl enable rpcbind --now
systemctl enable nfs-server --now
# reload the export table so the configuration takes effect
exportfs -r
```
3. On the worker nodes

```shell
# check which directories the NFS server (172.31.0.2) exports
showmount -e 172.31.0.2
mkdir -p /nfs/data
# mount the remote directory
mount -t nfs 172.31.0.2:/nfs/data /nfs/data
# write a test file; it should be visible from every node
echo "hello nfs server" > /nfs/data/test.txt
```
The test file is now visible on node1, node2, and the master node.
4. Mounting data the native way

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-pv-demo
  name: nginx-pv-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv-demo
  template:
    metadata:
      labels:
        app: nginx-pv-demo
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        nfs:
          server: 172.31.0.2
          path: /nfs/data/nginx-pv
```

Note: the nginx-pv directory must be created beforehand, otherwise the Pods fail with an error.
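Creating the directory on the NFS server beforehand avoids that mount error:

```shell
# run on the NFS server (172.31.0.2) before applying the Deployment
mkdir -p /nfs/data/nginx-pv
```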
6.2 PV & PVC
PV: Persistent Volume — a piece of storage at a specified location for the data an application needs to persist.
PVC: Persistent Volume Claim — a declaration of the persistent-volume specification the application needs.
1. Create a PV pool
Static provisioning:

```shell
mkdir -p /nfs/data/01
mkdir -p /nfs/data/02
mkdir -p /nfs/data/03
```
Create the PVs:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01
    server: 172.31.0.2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 172.31.0.2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 172.31.0.2
```
2. Create and bind a PVC
Create the PVC:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs
```
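After applying the claim, Kubernetes binds it to the smallest PV that satisfies the 200Mi request — here pv02-1gi — which can be checked with (output sketched):

```shell
kubectl get pv,pvc
# pv02-1gi should show STATUS Bound with CLAIM default/nginx-pvc;
# pv01-10m stays Available because 10M is too small for the request
```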
Create a Pod that binds the PVC:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: nginx-pvc
```
6.3 ConfigMap
Extracts application configuration, which can then be updated automatically.
1. Redis example
Write a redis.conf file, then create it as a ConfigMap:

```shell
kubectl create cm redis-conf --from-file=redis.conf
```

The redis.conf file can be deleted afterwards; its content now lives in the ConfigMap.
View the stored configuration:

```shell
kubectl get cm redis-conf -oyaml
```
```yaml
apiVersion: v1
data:
  redis.conf: |
    appendonly yes
kind: ConfigMap
metadata:
  creationTimestamp: "2022-05-13T07:50:13Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:redis.conf: {}
    manager: kubectl-create
    operation: Update
    time: "2022-05-13T07:50:13Z"
  name: redis-conf
  namespace: default
  resourceVersion: "33290"
  uid: 5eabe44f-c109-4d24-9fcb-c7482d391197
```
2. Use the ConfigMap for the application's configuration
1. Create the Pod

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    command:
    - redis-server
    - "/redis-master/redis.conf"
    ports:
    - containerPort: 6379
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
  - name: data
    emptyDir: {}
  - name: config
    configMap:
      name: redis-conf
      items:
      - key: redis.conf
        path: redis.conf
```

2. Check the default configuration
Enter the redis Pod:

```shell
docker exec -it redis /bin/bash
```

```shell
kubectl exec -it redis -- redis-cli
127.0.0.1:6379> CONFIG GET appendonly
```
3. Modify the ConfigMap
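The notes do not show the edit itself; one way (following the official Kubernetes ConfigMap/Redis tutorial this example mirrors) is to edit the ConfigMap in place and add memory settings:

```shell
kubectl edit cm redis-conf
# in the editor, extend the data section, for example:
#   redis.conf: |
#     appendonly yes
#     maxmemory 2mb
#     maxmemory-policy allkeys-lru
```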
4. Check whether the configuration was updated

```shell
kubectl exec -it redis -- redis-cli
127.0.0.1:6379> CONFIG GET maxmemory
127.0.0.1:6379> CONFIG GET maxmemory-policy
```

The values are unchanged, because the Pod must be restarted before it picks up the updated values from the associated ConfigMap.
Reason: the middleware running in our Pod has no hot-reload capability of its own.
6.4 Secret
A Secret object stores sensitive information such as passwords, OAuth tokens, and SSH keys. Putting this information in a Secret is safer and more flexible than putting it in a Pod definition or a container image.

```shell
kubectl create secret docker-registry leifengyang-docker \
  --docker-username=leifengyang \
  --docker-password=Lfy123456 \
  --docker-email=534096094@qq.com

# general form:
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-nginx
spec:
  containers:
  - name: private-nginx
    image: leifengyang/guignginx:v1.0
  imagePullSecrets:
  - name: leifengyang-docker
```