1 Pod Template

# API version
apiVersion: v1
# Object kind: Pod, ConfigMap, StatefulSet, Service, etc.
kind: Pod
# Metadata
metadata:
  # Object name
  name: string
  # Object namespace, defaults to default
  namespace: string
  # Object labels
  labels:
    name: string
  # Object annotations
  annotations:
    name: string

# Detailed specification
spec:
  # List of containers
  containers:
    # Container name
    - name: string
      # Image address
      image: string
      # Image pull policy: Always pulls on every start, Never only uses the local image, IfNotPresent uses the local image and pulls from the registry only if it is missing
      imagePullPolicy: [Always | Never | IfNotPresent]
      # Container startup command
      command: [array]
      # Arguments to the startup command
      args: [array]
      # Container working directory
      workingDir: string
      # Volumes mounted inside the container
      volumeMounts:
        # Name of a Volume defined in the Pod
        - name: string
          # Mount path of the volume inside the container
          mountPath: string
          # Whether the mount is read-only; defaults to read-write
          readOnly: boolean
      # List of exposed ports
      ports:
        # Port name
        - name: string
          # Port the container listens on
          containerPort: int
          # Port the host listens on
          hostPort: int
          # Port protocol: TCP or UDP, defaults to TCP
          protocol: string
      # Container environment variables
      env:
        # Environment variable name
        - name: string
          # Environment variable value
          value: string
      # Resource limits
      resources:
        limits:
          # CPU limit
          cpu: string
          # Memory limit
          memory: string

2 Pod Creation

2.1 Single Container

1) Write the pod-redis.yaml file

apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    name: redis
spec:
  containers:
    - name: redis
      image: kubeguide/redis-master
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 6379
          hostPort: 6379

2) Create the Pod from the YAML file

$ kubectl create -f pod-redis.yaml

3) Check the result

$ kubectl get pods -o wide

NAME    READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
redis   1/1     Running   0          2m54s   10.244.1.7   node-1   <none>           <none>

2.2 Multiple Containers

apiVersion: v1
kind: Pod
metadata:
  name: redis-php
  labels:
    name: redis-php
spec:
  containers:
    - name: frontend
      image: kubeguide/guestbook-php-frontend:localredis
      ports:
        - containerPort: 80
          hostPort: 8000
    - name: redis
      image: kubeguide/redis-master
      ports:
        - containerPort: 6379
          hostPort: 6379
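
The multi-container Pod is created the same way; a usage sketch, assuming the manifest above is saved as pod-redis-php.yaml (the file name is illustrative). Once both containers are up, READY should report 2/2.

$ kubectl create -f pod-redis-php.yaml
$ kubectl get pod redis-php -o wide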

4 Pod Storage

Commonly used Volume types include emptyDir, hostPath, and nfs.

4.1 emptyDir

When the Pod is destroyed, the volume is deleted along with it; restarting containers inside the Pod does not affect the data.

apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    name: redis
spec:
  containers:
    - name: redis
      image: kubeguide/redis-master
      imagePullPolicy: IfNotPresent
      ports:
        - name: port
          containerPort: 6379
          hostPort: 6379
      volumeMounts:
        - mountPath: /data/
          name: redis-data
  volumes:
    - name: redis-data
      emptyDir: {}
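
As a quick sanity check (a sketch, assuming the Pod above is running), the emptyDir mount can be inspected from inside the container:

$ kubectl exec redis -- ls /data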

4.2 hostPath

Maps a directory on the host machine into the container.

apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    name: redis
spec:
  containers:
    - name: redis
      image: kubeguide/redis-master
      imagePullPolicy: IfNotPresent
      ports:
        - name: port
          containerPort: 6379
          hostPort: 6379
      volumeMounts:
        - mountPath: /data/
          name: redis-data
  volumes:
    - name: redis-data
      hostPath:
        path: /opt/data/redis-data
        type: DirectoryOrCreate

Allowed values for volumes.hostPath.type:
1) DirectoryOrCreate: a directory; created on the host if it does not exist
2) Directory: a directory; must already exist on the host
3) FileOrCreate: a file; created on the host if it does not exist
4) File: a file; must already exist on the host

# Check the Pod details; the Pod has been scheduled to node-1
$ kubectl get pod -o wide

NAME    READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
redis   1/1     Running   0          5m11s   10.244.1.9   node-1   <none>           <none>

# Log in to node-1 and check whether the mount directory has been created
$ ssh root@node-1
$ cd /opt/data/redis-data
$ ls -l
-rw-r--r-- 1 root root 57 Feb  4 14:10 appendonly.aof
-rw-r--r-- 1 root root 32 Feb  4 14:10 dump.rdb

4.3 nfs

NFS shares a directory exported from one node (here the master); with nfs-utils installed on every node, Pods on any node can mount the same directory and share data.

Install the NFS server on the master

$ yum install -y nfs-utils
$ mkdir /data/volumes -pv
$ echo "/data/volumes 192.168.0.1/24(rw,no_root_squash)" >> /etc/exports
$ systemctl start nfs
$ showmount -e
Export list for master:
/data/volumes 192.168.0.1/24

Install nfs-utils on node-1 and node-2

$ yum install -y nfs-utils
$ mount -t nfs master:/data/volumes /mnt
$ mount

Create a Pod that uses the NFS volume

apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    name: redis
spec:
  containers:
    - name: redis
      image: kubeguide/redis-master
      imagePullPolicy: IfNotPresent
      ports:
        - name: port
          containerPort: 6379
          hostPort: 6379
      volumeMounts:
        - mountPath: /data/
          name: redis-data
  volumes:
    - name: redis-data
      nfs:
        path: /data/volumes
        server: master

5 Pod Configuration

Kubernetes uses ConfigMaps for application configuration management, separating configuration from the application itself.

5.1 Creating a ConfigMap

5.1.1 From a YAML file

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
data:
  application.properties: "datasource=oracle"

5.1.2 From literal values

$ kubectl create configmap literal-configmap --from-literal=datasource=oracle

5.1.3 From a file

$ kubectl create configmap file-configmap --from-file=application.properties
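
Either ConfigMap can be inspected afterwards to confirm what was stored:

$ kubectl get configmap file-configmap -o yaml
$ kubectl describe configmap literal-configmap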

5.2 Using a ConfigMap

5.2.1 Via volumeMounts

Create the ConfigMap from a file

$ mkdir redis
$ cd redis
$ vim redis.conf

# Core configuration
port 6379
requirepass 123456
dbfilename dump.rdb
databases 16


$ kubectl create configmap redis-conf --from-file=redis

Mount the ConfigMap into the container via volumeMounts

apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    name: redis
spec:
  containers:
    - name: redis
      image: kubeguide/redis-master
      imagePullPolicy: IfNotPresent
      ports:
        - name: port
          containerPort: 6379
          hostPort: 6379
      volumeMounts:
        - mountPath: /etc/redis
          name: redis-conf
  volumes:
    - name: redis-conf
      configMap:
        name: redis-conf
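
Each key of the ConfigMap appears as a file under the mount path; a quick verification sketch:

$ kubectl exec redis -- ls /etc/redis
$ kubectl exec redis -- cat /etc/redis/redis.conf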

5.2.2 Via environment variables

Create a ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: env-configmap
data:
  log-level: info
  log-dir: /var/data

Create a Pod that references the ConfigMap

apiVersion: v1
kind: Pod
metadata:
  name: env-pod
spec:
  containers:
    - name: env-pod
      image: busybox
      command: ["/bin/sh", "-c", "env | grep log"]
      env:
        - name: log-level
          valueFrom:
            configMapKeyRef:
              name: env-configmap
              key: log-level
        - name: log-dir
          valueFrom:
            configMapKeyRef:
              name: env-configmap
              key: log-dir

Check the Pod's log output

$ kubectl logs env-pod

log-level=info
log-dir=/var/data

6 Pod Status

Status        Description
Pending       The API Server has created the Pod, but not all of its containers have been created yet
Running       All containers in the Pod have been created, and at least one container is running, starting, or restarting
Succeeded     All containers in the Pod have exited successfully and will not be restarted
Failed        All containers in the Pod have exited, and at least one container exited in a failed state
Unknown       The Pod's status cannot be obtained for some reason
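
The phase can also be read directly from the Pod's status, e.g. for the redis Pod created earlier (a sketch):

$ kubectl get pod redis -o jsonpath='{.status.phase}'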

7 Pod Restart Policy

Policy        Description
Always        When the container fails, the kubelet automatically restarts it
OnFailure     When the container exits with a non-zero exit code, the kubelet automatically restarts it
Never         The kubelet never restarts the container, regardless of how it exited
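
restartPolicy is set at the Pod level (spec.restartPolicy, default Always) and applies to all containers in the Pod. A minimal sketch with an illustrative name and command:

apiVersion: v1
kind: Pod
metadata:
  # illustrative name
  name: restart-policy-demo
spec:
  # restart the container only when it exits with a non-zero code
  restartPolicy: OnFailure
  containers:
    - name: task
      image: busybox
      command: ["/bin/sh", "-c", "exit 1"]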

8 Pod Health Checks

8.1 Probe Types

Kubernetes performs health checks on Pods with two kinds of probes.

8.1.1 LivenessProbe

Determines whether the container is alive. If the probe detects an unhealthy container, the kubelet kills it and handles it according to the Pod's restart policy.

8.1.2 ReadinessProbe

Determines whether the container is ready to serve requests. Only when the probe succeeds does the Pod receive traffic; otherwise requests are not forwarded to it.
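
The examples in 8.4 below all use livenessProbe; a readinessProbe is declared the same way. A minimal sketch for an nginx container (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-http-get-action
spec:
  containers:
    - name: readiness-http-get-action
      image: nginx
      ports:
        - containerPort: 80
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        timeoutSeconds: 1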

8.2 Probe Actions

8.2.1 ExecAction

Executes a command inside the container; a return code of 0 means the container is healthy.

8.2.2 TCPSocketAction

Attempts a TCP connection to the container's IP address and port; the container is healthy if the connection can be established.

8.2.3 HTTPGetAction

Performs an HTTP GET request against the container's IP address, port, and path; the container is healthy if the response code is >= 200 and < 400.

8.3 Common Settings

8.3.1 initialDelaySeconds

Time to wait, in seconds, after the container starts before the first health check.

8.3.2 timeoutSeconds

Timeout, in seconds, to wait for a response after a health check request is sent.

8.4 Probe Examples

8.4.1 LivenessProbe + ExecAction

Create a Pod with a livenessProbe that uses ExecAction, with initialDelaySeconds set to 15 and timeoutSeconds set to 1.
pod-liveness-exec-action.yaml

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-action
spec:
  containers:
    - name: liveness-exec-action
      image: busybox
      args:
        - /bin/sh
        - -c
        # Write ok to /tmp/health, sleep for 10 seconds, delete /tmp/health, then sleep for 600 seconds
        - echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/health
        # The probe runs 15 seconds after the container starts; by then the file has been deleted, so the check fails
        initialDelaySeconds: 15
        timeoutSeconds: 1

Create the Pod

# Create the Pod
$ kubectl create -f pod-liveness-exec-action.yaml

# Check the Pod; RESTARTS is 1, meaning the container has been restarted once
$ kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
liveness   1/1     Running   1          107s

# Describe the Pod
$ kubectl describe pod liveness-exec-action

# In the Events section, you can see that after the container started, the liveness probe failed and triggered a restart
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  42s               default-scheduler  Successfully assigned default/liveness-exec-action to node-1
  Normal   Pulling    41s               kubelet, node-1    Pulling image "busybox"
  Normal   Pulled     38s               kubelet, node-1    Successfully pulled image "busybox"
  Normal   Created    38s               kubelet, node-1    Created container liveness-exec-action
  Normal   Started    38s               kubelet, node-1    Started container liveness-exec-action
  Warning  Unhealthy  9s (x2 over 19s)  kubelet, node-1    Liveness probe failed: cat: can't open '/tmp/health': No such file or directory

8.4.2 LivenessProbe + TCPSocketAction

Create a Pod with a livenessProbe that uses TCPSocketAction, with initialDelaySeconds set to 30 and timeoutSeconds set to 1.
pod-liveness-tcp-action.yaml

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-action
spec:
  containers:
    - name: liveness-tcp-action
      image: nginx
      ports:
        - containerPort: 80
      livenessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 30
        timeoutSeconds: 1

8.4.3 LivenessProbe + HTTPGetAction

Create a Pod with a livenessProbe that uses HTTPGetAction, with initialDelaySeconds set to 30 and timeoutSeconds set to 1.
pod-liveness-http-get-action.yaml

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-get-action
spec:
  containers:
    - name: livenss-http-get-action
      image: nginx
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          # Point the health check at a path that does not exist, so the check is guaranteed to fail
          path: /_status/xxx
          port: 80
        initialDelaySeconds: 30
        timeoutSeconds: 1

Create the Pod

$ kubectl create -f pod-liveness-http-get-action.yaml
$ kubectl describe pod liveness-http-get-action

Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  43s   default-scheduler  Successfully assigned default/liveness-http-get-action to node-2
  Normal   Pulling    42s   kubelet, node-1    Pulling image "nginx"
  Normal   Pulled     39s   kubelet, node-1    Successfully pulled image "nginx"
  Normal   Created    39s   kubelet, node-1    Created container livenss-http-get-action
  Normal   Started    39s   kubelet, node-1    Started container livenss-http-get-action
  Warning  Unhealthy  2s    kubelet, node-1    Liveness probe failed: HTTP probe failed with statuscode: 404

8.4.4 LivenessProbe + TCPSocketAction (failing probe)

Create a Pod with a livenessProbe that uses TCPSocketAction but probes a port the container does not listen on, with initialDelaySeconds set to 30 and timeoutSeconds set to 1.
pod-liveness-tcp-action.yaml

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-action
spec:
  containers:
    - name: liveness-tcp-action
      image: nginx
      ports:
        - containerPort: 80
      livenessProbe:
        # The TCP probe targets port 81, which nginx does not listen on, so the health check fails
        tcpSocket:
          port: 81
        initialDelaySeconds: 30
        timeoutSeconds: 1

Create the Pod

$ kubectl create -f pod-liveness-tcp-action.yaml
$ kubectl describe pod liveness-tcp-action

Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  78s               default-scheduler  Successfully assigned default/liveness-tcp-action to node-1
  Normal   Pulled     62s               kubelet, node-1    Successfully pulled image "nginx"
  Normal   Created    61s               kubelet, node-1    Created container liveness-tcp-action
  Normal   Started    61s               kubelet, node-1    Started container liveness-tcp-action
  Normal   Pulling    2s (x2 over 78s)  kubelet, node-1    Pulling image "nginx"
  Warning  Unhealthy  2s (x3 over 22s)  kubelet, node-1    Liveness probe failed: dial tcp 10.244.1.15:81: connect: connection refused
  Normal   Killing    2s                kubelet, node-1    Container liveness-tcp-action failed liveness probe, will be restarted

9 Pod Scheduling

For scenarios that require a Pod to be scheduled onto specific Nodes, for example scheduling a MySQL database onto nodes with SSD disks, Kubernetes offers scheduling policies such as NodeSelector, NodeAffinity, and PodAffinity to place Pods precisely.

9.1 NodeSelector

In Kubernetes, you can label Nodes and set a matching nodeSelector on the Pod to achieve targeted scheduling.

Label the nodes

$ kubectl label nodes node-1 zone=north
$ kubectl label nodes node-2 zone=south

Create the Pod
pod-node-selector-south.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-node-selector
spec:
  containers:
    - name: pod-node-selector
      image: nginx
      ports:
        - containerPort: 80
  nodeSelector:
    zone: south
$ kubectl create -f pod-node-selector-south.yaml
$ kubectl get pod -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
pod-node-selector   1/1     Running   0          8s    10.244.2.6    node-2   <none>           <none>

The Pod has been scheduled to node-2, as expected.

9.2 NodeAffinity

Defines the Pod's affinity for Nodes, i.e. which nodes the Pod prefers to run on (or avoid).

  • requiredDuringSchedulingIgnoredDuringExecution: a hard requirement; the Pod is scheduled only onto Nodes that satisfy the specified rules
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity
spec:
  containers:
    - name: pod-node-affinity
      image: nginx
      ports:
        - containerPort: 80
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: zone
                operator: In
                values:
                  - south

The operator field supports In, NotIn, Exists, DoesNotExist, Gt, and Lt.

  • preferredDuringSchedulingIgnoredDuringExecution: a soft preference; the scheduler tries to satisfy the rules, but the Pod can still be scheduled onto a Node that does not
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity
spec:
  containers:
    - name: pod-node-affinity
      image: nginx
      ports:
        - containerPort: 80
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - preference:
            matchExpressions:
              - key: disk
                operator: In
                values:
                  - ssd
          weight: 1

9.3 PodAffinity

Defines affinity between Pods, i.e. whether a Pod prefers (or refuses) to run on the same node as certain other Pods.

First define a reference nginx Pod with the label security=S1, pinned via nodeSelector to nodes labeled zone=south (node-2):

apiVersion: v1
kind: Pod
metadata:
  name: pod-flag
  labels:
    security: S1
spec:
  containers:
    - name: nginx
      image: nginx
  nodeSelector:
    zone: south

9.3.1 Affinity

pod-pod-affinity.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity
spec:
  containers:
    - name: pod-affinity
      image: nginx
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: zone
          labelSelector:
            matchExpressions:
              - key: security
                operator: In
                values:
                  - S1
$ kubectl create -f pod-pod-affinity.yaml
$ kubectl get pod -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
pod-affinity   1/1     Running   0          9s    10.244.2.9    node-2   <none>           <none>
pod-flag       1/1     Running   0          16s   10.244.2.8    node-2   <none>           <none>

Both Pods are running on node-2.

9.3.2 Anti-affinity

pod-pod-anti-affinity.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-anti-affinity
spec:
  containers:
    - name: pod-anti-affinity
      image: nginx
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: zone
          labelSelector:
            matchExpressions:
              - key: security
                operator: In
                values:
                  - S1
$ kubectl create -f pod-pod-anti-affinity.yaml
$ kubectl get pod -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
pod-anti-affinity   1/1     Running   0          12s     10.244.1.18   node-1   <none>           <none>
pod-flag            1/1     Running   0          3m47s   10.244.2.8    node-2   <none>           <none>

pod-anti-affinity is running on a different node from pod-flag.

9.4 Taints and Tolerations

Taints are defined on nodes, and tolerations are defined on Pods to express which taints a Pod tolerates. A Pod can run on a node only if it tolerates that node's taints; otherwise it cannot be scheduled there.
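
A minimal sketch, with role=master chosen purely for illustration: taint node-1, then declare a matching toleration on a Pod so that it may still be scheduled there.

# Add a taint to node-1; remove it later with: kubectl taint nodes node-1 role-
$ kubectl taint nodes node-1 role=master:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: pod-toleration
spec:
  containers:
    - name: pod-toleration
      image: nginx
  tolerations:
    # tolerate the role=master:NoSchedule taint defined above
    - key: role
      operator: Equal
      value: master
      effect: NoSchedule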