The previous two posts (1. Volume, 2. PV & PVC) used a Redis deployment to work through Kubernetes Volumes, PVs and PVCs. Applications fall into two categories, stateful and stateless; Redis, which needs to read and write disk (like an RDBMS with transaction support), is clearly a stateful application. This post therefore walks through deploying a stateful application on Kubernetes.

StatefulSet Basics

StatefulSet is one of the Pod controller implementations. It is used to deploy and scale the Pods of stateful applications, guaranteeing their startup order and the uniqueness of each Pod. It suits applications that need:

  • Stable, unique network identifiers.
  • Stable, persistent storage.
  • Ordered, graceful deployment and scaling.
  • Ordered, graceful deletion and termination.
  • Ordered, automated rolling updates.

Deploying

Next, the earlier Redis deployment is converted to a StatefulSet. The steps after the change:

  1. Create the ConfigMap (see part 1)
  2. Change the Deployment to a StatefulSet

Changing the Deployment to a StatefulSet

vim my-redis-statefulsets.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: my-ns
spec:
  replicas: 1
  serviceName: redis
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      labels:
        name: redis
    spec:
      containers:
        - name: redis
          image: redis
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
#          command: ["redis-server","/etc/redis/redis.conf"]
          command:
            - redis-server
            - /etc/redis/redis.conf
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: my-redis-config
              mountPath: /etc/redis/redis.conf
              subPath: redis.conf
            - name: my-redis-storage
              mountPath: /data
      volumes:
        - name: my-redis-storage
          persistentVolumeClaim:
            claimName: redis-pvc # the PVC created in part 2
        - name: my-redis-config
          configMap:
            name: my-redis-config
            items:
              - key: redis.conf
                path: redis.conf
 
---
kind: Service
apiVersion: v1
metadata:
  labels:
    name: redis
  name: redis
  namespace: my-ns
spec:
  type: NodePort
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
    nodePort: 30379
  selector:
    name: redis          

In this manifest:

  1. The Service provides the network identity for the Pods; a strictly headless Service (clusterIP: None) is what generates per-Pod DNS records, while here a NodePort Service named redis is used so the instance can also be reached from outside the cluster, and the StatefulSet references it via serviceName.
  2. The StatefulSet manages the Pod resources.
  3. Dedicated, fixed storage would normally come from volumeClaimTemplates backed by static or dynamic PV provisioning; here we simply reuse the PV and PVC created in part 2 (redis-pvc).
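
Apply the manifest and check what comes up. A minimal verification sketch (output omitted; the exact values depend on your cluster):

master# kubectl apply -f my-redis-statefulsets.yaml
master# kubectl get statefulset -n my-ns
master# kubectl get pod -n my-ns -l name=redis

The Pod should be named redis-0: StatefulSet Pods get a fixed ordinal suffix instead of the random hash a Deployment appends, which is the "stable, unique network identifier" guarantee listed above.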

Testing

Connect with redis-cli to NodeIP:NodePort:

# redis-cli -h YourNodeIp -p 30379 -a 123456
YourNodeIp:30379> info
# Server
redis_version:7.0.4

Connection successful!

The previous post used Kubernetes storage volumes (Volume) to deploy Redis. This post continues with Kubernetes storage, using the same Redis example to learn and practice PV and PVC.

PV & PVC

To hide the underlying storage infrastructure from users and developers, Kubernetes inserts an intermediate layer between users and storage services: PersistentVolume (PV) and PersistentVolumeClaim (PVC).

Changing the Redis Volume to PV & PVC

Next, the earlier Redis storage is switched to the PV/PVC approach. The steps after the change:

  1. Create the ConfigMap (see part 1)
  2. Declare the PV and PVC (new)
  3. Add the Deployment (with its Volume changed to the PVC from step 2)
  4. Expose the Service (see part 1)

Step 2: declare the PV and PVC

vim my-redis-pv-pvc.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-redis-pv
  labels:
     app: my-redis
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data/my-redis"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  namespace: my-ns
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Apply the manifest:

master# kubectl apply -f my-redis-pv-pvc.yaml

Check the PV status:

master# kubectl get pv 
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
my-redis-pv   1Gi        RWO            Retain           Bound    my-ns/redis-pvc                           50m

Check the PVC status:

master# kubectl get pvc -n my-ns
NAME        STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-pvc   Bound    my-redis-pv   1Gi        RWO                           54m

The PV and PVC have been bound successfully.

Step 3: add the Deployment (with its Volume changed to the PVC from step 2)

vim my-redis-deployment-pvc.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-redis # Unique name for the deployment
  namespace: my-ns
  labels:
    app: my-redis       # Labels to be applied to this deployment
spec:
  selector:
    matchLabels:     # This deployment applies to the Pods matching these labels
      app: my-redis
      role: master
      tier: backend
  replicas: 1        # Run a single pod in the deployment
  template:          # Template for the pods that will be created by this deployment
    metadata:
      labels:        # Labels to be applied to the Pods in this deployment
        app: my-redis
        role: master
        tier: backend
    spec:            # Spec for the container which will be run inside the Pod.
      containers:
        - name: my-redis
          image: redis
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
#          command: ["redis-server","/etc/redis/redis.conf"]
          command:
            - redis-server
            - /etc/redis/redis.conf
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: my-redis-config
              mountPath: /etc/redis/redis.conf
              subPath: redis.conf
            - name: my-redis-persistent-storage
              mountPath: /data
      volumes:
        - name: my-redis-persistent-storage
          persistentVolumeClaim:
            claimName: redis-pvc # changed to the PVC declared in step 2
        - name: my-redis-config
          configMap:
            name: my-redis-config
            items:
              - key: redis.conf
                path: redis.conf

Create the Deployment:

master# kubectl apply -f my-redis-deployment-pvc.yaml

Check the status:

master# kubectl get pod -n my-ns
NAME                                   READY   STATUS    RESTARTS   AGE 
my-redis-6565459689-mbptf              1/1     Running   0          53m

Testing

Connect with redis-cli to NodeIP:NodePort:

# redis-cli -h YourNodeIp -p 30379 -a 123456
YourNodeIp:30379> info
# Server
redis_version:7.0.4

Connection successful.
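
To see the PV doing its job, a quick persistence check (a sketch; YourNodeIp is a placeholder, and this assumes the single-node setup used here so the replacement Pod lands on the same node as the hostPath directory):

# redis-cli -h YourNodeIp -p 30379 -a 123456 set persist-test hello
master# kubectl delete pod -n my-ns -l app=my-redis
master# kubectl get pod -n my-ns
# redis-cli -h YourNodeIp -p 30379 -a 123456 get persist-test

Because the ConfigMap enables RDB snapshots and Redis saves on a clean shutdown, the key written before deleting the Pod should still be there once the replacement Pod is Running, since /data now lives on the PV rather than inside the container.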

This post uses deploying Redis on Kubernetes to learn how to use Kubernetes storage volumes (Volume).

A Pod has its own lifecycle, so neither the containers running inside it nor their data can persist on their own. Kubernetes supports storage volumes similar to Docker's, except that a Volume is bound to the Pod resource rather than to a container.

Pod Volume

Declaring a Volume in a Pod involves two parts:

  1. One is the .spec.volumes field, which defines the list of storage volumes at the Pod level; it supports many different volume types whose configuration parameters differ widely:

spec:
  volumes:
    - name: logdata
      emptyDir: {}
    - name: example
      gitRepo:
        repository: https://github.com/iKubernetes/k8s_book.git
        revision: master
        directory: .
  2. The other is the .spec.containers.volumeMounts field, which defines the list of volume mounts on a container. A container can only mount volumes defined in its own Pod, and it may also mount none at all:

spec:
  containers:
    - name: <string>
      volumeMounts:
        - name: <string>      # required
          mountPath: <string> # required
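
Putting the two halves together, and to show that a volume belongs to the Pod rather than to any single container, here is a minimal sketch of a Pod in which two containers share one emptyDir volume (all names here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /producer/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /producer
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /consumer
  volumes:
    - name: shared-data
      emptyDir: {}

Whatever the writer container puts under /producer is immediately visible to the reader container under /consumer, and the data lives exactly as long as the Pod does.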

An earlier post (Declarative Object Configuration) covered deploying nginx; now we deploy Redis. Unlike nginx, this needs an extra ConfigMap and a Volume for configuration and storage.

Environment

  • A Kubernetes cluster (master × 1, node × 1)
  • A terminal on a host that can reach the Kubernetes master

Create the ConfigMap

A ConfigMap is used to configure Redis. It contains the settings needed in the Redis configuration file and is mounted into the application container as that file when the Pod is created. my-redis-config.yaml is as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-redis-config
  namespace: my-ns
data:
  redis.conf: |
    requirepass 123456
    protected-mode no
    port 6379
    tcp-backlog 511
    timeout 0
    tcp-keepalive 300
    daemonize no
    supervised no
    pidfile /var/run/redis_6379.pid
    loglevel notice
    logfile ""
    databases 16
    always-show-logo yes
    save 900 1
    save 300 10
    save 60 10000
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir /data
    slave-serve-stale-data yes
    slave-read-only yes
    repl-diskless-sync no
    repl-diskless-sync-delay 5
    repl-disable-tcp-nodelay no
    slave-priority 100
    lazyfree-lazy-eviction no
    lazyfree-lazy-expire no
    lazyfree-lazy-server-del no
    slave-lazy-flush no
    appendonly no
    appendfilename "appendonly.aof"
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    aof-load-truncated yes
    aof-use-rdb-preamble no
    lua-time-limit 5000
    slowlog-log-slower-than 10000
    slowlog-max-len 128
    latency-monitor-threshold 0
    notify-keyspace-events Ex
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64
    list-max-ziplist-size -2
    list-compress-depth 0
    set-max-intset-entries 512
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
    hll-sparse-max-bytes 3000
    activerehashing yes
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit slave 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60
    hz 10
    aof-rewrite-incremental-fsync yes    

Apply it:

# kubectl apply -f my-redis-config.yaml
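
To confirm the ConfigMap exists and that redis.conf was stored under the expected key, a quick check (a sketch):

# kubectl get configmap my-redis-config -n my-ns
# kubectl describe configmap my-redis-config -n my-ns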

Create the Deployment

Create a Deployment as the carrier that schedules the Pod running Redis. my-redis-deployment.yaml is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-redis # Unique name for the deployment
  namespace: my-ns
  labels:
    app: my-redis       # Labels to be applied to this deployment
spec:
  selector:
    matchLabels:     # This deployment applies to the Pods matching these labels
      app: my-redis
      role: master
      tier: backend
  replicas: 1        # Run a single pod in the deployment
  template:          # Template for the pods that will be created by this deployment
    metadata:
      labels:        # Labels to be applied to the Pods in this deployment
        app: my-redis
        role: master
        tier: backend
    spec:            # Spec for the container which will be run inside the Pod.
      containers:
        - name: my-redis
          image: redis
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
#          command: ["redis-server","/etc/redis/redis.conf"]
          command:
            - redis-server
            - /etc/redis/redis.conf
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: my-redis-config
              mountPath: /etc/redis/redis.conf
              subPath: redis.conf
            - name: my-redis-storage
              mountPath: /data
      volumes:
        - name: my-redis-storage
          emptyDir: {}
        - name: my-redis-config
          configMap:
            name: my-redis-config
            items:
              - key: redis.conf
                path: redis.conf

Apply it:

# kubectl apply -f my-redis-deployment.yaml
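
Before exposing the service, it is worth verifying that the Pod is running and that the ConfigMap really was mounted as /etc/redis/redis.conf (a sketch; kubectl exec resolves deploy/my-redis to one of the Deployment's Pods):

# kubectl get pods -n my-ns -l app=my-redis
# kubectl exec -n my-ns deploy/my-redis -- head -n 3 /etc/redis/redis.conf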

Create the Service

Expose the service externally via NodePort. my-redis-service.yaml is as follows:

apiVersion: v1
kind: Service        # Type of Kubernetes resource
metadata:
  name: my-redis-svc # Name of the Kubernetes resource
  namespace: my-ns
  labels:            # Labels that will be applied to this resource
    app: my-redis
    role: master
    tier: backend
spec:
  type: NodePort
  ports:
    - port: 6379       # Map incoming connections on port 6379 to the target port 6379 of the Pod
      targetPort: 6379
      nodePort: 30379
  selector:          # Map any Pod with the specified labels to this service
    app: my-redis
    role: master
    tier: backend

Apply it:

# kubectl apply -f my-redis-service.yaml
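
A quick check that the Service exists and carries the expected NodePort (a sketch):

# kubectl get svc my-redis-svc -n my-ns

The PORT(S) column should show 6379:30379/TCP.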

Testing

Test with redis-cli against NodeIP:NodePort:

redis-cli -h YourNodeIp -p 30379 -a 123456
YourNodeIp:30379> info
# Server
redis_version:7.0.4
...

Connection successful.

This post covers Kubernetes resource management, including resource configuration and declarative management.

Kubernetes exposes a RESTful API in which every kind of component is abstracted as a "resource" and instantiated by assigning values to its attributes. The API consists mainly of two parts, resource types and controllers: resources are objects written to the cluster, usually in JSON or YAML format, while controllers automatically create and drive things once the resource has been stored in the cluster.

Commonly used Kubernetes resources include the following (they can also be listed straight from a live cluster, as sketched after this list):

  • Pod
  • Deployment
  • Service
  • Ingress
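
The available resource types can be enumerated with kubectl api-resources (the full list is long, so the first command trims it with head):

kubectl api-resources | head
kubectl api-resources --api-group=apps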

Resource Categories

Grouping resources by their primary function, the Kubernetes API objects fall roughly into:

  • Workload

    • ReplicationController
    • ReplicaSet
    • Deployment
    • StatefulSet
    • DaemonSet
    • Job
  • Discovery & LB

  • Config & Storage

  • Cluster

    • Namespace
    • Node
    • Role
    • ClusterRole
  • Metadata

They are all designed around one core goal: running and enriching Pod resources, so as to give containerized applications more flexible and complete operation and management components.

Resource Configuration

The standard format generally includes these top-level fields:

  • kind
  • apiVersion
  • metadata (object metadata)
  • spec (describes the desired state of the object)
  • status (maintained by the system itself after the object is created)
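
The reference documentation for each of these fields can be pulled directly from the API with kubectl explain, for example:

kubectl explain pod.metadata
kubectl explain pod.spec.containers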

Take an nginx pod.yaml as an example:


apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9
      ports:
        - containerPort: 80

  • metadata:
    • name: the name of this object
    • labels: the object's labels (key-value pairs)
  • spec
    • containers: a list of container objects; one or more containers can be nested within it.
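
As a quick check, the manifest above can be applied and its labels inspected (a sketch; it assumes the file is saved as nginx-pod.yaml):

kubectl apply -f nginx-pod.yaml
kubectl get pod nginx-pod --show-labels
kubectl delete -f nginx-pod.yaml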

Declarative Object Configuration

Provide manifest files to the Kubernetes system and delegate to it the job of tracking the live objects' state changes. Management operations go through kubectl apply:

  • Create
kubectl apply -f <directory>/
  • Update
kubectl apply -f <directory>/
  • Delete
kubectl apply -f <directory>/ --prune -l your-label
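
Before applying, pending changes can also be previewed without modifying the cluster (a sketch; kubectl diff is standard, and --dry-run=server requires a reasonably recent kubectl):

kubectl diff -f <directory>/
kubectl apply -f <directory>/ --dry-run=server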

For deletion, the imperative command is the recommended approach:

kubectl delete -f <filename>

Hands-on: deploying nginx declaratively

The Deployment definition, nginx_dp.yaml:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-dp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80

Expose the service with a Service definition, nginx_svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx-pod
  ports:
    - name: default
      protocol: TCP
      port: 80
      targetPort: 80
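
Apply both manifests and reach nginx through the NodePort. Since nginx_svc.yaml does not pin a nodePort, Kubernetes assigns one, which kubectl get svc reveals (a sketch; <NodeIP> and <NodePort> are placeholders):

kubectl apply -f nginx_dp.yaml
kubectl apply -f nginx_svc.yaml
kubectl get deployment nginx-deployment
kubectl get svc nginx-svc
curl http://<NodeIP>:<NodePort>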

Accessing nginx should return the default nginx welcome page.
