
Deploying MySQL as a Stateful Service

Source: SegmentFault

Published: 2023-02-25 09:19:49


Overview

This article walks through deploying and testing a MySQL cluster on Kubernetes with Ceph as the storage backend.

This test case comes from the official Kubernetes documentation; interested readers can refer to the original tutorial.

I deployed a MySQL cluster on Kubernetes consisting of one master and two slaves. MySQL's data directory, /var/lib/mysql, is mounted through a PV onto a Ceph RBD image, so when a MySQL pod is rescheduled to another node it can still attach to its original data.

For master-slave data replication the cluster uses the xtrabackup tool, which this article does not discuss in depth.

Environment

  • kubernetes 1.8.2
  • mysql 5.7: one master and two slaves forming a master-slave cluster
  • a ceph cluster

Create the ConfigMap

kubectl create -f https://k8s.io/docs/tasks/run-application/mysql-configmap.yaml

# mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
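Which of these two fragments a pod gets is decided by its StatefulSet ordinal: pod 0 becomes the master, the rest become slaves. The same ordinal also feeds a generated server-id.cnf, as the init-mysql container in the StatefulSet below does. A standalone sketch of that derivation (the hostname value here is a stand-in for what a real pod would see):

```shell
# Derive a unique MySQL server-id from a StatefulSet pod's ordinal,
# mirroring the init-mysql script in the StatefulSet manifest.
hostname="mysql-2"            # stand-in for the real pod hostname
[[ $hostname =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
# Offset by 100 to avoid the reserved server-id=0.
echo "server-id=$((100 + ordinal))"
```

For mysql-2 this prints server-id=102, so each replica ends up with a distinct, stable server-id across reschedules.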

Create the Services

kubectl create -f https://k8s.io/docs/tasks/run-application/mysql-services.yaml

# mysql-services.yaml
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
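Because the first service is headless (clusterIP: None), each StatefulSet pod gets a stable per-pod DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local, which is what lets slaves find mysql-0.mysql as the master. A quick sketch of the names the three replicas get (namespace "default" assumed):

```shell
# Enumerate the stable DNS names the headless "mysql" service provides
# for each StatefulSet pod (namespace "default" is an assumption).
svc=mysql
ns=default
for i in 0 1 2; do
  echo "mysql-${i}.${svc}.${ns}.svc.cluster.local"
done
```

The short form mysql-0.mysql works from pods in the same namespace, which is why the tutorial's commands use it.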

Create the StatefulSet

kubectl create -f https://k8s.io/docs/tasks/run-application/mysql-statefulset.yaml

apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: 172.16.18.100:5000/mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: 172.16.18.100:5000/gcr.io/google-samples/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: 172.16.18.100:5000/mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: 172.16.18.100:5000/gcr.io/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql

          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi

          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.orig),\
                      MASTER_HOST='mysql-0.mysql',\
                      MASTER_USER='root',\
                      MASTER_PASSWORD='',\
                      MASTER_CONNECT_RETRY=10;\
                    START SLAVE;" || exit 1
          fi

          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      # Tail of the manifest restored from the official mysql-statefulset.yaml;
      # storageClassName and size match the "ceph" 10Gi PVCs listed below.
      storageClassName: ceph
      resources:
        requests:
          storage: 10Gi

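Before starting replication, the xtrabackup sidecar turns the cloned snapshot's binlog coordinates into a CHANGE MASTER TO statement. A standalone sketch of that parsing step, run against a made-up xtrabackup_binlog_info file (the file contents are invented, and the regex is a tightened equivalent of the manifest's version):

```shell
# Parse binlog coordinates (file name + position) from a sample
# xtrabackup_binlog_info file, as the xtrabackup sidecar does before
# issuing CHANGE MASTER TO. The coordinates here are illustrative.
printf 'mysql-bin.000003\t154\n' > xtrabackup_binlog_info
[[ $(<xtrabackup_binlog_info) =~ ^([^[:space:]]+)[[:space:]]+([0-9]+) ]] || exit 1
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}', MASTER_LOG_POS=${BASH_REMATCH[2]}"
rm -f xtrabackup_binlog_info
```

When cloning from an existing slave, this parsing is unnecessary: xtrabackup_slave_info already contains a ready-made partial CHANGE MASTER TO statement, which is why the script prefers that file.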
Check the created Kubernetes API objects

Check the PVs and PVCs

[root@172 ~]# kubectl get pv,pvc | grep mysql
pv/pvc-2b89e760-d64a-11e7-9581-000c29f99475   10Gi       RWO            Delete           Bound     default/data-mysql-0   ceph                     1m
pv/pvc-41126384-d64a-11e7-9581-000c29f99475   10Gi       RWO            Delete           Bound     default/data-mysql-1   ceph                     39s
pv/pvc-5122d058-d64a-11e7-9581-000c29f99475   10Gi       RWO            Delete           Bound     default/data-mysql-2   ceph                     12s

pvc/data-mysql-0   Bound     pvc-2b89e760-d64a-11e7-9581-000c29f99475   10Gi       RWO            ceph           1m
pvc/data-mysql-1   Bound     pvc-41126384-d64a-11e7-9581-000c29f99475   10Gi       RWO            ceph           39s
pvc/data-mysql-2   Bound     pvc-5122d058-d64a-11e7-9581-000c29f99475   10Gi       RWO            ceph           12s
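The claim names above are not arbitrary: StatefulSet PVCs are named <claim-template>-<pod>, so the "data" volume claim template (the template name is inferred from the claims listed) combined with each pod name produces exactly these claims, and a rescheduled pod rebinds to its own claim by name:

```shell
# StatefulSet PVC names follow <template-name>-<pod-name>; with a
# template named "data" and pods mysql-0..2, the claims are:
template=data
for i in 0 1 2; do
  echo "${template}-mysql-${i}"
done
```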

Check the pods

[root@172 ~]# kubectl get po -owide
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
mysql-0   2/2       Running   0          1m        192.168.5.188   172.16.20.10
mysql-1   2/2       Running   0          1m        192.168.3.24    172.16.20.12
mysql-2   2/2       Running   0          35s       192.168.2.165   172.16.20.11

Testing

Write data on the MySQL master

(The SQL heredoc below is restored from the official tutorial; it creates the test.messages table and the 'hello' row that the reads which follow query back.)

kubectl run mysql-client --image=172.16.18.100:5000/mysql:5.7 -i --rm --restart=Never --\
  mysql -h mysql-0.mysql <<EOF
CREATE DATABASE test;
CREATE TABLE test.messages (message VARCHAR(250));
INSERT INTO test.messages VALUES ('hello');
EOF

Read data from a MySQL slave

kubectl run mysql-client --image=172.16.18.100:5000/mysql:5.7 -i -t --rm --restart=Never --\
  mysql -h mysql-read -e "SELECT * FROM test.messages"

Migrating the MySQL master

Cordon node 172.16.20.10 to mark it unschedulable:

kubectl cordon 172.16.20.10

[root@172 ~]# kubectl get no
NAME           STATUS                     ROLES     AGE       VERSION
172.16.20.10   Ready,SchedulingDisabled       3d        v1.8.2
172.16.20.11   Ready                          4d        v1.8.2
172.16.20.12   Ready                          4d        v1.8.2

Migrate mysql-0

kubectl delete pod/mysql-0

[root@172 mysql]# kubectl get po -l app=mysql -owide -w 
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
mysql-0   2/2       Running   0          9m        192.168.5.188   172.16.20.10
mysql-1   2/2       Running   0          9m        192.168.3.24    172.16.20.12
mysql-2   2/2       Running   0          8m        192.168.2.165   172.16.20.11
mysql-0   2/2       Terminating   0         9m        192.168.5.188   172.16.20.10
mysql-0   1/2       Terminating   0         10m       192.168.5.188   172.16.20.10
mysql-0   0/2       Terminating   0         10m           172.16.20.10
mysql-0   0/2       Terminating   0         11m           172.16.20.10
mysql-0   0/2       Terminating   0         11m           172.16.20.10
mysql-0   0/2       Pending   0         0s        
mysql-0   0/2       Pending   0         0s            172.16.20.12
mysql-0   0/2       Init:0/2   0         0s            172.16.20.12
mysql-0   0/2       Init:1/2   0         3s        192.168.3.25   172.16.20.12
mysql-0   0/2       PodInitializing   0         4s        192.168.3.25   172.16.20.12
mysql-0   1/2       Running   0         5s        192.168.3.25   172.16.20.12
mysql-0   2/2       Running   0         9s        192.168.3.25   172.16.20.12

Verify the data

kubectl run mysql-client --image=172.16.18.100:5000/mysql:5.7 -i --rm --restart=Never --\
mysql -h mysql-0.mysql -e "SELECT * FROM test.messages"

message
hello

As shown, after mysql-0 migrated from 172.16.20.10 to 172.16.20.12, the data written before the migration can still be queried.

Uncordon the node

[root@172 ~]# kubectl uncordon 172.16.20.10
node "172.16.20.10" uncordoned

Migrating a MySQL slave

[root@172 ~]# kubectl get po -owide
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
mysql-0   2/2       Running   0          2h        192.168.3.25    172.16.20.12
mysql-1   2/2       Running   0          3h        192.168.3.24    172.16.20.12
mysql-2   2/2       Running   0          3h        192.168.2.165   172.16.20.11

Migrate mysql-1

[root@172 ~]# kubectl delete pod/mysql-1
pod "mysql-1" deleted

mysql-1 moves from 172.16.20.12 to 172.16.20.10:

[root@172 ~]# kubectl get pod -l app=mysql -owide -w
NAME      READY     STATUS    RESTARTS   AGE       IP              NODE
mysql-0   2/2       Running   0          2h        192.168.3.25    172.16.20.12
mysql-1   2/2       Running   0          3h        192.168.3.24    172.16.20.12
mysql-2   2/2       Running   0          3h        192.168.2.165   172.16.20.11
mysql-1   2/2       Terminating   0         3h        192.168.3.24   172.16.20.12
mysql-1   0/2       Terminating   0         3h            172.16.20.12
mysql-1   0/2       Terminating   0         3h            172.16.20.12
mysql-1   0/2       Terminating   0         3h            172.16.20.12
mysql-1   0/2       Terminating   0         3h            172.16.20.12
mysql-1   0/2       Pending   0         0s        
mysql-1   0/2       Pending   0         0s            172.16.20.10
mysql-1   0/2       Init:0/2   0         0s            172.16.20.10
mysql-1   0/2       Init:1/2   0         2s        192.168.5.192   172.16.20.10
mysql-1   0/2       PodInitializing   0         3s        192.168.5.192   172.16.20.10
mysql-1   1/2       Running   0         4s        192.168.5.192   172.16.20.10
mysql-1   2/2       Running   0         8s        192.168.5.192   172.16.20.10

Verify the data on mysql-1

kubectl run mysql-client --image=172.16.18.100:5000/mysql:5.7 -i --rm --restart=Never --\
mysql -h mysql-1.mysql -e "SELECT * FROM test.messages"

message
hello

