Implementing Vitess with KinD.
Related posts worth reading:
https://westlife0615.tistory.com/407
https://westlife0615.tistory.com/80
Running Kubernetes with KinD.
Let's build a local Kubernetes cluster with KinD.
Run the command below to create the Kubernetes cluster YAML file.
< create k8s-cluster.yaml >
cat <<EOF > /tmp/k8s-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
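If the kind CLI is not installed yet, it can be installed via Homebrew or a Go toolchain; kind also needs a running Docker daemon. A minimal sketch (the pinned version below is an assumption; any recent release should work):

# Install the kind CLI (pick one; the version pin is an assumption).
brew install kind                      # macOS / Homebrew
go install sigs.k8s.io/kind@v0.20.0    # with a Go toolchain

# Verify the CLI and confirm the Docker daemon is reachable.
kind version
docker info > /dev/null && echo "docker OK"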
Once the k8s-cluster.yaml file has been created,
create the Kubernetes cluster using the kind CLI.
< create cluster >
kind create cluster \
  --name test-cluster \
  --image kindest/node:v1.24.0 \
  --config /tmp/k8s-cluster.yaml
< create cluster output >
Creating cluster "test-cluster" ...
 ✓ Ensuring node image (kindest/node:v1.24.0) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-test-cluster"
You can now use your cluster with:

kubectl cluster-info --context kind-test-cluster

Thanks for using kind! 😊
When output like the above appears, the Kubernetes cluster has been created successfully.
kubectl get node
NAME                         STATUS   ROLES           AGE   VERSION
test-cluster-control-plane   Ready    control-plane   98s   v1.24.0
test-cluster-worker          Ready    <none>          78s   v1.24.0
test-cluster-worker2         Ready    <none>          77s   v1.24.0
test-cluster-worker3         Ready    <none>          78s   v1.24.0
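For reference, when you are done experimenting, the whole cluster can be torn down with a single command:

kind delete cluster --name test-cluster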
Creating a Vitess Cluster.
Now let's create a Vitess cluster on the Kubernetes cluster we built with KinD.
The Vitess cluster will be created as a Custom Resource.
Most of the steps follow the material published on the Vitess GitHub site,
and the Vitess version used is version 15.
https://github.com/vitessio/vitess
1. Creating the Vitess CRDs.
First, let's create the Vitess Custom Resource Definitions.
VITESS_CRD_URL=https://raw.githubusercontent.com/vitessio/vitess/v15.0.2/examples/operator/operator.yaml
curl $VITESS_CRD_URL -o /tmp/vitess-crd.yaml
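Before applying the manifest, it can be worth a quick look at what it bundles; besides the CRDs, operator.yaml also ships the vitess-operator Deployment and its RBAC objects. A small sanity check:

# Count the resource kinds bundled in the downloaded manifest.
grep "^kind:" /tmp/vitess-crd.yaml | sort | uniq -c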
Before applying anything, confirm that KUBECONFIG is pointing at the test Kubernetes cluster.
kubectl config current-context
kind-test-cluster
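If the current context points somewhere else, switch to the KinD cluster explicitly:

# Point kubectl at the cluster created above.
kubectl config use-context kind-test-cluster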
Now create the Vitess-related CRDs.
< create Vitess CRDs >
kubectl apply -f /tmp/vitess-crd.yaml
< Vitess CRD creation log >
customresourcedefinition.apiextensions.k8s.io/etcdlockservers.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitessbackups.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitessbackupstorages.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitesscells.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitessclusters.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitesskeyspaces.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitessshards.planetscale.com created
serviceaccount/vitess-operator created
role.rbac.authorization.k8s.io/vitess-operator created
rolebinding.rbac.authorization.k8s.io/vitess-operator created
deployment.apps/vitess-operator created
priorityclass.scheduling.k8s.io/vitess-operator-control-plane created
priorityclass.scheduling.k8s.io/vitess created
The Custom Resource Definitions are created as shown in the log above.
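You can also double-check that the CRDs were registered and that the operator Deployment came up (assuming everything landed in the default namespace, as above):

# List the CRDs registered by the operator manifest.
kubectl get crd | grep planetscale.com

# Confirm the operator Deployment is available.
kubectl get deployment vitess-operator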
2. Creating the Vitess Cluster.
Now let's create the Vitess cluster itself.
< create Vitess Cluster yaml >
cat <<EOF > /tmp/vitess-cluster.yaml
# The following example is minimalist. The security policies
# and resource specifications are not meant to be used in production.
# Please refer to the operator documentation for recommendations on
# production settings.
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: example
spec:
  images:
    vtctld: vitess/lite:v15.0.2-mysql80
    vtadmin: vitess/vtadmin:v15.0.0-rc1
    vtgate: vitess/lite:v15.0.2-mysql80
    vttablet: vitess/lite:v15.0.2-mysql80
    vtbackup: vitess/lite:v15.0.2-mysql80
    vtorc: vitess/lite:v15.0.2-mysql80
    mysqld:
      mysql80Compatible: vitess/lite:v15.0.2-mysql80
    mysqldExporter: prom/mysqld-exporter:v0.11.0
  cells:
  - name: zone1
    gateway:
      authentication:
        static:
          secret:
            name: example-cluster-config
            key: users.json
      replicas: 1
      extraFlags:
        mysql_server_version: 8.0.23
      resources:
        requests:
          cpu: 100m
          memory: 256Mi
        limits:
          memory: 256Mi
  vitessDashboard:
    cells:
    - zone1
    extraFlags:
      security_policy: read-only
    replicas: 1
    resources:
      limits:
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
  vtadmin:
    rbac:
      name: example-cluster-config
      key: rbac.yaml
    cells:
    - zone1
    apiAddresses:
    - http://localhost:14001
    replicas: 1
    readOnly: false
    apiResources:
      limits:
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
    webResources:
      limits:
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
  keyspaces:
  - name: commerce
    durabilityPolicy: none
    turndownPolicy: Immediate
    vitessOrchestrator:
      resources:
        limits:
          memory: 128Mi
        requests:
          cpu: 100m
          memory: 128Mi
      extraFlags:
        recovery-period-block-duration: 5s
    partitionings:
    - equal:
        parts: 1
        shardTemplate:
          databaseInitScriptSecret:
            name: example-cluster-config
            key: init_db.sql
          replication:
            enforceSemiSync: false
          tabletPools:
          - cell: zone1
            type: replica
            replicas: 2
            vttablet:
              extraFlags:
                disable_active_reparents: "true"
                db_charset: utf8mb4
              resources:
                limits:
                  memory: 256Mi
                requests:
                  cpu: 100m
                  memory: 256Mi
            mysqld:
              resources:
                limits:
                  memory: 512Mi
                requests:
                  cpu: 100m
                  memory: 512Mi
            dataVolumeClaimTemplate:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 10Gi
  updateStrategy:
    type: Immediate
---
apiVersion: v1
kind: Secret
metadata:
  name: example-cluster-config
type: Opaque
stringData:
  users.json: |
    {
      "user": [{
        "UserData": "user",
        "Password": ""
      }]
    }
  init_db.sql: |
    # This file is executed immediately after mysql_install_db,
    # to initialize a fresh data directory.

    ###############################################################################
    # Equivalent of mysql_secure_installation
    ###############################################################################
    # Changes during the init db should not make it to the binlog.
    # They could potentially create errant transactions on replicas.
    SET sql_log_bin = 0;

    # Remove anonymous users.
    DELETE FROM mysql.user WHERE User = '';

    # Disable remote root access (only allow UNIX socket).
    DELETE FROM mysql.user WHERE User = 'root' AND Host != 'localhost';

    # Remove test database.
    DROP DATABASE IF EXISTS test;

    ###############################################################################
    # Vitess defaults
    ###############################################################################

    # Vitess-internal database.
    CREATE DATABASE IF NOT EXISTS _vt;

    # Note that definitions of local_metadata and shard_metadata should be the same
    # as in production which is defined in go/vt/mysqlctl/metadata_tables.go.
    CREATE TABLE IF NOT EXISTS _vt.local_metadata (
      name VARCHAR(255) NOT NULL,
      value VARCHAR(255) NOT NULL,
      db_name VARBINARY(255) NOT NULL,
      PRIMARY KEY (db_name, name)
    ) ENGINE=InnoDB;
    CREATE TABLE IF NOT EXISTS _vt.shard_metadata (
      name VARCHAR(255) NOT NULL,
      value MEDIUMBLOB NOT NULL,
      db_name VARBINARY(255) NOT NULL,
      PRIMARY KEY (db_name, name)
    ) ENGINE=InnoDB;

    # Admin user with all privileges.
    CREATE USER 'vt_dba'@'localhost';
    GRANT ALL ON *.* TO 'vt_dba'@'localhost';
    GRANT GRANT OPTION ON *.* TO 'vt_dba'@'localhost';

    # User for app traffic, with global read-write access.
    CREATE USER 'vt_app'@'localhost';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, FILE,
      REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES,
      LOCK TABLES, EXECUTE, REPLICATION CLIENT, CREATE VIEW,
      SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER
      ON *.* TO 'vt_app'@'localhost';

    # User for app debug traffic, with global read access.
    CREATE USER 'vt_appdebug'@'localhost';
    GRANT SELECT, SHOW DATABASES, PROCESS ON *.* TO 'vt_appdebug'@'localhost';

    # User for administrative operations that need to be executed as non-SUPER.
    # Same permissions as vt_app here.
    CREATE USER 'vt_allprivs'@'localhost';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, FILE,
      REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES,
      LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW,
      SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER
      ON *.* TO 'vt_allprivs'@'localhost';

    # User for slave replication connections.
    # TODO: Should we set a password on this since it allows remote connections?
    CREATE USER 'vt_repl'@'%';
    GRANT REPLICATION SLAVE ON *.* TO 'vt_repl'@'%';

    # User for Vitess filtered replication (binlog player).
    # Same permissions as vt_app.
    CREATE USER 'vt_filtered'@'localhost';
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, FILE,
      REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES,
      LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW,
      SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER
      ON *.* TO 'vt_filtered'@'localhost';

    # User for Orchestrator (https://github.com/openark/orchestrator).
    # TODO: Reenable when the password is randomly generated.
    CREATE USER 'orc_client_user'@'%' IDENTIFIED BY 'orc_client_user_password';
    GRANT SUPER, PROCESS, REPLICATION SLAVE, RELOAD
      ON *.* TO 'orc_client_user'@'%';
    GRANT SELECT ON _vt.* TO 'orc_client_user'@'%';

    FLUSH PRIVILEGES;

    RESET SLAVE ALL;
    RESET MASTER;
  rbac.yaml: |
    rules:
      - resource: "*"
        actions:
          - "get"
          - "create"
          - "put"
          - "ping"
        subjects: ["*"]
        clusters: ["*"]
      - resource: "Shard"
        actions:
          - "emergency_reparent_shard"
          - "planned_reparent_shard"
        subjects: ["*"]
        clusters:
          - "local"
EOF
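Optionally, before the real apply, a server-side dry run can validate the manifest against the CRD schemas installed in step 1 (standard kubectl; nothing is persisted):

# Validate the manifest against the API server without creating anything.
kubectl apply --dry-run=server -f /tmp/vitess-cluster.yaml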
< create Vitess Cluster >
kubectl apply -f /tmp/vitess-cluster.yaml
< Vitess Cluster creation log >
vitesscluster.planetscale.com/example created
secret/example-cluster-config created
Once applied, the operator creates the related Pods.
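To watch them come up, list the Pods; based on the manifest above, you should eventually see the etcd lockserver, vtctld, vtgate, vtadmin, vtorc, and two vttablet Pods alongside the vitess-operator Pod (exact Pod names carry generated suffixes):

kubectl get pods --watch

Once the vtgate Pod is ready, you can reach MySQL through it with a port-forward. The vtgate Service name is generated by the operator, so look it up first; <vtgate-service> below is a placeholder, not a fixed name, and the user account with an empty password comes from users.json in the Secret above:

# Find the generated vtgate Service name.
kubectl get service | grep vtgate

# Forward the MySQL port and connect through vtgate.
kubectl port-forward service/<vtgate-service> 15306:3306 &
mysql -h 127.0.0.1 -P 15306 -u user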
Wrapping up..
In the posts that follow, I plan to run various experiments with Vitess.