The first General Availability (GA) release of MySQL Operator for Kubernetes tightened its security configuration, which complicates the upgrade process from pre-GA preview releases. A migration from a preview release (v8.0.28-2.0.3 and earlier) to a GA release is not possible without downtime.
This guide does not apply to future upgrades, say from 8.0.30-2.0.6 to a future release.
If you run multiple clusters, then these operations must be done for all clusters in order. In other words, perform step #1 for all clusters, then step #2 for all clusters, and so on.
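If you are unsure how many clusters the operator manages, one way to enumerate them (and the namespaces they live in) is to list the InnoDBCluster resources across all namespaces:
$> kubectl get innodbcluster --all-namespaces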
Prerequisites
A successful migration assumes the InnoDB Cluster is managed by MySQL Operator 8.0.28-2.0.3, and that you have credentials for an account (rootUser) with root-style privileges. The InnoDB Cluster must have a minimum of two running MySQL instances. Take a backup before proceeding.
The InnoDBCluster is named mycluster in this guide and is assumed to be in the default namespace.
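Before starting, it can help to confirm which operator image is deployed and that the cluster is online. A quick check, assuming the operator was installed into the default mysql-operator namespace:
$> kubectl get deployment -n mysql-operator mysql-operator \
    -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
$> kubectl get innodbcluster mycluster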
Summary of the Upgrade
The upgrade process described here terminates the old InnoDB Cluster and removes all data directories except for one. The old MySQL Operator for Kubernetes is then shut down. Next, a temporary MySQL server is configured to use the remaining data directory, and that server becomes the donor used to initialize a new InnoDB Cluster with a single instance. After the data is cloned, the temporary (donor) server is shut down, the old data directory is deleted, and the new InnoDB Cluster is scaled up to three instances.
Perform the Upgrade
Step #1: Terminate the old InnoDB Cluster and remove all data directories except for one. This example uses kubectl and observes the termination process:
$> kubectl delete innodbcluster mycluster
$> kubectl get pods -w
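Instead of watching manually, you can also block until the Pods are gone; a sketch assuming the default Pod names of a three-instance cluster (mycluster-0 through mycluster-2):
$> kubectl wait --for=delete pod/mycluster-0 pod/mycluster-1 pod/mycluster-2 --timeout=300s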
Deleting an InnoDB Cluster does not remove its associated PersistentVolumeClaims, as seen with:
$> kubectl get pvc
Delete all but the one with the highest number. For example, with a three-node cluster keep the one with index 2:
$> kubectl delete pvc datadir-mycluster-0 datadir-mycluster-1
Be careful to keep one, as otherwise the data cannot be recovered; in this case, do not delete datadir-mycluster-2.
Step #2: Terminate the MySQL Operator for Kubernetes; do so by deleting the associated Deployment that controls it:
$> kubectl delete deployment -n mysql-operator mysql-operator
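To confirm the operator has stopped, check that no operator Pods remain in its namespace:
$> kubectl get pods -n mysql-operator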
Step #3: Create a temporary MySQL Server using the datadir PersistentVolumeClaim, but first store the credentials in a Secret:
$> kubectl create secret generic myfixer \
--from-literal=rootUser="root" \
--from-literal=rootPassword="YOUR_PASSWORD"
This Secret is used to set up the temporary server, and is also used to clone the data into the new InnoDB Cluster.
Save the following manifest to a file, say myfixer.yaml, and then apply it. If datadir-mycluster-2 is not the datadir PVC you kept, then modify the name accordingly.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mycnf
data:
  my.cnf: |
    [mysqld]
    plugin_load_add=auth_socket.so
    loose_auth_socket=FORCE_PLUS_PERMANENT
    skip_log_error
    log_error_verbosity=3
    skip_log_bin
    skip_slave_start=1
---
apiVersion: v1
kind: Pod
metadata:
  name: myfixer
spec:
  restartPolicy: Never
  containers:
  - image: mysql/mysql-server:8.0.30
    imagePullPolicy: IfNotPresent
    name: myfixer
    args: [ 'mysqld', '--defaults-file=/mycnf/my.cnf' ]
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          key: rootPassword
          name: myfixer
    volumeMounts:
    - name: datadir
      mountPath: /var/lib/mysql
    - name: mycnf
      mountPath: /mycnf
  volumes:
  - name: datadir
    persistentVolumeClaim:
      claimName: datadir-mycluster-2
  - name: mycnf
    configMap:
      name: mycnf
Apply it:
$> kubectl apply -f myfixer.yaml
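The temporary server needs a short time to start. One way to follow its progress is to stream the Pod's log and wait for mysqld to report that it is ready for connections (with the my.cnf above, the server writes its log to the console):
$> kubectl logs -f myfixer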
Retrieve the IP of the Pod when it's ready:
$> kubectl get pod -o wide myfixer
The sixth column shows the IP address. You can verify that the server started correctly by opening a shell session with the credentials you stored (enter the rootUser's password when prompted with "If you don't see a command prompt, try pressing enter."):
$> kubectl run testshell --restart=Never --rm --image=mysql/mysql-operator:8.0.30-2.0.6 -it -- mysqlsh -uroot -h{IP_HERE}
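If you prefer to script this instead of reading the table, the Pod IP can also be extracted with a jsonpath query; the SHOW SCHEMAS statement below is only an illustrative sanity check:
$> IP=$(kubectl get pod myfixer -o jsonpath='{.status.podIP}')
$> kubectl run testshell --restart=Never --rm --image=mysql/mysql-operator:8.0.30-2.0.6 -it -- mysqlsh -uroot -h$IP --sql -e 'SHOW SCHEMAS;'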
Step #4: Deploy a new MySQL Operator for Kubernetes, as described in the installation documentation. For example:
$> kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml
$> kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml
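Wait for the new operator Deployment to become available before creating the cluster; for example:
$> kubectl wait --for=condition=Available deployment/mysql-operator -n mysql-operator --timeout=120s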
Step #5: Deploy your new InnoDB Cluster following the installation documentation, using the temporary server created earlier as the donor; but first create a Secret for your administrative user as described in the manual. For example:
$> kubectl create secret generic mypwds \
--from-literal=rootUser="root" \
--from-literal=rootHost="%" \
--from-literal=rootPassword="YOUR_PASSWORD"
The following manifest shows the initDB definition; this example assumes it is saved to a file named ic.yaml.
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster
spec:
  tlsUseSelfSigned: true
  instances: 1        # Important: deploy a single instance only
  secretName: mypwds  # Name of the Secret for the new InnoDBCluster
  version: 8.0.30     # Important: must equal the version of the temporary donor server
  router:
    instances: 1
  initDB:
    clone:
      donorUrl: root@{IP_HERE}:3306   # Use your myfixer user/IP here
      secretKeyRef:
        name: myfixer                 # Secret with the temporary server's credentials
  # Add custom options like configuration settings, storage configuration, etc. as needed
Apply it:
$> kubectl apply -f ic.yaml
Now observe the status until the single-instance MySQL cluster is ready. The time needed depends on the amount of data to clone.
$> kubectl get ic mycluster -w
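If the cluster appears to stay in an initializing state, the events recorded on the resource can show how the clone is progressing; for example:
$> kubectl describe ic mycluster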
Step #6: Remove the temporary server and scale up the InnoDB Cluster.
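A quick way to confirm the cloned data arrived is to connect to the new cluster and list its schemas; this sketch assumes the default Service named mycluster that the operator creates in the same namespace:
$> kubectl run checkshell --restart=Never --rm --image=mysql/mysql-operator:8.0.30-2.0.6 -it -- mysqlsh -uroot -hmycluster --sql -e 'SHOW SCHEMAS;'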
After confirming your new InnoDB Cluster is online and has your cloned data, remove the temporary server and its associated configuration, and also the old data directory's PersistentVolumeClaim:
$> kubectl delete -f myfixer.yaml
$> kubectl delete secret myfixer
$> kubectl delete pvc datadir-mycluster-2
Scale up the InnoDB Cluster MySQL instances as desired, for example:
$> kubectl patch ic mycluster --type=merge -p '{"spec": {"instances": 3}}'
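As before, watch the cluster until all instances report as online; alternatively, set instances in ic.yaml to the desired value and re-apply the manifest for the same declarative result:
$> kubectl get ic mycluster -w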