Version control systems, and Git in particular, are essential tools for tracking code changes, collaborating with your team, and ensuring the stability of your codebase. While Git is primarily designed for source code, you can also use it in combination with MySQL databases for version control and schema change management.
In this article, we'll explore how to integrate Git with MySQL for version control using Git hooks, with specific examples in a guide format. All the scripts given in the listings are fully functional and complete. You can reproduce them sequentially in your test environment.
First of all, let's create a test database and user:
CREATE DATABASE testdb_remote;
CREATE USER 'user_remote'@'localhost' IDENTIFIED WITH mysql_native_password BY 'remote123';
GRANT ALL ON testdb_remote.* TO 'user_remote'@'localhost';
Next, we will create a remote repository. This can be a repository on any remote server, but for simplicity, we will create it locally. For convenience, I execute the commands in Git Bash. My local machine already has a git folder, so I will use it:
cd /c/git
mkdir testdb.remote
cd testdb.remote
git init --bare
And create a local repository as a clone of the remote one:
cd /c/git
git clone /c/git/testdb.remote testdb.local
cd testdb.local
git ls-files
There are no files in the repository; let’s create one and push our changes to the remote repo:
echo "Test DB repo" > readme.md
git status
git add .
git commit -m "1st commit"
git push
Let's check the contents of the remote repository:
cd /c/git/testdb.remote
git ls-tree --full-tree -r HEAD
There is a hooks folder in the remote repository, which contains several files with examples:
ls -1 /c/git/testdb.remote/hooks
Hooks are scripts that are executed when specific events occur. Git has client-side and server-side hooks. Client-side hooks are triggered by operations such as committing and merging. Server-side hooks run on network operations such as receiving pushed commits. Hooks are described in detail here. There are different options for implementing the logic; I'll give an example of using the post-receive server-side hook.
In the hooks folder, we need to create a file named "post-receive"; this is a regular shell script:
#!/bin/sh
while read oval nval ref
do
    echo "List of files changed in the commit:"
    git diff --name-only $oval $nval
done
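Depending on your environment, Git may also require the hook file to be executable before it will run it; if the hook does not fire, you can set the executable bit:
chmod +x /c/git/testdb.remote/hooks/post-receive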
The above script will be executed on the server whenever a push is successfully completed and will output a list of modified files. Let's check how it works by adding a line to the readme.md and pushing the changes to the remote repository:
cd /c/git/testdb.local
echo "New line" >> readme.md
git add .
git commit -m "Line added"
git push
You can see that when executing the git push command, the output now contains lines starting with "remote:"; this is the output of the post-receive script that was executed on the server.
As for the changes themselves, files in a commit can be added, modified, or deleted. You can take different approaches to applying these changes to the database:
- Make changes to the database only when adding new files. In this case, you can save all changes for a specific task in a single file. This file may contain the name of the task from the bug tracking system. This may not be very convenient during development, but such changes are easier to handle.
- Keep a separate file in the repository for each object (tables, procedures) and apply changes to the database when changing or adding files.
- A more comprehensive approach is to create a separate folder for each task (change request) and place in it the files that should be executed as part of the installation of a given release. At the same time, create an additional file describing the list of files and the order in which they should be installed (see the example below). This approach is more flexible but, at the same time, more complex for both development and pipeline implementation.
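To illustrate the third approach, the additional file could be a simple plain-text manifest inside the task folder that lists the scripts in installation order; the task and file names below are purely hypothetical:
# tasks/TASK-123/install.txt - installation order for this change request
1_create_orders_table.sql
2_orders_init_data.sql
3_get_order_proc.sql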
Assume you chose the second option, so you need to execute files that have been added or changed. We can filter such files as described here by adding the --diff-filter=AM parameter:
#!/bin/sh
while read oval nval ref
do
    echo "List of files added or changed in the commit:"
    git diff --name-only --diff-filter=AM $oval $nval
done
Add a few files, and also change the readme.md again:
echo "SELECT 1;" > test1.sql
echo "SELECT 2;" > test2.sql
echo "SELECT 3;" > test3.sql
echo "New line 2" >> readme.md
git add .
git commit -m "New files"
git push
The output of the hook script now contains four files: readme.md, test1.sql, test2.sql, and test3.sql.
We edit the test1.sql and readme.md files, delete test2.sql, and add another file:
echo "SELECT 11;" > test1.sql
echo "New line 2" >> readme.md
rm test2.sql
echo "SELECT 4;" > test4.sql
git add .
git commit -m "Modify, remove and add"
git push
This time, only the modified and added files are displayed; the deleted test2.sql is filtered out.
Our goal is to execute SQL scripts after each successful push, so we need to filter only files of this type; in our case, we will require that they all have the ".sql" extension. To filter, add the parameter -- "*.sql" to the git diff command:
#!/bin/sh
while read oval nval ref
do
    echo "List of files added or changed in the commit:"
    git diff --name-only --diff-filter=AM $oval $nval -- "*.sql"
done
To execute scripts, we must also be able to connect to the database. For this, we will create credentials and test the connection:
mysql_config_editor set --login-path=testdb_remote --host=localhost --port=3306 --user=user_remote --password
mysql --login-path=testdb_remote --database=testdb_remote
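You can verify the stored credentials (the password is displayed obfuscated) by printing the login path:
mysql_config_editor print --login-path=testdb_remote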
Let's modify our script to iterate through the ".sql" files, execute each file, and check the result. We also sort the list so that the files are executed in the required order. With the git show command, we output the contents of each SQL script and pipe it to MySQL for execution.
The variable "$?" will contain 0 if the SQL script was executed successfully and a non-zero value if there was an error:
#!/bin/sh
while read oval nval ref
do
    echo "List of files added or changed in the commit:"
    # Sort the file list so the scripts run in the required order
    for file in $(git diff --name-only --diff-filter=AM $oval $nval -- "*.sql" | sort); do
        # Output the file contents from the master branch and pipe them to MySQL
        git show master:${file} | mysql --login-path=testdb_remote --database=testdb_remote
        echo "FILE: ${file} - result $?"
    done
done
Let's perform one more test: delete all previously created ".sql" files and create scripts for:
- creating a table
- inserting data into the table
- creating a procedure that returns data from the table
- a script containing a deliberate error
We also need to add a prefix (1_, 2_, etc.) to each file name to ensure the desired execution order of the files:
rm *.sql
echo "DB Initialization" >> readme.md
echo "
DROP TABLE IF EXISTS customers;
CREATE TABLE customers (
id int UNSIGNED NOT NULL AUTO_INCREMENT,
name varchar(255) DEFAULT NULL,
PRIMARY KEY (id)
);
" > 1_customers.sql
echo "
INSERT INTO customers (id, name)
VALUES (1, 'John Doe'), (2, 'Jane Smith') AS new
ON DUPLICATE KEY UPDATE customers.name = new.name;
" > 2_customers_init.sql
echo "
DROP PROCEDURE IF EXISTS get_customer;
DELIMITER \$\$
CREATE PROCEDURE get_customer(IN customer_id int UNSIGNED)
BEGIN
SELECT c.id, c.name FROM customers c WHERE c.id = customer_id;
END
\$\$
" > 3_get_customer.sql
echo "SELECT FROM customers;" > 4_error_select.sql
ls -1
So we have four ".sql" files that need to be executed. Note that in 3_get_customer.sql the delimiter is written as \$\$ so that Bash does not expand $$ to the shell's process ID inside the double-quoted echo.
We commit the changes and push them to the repository:
git add .
git commit -m "DB initialization"
git push
We can see that when git push is performed, the files are executed sequentially, and the execution result (the MySQL command exit code) of each file is displayed. The file "4_error_select.sql" contains a syntax error, so the result of its execution is 1.
And finally, let’s check what we have in the database:
mysql --login-path=testdb_remote --database=testdb_remote
show tables;
call get_customer(1);
call get_customer(2);
As you can see, the table and procedure were created in the remote database. The procedure executes successfully and returns data.
To improve the readability of the hook script output, you can suppress the MySQL CLI output or redirect it to a log file. You can also analyze the result of the MySQL command execution and add more logic to the hook script.
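As a rough sketch of such an extension (the log file location and messages here are just an example), the hook could redirect MySQL output to a log file and stop on the first failed script:
#!/bin/sh
LOG=/c/git/testdb.remote/deploy.log
while read oval nval ref
do
    for file in $(git diff --name-only --diff-filter=AM $oval $nval -- "*.sql" | sort); do
        # Run the script, appending both stdout and stderr to the log file
        if git show master:${file} | mysql --login-path=testdb_remote --database=testdb_remote >> "$LOG" 2>&1; then
            echo "FILE: ${file} - OK"
        else
            echo "FILE: ${file} - FAILED, see deploy.log"
            exit 1
        fi
    done
done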
For example, you can execute SQL scripts on a test database and then run some tests on it (as I described here). If the tests are successfully completed on the test database, run SQL scripts on the production database and probably run some tests on it as well.
By analyzing the results of each step, you can create pipelines of any configuration.
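As a minimal illustration, the body of the for loop from the previous sketch could first apply each script to a test database, run a quick check, and only then apply it to production; the login paths testdb_test and testdb_prod are hypothetical, and the check should be replaced with real tests:
# Apply the script to the test database first; stop if it fails
git show master:${file} | mysql --login-path=testdb_test --database=testdb_test || exit 1
# Run a simple check against the test database
echo "CALL get_customer(1);" | mysql --login-path=testdb_test --database=testdb_test || exit 1
# Only after that, apply the same script to the production database
git show master:${file} | mysql --login-path=testdb_prod --database=testdb_prod || exit 1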
Of course, each approach has a number of advantages and limitations. With the second approach, it is necessary to ensure the order in which the scripts are executed because we cannot, for example, insert data into a table until it is created. It is also necessary to ensure that you can re-execute scripts, i.e., correctly handle situations when the object being created is already in the database, or the database already contains the data to be added.
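For example, instead of dropping and recreating objects unconditionally, a re-runnable script can use CREATE TABLE IF NOT EXISTS and INSERT ... ON DUPLICATE KEY UPDATE (as the listings above already do for the data):
-- Create the table only if it does not already exist
CREATE TABLE IF NOT EXISTS customers (
  id int UNSIGNED NOT NULL AUTO_INCREMENT,
  name varchar(255) DEFAULT NULL,
  PRIMARY KEY (id)
);
-- Insert or update, so the script can safely be executed again
INSERT INTO customers (id, name)
VALUES (1, 'John Doe') AS new
ON DUPLICATE KEY UPDATE customers.name = new.name;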
When using a versioning system, the development process becomes a little more complicated because it is necessary to additionally formalize changes in scripts of a predetermined format. However, you can achieve some flexibility by using an installation file.
The main advantage of the described technique is the implementation of versioning in the database with very little effort, as well as the ability to implement CI/CD pipelines.
To evaluate whether this could be useful for you, you can start by asking yourself: How often do you write code directly in production?