The MySQL Enterprise Audit plugin is part of MySQL Enterprise Edition (available through a paid license). Enterprise Audit tracks everything that happens on your MySQL server, and can be used to detect and protect against the misuse of information, as well as to meet common compliance regulations, including HIPAA, Sarbanes-Oxley, and the PCI Data Security Standard.
MySQL Enterprise Audit uses the open MySQL Audit API to enable standard, policy-based monitoring and logging of connection and query activity executed on specific MySQL servers. Designed to meet the Oracle audit specification, MySQL Enterprise Audit provides an out-of-the-box, easy-to-use auditing and compliance solution for applications that are governed by both internal and external regulatory guidelines.
When installed, the audit plugin enables MySQL Server to produce a log file containing an audit record of server activity. The log contents include when clients connect and disconnect, and what actions they perform while connected, such as which databases and tables they access.
(from https://dev.mysql.com/doc/refman/5.6/en/audit-log-plugin.html)
When you enable MySQL Enterprise Audit, log files are generated in your MySQL data directory. You can then use tools like MySQL Workbench (Enterprise Edition), mysqlauditgrep, or Oracle Audit Vault to import the log data, view the contents, perform searches, and generate reports.
I was talking with a client, and he wanted to know if the audit data could be stored in a table. Currently (as of MySQL 5.6.25), the audit information is stored as XML in the audit log files. So, I decided to write a quick Perl script that parses the XML log files and inserts the information into a MySQL database. You will need to set the size of your audit log files in your my.cnf or my.ini configuration file via the audit_log_rotate_on_size variable. You might need to adjust this size based upon your database activity and how well the script parses the log files. If your log files are very large, the Perl script might have trouble processing them, in which case you may want to decrease the size of your log files and run the script more frequently.
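If you have not configured this already, a minimal my.cnf fragment might look like the following. (The plugin library name assumes a Unix build; the 1 MB rotation size is only an example, so tune it to your activity level.)

```ini
[mysqld]
# load the Enterprise Audit plugin at startup
plugin-load=audit_log.so
# rotate the audit log once it reaches ~1 MB (value is in bytes)
audit_log_rotate_on_size=1048576
```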
CAVEAT
Enterprise Audit does require a license from MySQL. If you are interested in an Enterprise subscription, contact me via the comment section below. If you are an Enterprise customer, you will need to configure Enterprise Audit first. See the Enterprise Audit online documentation page for more information, or contact MySQL Support.
For the data fields, I used the audit log file format information found at The Audit Log File page on MySQL.com.
My MySQL server doesn’t have a lot of activity, so I tried to size the data fields to accommodate the data each field is likely to hold. There may be instances where you will have to increase the size of these fields or change their data types. The largest field is SQL_TEXT, which contains your SQL statements, and every MySQL table has a maximum row size of 65,535 bytes. So, the largest possible size of the SQL_TEXT field in this example is around 63,200 bytes (65,535 bytes minus the combined size of all of the other fields, and minus the 1-byte or 2-byte length prefix stored with each varchar field). In this example, the SQL_TEXT field is set to 8,096 bytes, so you may need to increase or decrease this value.
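To make that arithmetic concrete, here is a quick sketch (in Python, just for the calculation) of the row-size budget, using the varchar lengths from the schema below. It assumes latin1 (1 byte per character) and ignores per-row overhead such as the NULL bitmap, so treat the result as an approximation.

```python
# Rough sizing of SQL_TEXT against MySQL's 65,535-byte row limit.
# Lengths mirror the audit_parsed schema in this post (latin1 charset).

ROW_LIMIT = 65535

# (column, declared varchar length) for every column except SQL_TEXT;
# the int primary key takes a fixed 4 bytes.
other_columns = [
    ("COMMAND_CLASS", 64), ("CONNECTIONID", 32), ("DB_NAME", 64),
    ("HOST_NAME", 256), ("IP_ADDRESS", 16), ("MYSQL_VERSION", 64),
    ("COMMAND_NAME", 64), ("OS_LOGIN", 64), ("OS_VERSION", 256),
    ("PRIV_USER", 16), ("PROXY_USER", 16), ("RECORD_ID", 64),
    ("SERVER_ID", 32), ("STARTUP_OPTIONS", 1024), ("COMMAND_STATUS", 64),
    ("STATUS_CODE", 11), ("DATE_TIMESTAMP", 24), ("USER_NAME", 128),
    ("LOG_VERSION", 11),
]

def stored_size(length):
    """Bytes a latin1 varchar(length) can occupy, including the
    1-byte (< 256) or 2-byte (>= 256) length prefix."""
    return length + (1 if length < 256 else 2)

used = 4 + sum(stored_size(n) for _, n in other_columns)
# SQL_TEXT itself needs a 2-byte prefix, so subtract that too.
max_sql_text = ROW_LIMIT - used - 2
print(max_sql_text)  # roughly the "around 63,200 bytes" mentioned above
```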
I used varchar data types for each field, excluding the primary key field named ID. I did not spend a lot of time on the database schema, so you might want to modify it a little. I am sure that some of the fields are integers, but I did not have enough data in my log files to positively determine all of the possible values for each field. I did read the online manual, and it stated that CONNECTION_ID, SERVER_ID, STATUS, STATUS_CODE and VERSION were unsigned integers – but I left them as varchar.
NOTICE
This script requires the new format for the audit log files, which is available in MySQL 5.6.20 and later.
I created a database along with two tables: one to store the log file information, and a history table to keep track of which files have already been parsed and inserted into MySQL, as well as the number of log entries in each file. The CREATE DATABASE and CREATE TABLE syntax is as follows:
CREATE DATABASE `audit_information` /*!40100 DEFAULT CHARACTER SET latin1 */;

CREATE TABLE `audit_parsed` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `COMMAND_CLASS` varchar(64) DEFAULT NULL,
  `CONNECTIONID` varchar(32) DEFAULT NULL,
  `DB_NAME` varchar(64) DEFAULT NULL,
  `HOST_NAME` varchar(256) DEFAULT NULL,
  `IP_ADDRESS` varchar(16) DEFAULT NULL,
  `MYSQL_VERSION` varchar(64) DEFAULT NULL,
  `COMMAND_NAME` varchar(64) DEFAULT NULL,
  `OS_LOGIN` varchar(64) DEFAULT NULL,
  `OS_VERSION` varchar(256) DEFAULT NULL,
  `PRIV_USER` varchar(16) DEFAULT NULL,
  `PROXY_USER` varchar(16) DEFAULT NULL,
  `RECORD_ID` varchar(64) DEFAULT NULL,
  `SERVER_ID` varchar(32) DEFAULT NULL,
  `SQL_TEXT` varchar(8096) DEFAULT NULL,
  `STARTUP_OPTIONS` varchar(1024) DEFAULT NULL,
  `COMMAND_STATUS` varchar(64) DEFAULT NULL,
  `STATUS_CODE` varchar(11) DEFAULT NULL,
  `DATE_TIMESTAMP` varchar(24) DEFAULT NULL,
  `USER_NAME` varchar(128) DEFAULT NULL,
  `LOG_VERSION` varchar(11) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;

CREATE TABLE `audit_history` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `AUDIT_LOG_NAME` varchar(64) DEFAULT NULL,
  `PARSED_DATE_TIME` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `LOG_ENTRIES` int(11) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;
The Perl script finds the non-active log files (which end in .xml – example: audit.log.14357895017796690.xml), parses the data, creates an SQL file with INSERT statements, imports the data via the mysql command-line program, and then moves the log file(s) and SQL file(s) to a directory. The history table records what files have been processed, so you don’t accidentally process the same file twice.
In the beginning of the Perl script, there are several values you need to replace to match your system. The values are under the section titled “values needed”. Here is the Perl script (named audit.pl):
#!/usr/bin/perl
# audit.pl

use DBI;
use XML::Simple;

#----------------------------------------------------------
# values needed
$Database = "audit_information";
$MYSQL_DATA_DIR = "/usr/local/mysql/data";
$MySQL_Host_IP_Name = "192.168.1.2";
$mysql_user = "root";
$mysql_password = "password_needed";

# directory to store old audit files after parsing
$audit_directory = "$MYSQL_DATA_DIR/audit_files";

# make an audit_files directory if one does not exist
mkdir($audit_directory) unless(-d $audit_directory);
#----------------------------------------------------------

#----------------------------------------------------------
# for each rotated (.xml) audit log file, do this
@files = <$MYSQL_DATA_DIR/audit.log.*.xml>;

foreach $file_name_to_parse (@files) {

   # check to see if this file has already been parsed
   $audit_log_name = "";
   $dbh1 = ConnectToMySql($Database);
   $query1 = "select AUDIT_LOG_NAME from audit_history where AUDIT_LOG_NAME = '$file_name_to_parse'";
   $sth1 = $dbh1->prepare($query1);
   $sth1->execute();

   while (@data = $sth1->fetchrow_array()) {
      $audit_log_name = $data[0];
   }

   # if length of audit_log_name is less than 1, the file has not been processed
   if (length($audit_log_name) < 1) {

      # optional print output - uncomment if desired
      # print "Parsing - $file_name_to_parse\n";

      # name of the SQL file that will hold the INSERT statements
      $PARSED_FILE = $file_name_to_parse;
      $PARSED_FILE =~ s/\.xml$/_parsed.sql/;

      open(LOGOUT, ">", $PARSED_FILE) or die "Couldn't open $PARSED_FILE: $!";

      $count = 0;

      # XML::Simple option - SuppressEmpty => 1 ignores empty values
      $xml = XML::Simple->new(SuppressEmpty => 1);
      $data = $xml->XMLin("$file_name_to_parse");

      foreach $info (@{$data->{AUDIT_RECORD}}) {

         # escape single quotes (') as \' in the SQL text
         $info->{"SQLTEXT"} =~ s/'/\\'/g;

         print LOGOUT "INSERT INTO audit_information.audit_parsed (COMMAND_CLASS, CONNECTIONID, DB_NAME, HOST_NAME, IP_ADDRESS, MYSQL_VERSION, COMMAND_NAME, OS_LOGIN, OS_VERSION, PRIV_USER, PROXY_USER, RECORD_ID, SERVER_ID, SQL_TEXT, STARTUP_OPTIONS, COMMAND_STATUS, STATUS_CODE, DATE_TIMESTAMP, USER_NAME, LOG_VERSION) values ('" . $info->{"COMMAND_CLASS"} . "', '" . $info->{"CONNECTION_ID"} . "', '" . $info->{"DB"} . "', '" . $info->{"HOST"} . "', '" . $info->{"IP"} . "', '" . $info->{"MYSQL_VERSION"} . "', '" . $info->{"NAME"} . "', '" . $info->{"OS_LOGIN"} . "', '" . $info->{"OS_VERSION"} . "', '" . $info->{"PRIV_USER"} . "', '" . $info->{"PROXY_USER"} . "', '" . $info->{"RECORD_ID"} . "', '" . $info->{"SERVER_ID"} . "', '" . $info->{"SQLTEXT"} . "', '" . $info->{"STARTUP_OPTIONS"} . "', '" . $info->{"STATUS"} . "', '" . $info->{"STATUS_CODE"} . "', '" . $info->{"TIMESTAMP"} . "', '" . $info->{"USER"} . "', '" . $info->{"VERSION"} . "');\n";

         $count++;

      # end foreach $info (@{$data->{AUDIT_RECORD}})
      }

      # close the parsed SQL file before loading it
      close(LOGOUT);

      # load parsed file into MySQL - hide warnings
      system("mysql -u$mysql_user -p$mysql_password < $PARSED_FILE > /dev/null 2>&1");

      $dbh2 = ConnectToMySql($Database);
      $query2 = "insert into audit_information.audit_history (AUDIT_LOG_NAME, LOG_ENTRIES) values ('$file_name_to_parse', '$count')";

      # optional print output - uncomment if desired
      # print "$query2\n";

      $sth2 = $dbh2->prepare($query2);
      $sth2->execute();

      # optional print output - uncomment if desired
      # print "Moving audit log ($file_name_to_parse) and log file ($PARSED_FILE) to $audit_directory.\n";

      # strip the directory path off $file_name_to_parse for display
      @file_name_to_move_array = split("\/", $file_name_to_parse);
      $directory_count = $#file_name_to_move_array;
      $file_name_to_move = $file_name_to_move_array[$directory_count];

      # optional print output - uncomment if desired
      # print "mv $file_name_to_move $file_name_to_parse\n";
      # print "mv $PARSED_FILE $audit_directory\n";

      # move audit log files and parsed log files to $audit_directory
      system("mv $file_name_to_parse $audit_directory");
      system("mv $PARSED_FILE $audit_directory");

   # end - if (length($audit_log_name) < 1)
   }

   else {

      # optional print output - uncomment if desired
      # print "$audit_log_name already processed\n";
      system("mv $file_name_to_parse $audit_directory");
   }

# end - foreach $file_name_to_parse (@files)
}

sub ConnectToMySql {

   $connectionInfo = "dbi:mysql:$Database;$MySQL_Host_IP_Name:3306";

   # make connection to database
   $l_dbh = DBI->connect($connectionInfo, $mysql_user, $mysql_password);
   return $l_dbh;
}
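If Perl is not your language of choice, the parsing step itself is easy to reproduce elsewhere. Here is a rough equivalent sketch in Python using only the standard library; it is not part of the script above, and it parses a tiny inline sample record rather than a real log file. In the new (5.6.20+) log format, each entry is an AUDIT_RECORD element whose fields are XML attributes, so extend the tuple below with whichever attributes you need.

```python
# Minimal sketch: extract fields from new-format audit log XML.
import xml.etree.ElementTree as ET

# Tiny hand-made sample in the shape of a rotated audit log file.
sample_log = """<?xml version="1.0" encoding="UTF-8"?>
<AUDIT>
 <AUDIT_RECORD NAME="Query" RECORD_ID="1614933_2015-07-01T22:08:58"
  TIMESTAMP="2015-07-01T22:08:58 UTC" COMMAND_CLASS="select"
  CONNECTION_ID="10093" STATUS="0" SQLTEXT="SELECT 1"/>
</AUDIT>"""

def parse_audit_records(xml_text):
    """Return one (command_class, connection_id, sql_text, timestamp)
    tuple per AUDIT_RECORD element; missing attributes become ''."""
    root = ET.fromstring(xml_text)
    rows = []
    for rec in root.iter("AUDIT_RECORD"):
        rows.append((rec.get("COMMAND_CLASS", ""),
                     rec.get("CONNECTION_ID", ""),
                     rec.get("SQLTEXT", ""),
                     rec.get("TIMESTAMP", "")))
    return rows

rows = parse_audit_records(sample_log)
print(rows)
```

From here you would hand each tuple to your database driver as a parameterized INSERT, which also avoids the quote-escaping the Perl script has to do by hand.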
It should not matter where you execute audit.pl, as long as you have correctly entered the required values in the script. You might get errors if you try to run this script against the active log file (the one currently being written to in your MySQL data directory, named audit.log), since it has not yet been rotated.
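For reference, if you set audit_log_rotate_on_size to 0, rotation becomes a manual step: rename the current audit.log yourself, then tell the server to close and reopen its log so a fresh audit.log is created:

```sql
SET GLOBAL audit_log_flush = ON;
```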
# pwd
/usr/local/mysql/data
# ls -l audit.log
-rw-rw----  1 mysql  _mysql  9955118 Jul  2 15:25 audit.log
The script will only work on files ending in .xml. For testing, I used four small (and identical) audit log files:
# pwd
/usr/local/mysql/data
# ls -l *xml
-rw-rw----  1 mysql  wheel   15508 Jul  2 12:20 audit.log.14357895017796690.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796691.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796692.xml
-rw-r-----  1 mysql  _mysql  15508 Jul  2 13:46 audit.log.14357895017796693.xml
I have commented out the print statements in the Perl script, but if you uncomment them, running the script gives you output like this for each log file:
# perl audit.pl
Parsing - /usr/local/mysql/data/audit.log.14357895017796690.xml
insert into audit_information.audit_history (AUDIT_LOG_NAME, LOG_ENTRIES) values ('/usr/local/mysql/data/audit.log.14357895017796690.xml', '34')
Moving audit log (/usr/local/mysql/data/audit.log.14357895017796690.xml) and log file (/usr/local/mysql/data/audit.log.14357895017796690_parsed.sql) to /usr/local/mysql/data/audit_files.
mv audit.log.14357895017796690.xml /usr/local/mysql/data/audit.log.14357895017796690.xml
mv /usr/local/mysql/data/audit.log.14357895017796690_parsed.sql /usr/local/mysql/data/audit_files
....
After running my test script, the audit_history table contains the following data:
mysql> use audit_information
Database changed
mysql> select * from audit_history;
+----+-------------------------------------------------------+---------------------+-------------+
| ID | AUDIT_LOG_NAME                                        | PARSED_DATE_TIME    | LOG_ENTRIES |
+----+-------------------------------------------------------+---------------------+-------------+
|  1 | /usr/local/mysql/data/audit.log.14357895017796690.xml | 2015-07-02 15:25:07 |          34 |
|  2 | /usr/local/mysql/data/audit.log.14357895017796691.xml | 2015-07-02 15:25:08 |          34 |
|  3 | /usr/local/mysql/data/audit.log.14357895017796692.xml | 2015-07-02 15:25:08 |          34 |
|  4 | /usr/local/mysql/data/audit.log.14357895017796693.xml | 2015-07-02 15:25:09 |          34 |
+----+-------------------------------------------------------+---------------------+-------------+
4 rows in set (0.00 sec)
And here is an example of one line from the audit_parsed table.
mysql> select * from audit_parsed limit 1 \G
*************************** 1. row ***************************
             ID: 1
  COMMAND_CLASS: select
   CONNECTIONID: 10093
        DB_NAME: 
      HOST_NAME: localhost
     IP_ADDRESS: 127.0.0.1
  MYSQL_VERSION: 
   COMMAND_NAME: Query
       OS_LOGIN: 
     OS_VERSION: 
      PRIV_USER: 
     PROXY_USER: 
      RECORD_ID: 1614933_2015-07-01T22:08:58
      SERVER_ID: 
       SQL_TEXT: SELECT (UNIX_TIMESTAMP(now()) - CAST(variable_value AS SIGNED))*1000 as serverStartMillis FROM information_schema.global_status WHERE variable_name='uptime'
STARTUP_OPTIONS: 
 COMMAND_STATUS: 0
    STATUS_CODE: 0
 DATE_TIMESTAMP: 2015-07-01T22:08:58 UTC
      USER_NAME: root[root] @ localhost [127.0.0.1]
    LOG_VERSION: 
1 row in set (0.00 sec)
After parsing the log files, you can then write your own queries for searching through your audit data. You can even include this script in cron, so it runs and parses your files automatically. But as always, test this script and use it with caution before putting it in a production environment. You could also modify the Perl script to filter out values you did not want or need to store.
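For example, a crontab entry like this would run the parser at the top of every hour. (The script path and log location here are only examples; point them at wherever you saved audit.pl, and make sure the mysql client is in cron's PATH.)

```
0 * * * * /usr/bin/perl /usr/local/mysql/scripts/audit.pl >> /var/log/audit_parse.log 2>&1
```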
If you do use this script or if you have any suggestions or other questions, please leave a comment below.
Hopefully this has been interesting and serves as a quick example of what’s possible in processing the MySQL Enterprise Audit logs. That’s it for now, as always, THANK YOU for using MySQL!
Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.