Audit Log Plugin Notes
MySQL Enterprise Edition now includes MySQL Enterprise Audit,
implemented using a server plugin named
audit_log. MySQL Enterprise Audit uses the
open MySQL Audit API to enable standard, policy-based monitoring
and logging of connection and query activity executed on
specific MySQL servers. Designed to meet the Oracle audit
specification, MySQL Enterprise Audit provides an out-of-box,
easy-to-use auditing and compliance solution for applications
that are governed by both internal and external regulatory
guidelines.
When installed, the audit plugin enables MySQL Server to produce a log file containing an audit record of server activity. The log contents include when clients connect and disconnect, and what actions they perform while connected, such as which databases and tables they access.
For more information, see MySQL Enterprise Audit Log Plugin.
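On a server where the plugin library is available, enabling the audit log is a single statement; a minimal sketch (the .so suffix assumes a Unix-like platform, Windows builds use .dll):

```sql
-- Load the audit plugin; use audit_log.dll on Windows.
INSTALL PLUGIN audit_log SONAME 'audit_log.so';

-- Confirm that the plugin is active.
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM INFORMATION_SCHEMA.PLUGINS
WHERE PLUGIN_NAME LIKE 'audit%';
```

Once loaded, the server begins writing audit records to the audit log file in the data directory.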
Functionality Added or Changed
The internal interface of the Thread Pool plugin has changed. Old versions of the plugin will work with current versions of the server, but versions of the server older than 5.5.28 will not work with current versions of the plugin.
Inserting data of varying record lengths into an
InnoDB table that used
compression could cause
the server to halt with an error.
(Bug #14554000, Bug #13523839, Bug #63815, Bug #12845774, Bug #61456, Bug #12595091, Bug #61208)
Under heavy load of concurrent
DML and queries, an
InnoDB table with a unique index could return
nonexistent duplicate rows to a query.
(Bug #14399148, Bug #66134)
Deleting from an
InnoDB table containing a
prefix index, and
subsequently dropping the index, could cause a crash.
Certain INFORMATION_SCHEMA tables originally
introduced in MySQL 5.6 are now also available in MySQL 5.5.
When a SELECT ... FOR UPDATE,
UPDATE, or other SQL statement scanned rows in an
InnoDB table using a
<= operator in the
WHERE clause, the next row after the
affected range could also be locked. This issue could cause a
lock wait timeout for a row that was not expected to be locked.
The issue occurred under various isolation levels, such as
READ COMMITTED.
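A minimal sketch of the affected pattern (the table and data are hypothetical, not taken from the original report):

```sql
CREATE TABLE locks_demo (k INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO locks_demo VALUES (1), (2), (3), (4), (5);

-- Session 1: scan a range with <= and lock the matching rows.
START TRANSACTION;
SELECT * FROM locks_demo WHERE k <= 3 FOR UPDATE;

-- Session 2: before the fix, updating the first row *after* the
-- scanned range could block and eventually hit a lock wait timeout.
-- UPDATE locks_demo SET k = 40 WHERE k = 4;
```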
When used with a table having multiple columns in its primary
key, but partitioned by
KEY using a column
that was not part of the primary key as the partitioning column,
a query using an aggregate function and
DISTINCT was not handled
correctly.
For tables using
PARTITION BY HASH or
PARTITION BY KEY, when the partition pruning
mechanism encountered a multi-range list or inequality using a
column from the partitioning key, it continued with the next
partitioning column and tried to use it for pruning, even if the
previous column could not be used. This caused partitions which
possibly matched one or more of the previous partitioning
columns to be pruned away, leaving partitions that matched only
the last column of the partitioning key.
This issue was triggered when both of the following conditions were met:
- The columns making up the table's partitioning key were
  used in the WHERE clause in the same order as in the
  partitioning key definition.
- The WHERE condition used with the last
  column of the partitioning key was satisfied only by a
  single value, while the condition testing some previous
  column from the partitioning key was satisfied by a range of
  values.
An example of a statement creating a partitioned table and a query against this for which the issue described above occurred is shown here:
CREATE TABLE t1 (
    c1 INT,
    c2 INT,
    PRIMARY KEY(c2, c1)
)
PARTITION BY KEY()   # Use primary key as partitioning key
PARTITIONS 2;

SELECT * FROM t1 WHERE c2 = 2 AND c1 <> 2;
This issue is resolved by ensuring that partition pruning skips any remaining partitioning key columns once a partition key column that cannot be used in pruning is encountered. (Bug #14342883)
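Pruning behavior can be inspected with EXPLAIN PARTITIONS, which lists the partitions the optimizer retains for the example query above:

```sql
EXPLAIN PARTITIONS
SELECT * FROM t1 WHERE c2 = 2 AND c1 <> 2;
```

After the fix, the partitions column includes every partition that may hold matching rows, rather than only those matching the last partitioning key column.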
Partitioning: The buffer for the row currently read from each partition used for sorted reads was allocated on open and freed only when the partitioning handler was closed or destroyed. For SELECT statements on tables with many partitions and large rows, this could cause the server to use excessive amounts of memory.
This issue has been addressed by allocating buffers for reads from partitioned tables only when they are needed and freeing them immediately once they are no longer needed. As part of this fix, memory is now allocated only for reading rows from partitions that have not been pruned (see Partition Pruning). (Bug #13025132)
References: See also Bug #11764622, Bug #14537277.
Replication; Microsoft Windows:
On 64-bit Windows platforms, values greater than 4G for the
affected system variables were truncated to 4G. This caused
LOAD DATA INFILE to fail when trying to load a file larger than
4G in size, even when the applicable variable
was set to a value greater than this.
In master-master replication,
setting a user variable and then performing inserts using this
variable caused a
column in the output of SHOW SLAVE
STATUS not to be updated.
When resolving outer fields,
Item_field::fix_outer_fields() creates new
Item_refs for each execution of a prepared
statement, so these must be allocated in the runtime memroot.
The memroot switching before resolving
JOIN::having caused these to be allocated in
the statement root, leaking memory with each execution of the
prepared statement.
The RPM spec file now also runs the test suite on the new binaries, before packaging them. (Bug #14318456)
The argument for
LIMIT must be an integer,
but if the argument was given by a placeholder in a prepared
statement, the server did not reject noninteger values.
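The scenario can be sketched with a prepared statement whose LIMIT value comes from a placeholder (the user variable names are illustrative):

```sql
PREPARE stmt FROM 'SELECT 1 LIMIT ?';

SET @lim = 1;       -- integer value: accepted
EXECUTE stmt USING @lim;

SET @lim = 'abc';   -- noninteger value: now rejected with an error
-- EXECUTE stmt USING @lim;

DEALLOCATE PREPARE stmt;
```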
The Thread Pool plugin did not respect the
wait_timeout setting for client sessions.
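wait_timeout controls how long the server waits for activity on a noninteractive connection before closing it; it can be checked and set as follows (300 is an arbitrary example value):

```sql
SHOW VARIABLES LIKE 'wait_timeout';
SET GLOBAL wait_timeout = 300;   -- seconds
```

With this fix, sessions handled by the Thread Pool plugin are disconnected once the timeout elapses, as under the default thread-handling model.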
A query for a
FEDERATED table could
return incorrect results when the underlying table had a
compound index on two columns and the query included an
AND condition on the columns.
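A hypothetical shape of the affected setup; the connection string, table, and column names are illustrative, not taken from the original report:

```sql
CREATE TABLE fed_t (
    a INT,
    b INT,
    KEY idx_ab (a, b)
) ENGINE=FEDERATED
  CONNECTION='mysql://user@remote_host:3306/db/fed_t';

-- An AND condition on both columns of the compound index could
-- return incorrect results before the fix.
SELECT * FROM fed_t WHERE a = 1 AND b = 2;
```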
mysqlhotcopy failed for databases containing views. (Bug #62472, Bug #13006947, Bug #12992993)
Adding a LIMIT clause to a query containing
GROUP BY
could cause the optimizer to choose an incorrect index for
processing the query, and return more rows than required.
(Bug #54599, Bug #11762052)
mysqlbinlog did not accept input on the standard input when the standard input was a pipe. (Bug #49336, Bug #11757312)
The argument to the SSL key
option was not verified to exist and be a valid key. The
resulting connection used SSL, but the key was not used.
(Bug #62743, Bug #13115401)