The BLACKHOLE storage engine acts as a “black hole” that accepts data but throws it away and does not store it. Retrievals always return an empty result:
mysql> CREATE TABLE test(i INT, c CHAR(10)) ENGINE = BLACKHOLE;
Query OK, 0 rows affected (0.03 sec)

mysql> INSERT INTO test VALUES(1,'record one'),(2,'record two');
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM test;
Empty set (0.00 sec)
To enable the BLACKHOLE storage engine if you build MySQL from source, invoke CMake with the -DWITH_BLACKHOLE_STORAGE_ENGINE option.
To examine the source for the BLACKHOLE engine, look in the sql directory of a MySQL source distribution.
When you create a BLACKHOLE table, the server creates a table format file in the database directory. The file begins with the table name and has an .frm extension. There are no other files associated with the table.
The BLACKHOLE storage engine supports all kinds of indexes. That is, you can include index declarations in the table definition.
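For example, a table definition such as the following sketch is accepted (the table and index names are illustrative):

CREATE TABLE bh_indexed (
    id INT NOT NULL,
    c  CHAR(10),
    PRIMARY KEY (id),
    INDEX idx_c (c)
) ENGINE = BLACKHOLE;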
You can check whether the BLACKHOLE storage engine is available with the SHOW ENGINES statement.
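For example, a query against the standard INFORMATION_SCHEMA.ENGINES table (a sketch; SHOW ENGINES gives the same information in tabular form) reports whether the engine is compiled in and enabled:

mysql> SELECT ENGINE, SUPPORT
    ->   FROM INFORMATION_SCHEMA.ENGINES
    ->   WHERE ENGINE = 'BLACKHOLE';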
Inserts into a BLACKHOLE table do not store any data, but if statement-based binary logging is enabled, the SQL statements are logged and replicated to slave servers. This can be useful as a repeater or filter mechanism.

When using the row-based format for the binary log, updates and deletes are skipped; they are neither logged nor applied. For this reason, you should use STATEMENT for the binary logging format, not ROW or MIXED.
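As a sketch, the format can be checked and, given sufficient privileges, changed on a running server (in practice it is usually set with the binlog_format option in the server's option file):

mysql> SELECT @@GLOBAL.binlog_format;
mysql> SET GLOBAL binlog_format = 'STATEMENT';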
Suppose that your application requires slave-side filtering rules,
but transferring all binary log data to the slave first results in
too much traffic. In such a case, it is possible to set up on the
master host a “dummy” slave process whose default
storage engine is
BLACKHOLE, which works as follows:
The master writes to its binary log. The “dummy”
mysqld process acts as a slave, applying the
desired combination of replicate-do-* and
replicate-ignore-* rules, and writes a new,
filtered binary log of its own. (See
Section 16.1.3, “Replication and Binary Logging Options and Variables”.) This filtered log is
provided to the slave.
The dummy process does not actually store any data, so there is little processing overhead incurred by running the additional mysqld process on the replication master host. This type of setup can be repeated with additional replication slaves.
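A minimal sketch of what the “dummy” server's option file might contain, assuming the goal is simply to drop databases db1 and db2 from the stream (the server ID, log name, and database names here are illustrative); log-slave-updates is what causes the filtered stream to be written to the dummy server's own binary log:

[mysqld]
server-id              = 2
log-bin                = dummy-bin
log-slave-updates
default-storage-engine = BLACKHOLE
replicate-ignore-db    = db1
replicate-ignore-db    = db2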
INSERT triggers for BLACKHOLE tables work as expected. However, because a BLACKHOLE table does not actually store any data, UPDATE and DELETE triggers are not activated: the FOR EACH ROW clause in the trigger definition does not apply because there are no rows.
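For instance, a sketch of an INSERT trigger on the test table from the earlier example, copying incoming rows into a table that does store data (the audit table and trigger names are illustrative):

CREATE TABLE audit_log (i INT, c CHAR(10)) ENGINE = MyISAM;

CREATE TRIGGER test_bi BEFORE INSERT ON test
FOR EACH ROW
  INSERT INTO audit_log VALUES (NEW.i, NEW.c);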
Other possible uses for the BLACKHOLE storage engine include:
Verification of dump file syntax.
Measurement of the overhead from binary logging, by comparing performance using BLACKHOLE with and without binary logging enabled (a sketch follows this list).
BLACKHOLE is essentially a “no-op” storage engine, so it could be used for finding performance bottlenecks not related to the storage engine itself.
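As a sketch of that measurement, the same bulk insert into a BLACKHOLE table can be timed once with the session writing to the binary log and once with it suppressed (SET sql_log_bin requires sufficient privileges; some_big_table is an illustrative source table):

mysql> SET sql_log_bin = 1;
mysql> INSERT INTO test SELECT * FROM some_big_table;

mysql> SET sql_log_bin = 0;
mysql> INSERT INTO test SELECT * FROM some_big_table;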
The BLACKHOLE engine is transaction-aware, in the sense that committed transactions are written to the binary log and rolled-back transactions are not.
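A minimal illustration, assuming binary logging is enabled (only the committed INSERT should subsequently appear in the binary log, for example in SHOW BINLOG EVENTS output):

mysql> START TRANSACTION;
mysql> INSERT INTO test VALUES (3,'kept');
mysql> COMMIT;

mysql> START TRANSACTION;
mysql> INSERT INTO test VALUES (4,'discarded');
mysql> ROLLBACK;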
Blackhole Engine and Auto Increment Columns
The Blackhole engine is a no-op engine. Any operations performed on a table using Blackhole have no effect. This should be borne in mind when considering the behavior of primary key columns that auto increment. The engine does not automatically increment field values, and does not retain auto increment field state. This has important implications in replication.
Consider the following replication scenario where all three of the following conditions apply:
On a master server there is a blackhole table with an auto increment field that is a primary key.
On a slave the same table exists but using the MyISAM engine.
Inserts are performed into the master's table without explicitly setting the auto increment value in the INSERT statement itself or by using a SET INSERT_ID statement.
In this scenario replication will fail with a duplicate entry error on the primary key column.
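A sketch of such a setup (the table and column names are illustrative):

-- On the master:
CREATE TABLE t_auto (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, msg CHAR(20))
    ENGINE = BLACKHOLE;

-- On the slave:
CREATE TABLE t_auto (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, msg CHAR(20))
    ENGINE = MyISAM;

-- On the master, inserts that leave id to be supplied by auto increment:
INSERT INTO t_auto (msg) VALUES ('first');
INSERT INTO t_auto (msg) VALUES ('second');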
In statement-based replication, the value of INSERT_ID in the context event will always be the same. Replication will therefore fail due to trying to insert a row with a duplicate value for a primary key column.
In row-based replication, the value that the engine returns for the row is always the same for each insert. This results in the slave attempting to replay two insert log entries using the same value for the primary key column, and so replication fails.
When using row-based replication (binlog_format=ROW), it is supported for the slave's copy of a table to be missing the last columns of the master's version, as described in the section “Replication with Differing Table Definitions on Master and Slave”.
This filtering works on the slave side, that is, the columns are copied to the slave before they are filtered out. There are at least two cases where it is not desirable to copy the columns to the slave:
If the data is confidential, so the slave server should not have access to it.
If the master has many slaves, filtering before sending to the slaves may reduce network traffic.
Master column filtering can be achieved using the BLACKHOLE engine. This is carried out in a way similar to how master table filtering is achieved: by using the BLACKHOLE engine and the --replicate-do-table or --replicate-ignore-table option.
The setup for the master is:
CREATE TABLE t1 (public_col_1, ..., public_col_N, secret_col_1, ..., secret_col_M) ENGINE=MyISAM;
The setup for the trusted slave is:
CREATE TABLE t1 (public_col_1, ..., public_col_N) ENGINE=BLACKHOLE;
The setup for the untrusted slave is:
CREATE TABLE t1 (public_col_1, ..., public_col_N) ENGINE=MyISAM;
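As a concrete sketch of those schematic definitions, with one public and one secret column (the column names and types here are illustrative):

-- Master: full table, stored normally
CREATE TABLE t1 (public_col INT, secret_col VARCHAR(100)) ENGINE=MyISAM;

-- Trusted slave: only the public column; BLACKHOLE discards the data
CREATE TABLE t1 (public_col INT) ENGINE=BLACKHOLE;

-- Untrusted slave: only the public column, actually stored
CREATE TABLE t1 (public_col INT) ENGINE=MyISAM;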