
4.1.20 Replication and max_allowed_packet

max_allowed_packet sets an upper limit on the size of any single message between the MySQL server and clients, including replication slaves. If you are replicating large column values (such as might be found in TEXT or BLOB columns) and max_allowed_packet is too small on the master, the master fails with an error, and the slave shuts down the I/O thread. If max_allowed_packet is too small on the slave, this also causes the slave to stop the I/O thread.
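The limit can be raised dynamically on both servers. A minimal sketch, assuming a 256MB ceiling is appropriate for your data (the chosen size is an assumption, not a recommendation):

```sql
-- Run on both the master and the slave. Applies to new connections only;
-- existing connections, including the slave's I/O thread, keep the old
-- value until they reconnect.
SET GLOBAL max_allowed_packet = 256 * 1024 * 1024;

-- To make the change persist across restarts, also set it in the option file:
-- [mysqld]
-- max_allowed_packet=256M
```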

Row-based replication currently sends all columns and column values for updated rows from the master to the slave, including values of columns that were not actually changed by the update. This means that, when you are replicating large column values using row-based replication, you must take care to set max_allowed_packet large enough to accommodate the largest row in any table to be replicated, even if you are replicating updates only, or you are inserting only relatively small values.
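To size the limit, you can measure the largest existing row in the tables that carry large values. A sketch against a hypothetical table `articles(id INT, title VARCHAR(255), body LONGTEXT)`; the table and column names are assumptions:

```sql
-- Because row-based replication sends every column of an updated row,
-- sum the byte lengths of all sizeable columns, not just the changed one.
SELECT MAX(OCTET_LENGTH(title) + OCTET_LENGTH(body)) AS max_row_bytes
FROM articles;
```

Set max_allowed_packet comfortably above the largest result across all replicated tables, leaving headroom for future growth.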

On a multi-threaded slave (with slave_parallel_workers > 0), ensure that the slave_pending_jobs_size_max system variable is set to a value equal to or greater than the setting for the max_allowed_packet system variable on the master. The default setting for slave_pending_jobs_size_max, 128M, is twice the default setting for max_allowed_packet, which is 64M. max_allowed_packet limits the packet size that the master will send, but the addition of an event header can produce a binary log event exceeding this size. Also, in row-based replication, a single event can be significantly larger than the max_allowed_packet size, because the value of max_allowed_packet only limits each column of the table.
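The two settings can be compared directly; a quick check, run on each server:

```sql
-- On the master: the packet ceiling for events it sends.
SHOW VARIABLES LIKE 'max_allowed_packet';

-- On the slave: confirm multi-threaded mode and that the pending-jobs
-- limit is at least as large as the master's max_allowed_packet.
SHOW VARIABLES LIKE 'slave_parallel_workers';
SHOW VARIABLES LIKE 'slave_pending_jobs_size_max';
```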

The replication slave actually accepts packets up to the limit set by its slave_max_allowed_packet setting, which defaults to the maximum setting of 1GB, to prevent a replication failure due to a large packet. However, the value of slave_pending_jobs_size_max controls the memory that is made available on the slave to hold incoming packets. The specified memory is shared among all the slave worker queues.

The value of slave_pending_jobs_size_max is a soft limit, and if an unusually large event (consisting of one or multiple packets) exceeds this size, the transaction is held until all the slave workers have empty queues, and then processed. All subsequent transactions are held until the large transaction has been completed. So although unusual events larger than slave_pending_jobs_size_max can be processed, the delay to clear the queues of all the slave workers and the wait to queue subsequent transactions can cause lag on the replication slave and decreased concurrency of the slave workers. slave_pending_jobs_size_max should therefore be set high enough to accommodate most expected event sizes.
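Raising the soft limit is a single dynamic change on the slave; a sketch, where the 256MB value is an assumed figure sized to the workload's typical large events:

```sql
-- On the slave: enlarge the shared memory pool for the worker queues so
-- that ordinary large events do not force the stop-and-drain behavior
-- described above. 256MB here is an assumption, not a recommendation.
SET GLOBAL slave_pending_jobs_size_max = 256 * 1024 * 1024;
```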

User Comments
User comments in this section are, as the name implies, provided by MySQL users. The MySQL documentation team is not responsible for, nor do they endorse, any of the information provided here.
  Posted by Tim Jones on November 2, 2012
Ran into a bit of an issue updating max_allowed_packet dynamically on a master/slave pair. I thought I was being smart by updating the slave setting first and then the master but that's not quite enough. The slave immediately reported:

"Got fatal error 1236 from master when reading data from binary log: 'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master'"

After a slave start I got this:

"Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave."

Yikes! The good news is that it's just your relay logs that are borked, not the master's logs. I think this may be because max_allowed_packet is not updated for existing connections, only new ones, and the slave needs a bit of a bump to keep it going properly (?).

The solution is to reconfigure the connection to the slave with the RELAY_MASTER_LOG_FILE and EXEC_MASTER_LOG_POS values from SHOW SLAVE STATUS. You just need to plug RELAY_MASTER_LOG_FILE and EXEC_MASTER_LOG_POS into MASTER_LOG_FILE and MASTER_LOG_POS in a CHANGE MASTER TO statement and things should be good to go. Note that you only need to specify those two values (no need to include host, password, etc.) as the existing settings will be reused by default.
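The recovery described above can be sketched as follows; the log file name and position are placeholders that must be replaced with the Relay_Master_Log_File and Exec_Master_Log_Pos values from your own SHOW SLAVE STATUS output:

```sql
STOP SLAVE;

-- Repoint the slave at the last position it actually executed; the slave
-- discards its corrupted relay logs and re-fetches from the master.
-- 'mysql-bin.000123' and 45678 are placeholders.
CHANGE MASTER TO
  MASTER_LOG_FILE = 'mysql-bin.000123',
  MASTER_LOG_POS  = 45678;

START SLAVE;
```

Because only the two log coordinates are given, the existing host, user, and password settings are reused.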

Credit to