FLUSH [NO_WRITE_TO_BINLOG | LOCAL] {
    flush_option [, flush_option] ...
  | tables_option
}

flush_option: {
    BINARY LOGS
  | DES_KEY_FILE
  | ENGINE LOGS
  | ERROR LOGS
  | GENERAL LOGS
  | HOSTS
  | LOGS
  | OPTIMIZER_COSTS
  | PRIVILEGES
  | QUERY CACHE
  | RELAY LOGS [FOR CHANNEL channel]
  | SLOW LOGS
  | STATUS
  | USER_RESOURCES
}

tables_option: {
    TABLES
  | TABLES tbl_name [, tbl_name] ...
  | TABLES WITH READ LOCK
  | TABLES tbl_name [, tbl_name] ... WITH READ LOCK
  | TABLES tbl_name [, tbl_name] ... FOR EXPORT
}
The FLUSH statement has several
variant forms that clear or reload various internal caches,
flush tables, or acquire locks. To execute
FLUSH, you must have the
RELOAD privilege. Specific flush
options might require additional privileges, as described later.
It is not possible to issue
FLUSH statements within stored
functions or triggers. However, you may use
FLUSH in stored procedures, so
long as these are not called from stored functions or
triggers. See Section C.1, “Restrictions on Stored Programs”.
By default, the server writes
FLUSH statements to the binary
log so that they replicate to replication slaves. To suppress
logging, specify the optional
NO_WRITE_TO_BINLOG keyword or its alias
LOCAL.
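For example, to flush on the local server only, without the statement being written to the binary log:

```sql
-- Rotate the logs locally only; the statement is not replicated
FLUSH NO_WRITE_TO_BINLOG LOGS;

-- LOCAL is an equivalent alias
FLUSH LOCAL HOSTS;
```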
FLUSH LOGS,
FLUSH TABLES WITH READ LOCK
(with or without a table list), and
FLUSH TABLES tbl_name ... FOR EXPORT
are not written to the binary log in any case
because they would cause problems if replicated to a slave.
The FLUSH statement causes an
implicit commit. See Section 13.3.3, “Statements That Cause an Implicit Commit”.
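A minimal illustration of the implicit commit (t1 is a hypothetical table):

```sql
START TRANSACTION;
UPDATE t1 SET c1 = c1 + 1;  -- t1 is a hypothetical table
FLUSH PRIVILEGES;           -- causes an implicit commit of the transaction
ROLLBACK;                   -- too late: has no effect on the committed UPDATE
```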
The mysqladmin utility provides a
command-line interface to some flush operations, using commands
such as flush-hosts,
flush-logs,
flush-privileges,
flush-status, and
flush-tables. See
Section 4.5.2, “mysqladmin — Client for Administering a MySQL Server”.
Sending a SIGHUP signal to the server causes
several flush operations to occur that are similar to various
forms of the FLUSH statement. See
Section 5.1.11, “Server Response to Signals”.
The RESET statement is similar to
FLUSH. See
Section 13.7.6.6, “RESET Syntax”, for information about using the
RESET statement with replication.
The following list describes the permitted
FLUSH statement
flush_option values. For descriptions
of FLUSH
TABLES variants, see
FLUSH TABLES Syntax.
FLUSH BINARY LOGS
Closes and reopens any binary log file to which the server is writing. If binary logging is enabled, the sequence number of the binary log file is incremented by one relative to the previous file.
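For example, rotating the binary log and then inspecting the result:

```sql
FLUSH BINARY LOGS;
SHOW BINARY LOGS;  -- the newest file's sequence number is one higher than before
```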
FLUSH DES_KEY_FILE
Reloads the DES keys from the file that was specified with the --des-key-file option at server startup time.
Note
The DES_ENCRYPT() and DES_DECRYPT() functions are deprecated as of MySQL 5.7.6, will be removed in a future MySQL release, and should no longer be used. Consequently, --des-key-file and DES_KEY_FILE also are deprecated and will be removed.
FLUSH ENGINE LOGS
Closes and reopens any flushable logs for installed storage engines. This causes InnoDB to flush its logs to disk.
FLUSH ERROR LOGS
Closes and reopens any error log file to which the server is writing.
FLUSH GENERAL LOGS
Closes and reopens any general query log file to which the server is writing.
FLUSH HOSTS
Empties the host cache. Flush the host cache if some of your hosts change IP address or if the error message Host 'host_name' is blocked occurs for connections from legitimate hosts. (See Section B.5.2.5, “Host 'host_name' is blocked”.) When more than max_connect_errors errors occur successively for a given host while connecting to the MySQL server, MySQL assumes that something is wrong and blocks the host from further connection requests. Flushing the host cache enables further connection attempts from the host. The default value of max_connect_errors is 100. To avoid this error message, start the server with max_connect_errors set to a large value.
FLUSH LOGS
Closes and reopens any log file to which the server is writing. If binary logging is enabled, the sequence number of the binary log file is incremented by one relative to the previous file. If relay logging is enabled, the sequence number of the relay log file is incremented by one relative to the previous file.
FLUSH LOGS has no effect on tables used for the general query log or for the slow query log (see Section 5.4.1, “Selecting General Query and Slow Query Log Output Destinations”).
FLUSH OPTIMIZER_COSTS
Rereads the cost model tables so that the optimizer starts using the current cost estimates stored in them. The server writes a warning to the error log for any unrecognized entries. (For information about these tables, see Section 8.9.5, “The Optimizer Cost Model”.) This operation affects only sessions that begin subsequent to the flush. Existing sessions continue to use the cost estimates that were current when they began.
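As an example, after editing a cost model table you must flush, and only new sessions see the change (the cost value shown is illustrative):

```sql
-- Make the optimizer pick up an edited cost constant
UPDATE mysql.engine_cost
   SET cost_value = 0.25              -- illustrative value
 WHERE cost_name = 'io_block_read_cost';
FLUSH OPTIMIZER_COSTS;
-- Only sessions that connect after this point use the new estimate
```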
FLUSH PRIVILEGES
Reloads the privileges from the grant tables in the mysql system database.
The server caches information in memory as a result of GRANT, CREATE USER, CREATE SERVER, and INSTALL PLUGIN statements. This memory is not released by the corresponding REVOKE, DROP USER, DROP SERVER, and UNINSTALL PLUGIN statements, so for a server that executes many instances of the statements that cause caching, there will be an increase in memory use. This cached memory can be freed with FLUSH PRIVILEGES.
FLUSH QUERY CACHE
Defragments the query cache to better utilize its memory. FLUSH QUERY CACHE does not remove any queries from the cache, unlike FLUSH TABLES or RESET QUERY CACHE.
Note
The query cache is deprecated as of MySQL 5.7.20, and is removed in MySQL 8.0. Deprecation includes FLUSH QUERY CACHE.
FLUSH RELAY LOGS [FOR CHANNEL channel]
Closes and reopens any relay log file to which the server is writing. If relay logging is enabled, the sequence number of the relay log file is incremented by one relative to the previous file.
The FOR CHANNEL channel clause enables you to name which replication channel the statement applies to. Execute FLUSH RELAY LOGS FOR CHANNEL channel to flush the relay log for a specific replication channel. If no channel is named and no extra replication channels exist, the statement applies to the default channel. If no channel is named and multiple replication channels exist, the statement applies to all replication channels. For more information, see Section 16.2.3, “Replication Channels”.
FLUSH SLOW LOGS
Closes and reopens any slow query log file to which the server is writing.
FLUSH STATUS
Note
The value of the show_compatibility_56 system variable affects the operation of this option. For details, see the description of that variable in Section 5.1.5, “Server System Variables”.
This option adds the current thread's session status variable values to the global values and resets the session values to zero. Some global variables may be reset to zero as well. It also resets the counters for key caches (default and named) to zero and sets Max_used_connections to the current number of open connections. This information may be of use when debugging a query. See Section 1.7, “How to Report Bugs or Problems”.
FLUSH USER_RESOURCES
Resets all per-hour user resources to zero. This enables clients that have reached their hourly connection, query, or update limits to resume activity immediately. FLUSH USER_RESOURCES does not apply to the limit on maximum simultaneous connections that is controlled by the max_user_connections system variable. See Section 6.3.5, “Setting Account Resource Limits”.
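For example, an account limited with per-hour resource options can be unblocked immediately (the account name is hypothetical):

```sql
-- Account created with an hourly query limit (hypothetical user)
CREATE USER 'report_user'@'localhost' IDENTIFIED BY 'password'
  WITH MAX_QUERIES_PER_HOUR 100;

-- Once the limit is hit, reset all per-hour counters so the
-- account can resume activity without waiting for the hour to pass
FLUSH USER_RESOURCES;
```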
FLUSH TABLES Syntax
FLUSH TABLES flushes tables, and,
depending on the variant used, acquires locks. Any
TABLES variant used in a
FLUSH statement must be the only
option used. FLUSH
TABLE is a synonym for FLUSH
TABLES.
The descriptions here that indicate tables are flushed by
closing them apply differently for InnoDB,
which flushes table contents to disk but leaves them open.
This still permits table files to be copied while the tables
are open, as long as other activity does not modify them.
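For example (t1 and t2 are hypothetical tables):

```sql
FLUSH TABLES;          -- close all open tables
FLUSH TABLES t1, t2;   -- flush only the named tables
```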
FLUSH TABLES
Closes all open tables, forces all tables in use to be closed, and flushes the query cache and prepared statement cache. FLUSH TABLES also removes all query results from the query cache, like the RESET QUERY CACHE statement. For information about query caching and prepared statement caching, see Section 8.10.3, “The MySQL Query Cache”, and Section 8.10.4, “Caching of Prepared Statements and Stored Programs”.
FLUSH TABLES is not permitted when there is an active LOCK TABLES ... READ. To flush and lock tables, use FLUSH TABLES tbl_name ... WITH READ LOCK instead.
FLUSH TABLES tbl_name [, tbl_name] ...
With a list of one or more comma-separated table names, this statement is like FLUSH TABLES with no names except that the server flushes only the named tables. If a named table does not exist, no error occurs.
FLUSH TABLES WITH READ LOCK
Closes all open tables and locks all tables for all databases with a global read lock. This is a very convenient way to get backups if you have a file system such as Veritas or ZFS that can take snapshots in time. Use UNLOCK TABLES to release the lock.
FLUSH TABLES WITH READ LOCK acquires a global read lock rather than table locks, so it is not subject to the same behavior as LOCK TABLES and UNLOCK TABLES with respect to table locking and implicit commits:
UNLOCK TABLES implicitly commits any active transaction only if any tables currently have been locked with LOCK TABLES. The commit does not occur for UNLOCK TABLES following FLUSH TABLES WITH READ LOCK because the latter statement does not acquire table locks.
Beginning a transaction causes table locks acquired with LOCK TABLES to be released, as though you had executed UNLOCK TABLES. Beginning a transaction does not release a global read lock acquired with FLUSH TABLES WITH READ LOCK.
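A typical snapshot-backup sequence using this variant looks like the following (a sketch; the snapshot command depends on your file system):

```sql
FLUSH TABLES WITH READ LOCK;
-- While the global read lock is held, take the file system
-- snapshot from another shell (e.g. a ZFS or Veritas snapshot)
UNLOCK TABLES;   -- release the global read lock
```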
FLUSH TABLES WITH READ LOCK is not compatible with XA transactions.
FLUSH TABLES WITH READ LOCK does not prevent the server from inserting rows into the log tables (see Section 5.4.1, “Selecting General Query and Slow Query Log Output Destinations”).
FLUSH TABLES tbl_name [, tbl_name] ... WITH READ LOCK
This statement flushes and acquires read locks for the named tables. The statement first acquires exclusive metadata locks for the tables, so it waits for transactions that have those tables open to complete. Then the statement flushes the tables from the table cache, reopens the tables, acquires table locks (like LOCK TABLES ... READ), and downgrades the metadata locks from exclusive to shared. After the statement acquires locks and downgrades the metadata locks, other sessions can read but not modify the tables.
Because this statement acquires table locks, you must have the LOCK TABLES privilege for each table, in addition to the RELOAD privilege that is required to use any FLUSH statement.
This statement applies only to existing base (non-TEMPORARY) tables. If a name refers to a base table, that table is used. If it refers to a TEMPORARY table, it is ignored. If a name applies to a view, an ER_WRONG_OBJECT error occurs. Otherwise, an ER_NO_SUCH_TABLE error occurs.
Use UNLOCK TABLES to release the locks, LOCK TABLES to release the locks and acquire other locks, or START TRANSACTION to release the locks and begin a new transaction.
This FLUSH TABLES variant enables tables to be flushed and locked in a single operation. It provides a workaround for the restriction that FLUSH TABLES is not permitted when there is an active LOCK TABLES ... READ.
This statement does not perform an implicit UNLOCK TABLES, so an error results if you use the statement while there is any active LOCK TABLES or use it a second time without first releasing the locks acquired.
If a flushed table was opened with HANDLER, the handler is implicitly flushed and loses its position.
FLUSH TABLES tbl_name [, tbl_name] ... FOR EXPORT
This FLUSH TABLES variant applies to InnoDB tables. It ensures that changes to the named tables have been flushed to disk so that binary table copies can be made while the server is running.
The statement works like this:
It acquires shared metadata locks for the named tables. The statement blocks as long as other sessions have active transactions that have modified those tables or hold table locks for them. When the locks have been acquired, the statement blocks transactions that attempt to update the tables, while permitting read-only operations to continue.
It checks whether all storage engines for the tables support FOR EXPORT. If any do not, an ER_ILLEGAL_HA error occurs and the statement fails.
The statement notifies the storage engine for each table to make the table ready for export. The storage engine must ensure that any pending changes are written to disk.
The statement puts the session in lock-tables mode so that the metadata locks acquired earlier are not released when the FOR EXPORT statement completes.
The FLUSH TABLES ... FOR EXPORT statement requires that you have the SELECT privilege for each table. Because this statement acquires table locks, you must also have the LOCK TABLES privilege for each table, in addition to the RELOAD privilege that is required to use any FLUSH statement.
This statement applies only to existing base (non-TEMPORARY) tables. If a name refers to a base table, that table is used. If it refers to a TEMPORARY table, it is ignored. If a name applies to a view, an ER_WRONG_OBJECT error occurs. Otherwise, an ER_NO_SUCH_TABLE error occurs.
InnoDB supports FOR EXPORT for tables that have their own .ibd file (that is, tables created with the innodb_file_per_table setting enabled). InnoDB ensures when notified by the FOR EXPORT statement that any changes have been flushed to disk. This permits a binary copy of table contents to be made while the FOR EXPORT statement is in effect because the .ibd file is transaction consistent and can be copied while the server is running.
FOR EXPORT does not apply to InnoDB system tablespace files, or to InnoDB tables that have FULLTEXT indexes.
FLUSH TABLES ... FOR EXPORT is supported for partitioned InnoDB tables.
When notified by FOR EXPORT, InnoDB writes to disk certain kinds of data that is normally held in memory or in separate disk buffers outside the tablespace files. For each table, InnoDB also produces a file named table_name.cfg in the same database directory as the table. The .cfg file contains metadata needed to reimport the tablespace files later, into the same or a different server.
When the FOR EXPORT statement completes, InnoDB will have flushed all dirty pages to the table data files. Any change buffer entries are merged prior to flushing. At this point, the tables are locked and quiescent: The tables are in a transactionally consistent state on disk and you can copy the .ibd tablespace files along with the corresponding .cfg files to get a consistent snapshot of those tables.
For the procedure to reimport the copied table data into a MySQL instance, see Section 14.7.6, “Copying File-Per-Table Tablespaces to Another Instance”.
After you are done with the tables, use UNLOCK TABLES to release the locks, LOCK TABLES to release the locks and acquire other locks, or START TRANSACTION to release the locks and begin a new transaction.
While any of these statements is in effect within the session, attempts to use FLUSH TABLES ... FOR EXPORT produce an error:
FLUSH TABLES ... WITH READ LOCK
FLUSH TABLES ... FOR EXPORT
LOCK TABLES ... READ
LOCK TABLES ... WRITE
While FLUSH TABLES ... FOR EXPORT is in effect within the session, attempts to use any of these statements produce an error:
FLUSH TABLES WITH READ LOCK
FLUSH TABLES ... WITH READ LOCK
FLUSH TABLES ... FOR EXPORT
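Putting the steps together, a typical export sequence looks like this (a sketch; t1 is a hypothetical file-per-table InnoDB table):

```sql
FLUSH TABLES t1 FOR EXPORT;
-- t1.ibd and t1.cfg in the database directory are now quiescent;
-- copy them using operating system commands from another shell
UNLOCK TABLES;   -- release the locks when the copy is done
```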
Many people use this command expecting it to flush data to disk.
However, for InnoDB tables, if you expect FLUSH TABLE xx to do something, you are in for a shock: this does not happen. It appears that even the .ibd files are not closed (if using innodb_file_per_table), so it would be useful if the documentation for this command made some reference to exceptions, and noted that with some engines the behaviour may not be what is expected.
Biggest oddity is that it will not occur on all servers in your system/cloud at once but only on a
number of servers. Some servers seem to be able to reuse their persistent connections pretty well while other servers will be creating multiple new connections (without discarding the used ones).
Simple solutions like doubling the number of allowed connections do not work; that only delays the moment until it happens again.
You can either refrain from using persistent connections or flush all hosts when you notice that you can't reach the database anymore.
The error that you will notice is the one below:
DB connection error: Host 'your.mysql-server.url' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
Also the number of connections to the mysql server will be big (netstat -a | grep mysqld) while just a few of these connections appear to be active. You might want to flush your hosts in advance in such situations.
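When this happens, the host cache can also be cleared from SQL rather than mysqladmin, and raising max_connect_errors reduces recurrence (the value shown is illustrative):

```sql
-- Unblock hosts that hit the max_connect_errors limit
FLUSH HOSTS;

-- Raise the limit to make future blocking less likely
SET GLOBAL max_connect_errors = 10000;  -- illustrative value
```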
A new user account user1@newclient with the same grants and password is created.
Try to connect from newclient: mysql -u user1 -p
The server might reject the connection with "ERROR 1045 (28000): Access denied for user 'user1'@'oldclient' ...", because the host cache still maps the client's IP address to its old hostname.
Connections from other clients that have not changed hostnames still work. A new account created with an IP address instead of a hostname works too.
The fix is to flush the host cache!
m.s.
This should rarely be a problem. Persistent connections may require a higher connection limit per application server, though, and that might be a small problem.

It's debatable how the connection limit should be set. It's a crude way to limit things. The idea is to prevent resource contention nuking the database with long queues or really large amounts of concurrent work. It would be better to have something else for that; I'm not sure what the options are, but if you're having problems like this then you really need to be looking for something more robust.

Apache has a similar problem with things like the client limit. Worst case, resource contention means that requests stack up, the list grows, more resources are demanded concurrently, a longer list of work takes longer to clear, and for old requests the client is probably gone anyway; in the very worst case you can kill the box where it goes into swap and bottlenecks on thrashing into oblivion. You normally set the limit to the maximum number of concurrent requests you can support. That's really high for MySQL compared to Apache, especially if the requests are CGI or similar.

The best you can do with MySQL is set the limit much higher than what would naturally occur (such as twice your biggest spike) so you at least have some cut-off point, if not having false positives is very important to you. If false positives are less important and false negatives are more important, then you can set it lower. It's still not a perfect art, though.

The way MySQL works, I haven't seen it do much that has the capacity to kill a box unless configured very badly (although it might bury CPU/IO in some circumstances; nothing is as bad as swap thrash). I would suggest just setting the limit really high, but keep in mind that can propagate a load issue to your webservers more easily.
Persistent connections are also risky in terms of acquiring a fresh state. A better approach is to manage a pool of pre-established connections.