WL#9703: Defaults change table_open_cache=4000

Affects: Server-8.0   —   Status: Complete

Information on the table cache and settings related to it is described in

From docs:
The table_open_cache and max_connections system variables affect the maximum
number of files the server keeps open. If you increase one or both of these
values, you may run up against a limit imposed by your operating system on the
per-process number of open file descriptors.

table_open_cache is related to max_connections. For example, for 200 concurrent
running connections, specify a table cache size of at least 200 * N, where N is
the maximum number of tables per join in any of the queries which you execute.
You must also reserve some extra file descriptors for temporary tables and files.

Make sure that your operating system can handle the number of open file
descriptors implied by the table_open_cache setting. If table_open_cache is set
too high, MySQL may run out of file descriptors and refuse connections, fail to
perform queries, and be very unreliable. 

 You can determine whether your table cache is too small by checking the mysqld
status variable Opened_tables, which indicates the number of table-opening
operations since the server started:

mysql> SHOW GLOBAL STATUS LIKE 'Opened_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Opened_tables | 2741  |
+---------------+-------+

If the value is very large or increases rapidly, even when you have not issued
many FLUSH TABLES statements, increase the table cache size.

End from docs.
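The sizing rule quoted above (at least max_connections * N, plus headroom for temporary tables and files) can be sketched as a small helper. The `extra_fds` margin is an assumption for illustration, not a documented constant.

```python
# Rough sizing helper for table_open_cache based on the quoted rule:
# at least max_connections * N, where N is the maximum number of tables
# per join, plus some extra descriptors for temporary tables and files
# (the extra_fds value here is an illustrative assumption).

def suggested_table_open_cache(max_connections, max_tables_per_join,
                               extra_fds=100):
    return max_connections * max_tables_per_join + extra_fds

# 200 concurrent connections, joins touching at most 4 tables:
print(suggested_table_open_cache(200, 4))  # 900
```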

The number of open tables stored in the cache is a function of the number of
tables in the schema and the number of users accessing those tables. It should
not be set to a value higher than what the OS limit on the number of open file
descriptors allows.

Wrt memory consumption per cache entry, this depends on the SE and the table
structure, e.g. if the row buffers allocated for the table are big (close to the
limit) you can easily get around 200K per table instance in the cache, as we
see in BUG#86279 (lots of medium-sized varchar fields). In most situations the
actual memory consumed per entry is much smaller.

The search in the cache generally uses hash-lookup. For DROP DATABASE a linear
scan is done.
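A minimal sketch (an assumption for illustration, not the server implementation) of why per-table lookup is a hash operation while DROP DATABASE is linear: entries are keyed by (schema, table), so a single table is found by key, but there is no key for a whole schema.

```python
# Cache entries keyed by (schema, table): O(1) hash lookup per table,
# but DROP DATABASE has no such key, so every entry must be scanned.

cache = {("db1", "t1"): "TABLE#1", ("db1", "t2"): "TABLE#2",
         ("db2", "t1"): "TABLE#3"}

# Hash lookup for a single table:
assert cache[("db1", "t2")] == "TABLE#2"

# Linear scan to evict every table of a schema:
def drop_database(cache, schema):
    for key in [k for k in cache if k[0] == schema]:
        del cache[key]

drop_database(cache, "db1")
print(sorted(cache))  # [('db2', 't1')]
```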

Considerations for finding a good default value for table_open_cache:

So the default value for table_open_cache should be set as high as possible,
while staying within the OS resource constraints on memory usage and the OS
limit on the number of open file descriptors.

What to think about:

1. The default OS limit for the number of open file descriptors differs for the
MySQL supported platforms. The value should not exceed any of these OS specific
platform limits.

2. The MySQL usage of open file descriptors depends on the number of connections
and the number of tables that are opened. Multiple connections opening the same
table still require an open file descriptor per user.

3. The SEs can handle file descriptors differently. For MyISAM, 2 file
descriptors per table are opened (MYD/MYI). TODO: check how many InnoDB uses.

4. The number of tables opened simultaneously affects the number of needed open
file descriptors, as does the number of connections accessing tables
simultaneously. In addition, these numbers are SE specific.
For MyISAM you will then have the following:
a) A schema with 2K tables which are all kept open due to schema structure, and
1 connection, will require: 2K handles for MYD + 2K*1 handles for MYI
b) A schema with 14 tables being accessed by 151 connections (default
max_connections) will require: 14 handles for MYD + 14*151 handles for MYI
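The arithmetic in the two examples above follows a simple model, assumed here from the examples themselves (real MyISAM descriptor usage varies): one MYD handle per table plus one MYI handle per table per connection.

```python
# Assumed model from the examples above: one data-file (MYD) handle per
# table, plus one index-file (MYI) handle per table per connection.

def myisam_handles(tables, connections):
    myd = tables                 # data-file handles
    myi = tables * connections   # index-file handles
    return myd + myi

# a) 2000 tables, 1 connection:
print(myisam_handles(2000, 1))   # 4000
# b) 14 tables, 151 connections (default max_connections):
print(myisam_handles(14, 151))   # 2128
```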

5. The memory used per TABLE instance is dependent on the table structure. It is
hard for users to understand how to calculate this. In corner cases, we have
seen 200K per TABLE instance, but generally the size is much smaller.

Change default value of table_open_cache:
  Old default value : 2000
  New default value : 4000

The default value of the table_definition_cache variable is autosized and
determined at runtime from the value of table_open_cache by the formula:
  default_value = min(400 + table_open_cache / 2, 2000);

The change in the default value of table_open_cache will change the default
value of table_definition_cache too from 1400 to 2000.
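Plugging the old and new table_open_cache defaults into the autosizing formula confirms the stated effect on table_definition_cache:

```python
# The table_definition_cache autosizing formula quoted above.
def table_definition_cache_default(table_open_cache):
    return min(400 + table_open_cache // 2, 2000)

print(table_definition_cache_default(2000))  # 1400 (old default)
print(table_definition_cache_default(4000))  # 2000 (new default, capped)
```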

Doc links:

Mechanism and trade-offs:
  The TABLE cache is handled by a cache manager.
  The table cache manager creates instances of the table cache.

  Each thread is assigned a cache instance by taking its thread id modulo the
  number of cache instances. The problem with this approach is that two cache
  instances can end up caching the same TABLE.
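The assignment rule can be sketched as follows (the instance count of 8 is an illustrative assumption):

```python
# Cache instance = thread_id % n_instances.  Two threads that map to
# different instances may each end up caching their own copy of the
# same TABLE object.

n_instances = 8  # illustrative value

def cache_instance_for(thread_id):
    return thread_id % n_instances

# Threads 3 and 12 land in different instances, so if both open table
# t1, each instance holds a separate TABLE object for t1:
print(cache_instance_for(3), cache_instance_for(12))  # 3 4
```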

  Each cache instance maintains two linear lists of tables, used and free.
  When a table is opened for the first time, it is added to the used list.
  When the table is closed (at the end of every statement), it is moved to
  the free list. The lists are linear, so traversal time increases linearly
  with the cache size.
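  A minimal sketch of the used/free list mechanism described above (an
  assumption for illustration, not server code):

```python
# One cache instance with a "used" and a "free" list: opening a table
# for the first time puts it on the used list; closing it at statement
# end moves it to the free list.  Both lists are linear, so lookup cost
# grows with the cache size.

used, free = [], []

def open_table(name):
    if name in free:             # linear search in the free list
        free.remove(name)
    used.append(name)

def close_table(name):
    used.remove(name)            # linear search in the used list
    free.append(name)

open_table("t1")
open_table("t2")
close_table("t1")
print(used, free)  # ['t2'] ['t1']
```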

- The cache manager and the cache itself do not take much memory. Server
  initialization will not consume extra memory. As tables are opened and
  cached, the memory usage will increase. On average one TABLE takes around
  2000 to 4000 bytes, but this can vary significantly depending on the table
  structure. The memory consumption will increase with the increase in cache
  size.

Test and Result files changed due to the change in default value: