
MySQL 8.0 Reference Manual  /  ...  /  Enabling Large Page Support

Enabling Large Page Support

Some hardware/operating system architectures support memory pages greater than the default (usually 4KB). The actual implementation of this support depends on the underlying hardware and operating system. Applications that perform a lot of memory accesses may obtain performance improvements by using large pages due to reduced Translation Lookaside Buffer (TLB) misses.

In MySQL, large pages can be used by InnoDB to allocate memory for its buffer pool and additional memory pool.

Standard use of large pages in MySQL attempts to use the largest size supported, up to 4MB. Under Solaris, a "super large pages" feature enables the use of pages up to 256MB. This feature is available on recent SPARC platforms and can be enabled or disabled by using the --super-large-pages or --skip-super-large-pages option.

MySQL also supports the Linux implementation of large page support (which is called HugeTLB in Linux).

Before large pages can be used on Linux, the kernel must be enabled to support them, and it is necessary to configure the HugeTLB memory pool. For reference, the HugeTLB API is documented in the Documentation/vm/hugetlbpage.txt file of your Linux sources.

The kernels of some recent systems, such as Red Hat Enterprise Linux, appear to have the large pages feature enabled by default. To check whether this is true for your kernel, use the following command and look for output lines containing "huge":

shell> cat /proc/meminfo | grep -i huge
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       4096 kB

The nonempty command output indicates that large page support is present, but the zero values indicate that no pages are configured for use.

If your kernel needs to be reconfigured to support large pages, consult the hugetlbpage.txt file for instructions.

Assuming that your Linux kernel has large page support enabled, configure it for use by MySQL using the following commands. Normally, you put these in an rc file or equivalent startup file that is executed during the system boot sequence, so that the commands execute each time the system starts. The commands should execute early in the boot sequence, before the MySQL server starts. Be sure to change the allocation numbers and the group number as appropriate for your system.

# Set the number of pages to be used.
# Each page is normally 2MB, so a value of 20 = 40MB.
# This command actually allocates memory, so this much
# memory must be available.
echo 20 > /proc/sys/vm/nr_hugepages

# Set the group number that is permitted to access this
# memory (102 in this case). The mysql user must be a
# member of this group.
echo 102 > /proc/sys/vm/hugetlb_shm_group

# Increase the amount of shmem permitted per segment
# (about 1.5GB in this case; the value is in bytes).
echo 1560281088 > /proc/sys/kernel/shmmax

# Increase total amount of shared memory.  The value
# is the number of pages. At 4KB/page, 4194304 = 16GB.
echo 4194304 > /proc/sys/kernel/shmall

For MySQL usage, you normally want the amount of memory permitted by shmmax to be close to that permitted by shmall. (Note that shmmax is expressed in bytes, while shmall is expressed in 4KB pages.)
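On systems that support it, these settings are commonly made persistent through sysctl configuration rather than raw echo commands in an rc file (the user comments below use this form). A sketch, using the same example values as the commands above:

```
# /etc/sysctl.conf (or a file under /etc/sysctl.d/), applied at boot
# or with "sysctl -p"; same example values as the echo commands above.
vm.nr_hugepages = 20
vm.hugetlb_shm_group = 102
kernel.shmmax = 1560281088
kernel.shmall = 4194304
```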

To verify the large page configuration, check /proc/meminfo again as described previously. Now you should see some nonzero values:

shell> cat /proc/meminfo | grep -i huge
HugePages_Total:      20
HugePages_Free:       20
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       4096 kB

The final step to make use of the hugetlb_shm_group is to give the mysql user an unlimited value for the memlock limit. This can be done either by editing /etc/security/limits.conf or by adding the following command to your mysqld_safe script:

ulimit -l unlimited

Adding the ulimit command to mysqld_safe causes the root user to set the memlock limit to unlimited before switching to the mysql user. (This assumes that mysqld_safe is started by root.)
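If you choose the /etc/security/limits.conf route instead, the entries might look like this (a sketch, assuming the server runs as the mysql user):

```
# /etc/security/limits.conf
mysql soft memlock unlimited
mysql hard memlock unlimited
```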

Large page support in MySQL is disabled by default. To enable it, start the server with the --large-pages option. For example, you can use the following lines in the server my.cnf file:

[mysqld]
large-pages

With this option, InnoDB uses large pages automatically for its buffer pool and additional memory pool. If InnoDB cannot do this, it falls back to use of traditional memory and writes a warning to the error log: Warning: Using conventional memory pool

To verify that large pages are being used, check /proc/meminfo again:

shell> cat /proc/meminfo | grep -i huge
HugePages_Total:      20
HugePages_Free:       20
HugePages_Rsvd:        2
HugePages_Surp:        0
Hugepagesize:       4096 kB

User Comments
User comments in this section are, as the name implies, provided by MySQL users. The MySQL documentation team is not responsible for, nor do they endorse, any of the information provided here.
  Posted by Rainer Stumbaum on August 23, 2011

I have the following system:
Debian 6, 4 CPUs, 20GB RAM, dedicated MySQL server

innodb_buffer_pool_size = 12000M
innodb_additional_mem_pool_size = 16M

I therefore use the following /etc/sysctl.d/mysql.conf:
# Set the number of pages to be used:
# Add innodb_buffer_pool_size and
# innodb_additional_mem_pool_size
# and divide by Hugepagesize.
# Each page is normally 2MB, so a value of 6100 = 12200MB.
# This command actually allocates memory, so this much
# memory must be available.
# Important:
# ulimit -l unlimited
# and set in my.cnf:
# ...
# [mysqld]
# large-pages
# ...
vm.nr_hugepages = 6100

# Set the group number that is permitted to access this
# memory (110 in this case). The mysql user must be a
# member of this group.
vm.hugetlb_shm_group = 110

# Set the amount of shmem permitted per segment in bytes
# (12199MB in this case).
kernel.shmmax = 12791578624

# Set the total amount of shared memory. The value
# is the size in pages. At 4KB/page, 3122944 = 12199MB.
kernel.shmall = 3122944

I added the ulimit to the startup script and added the large-pages in the mysqld section.

But I get the following error:
110823 13:41:00 mysqld_safe Starting mysqld daemon with databases from /mnt/mysql/data
110823 13:41:00 [Warning] '--log_slow_queries' is deprecated and will be removed in a future release. Please use ''--slow_query_log'/'--slow_query_log_file'' instead.
110823 13:41:00 [Note] Plugin 'FEDERATED' is disabled.
InnoDB: HugeTLB: Warning: Failed to allocate 12582928384 bytes. errno 28
InnoDB HugeTLB: Warning: Using conventional memory pool

The existing samples are not very clear. I would like to know if my example is right and clear enough for calculating the size correctly.
  Posted by Rainer Stumbaum on August 23, 2011
Got it to work:
Instead of using the calculated 12200MB I used 15GB.
root@mysql04.dc1:~# ipcs

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x6c1009eb 0 zabbix 600 693504 8
0x00000000 32769 mysql 600 1279262720 1 dest
0x00000000 65538 mysql 600 12585009152 1 dest

root@mysql04.dc1:~# grep -i huge /proc/meminfo
HugePages_Total: 7680
HugePages_Free: 7394
HugePages_Rsvd: 6325
HugePages_Surp: 0
Hugepagesize: 2048 kB

So the calculation seems to be somehow wrong...

  Posted by DANGLADE JEAN-SEBASTIEN on November 13, 2012
I hope this comment will save you several hours and sleepless nights when launching in production...
After following every how-to and all the documentation I could find on Google to enable huge pages, I must give you this post.

For enabling huge pages with Linux Debian 6.0.5 on
Linux 2.6.32-5-amd64 #x86_64 GNU/Linux (64Bits)
and MySQL 5.1, you have to add this to your /etc/sysctl.conf:

# Total of allowed memory
vm.nr_hugepages = YYYYYY
# total amount of memory that can be allocated to shared memory, huge pages or not, on the box
kernel.shmall = XXXXXXXXXX
# maximum single shared memory segment, which for me was basically innodb_buffer_pool+1%
kernel.shmmax = XXXXXXXXXX
# Authorized group (sysctl.conf does not expand commands;
# use the numeric gid reported by `id -g mysql`)
vm.hugetlb_shm_group = ZZZZ

The XXXXX values are given by this bash script:

##### SCRIPT START #########
# keep 2GB of memory for the system
# (I have 68GB on this one and 128GB of RAM on the other)
marge=$((2*1024*1024))  # margin in kB; free(1) reports kB

mem=$(free|grep Mem|awk '{print$2}')
mem=$(echo "$mem-$marge"|bc)
totmem=$(echo "$mem*1024"|bc)
huge=$(grep Hugepagesize /proc/meminfo|awk '{print $2}')
max=$(echo "$totmem*75/100"|bc)
all=$(echo "$max/$huge"|bc)
echo "kernel.shmmax = $max"
echo "kernel.shmall = $all"
######### SCRIPT END #########

Check memory usage before rebooting with:
cat /proc/meminfo | grep -i huge

Reboot your system, then check memory usage again.

It works !
  Posted by John Anderson on May 13, 2015
A bit of a note on the math here: some articles and blogs say that you should add your innodb_buffer_pool_size to your innodb_additional_mem_pool_size, divide that by your hugetlb page size, then add a few pages on to that. Unfortunately, that doesn't seem to be the whole story.

For those who want to allocate as little RAM as possible to HugeTLB while still satisfying the requirements outlined in my.cnf, this formula might be a little better. This is after some experimentation led me to put some effort behind finding out why I always had to allocate many more pages than the math suggested.

The real formula should be:

(innodb_buffer_pool_size in kb +
innodb_additional_mem_pool_size in kb +
tmp_table_size in kb +
innodb_log_buffer_size in kb) / hugetlb size in kb

Then to that, add an additional 11 - 15 pages until MySQL starts. I give my best guess as to why these pages are unaccounted for below.
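As a sketch, the formula above can be computed with shell arithmetic. All the sizes here are hypothetical example values, not recommendations; substitute your own my.cnf settings and the Hugepagesize reported by /proc/meminfo:

```shell
#!/bin/sh
# Hypothetical example values, all in kB.
buffer_pool_kb=12582912     # innodb_buffer_pool_size = 12G
additional_pool_kb=16384    # innodb_additional_mem_pool_size = 16M
tmp_table_kb=16384          # tmp_table_size = 16M
log_buffer_kb=8192          # innodb_log_buffer_size = 8M
hugepage_kb=2048            # Hugepagesize (2MB pages)

total_kb=$(( buffer_pool_kb + additional_pool_kb + tmp_table_kb + log_buffer_kb ))
# Round up to whole huge pages, then add slack pages
# (the 11-15 unaccounted-for pages described above).
pages=$(( (total_kb + hugepage_kb - 1) / hugepage_kb ))
echo $(( pages + 15 ))
```

The ceiling division rounds partial pages up to whole ones, which is also the source of the per-instance rounding overhead discussed in the comment.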

First, a note on why tmp_table_size is included: I'm not sure if it *should* be tmp_table_size * max_tmp_tables, but MySQL starts and runs with only tmp_table_size included. I think this only applies if default_tmp_storage_engine is InnoDB. If a tmp table needs to be created for a sort or order, and that table is going to be InnoDB in RAM, then hugetlb will need to be used.

Secondly, I noticed in the source code that the InnoDB log buffer uses the 'os_mem_alloc_large' function. So I think that should be included in the calculation as well. In my experimentation, I had 22 pages unaccounted for until I found that, then my unaccounted-for pages went down to 11.

As for the pages which don't seem to be accounted for, I think that is the overhead cost of the nature of pages. For instance, if you have an innodb_buffer_pool size of 256 MB, and you have 8 buffer instances then you have:

(268435456 bytes / 8 instances) = 33554.4 kilobytes to allocate per instance.

At 2048 KB per page, that comes to 16.4 pages per buffer instance. That .4 of a page means an entire page must be allocated, so 17 pages per buffer instead of 16.4. That would account for 8 extra pages right there. So if one is really picky, declaring buffer sizes that divide evenly into the page size would theoretically leave no overhead to absorb. I don't know why, but MySQL and Google's unit converter have differing opinions on how to convert megabytes to bytes and vice versa. So if you want to cut it as close as possible, fill out your my.cnf, start MySQL without large-pages, and take note of the values of these 4 variables. Then convert those values into kilobytes for the page count calculation.