WL#4252: NDB transporter send buffer unification

Status: Complete   —   Priority: Medium

Existing NDB kernel code uses one 2 MB send buffer for every configured node.
The memory is allocated at data node start.

This is an inefficient use of memory, as most of it will be unused, yet the size
must be configured large enough to cater for maximum load on any transporter socket.

It also makes it expensive to configure (but not run) extra mysqld nodes for
later expansion (i.e. this worklog may facilitate removing the need to
pre-configure mysqld nodes at all).

Change send buffer memory to be dynamically allocated from a pool shared among
all the transporters. Send buffer size will then be adjusted dynamically.
This worklog introduces the following new configuration parameters:

1. TotalSendBufferMemory (NDBD, API, and MGM sections). This is the total amount
of memory to allocate for use as shared send buffer memory among all configured
transporters.

If not set, it defaults to the sum of the maximum send buffer size of all
configured transporters, plus an extra 32k (one page) per transporter. This
maximum is the value of SendBufferMemory for TCP transporters; for SCI it is
SendLimit + 16k, and for SHM it is 20k. This is done as a backward
compatibility measure: old configurations will work more or less unchanged,
allocating the same amount of memory and making the same amount of send buffer
available to each transporter (but without the benefit of making memory unused
by one transporter available to another).
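The default computation above can be sketched as follows. This is only an
illustration of the rules stated here; the type and function names are assumed,
not taken from the actual NDB source:

```cpp
#include <cstddef>
#include <vector>

// Illustrative per-transporter description (names are hypothetical).
enum class TransporterType { TCP, SCI, SHM };

struct TransporterConf {
  TransporterType type;
  std::size_t sendBufferMemory; // TCP: SendBufferMemory
  std::size_t sendLimit;        // SCI: SendLimit
};

constexpr std::size_t KB = 1024;

// Maximum send buffer size for one transporter, per the rules above.
std::size_t maxSendBuffer(const TransporterConf& t) {
  switch (t.type) {
    case TransporterType::TCP: return t.sendBufferMemory;
    case TransporterType::SCI: return t.sendLimit + 16 * KB;
    case TransporterType::SHM: return 20 * KB;
  }
  return 0;
}

// Backward-compatible default for TotalSendBufferMemory:
// sum of the per-transporter maxima, plus one 32k page each.
std::size_t defaultTotalSendBufferMemory(const std::vector<TransporterConf>& ts) {
  std::size_t total = 0;
  for (const auto& t : ts)
    total += maxSendBuffer(t) + 32 * KB;
  return total;
}
```

For example, a single TCP transporter with SendBufferMemory = 2 MB contributes
2 MB + 32k to the default total.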

2. ReservedSendBufferMemory (NDBD section). This optional parameter, if set,
gives an amount of memory (in bytes) that is reserved for connections between
data nodes; i.e. this memory will never be allocated to send buffers towards
management server or API nodes. This provides a way to protect the cluster
against badly behaved API nodes hogging all the send buffer memory and causing
communication failures internally in the kernel.
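A hypothetical config.ini fragment combining both parameters might look like
the following (the values are purely illustrative):

```ini
[NDBD DEFAULT]
# Total shared send buffer pool per data node.
TotalSendBufferMemory=16M
# Portion of the pool reserved for data-node-to-data-node connections only.
ReservedSendBufferMemory=4M
```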

3. OverloadLimit (TCP section). This parameter denotes the amount of unsent data
that must be present in the send buffer before the connection is considered
overloaded. When overload occurs, transactions that affect the overloaded
connection start failing with the error ZTRANSPORTER_OVERLOADED_ERROR (1218)
until the overload condition clears.
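The overload check amounts to comparing the unsent bytes on a connection
against the configured limit. The sketch below uses assumed names and an
assumed structure, not the actual transporter code; only the error code 1218 is
taken from the text above:

```cpp
#include <cstddef>

// Error returned to transactions touching an overloaded connection.
constexpr int ZTRANSPORTER_OVERLOADED_ERROR = 1218;

struct SendBufferState {
  std::size_t bytesUnsent;   // data queued but not yet written to the socket
  std::size_t overloadLimit; // OverloadLimit from the TCP section
};

// Returns 1218 while the connection is overloaded, 0 otherwise.
int checkSend(const SendBufferState& s) {
  if (s.bytesUnsent >= s.overloadLimit)
    return ZTRANSPORTER_OVERLOADED_ERROR; // fail until overload clears
  return 0; // ok to send
}
```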

Additionally, the TCP configuration parameter "SendBufferMemory" changes its
meaning. Previously, it was the amount of memory allocated at startup for each
configured TCP connection. With this worklog implemented, no memory is
dedicated to individual transporters. Instead, the value denotes the hard
limit on how much of the total available memory (TotalSendBufferMemory) a
single transporter may use.

Thus the sum of SendBufferMemory for all configured transporters may be
greater than TotalSendBufferMemory. This is a way to save memory when many
nodes are configured, as long as all transporters never need their maximum
amount of send buffer at the same time.
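For illustration (values hypothetical), the per-transporter limits can be
oversubscribed relative to the shared pool:

```ini
[NDBD DEFAULT]
# Shared pool smaller than the sum of per-transporter limits (4 x 2M = 8M).
TotalSendBufferMemory=4M

[TCP DEFAULT]
# Hard limit per transporter, drawn from the shared pool.
SendBufferMemory=2M
```

With four TCP connections configured this way, each may use up to 2M, but the
pool only covers 4M in total, which is sufficient as long as all four are
never at their maximum simultaneously.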