WL#4081: NDB: compressed backup and LCP

Status: Complete   —   Priority: Medium

Use a compression library (an adapted azio, which is itself an adapted zlib gzio) to compress and decompress NDB backups and local checkpoints (LCPs) on the fly.

Two new configuration parameters are introduced for NDBD nodes: CompressedBackup
and CompressedLCP.

(they can be specified per node, or for all nodes in the [ndbd default] section)
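For illustration, a minimal config.ini fragment enabling both features for all
data nodes (a sketch; your cluster's existing [ndbd default] section will
contain other parameters as well):

```ini
[ndbd default]
# Compress backup files as they are written
CompressedBackup=1
# Compress local checkpoint files as they are written
CompressedLCP=1
```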

Note that a compressed backup/LCP can only be restored by an NDB version that
understands compressed backups/LCPs. If you try to restore a backup or LCP
using old tools, you will get an error indicating some form of corruption.

A new ndbd/ndb_restore can restore either compressed or uncompressed backups
and LCPs.

Each compressed file being written is compressed in another thread. If
compressed Backup and compressed LCP are both ongoing, it is possible for them
to use a CPU core each - in addition to the CPU core used by the main DB thread
and whatever is used by additional helper threads.
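The one-compression-thread-per-file arrangement can be sketched as follows.
This is not NDB's actual C++ implementation; it is a minimal Python analogue in
which the "main thread" only enqueues uncompressed chunks, while a helper
thread does the CPU-heavy compression and writing:

```python
import io
import queue
import threading
import zlib

def start_compress_writer(out_file):
    """Spawn a helper thread that compresses chunks off the main thread
    and writes them to out_file (one compression thread per file)."""
    q = queue.Queue(maxsize=16)

    def worker():
        comp = zlib.compressobj(level=1)  # level 1 = Z_BEST_SPEED
        while True:
            chunk = q.get()
            if chunk is None:             # sentinel: flush and stop
                out_file.write(comp.flush())
                break
            out_file.write(comp.compress(chunk))

    t = threading.Thread(target=worker)
    t.start()
    return q, t

# The producer (standing in for the main DB thread) never compresses:
buf = io.BytesIO()
q, t = start_compress_writer(buf)
for _ in range(4):
    q.put(b"tuple data " * 1000)
q.put(None)
t.join()
```

With two such files in flight (one for a backup, one for an LCP), each helper
thread can occupy its own CPU core, which matches the behaviour described
above.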

It is possible that the speed at which backups and LCPs can be written (in
MB/sec) is limited by CPU speed.

The compression is equivalent to "gzip --fast".
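"gzip --fast" corresponds to zlib compression level 1 (Z_BEST_SPEED). A small
sketch of the trade-off, using Python's zlib on artificial repetitive data:

```python
import zlib

data = b"NDB backup fragment data " * 4000   # ~100 KB, highly repetitive

fast = zlib.compress(data, 1)   # level 1: Z_BEST_SPEED, like "gzip --fast"
best = zlib.compress(data, 9)   # level 9: best ratio, notably slower

# Both round-trip losslessly; fast mode trades some ratio for speed.
assert zlib.decompress(fast) == data
print(f"fast: {len(fast)} bytes, best: {len(best)} bytes, "
      f"input: {len(data)} bytes")
```

On real, less repetitive data the gap between the two levels is usually
larger, but level 1 keeps the per-byte CPU cost low, which matters when
compression runs alongside the main DB thread.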

All speed throttling is still expressed in units of *uncompressed* data. It is
kept this way because backup and LCP do have an impact on CPU usage in the main
DB thread, and that cost scales with the uncompressed data volume.
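A consequence of throttling in uncompressed units is that the actual disk
write rate is the configured limit scaled by the compression ratio. A quick
back-of-the-envelope sketch (the numbers and variable names here are purely
illustrative, not NDB parameter names):

```python
# Throttle is measured against uncompressed bytes, so with a 50% space
# saving only half as many bytes actually reach the disk per second.
throttle_mb_per_sec = 10      # hypothetical configured limit (uncompressed)
compression_ratio = 0.5       # e.g. the ~50% saving seen with hugoLoad data

disk_mb_per_sec = throttle_mb_per_sec * compression_ratio
print(disk_mb_per_sec)
```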

The primary goal of compressed backup/lcp is to reduce disk space used.

Tests with hugoLoad and its "random" data show at least a 50% space saving.
With some real data, much higher compression ratios can be achieved.

Both CompressedLCP and CompressedBackup work with ODirect.

The AsyncFile implementation is now split into a class hierarchy:

    class AsyncFile;

    class PosixAsyncFile : public AsyncFile;

    class Win32AsyncFile : public AsyncFile;

This is designed so that, in future, NDBFS can choose which type of AsyncFile
to use per file. It also removes that dirty Win32 code from the clean world of
POSIX IO (except for all the dirty bits needed to support !linux).