MySQL Shell API 8.0.40
Unified development interface for MySQL Products
Global object that groups miscellaneous tools like upgrade checker and JSON import.
Functions
Undefined checkForServerUpgrade (ConnectionData connectionData, Dictionary options)
    Performs a series of tests on the specified MySQL server to check whether the upgrade process will succeed.
Undefined importJson (String file, Dictionary options)
    Imports JSON documents from a file to a collection or table in MySQL Server using an X Protocol session.
Undefined importTable (List files, Dictionary options)
    Imports a table dump stored in files to a target table using LOAD DATA LOCAL INFILE calls in parallel connections.
Undefined loadDump (String url, Dictionary options)
    Loads database dumps created by MySQL Shell.
Undefined exportTable (String table, String outputUrl, Dictionary options)
    Exports the specified table to the data dump file.
Undefined dumpTables (String schema, List tables, String outputUrl, Dictionary options)
    Dumps the specified tables or views from the given schema to the files in the target directory.
Undefined dumpSchemas (List schemas, String outputUrl, Dictionary options)
    Dumps the specified schemas to the files in the output directory.
Undefined dumpInstance (String outputUrl, Dictionary options)
    Dumps the whole database to files in the output directory.
Global object that groups miscellaneous tools like upgrade checker and JSON import.
Undefined checkForServerUpgrade(ConnectionData connectionData, Dictionary options)
Performs a series of tests on the specified MySQL server to check whether the upgrade process will succeed.
connectionData: Optional connection data for the server to be checked.
options: Optional dictionary of options to modify the tool behaviour.
If no connectionData is specified, the tool will try to establish a connection using the data from the current session.
The tool behaviour can be modified with the following options:
The connection data may be specified in the following formats:
A basic URI string has the following format:
[scheme://][user[:password]@]<host[:port]|socket>[/schema][?option=value&option=value...]
Connection Options
The following options are valid for use either in a URI or in a dictionary:
When these options are defined in a URI, their values must be URL encoded.
The following options are also valid when a dictionary is used:
Base Connection Options
SSH Tunnel Connection Options
The connection options are case insensitive and can only be defined once.
If an option is defined more than once, an error will be generated.
A detailed description of the connection data format is available at Connection Data.
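For illustration, a JavaScript-mode call might look like the following sketch (the account, host, and the password and outputFormat option values are placeholders used only for this example):

  // Check a specific server; the options dictionary is optional.
  util.checkForServerUpgrade("root@localhost:3306", {"password": "secret", "outputFormat": "JSON"});
  // With no arguments, the server of the current global session is checked.
  util.checkForServerUpgrade();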
Undefined importJson(String file, Dictionary options)
Imports JSON documents from a file to a collection or table in MySQL Server using an X Protocol session.
file: Path to the JSON documents file.
options: Optional dictionary with import options.
This function reads standard JSON documents from a file; however, it also supports converting BSON Data Types represented using MongoDB Extended JSON (strict mode) into MySQL values.
The options dictionary supports the following options:
The following options are valid only when convertBsonTypes is enabled. They are all boolean flags; ignoreRegexOptions is enabled by default, while the rest are disabled by default.
If the schema is not provided, an active schema on the global session, if set, will be used.
The collection and the table options cannot be combined. If neither is provided, the basename of the file without the extension will be used as the target collection name.
If the target collection or table does not exist, it is created; otherwise the data is inserted into the existing collection or table.
The tableColumn option implies the use of the table option and cannot be combined with the collection option.
BSON Data Type Processing
If only convertBsonOid is enabled, no conversion will be done on the rest of the BSON Data Types.
To use extractOidTime, it should be set to a name which will be used to insert an additional field into the main document. The value of the new field will be the timestamp obtained from the ObjectID value. Note that this will be done only for an ObjectID value associated to the '_id' field of the main document.
NumberLong and NumberInt values will be converted to integer values.
NumberDecimal values are imported as strings, unless decimalAsDouble is enabled.
Regex values will be converted to strings containing the regular expression. The regular expression options are ignored unless ignoreRegexOptions is disabled. When ignoreRegexOptions is disabled the regular expression will be converted to the form: /<regex>/<options>.
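As a sketch (the file path, schema and collection names are placeholders; the options used are those described above):

  util.importJson("/tmp/products.json", {"schema": "mydb", "collection": "products", "convertBsonTypes": true});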
Throws ArgumentError when:
Throws LogicError when:
Throws RuntimeError when:
Throws InvalidJson when:
Undefined importTable(List files, Dictionary options)
Imports a table dump stored in files to a target table using LOAD DATA LOCAL INFILE calls in parallel connections.
files: Path or list of paths to files with user data. The path name can contain a glob pattern with the wildcards '*' and/or '?'. All selected files must be chunks of the same target table.
options: Optional dictionary with import options.
The scheme part of a filename contains information about the transport backend. Supported transport backends are: file://, http://, https://. If the scheme part of a filename is omitted, then the file:// transport backend will be chosen.
Supported filename formats:
If the osBucketName option is given, the path argument must specify a plain path in that OCI (Oracle Cloud Infrastructure) Object Storage bucket.
The OCI configuration profile is located through the oci.profile and oci.configFile global shell options and can be overridden with ociProfile and ociConfigFile, respectively.
If the s3BucketName option is given, the path argument must specify a plain path in that AWS S3 bucket.
If the azureContainerName option is given, the path argument must specify a plain path in that Azure container.
Options dictionary:
OCI Object Storage Options
AWS S3 Object Storage Options
If the s3BucketName option is used, the dump is stored in the specified AWS S3 bucket. Connection is established using default local AWS configuration paths and profiles, unless overridden. The directory structure is simulated within the object name.
The s3CredentialsFile, s3ConfigFile, s3Profile, s3Region and s3EndpointOverride options cannot be used if the s3BucketName option is not set or set to an empty string.
Handling of the AWS settings
The AWS options are evaluated in the order of precedence; the first available value is used.
The AWS credentials are fetched from the following providers, in the order of precedence:
The items specified above correspond to the following credentials:
The process/command line specified by the credential_process setting must write a JSON object to the standard output in the following form:
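The expected shape, following the standard AWS credential_process contract (all values below are placeholders), is:

  {
    "Version": 1,
    "AccessKeyId": "...",
    "SecretAccessKey": "...",
    "SessionToken": "...",
    "Expiration": "..."
  }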
The Expiration key, if given, specifies when the credentials are going to expire; they will be automatically refreshed before this happens.
The following credential handling rules apply:
Azure Blob Storage Options
If the azureContainerName option is used, the dump is stored in the specified Azure container. Connection is established using the configuration in the local Azure configuration file. The directory structure is simulated within the blob name.
The azureConfigFile option cannot be used if the azureContainerName option is not set or set to an empty string.
Handling of the Azure settings
If a connection string is defined, either in the environment variable or in the configuration option, the individual configuration values for account and key will be ignored.
If a SAS Token is defined, it will be used for the authorization (ignoring any defined account key).
The default Azure Blob Endpoint to be used in the operations is https://<account>.blob.core.windows.net, unless a different endpoint is defined in the connection string.
The dialect option predefines the set of options fieldsTerminatedBy (FT), fieldsEnclosedBy (FE), fieldsOptionallyEnclosed (FOE), fieldsEscapedBy (FESC) and linesTerminatedBy (LT) in the following manner:
If the schema is not provided, an active schema on the global session, if set, will be used.
If the input values are not necessarily enclosed within fieldsEnclosedBy, set fieldsOptionallyEnclosed to true.
If you specify one separator that is the same as or a prefix of another, LOAD DATA INFILE cannot interpret the input properly.
Connection options set in the global session, such as compression, ssl-mode, etc. are used in parallel connections.
Each parallel connection sets the following session variables:
Note: because of storage engine limitations, table locks held by MyISAM will cause imports of such tables to be sequential, regardless of the number of threads used.
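As a general illustration (the schema, table and file names are placeholders; the dialect and threads options are among those described above):

  util.importTable(["/tmp/employees_*.csv"], {"schema": "hr", "table": "employees", "dialect": "csv", "threads": 8});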
Example input data for dialects:
Examples of decodeColumns usage:
Undefined loadDump(String url, Dictionary options)
Loads database dumps created by MySQL Shell.
url: Defines the location of the dump to be loaded.
options: Optional dictionary with load options.
Depending on how the dump was created, the url identifies the location and, in some cases, the access method to the dump (e.g. for dumps to be loaded using pre-authenticated requests (PAR)). Allowed values:
loadDump() will load a dump from the specified path. It transparently handles compressed files and directly streams data when loading from remote storage (currently HTTP, OCI Object Storage, AWS S3 Object Storage and Azure Containers). If the 'waitDumpTimeout' option is set, it will load a dump on-the-fly, loading table data chunks as the dumper produces them.
Table data will be loaded in parallel using the configured number of threads (4 by default). Multiple threads per table can be used if the dump was created with table chunking enabled. Data loads are scheduled across threads in a way that tries to maximize parallelism, while also minimizing lock contention from concurrent loads to the same table. If there are more tables than threads, different tables will be loaded per thread, larger tables first. If there are more threads than tables, then chunks from larger tables will be proportionally assigned more threads.
LOAD DATA LOCAL INFILE is used to load table data and thus, the 'local_infile' MySQL global setting must be enabled.
Resuming
The load command will store progress information into a file for each step of the loading process, including successfully completed and interrupted/failed ones. If that file already exists, its contents will be used to skip steps that have already been completed and retry those that failed or didn't start yet.
When resuming, table chunks that have started being loaded but didn't finish are loaded again. Duplicate rows are discarded by the server. Tables that do not have unique keys are truncated before the load is resumed.
IMPORTANT: Resuming assumes that no changes have been made to the partially loaded data between the failure and the retry. Resuming after external changes has undefined behavior and may lead to data loss.
The progress state file has a default name of load-progress.<server_uuid>.json and is written to the same location as the dump. If 'progressFile' is specified, progress will be written to either a local file at the given path, or, if the HTTP(S) scheme is used, to a remote file using HTTP PUT requests. Setting it to '' will disable progress tracking and resuming.
If the 'resetProgress' option is enabled, progress information from previous load attempts of the dump to the destination server is discarded and the load is restarted. You may use this option to retry loading the whole dump from the beginning. However, changes made to the database are not reverted, so previously loaded objects should be manually dropped first.
Options dictionary:
Connection options set in the global session, such as compression, ssl-mode, etc. are inherited by load sessions.
Examples:
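A minimal sketch loading a local dump directory with 8 threads (paths and option values are placeholders):

  util.loadDump("/backups/full_dump", {"threads": 8, "progressFile": "/backups/full_dump/load-progress.json"});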
Loading a dump using Pre-authenticated Requests (PAR)
When a dump is created in OCI Object Storage, it is possible to load it using a single pre-authenticated request which gives access to the location of the dump. The requirements for this PAR include:
Given a dump located at a bucket root and a PAR created for the bucket, the dump can be loaded by providing the PAR as the url parameter:
Example:
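A sketch, assuming a bucket PAR of the usual OCI Object Storage form (region, secret, namespace and bucket are placeholders):

  util.loadDump("https://objectstorage.<region>.oraclecloud.com/p/<secret>/n/<namespace>/b/<bucket>/o/", {"progressFile": "progress.json"});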
Given a dump located at some folder within a bucket and a PAR created for the given folder, the dump can be loaded by providing the PAR and the prefix as the url parameter:
Example:
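A sketch, assuming the dump lives under the prefix mydump/ inside the bucket (all URL components are placeholders):

  util.loadDump("https://objectstorage.<region>.oraclecloud.com/p/<secret>/n/<namespace>/b/<bucket>/o/mydump/", {"progressFile": "progress.json"});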
In both of the above cases the load is done using pure HTTP GET requests and the progressFile option is mandatory.
A legacy method to create a dump loadable through PAR is still supported; this is done by using the ociParManifest option when creating the dump. When this is enabled, a manifest file "@.manifest.json" will be generated, to be used as the entry point to load the dump using a PAR to this file.
When using a Manifest PAR to load a dump, the progressFile option is mandatory.
To store the progress at the dump location, create an ObjectReadWrite PAR for the desired progress file (it does not need to exist); it should be located in the same location as the "@.manifest.json" file. Finally, specify the PAR URL in the progressFile option.
Example:
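A sketch, assuming a read PAR to the manifest and an ObjectReadWrite PAR to a progress file in the same location (all URL components are placeholders):

  util.loadDump("https://objectstorage.<region>.oraclecloud.com/p/<secret1>/n/<namespace>/b/<bucket>/o/mydump/@.manifest.json", {"progressFile": "https://objectstorage.<region>.oraclecloud.com/p/<secret2>/n/<namespace>/b/<bucket>/o/mydump/progress.json"});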
Undefined exportTable(String table, String outputUrl, Dictionary options)
Exports the specified table to the data dump file.
table: Name of the table to be exported.
outputUrl: Target file to store the data.
options: Optional dictionary with the export options.
The value of the table parameter should be in the form table or schema.table, quoted using backtick characters when required. If the schema is omitted, the active schema of the global Shell session is used. If there is none, an exception is raised.
The outputUrl specifies where the dump is going to be stored.
By default, a local file is used, and in this case outputUrl can be prefixed with file:// scheme. If a relative path is given, the absolute path is computed as relative to the current working directory. The parent directory of the output file must exist. If the output file exists, it is going to be overwritten. The output file is created with the following access rights (on operating systems which support them): rw-r-----.
The following options are supported:
Requirements
Details
This operation writes the table data dump to the file specified by the user.
Requires an open, global Shell session, and uses its connection options, such as compression, ssl-mode, etc., to establish additional connections.
The dialect option predefines the set of options fieldsTerminatedBy (FT), fieldsEnclosedBy (FE), fieldsOptionallyEnclosed (FOE), fieldsEscapedBy (FESC) and linesTerminatedBy (LT) in the following manner:
The maxRate option supports unit suffixes:
For example, maxRate="2k" limits throughput to 2000 bytes per second.
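As a sketch (the table name, output path and option values are placeholders; dialect and maxRate are described above):

  util.exportTable("sakila.actor", "/tmp/actor.csv", {"dialect": "csv", "maxRate": "2k"});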
Dumping to a Bucket in the OCI Object Storage
If the osBucketName option is used, the dump is stored in the specified OCI bucket; the connection is established using the local OCI profile. The directory structure is simulated within the object name.
The osNamespace, ociConfigFile and ociProfile options cannot be used if the osBucketName option is set to an empty string.
The osNamespace option overrides the OCI namespace obtained based on the tenancy ID from the local OCI profile.
Dumping to a Bucket in the AWS S3 Object Storage
If the s3BucketName option is used, the dump is stored in the specified AWS S3 bucket. Connection is established using default local AWS configuration paths and profiles, unless overridden. The directory structure is simulated within the object name.
The s3CredentialsFile, s3ConfigFile, s3Profile, s3Region and s3EndpointOverride options cannot be used if the s3BucketName option is not set or set to an empty string.
Handling of the AWS settings
The AWS options are evaluated in the order of precedence; the first available value is used.
The AWS credentials are fetched from the following providers, in the order of precedence:
The items specified above correspond to the following credentials:
The process/command line specified by the credential_process setting must write a JSON object to the standard output in the following form:
The Expiration key, if given, specifies when the credentials are going to expire; they will be automatically refreshed before this happens.
The following credential handling rules apply:
Dumping to a Container in the Azure Blob Storage
If the azureContainerName option is used, the dump is stored in the specified Azure container. Connection is established using the configuration in the local Azure configuration file. The directory structure is simulated within the blob name.
The azureConfigFile option cannot be used if the azureContainerName option is not set or set to an empty string.
Handling of the Azure settings
If a connection string is defined, either in the environment variable or in the configuration option, the individual configuration values for account and key will be ignored.
If a SAS Token is defined, it will be used for the authorization (ignoring any defined account key).
The default Azure Blob Endpoint to be used in the operations is https://<account>.blob.core.windows.net, unless a different endpoint is defined in the connection string.
Throws ArgumentError in the following scenarios:
Throws RuntimeError in the following scenarios:
Undefined dumpTables(String schema, List tables, String outputUrl, Dictionary options)
Dumps the specified tables or views from the given schema to the files in the target directory.
schema: Name of the schema that contains the tables/views to be dumped.
tables: List of tables/views to be dumped.
outputUrl: Target directory to store the dump files.
options: Optional dictionary with the dump options.
The tables parameter cannot be an empty list.
The outputUrl specifies where the dump is going to be stored.
By default, a local directory is used, and in this case outputUrl can be prefixed with file:// scheme. If a relative path is given, the absolute path is computed as relative to the current working directory. If the output directory does not exist but its parent does, it is created. If the output directory exists, it must be empty. All directories are created with the following access rights (on operating systems which support them): rwxr-x---. All files are created with the following access rights (on operating systems which support them): rw-r-----.
The following options are supported:
Requirements
Details
This operation writes SQL files for each table and view dumped, along with some global SQL files. The information about the source schema is also saved, meaning that when using the util.loadDump() function to load the dump, it is automatically recreated. Alternatively, the dump can be loaded into another existing schema using the schema option.
Table data dumps are written to text files using the specified file format, optionally splitting them into multiple chunk files.
Requires an open, global Shell session, and uses its connection options, such as compression, ssl-mode, etc., to establish additional connections.
If the all option is set to true and the tables parameter is set to an empty array, all views and tables from the specified schema are going to be dumped. If the all option is set to true but the tables parameter is not an empty array, an exception is thrown.
The names given in the exclude{object}, include{object}, where or partitions options should be valid MySQL identifiers, quoted using backtick characters when required.
If the exclude{object}, include{object}, where or partitions options contain an object which does not exist, or an object which belongs to a schema which does not exist, it is ignored.
The tzUtc option allows dumping TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones.
If the consistent option is set to true, a global read lock is set using the FLUSH TABLES WITH READ LOCK statement, then all threads establish connections with the server and start transactions using:
Once all the threads start transactions, the instance is locked for backup and the global read lock is released.
If the account used for the dump does not have enough privileges to execute FLUSH TABLES, LOCK TABLES will be used as a fallback instead. All tables being dumped, in addition to DDL and GRANT related tables in the mysql schema, will be temporarily locked.
The ddlOnly and dataOnly options cannot both be set to true at the same time.
The chunking option causes the data from each table to be split and written to multiple chunk files. If this option is set to false, table data is written to a single file.
If the chunking option is set to true, but a table to be dumped cannot be chunked (for example if it does not contain a primary key or a unique index), data is dumped to multiple files using a single thread.
The value of the threads option must be a positive number.
The dialect option predefines the set of options fieldsTerminatedBy (FT), fieldsEnclosedBy (FE), fieldsOptionallyEnclosed (FOE), fieldsEscapedBy (FESC) and linesTerminatedBy (LT) in the following manner:
Both the bytesPerChunk and maxRate options support unit suffixes:
For example, maxRate="2k" limits throughput to 2000 bytes per second.
The value of the bytesPerChunk option cannot be smaller than "128k".
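As a sketch (the schema, table and path names are placeholders; the all option is described above):

  util.dumpTables("sakila", ["actor", "film"], "/backups/sakila_tables", {"threads": 8});
  // Dump every table and view in the schema.
  util.dumpTables("sakila", [], "/backups/sakila_all", {"all": true});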
MySQL HeatWave Service Compatibility
The MySQL HeatWave Service has a few security related restrictions that are not present in a regular, on-premise instance of MySQL. In order to make it easier to load existing databases into the Service, the dump commands in MySQL Shell have options to detect potential issues and, in some cases, to automatically adjust your schema definition to be compliant. For best results, always use the latest available version of MySQL Shell.
The ocimds option, when set to true, will perform schema checks for most of these issues and abort the dump if any are found. The loadDump() command will also only allow loading dumps that have been created with the "ocimds" option enabled.
Some issues found by the ocimds option may require you to manually make changes to your database schema before it can be loaded into the MySQL HeatWave Service. However, the compatibility option can be used to automatically modify the dumped schema SQL scripts, resolving some of these compatibility issues. You may pass one or more of the following values to the "compatibility" option.
create_invisible_pks - Each table which does not have a Primary Key will have one created when the dump is loaded. The following Primary Key is added to the table:
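A key equivalent to the following is expected (the exact column definition shown here is an assumption and may differ between Shell releases):

  `my_row_id` BIGINT UNSIGNED AUTO_INCREMENT INVISIBLE PRIMARY KEY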
Dumps created with this value can be used with Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, as long as the target instance has version 8.0.32 or newer. Mutually exclusive with the ignore_missing_pks value.
force_innodb - The MySQL HeatWave Service requires use of the InnoDB storage engine. This option will modify the ENGINE= clause of CREATE TABLE statements that use incompatible storage engines and replace them with InnoDB. It will also remove the ROW_FORMAT=FIXED option, as it is not supported by the InnoDB storage engine.
ignore_missing_pks - Ignore errors caused by tables which do not have Primary Keys. Dumps created with this value cannot be used in MySQL HeatWave Service DB System instance with High Availability. Mutually exclusive with the create_invisible_pks value.
ignore_wildcard_grants - Ignore errors from grants on schemas with wildcards, which are interpreted differently in systems where partial_revokes system variable is enabled.
skip_invalid_accounts - Skips accounts which do not have a password or use authentication methods (plugins) not supported by the MySQL HeatWave Service.
strip_definers - Strips the "DEFINER=account" clause from views, routines, events and triggers. The MySQL HeatWave Service requires special privileges to create these objects with a definer other than the user loading the schema. By stripping the DEFINER clause, these objects will be created with the default definer. Views and routines will additionally have their SQL SECURITY clause changed from DEFINER to INVOKER. If this characteristic is missing, the SQL SECURITY INVOKER clause will be added. This ensures that the access permissions of the account querying or calling these objects are applied, instead of those of the user that created them. This should be sufficient for most users, but if your database security model requires that views and routines have more privileges than their invoker, you will need to manually modify the schema before loading it.
Please refer to the MySQL manual for details about DEFINER and SQL SECURITY.
strip_invalid_grants - Strips grant statements which would fail when users are loaded, i.e. grants referring to a specific routine which does not exist.
strip_restricted_grants - Certain privileges are restricted in the MySQL HeatWave Service. Attempting to create users granting these privileges would fail, so this option allows dumped GRANT statements to be stripped of these privileges.
strip_tablespaces - Tablespaces have some restrictions in the MySQL HeatWave Service. If you'd like to have tables created in their default tablespaces, this option will strip the TABLESPACE= option from CREATE TABLE statements.
Additionally, the following changes will always be made to DDL scripts when the ocimds option is enabled:
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability where the instance has a version older than 8.0.32, all tables at the source server need to have Primary Keys. This needs to be fixed manually before running the dump. Starting with MySQL 8.0.23, invisible columns may be used to add Primary Keys without changing the schema compatibility; for more information see: https://dev.mysql.com/doc/refman/en/invisible-columns.html.
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, please see https://docs.oracle.com/en-us/iaas/mysql-database/doc/creating-replication-channel.html.
In order to use a MySQL HeatWave Service DB System instance with High Availability, all tables must have a Primary Key. This can be fixed automatically using the create_invisible_pks compatibility value.
Please refer to the MySQL HeatWave Service documentation for more information about restrictions and compatibility.
Dumping to a Bucket in the OCI Object Storage
If the osBucketName option is used, the dump is stored in the specified OCI bucket; the connection is established using the local OCI profile. The directory structure is simulated within the object name.
The osNamespace, ociConfigFile and ociProfile options cannot be used if the osBucketName option is set to an empty string.
The osNamespace option overrides the OCI namespace obtained based on the tenancy ID from the local OCI profile.
Enabling dump loading using pre-authenticated requests
The loadDump utility supports loading a dump using a pre-authenticated request (PAR). The simplest way to do this is by providing a PAR to the location of the dump in a bucket; the PAR must be created with the following permissions:
The generated URL can be used to load the dump, see \? loadDump for more details.
Another way to enable loading a dump without requiring an OCI profile is to execute the dump operation with the ociParManifest option enabled, which causes the dump operation to automatically generate a PAR for every file in the dump and store them as part of the dump in a file named "@.manifest.json". The manifest is updated as the dump operation progresses.
Using a PAR with permissions to read the manifest is another option to load the dump using PAR.
The ociParManifest option cannot be used if osBucketName is not set.
When creating PARs, an expiration time is required; it can be defined through the ociParExpireTime option. If the option is not used, a predefined expiration time equivalent to a week after the dump operation started is used. The values assigned to this option should conform to RFC 3339.
The ociParExpireTime option cannot be used if the ociParManifest option is not enabled.
Dumping to a Bucket in the AWS S3 Object Storage
If the s3BucketName option is used, the dump is stored in the specified AWS S3 bucket. Connection is established using default local AWS configuration paths and profiles, unless overridden. The directory structure is simulated within the object name.
The s3CredentialsFile, s3ConfigFile, s3Profile, s3Region and s3EndpointOverride options cannot be used if the s3BucketName option is not set or set to an empty string.
Handling of the AWS settings
The AWS options are evaluated in the order of precedence; the first available value is used.
The AWS credentials are fetched from the following providers, in the order of precedence:
The items specified above correspond to the following credentials:
The process/command line specified by the credential_process setting must write a JSON object to the standard output in the following form:
The Expiration key, if given, specifies when the credentials are going to expire; they will be automatically refreshed before this happens.
The following credential handling rules apply:
Dumping to a Container in the Azure Blob Storage
If the azureContainerName option is used, the dump is stored in the specified Azure container. Connection is established using the configuration in the local Azure configuration file. The directory structure is simulated within the blob name.
The azureConfigFile option cannot be used if the azureContainerName option is not set or set to an empty string.
Handling of the Azure settings
If a connection string is defined, either in the environment variable or in the configuration option, the individual configuration values for account and key will be ignored.
If a SAS Token is defined, it will be used for the authorization (ignoring any defined account key).
The default Azure Blob Endpoint to be used in the operations is https://<account>.blob.core.windows.net, unless a different endpoint is defined in the connection string.
Throws ArgumentError in the following scenarios:
Throws RuntimeError in the following scenarios:
Undefined dumpSchemas(List schemas, String outputUrl, Dictionary options)
Dumps the specified schemas to the files in the output directory.
schemas: List of schemas to be dumped.
outputUrl: Target directory to store the dump files.
options: Optional dictionary with the dump options.
The schemas parameter cannot be an empty list.
The outputUrl specifies where the dump is going to be stored.
By default, a local directory is used, and in this case outputUrl can be prefixed with file:// scheme. If a relative path is given, the absolute path is computed as relative to the current working directory. If the output directory does not exist but its parent does, it is created. If the output directory exists, it must be empty. All directories are created with the following access rights (on operating systems which support them): rwxr-x---. All files are created with the following access rights (on operating systems which support them): rw-r-----.
The following options are supported:
Requirements
Details
This operation writes SQL files for each schema, table and view dumped, along with some global SQL files.
Table data dumps are written to text files using the specified file format, optionally splitting them into multiple chunk files.
Requires an open, global Shell session, and uses its connection options, such as compression, ssl-mode, etc., to establish additional connections.
Data dumps cannot be created for the following tables:
The names given in the exclude{object}, include{object}, where or partitions options should be valid MySQL identifiers, quoted using backtick characters when required.
If the exclude{object}, include{object}, where or partitions options contain an object which does not exist, or an object which belongs to a schema which does not exist, it is ignored.
The tzUtc option allows dumping TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones.
If the consistent option is set to true, a global read lock is set using the FLUSH TABLES WITH READ LOCK statement, then all threads establish connections with the server and start transactions using:
Once all the threads start transactions, the instance is locked for backup and the global read lock is released.
If the account used for the dump does not have enough privileges to execute FLUSH TABLES, LOCK TABLES will be used as a fallback instead. All tables being dumped, in addition to DDL and GRANT related tables in the mysql schema, will be temporarily locked.
The ddlOnly and dataOnly options cannot both be set to true at the same time.
The chunking option causes the data from each table to be split and written to multiple chunk files. If this option is set to false, table data is written to a single file.
If the chunking option is set to true, but a table to be dumped cannot be chunked (for example if it does not contain a primary key or a unique index), data is dumped to multiple files using a single thread.
The value of the threads option must be a positive number.
The dialect option predefines the set of options fieldsTerminatedBy (FT), fieldsEnclosedBy (FE), fieldsOptionallyEnclosed (FOE), fieldsEscapedBy (FESC) and linesTerminatedBy (LT) in the following manner:
Both the bytesPerChunk and maxRate options support unit suffixes:
For example, maxRate="2k" limits throughput to 2000 bytes per second.
The value of the bytesPerChunk option cannot be smaller than "128k".
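As a sketch (the schema names, path and option values are placeholders; the ocimds and compatibility options are described below):

  util.dumpSchemas(["sakila", "world"], "/backups/schemas", {"threads": 8, "ocimds": true, "compatibility": ["strip_definers", "strip_restricted_grants"]});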
MySQL HeatWave Service Compatibility
The MySQL HeatWave Service has a few security related restrictions that are not present in a regular, on-premise instance of MySQL. In order to make it easier to load existing databases into the Service, the dump commands in MySQL Shell have options to detect potential issues and, in some cases, to automatically adjust your schema definition to be compliant. For best results, always use the latest available version of MySQL Shell.
The ocimds option, when set to true, will perform schema checks for most of these issues and abort the dump if any are found. The loadDump() command will also only allow loading dumps that have been created with the "ocimds" option enabled.
Some issues found by the ocimds option may require you to manually make changes to your database schema before it can be loaded into the MySQL HeatWave Service. However, the compatibility option can be used to automatically modify the dumped schema SQL scripts, resolving some of these compatibility issues. You may pass one or more of the following values to the "compatibility" option.
create_invisible_pks - Each table which does not have a Primary Key will have one created when the dump is loaded. The following Primary Key is added to the table:
Dumps created with this value can be used with Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, as long as the target instance has version 8.0.32 or newer. Mutually exclusive with the ignore_missing_pks value.
force_innodb - The MySQL HeatWave Service requires use of the InnoDB storage engine. This option will modify the ENGINE= clause of CREATE TABLE statements that use incompatible storage engines and replace them with InnoDB. It will also remove the ROW_FORMAT=FIXED option, as it is not supported by the InnoDB storage engine.
ignore_missing_pks - Ignore errors caused by tables which do not have Primary Keys. Dumps created with this value cannot be used in MySQL HeatWave Service DB System instance with High Availability. Mutually exclusive with the create_invisible_pks value.
ignore_wildcard_grants - Ignore errors from grants on schemas with wildcards, which are interpreted differently in systems where partial_revokes system variable is enabled.
skip_invalid_accounts - Skips accounts which do not have a password or use authentication methods (plugins) not supported by the MySQL HeatWave Service.
strip_definers - Strips the "DEFINER=account" clause from views, routines, events and triggers. The MySQL HeatWave Service requires special privileges to create these objects with a definer other than the user loading the schema. By stripping the DEFINER clause, these objects will be created with the default definer. Views and routines will additionally have their SQL SECURITY clause changed from DEFINER to INVOKER. If this characteristic is missing, the SQL SECURITY INVOKER clause will be added. This ensures that the access permissions of the account querying or calling these objects are applied, instead of those of the user that created them. This should be sufficient for most users, but if your database security model requires that views and routines have more privileges than their invoker, you will need to manually modify the schema before loading it.
Please refer to the MySQL manual for details about DEFINER and SQL SECURITY.
strip_invalid_grants - Strips grant statements which would fail when users are loaded, i.e. grants referring to a specific routine which does not exist.
strip_restricted_grants - Certain privileges are restricted in the MySQL HeatWave Service. Attempting to create users granting these privileges would fail, so this option allows dumped GRANT statements to be stripped of these privileges.
strip_tablespaces - Tablespaces have some restrictions in the MySQL HeatWave Service. If you'd like to have tables created in their default tablespaces, this option will strip the TABLESPACE= option from CREATE TABLE statements.
Additionally, the following changes will always be made to DDL scripts when the ocimds option is enabled:
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability where the instance has a version older than 8.0.32, all tables at the source server need to have Primary Keys. This needs to be fixed manually before running the dump. Starting with MySQL 8.0.23, invisible columns may be used to add Primary Keys without changing the schema compatibility; for more information see: https://dev.mysql.com/doc/refman/en/invisible-columns.html.
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, please see https://docs.oracle.com/en-us/iaas/mysql-database/doc/creating-replication-channel.html.
In order to use a MySQL HeatWave Service DB System instance with High Availability, all tables must have a Primary Key. This can be fixed automatically using the create_invisible_pks compatibility value.
Please refer to the MySQL HeatWave Service documentation for more information about restrictions and compatibility.
Dumping to a Bucket in the OCI Object Storage
If the osBucketName option is used, the dump is stored in the specified OCI bucket; the connection is established using the local OCI profile. The directory structure is simulated within the object name.
The osNamespace, ociConfigFile and ociProfile options cannot be used if the osBucketName option is set to an empty string.
The osNamespace option overrides the OCI namespace obtained based on the tenancy ID from the local OCI profile.
Enabling dump loading using pre-authenticated requests
The loadDump utility supports loading a dump using a pre-authenticated request (PAR). The simplest way to do this is by providing a PAR to the location of the dump in a bucket; the PAR must be created with the following permissions:
The generated URL can be used to load the dump, see \? loadDump for more details.
Another way to enable loading a dump without requiring an OCI profile is to execute the dump operation with the ociParManifest option enabled, which causes the dump operation to automatically generate a PAR for every file in the dump and store them as part of the dump in a file named "@.manifest.json". The manifest is updated as the dump operation progresses.
Using a PAR with permissions to read the manifest is another option to load the dump using PAR.
The ociParManifest option cannot be used if osBucketName is not set.
When creating PARs, an expiration time is required; it can be defined through the ociParExpireTime option. If the option is not used, a predefined expiration time equivalent to a week after the dump operation started is used. The values assigned to this option should conform to RFC 3339.
The ociParExpireTime option cannot be used if the ociParManifest option is not enabled.
Dumping to a Bucket in the AWS S3 Object Storage
If the s3BucketName option is used, the dump is stored in the specified AWS S3 bucket. Connection is established using default local AWS configuration paths and profiles, unless overridden. The directory structure is simulated within the object name.
The s3CredentialsFile, s3ConfigFile, s3Profile, s3Region and s3EndpointOverride options cannot be used if the s3BucketName option is not set or set to an empty string.
Handling of the AWS settings
The AWS options are evaluated in the order of precedence; the first available value is used.
The AWS credentials are fetched from the following providers, in the order of precedence:
The items specified above correspond to the following credentials:
The process/command line specified by the credential_process setting must write a JSON object to the standard output in the following form:
The Expiration key, if given, specifies when the credentials are going to expire; they will be automatically refreshed before this happens.
The following credential handling rules apply:
Dumping to a Container in the Azure Blob Storage
If the azureContainerName option is used, the dump is stored in the specified Azure container. Connection is established using the configuration in the local Azure configuration file. The directory structure is simulated within the blob name.
The azureConfigFile option cannot be used if the azureContainerName option is not set or set to an empty string.
Handling of the Azure settings
If a connection string is defined, either in the environment variable or in the configuration option, the individual configuration values for account and key will be ignored.
If a SAS Token is defined, it will be used for the authorization (ignoring any defined account key).
The default Azure Blob Endpoint to be used in the operations is https://<account>.blob.core.windows.net, unless a different endpoint is defined in the connection string.
Throws ArgumentError in the following scenarios:
Throws RuntimeError in the following scenarios:
Undefined dumpInstance(String outputUrl, Dictionary options)
Dumps the whole database to files in the output directory.
outputUrl: Target directory to store the dump files.
options: Optional dictionary with the dump options.
The outputUrl specifies where the dump is going to be stored.
By default, a local directory is used, and in this case outputUrl can be prefixed with file:// scheme. If a relative path is given, the absolute path is computed as relative to the current working directory. If the output directory does not exist but its parent does, it is created. If the output directory exists, it must be empty. All directories are created with the following access rights (on operating systems which support them): rwxr-x---. All files are created with the following access rights (on operating systems which support them): rw-r-----.
The following options are supported:
Requirements
Details
This operation writes SQL files for each schema, table and view dumped, along with some global SQL files.
Table data dumps are written to text files using the specified file format, optionally splitting them into multiple chunk files.
Requires an open, global Shell session, and uses its connection options, such as compression, ssl-mode, etc., to establish additional connections.
Data dumps cannot be created for the following tables:
Dumps cannot be created for the following schemas:
If the excludeSchemas or includeSchemas options contain a schema which is not included in the dump or does not exist, it is ignored.
The names given in the exclude{object}, include{object}, where or partitions options should be valid MySQL identifiers, quoted using backtick characters when required.
If the exclude{object}, include{object}, where or partitions options contain an object which does not exist, or an object which belongs to a schema which does not exist, it is ignored.
The tzUtc option allows dumping TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones.
If the consistent option is set to true, a global read lock is set using the FLUSH TABLES WITH READ LOCK statement, then all threads establish connections with the server and start transactions using:
Once all the threads start transactions, the instance is locked for backup and the global read lock is released.
If the account used for the dump does not have enough privileges to execute FLUSH TABLES, LOCK TABLES will be used as a fallback instead. All tables being dumped, in addition to DDL and GRANT related tables in the mysql schema, will be temporarily locked.
The ddlOnly and dataOnly options cannot both be set to true at the same time.
The chunking option causes the data from each table to be split and written to multiple chunk files. If this option is set to false, table data is written to a single file.
If the chunking option is set to true, but a table to be dumped cannot be chunked (for example if it does not contain a primary key or a unique index), data is dumped to multiple files using a single thread.
The value of the threads option must be a positive number.
The dialect option predefines the set of options fieldsTerminatedBy (FT), fieldsEnclosedBy (FE), fieldsOptionallyEnclosed (FOE), fieldsEscapedBy (FESC) and linesTerminatedBy (LT) in the following manner:
Both the bytesPerChunk and maxRate options support unit suffixes:
For example, maxRate="2k" limits throughput to 2000 bytes per second.
The value of the bytesPerChunk option cannot be smaller than "128k".
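As a sketch (the path and option values are placeholders; the excludeSchemas option is described above):

  util.dumpInstance("/backups/full_instance", {"threads": 8, "excludeSchemas": ["test"]});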
MySQL HeatWave Service Compatibility
The MySQL HeatWave Service has a few security related restrictions that are not present in a regular, on-premise instance of MySQL. In order to make it easier to load existing databases into the Service, the dump commands in MySQL Shell have options to detect potential issues and, in some cases, to automatically adjust your schema definition to be compliant. For best results, always use the latest available version of MySQL Shell.
The ocimds option, when set to true, will perform schema checks for most of these issues and abort the dump if any are found. The loadDump() command will also only allow loading dumps that have been created with the "ocimds" option enabled.
Some issues found by the ocimds option may require you to manually make changes to your database schema before it can be loaded into the MySQL HeatWave Service. However, the compatibility option can be used to automatically modify the dumped schema SQL scripts, resolving some of these compatibility issues. You may pass one or more of the following values to the "compatibility" option.
create_invisible_pks - Each table which does not have a Primary Key will have one created when the dump is loaded. The following Primary Key is added to the table:
Dumps created with this value can be used with Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, as long as the target instance has version 8.0.32 or newer. Mutually exclusive with the ignore_missing_pks value.
force_innodb - The MySQL HeatWave Service requires use of the InnoDB storage engine. This option will modify the ENGINE= clause of CREATE TABLE statements that use incompatible storage engines and replace them with InnoDB. It will also remove the ROW_FORMAT=FIXED option, as it is not supported by the InnoDB storage engine.
ignore_missing_pks - Ignore errors caused by tables which do not have Primary Keys. Dumps created with this value cannot be used in MySQL HeatWave Service DB System instance with High Availability. Mutually exclusive with the create_invisible_pks value.
ignore_wildcard_grants - Ignore errors from grants on schemas with wildcards, which are interpreted differently in systems where partial_revokes system variable is enabled.
skip_invalid_accounts - Skips accounts which do not have a password or use authentication methods (plugins) not supported by the MySQL HeatWave Service.
strip_definers - Strips the "DEFINER=account" clause from views, routines, events and triggers. The MySQL HeatWave Service requires special privileges to create these objects with a definer other than the user loading the schema. By stripping the DEFINER clause, these objects will be created with the default definer. Views and routines will additionally have their SQL SECURITY clause changed from DEFINER to INVOKER. If this characteristic is missing, the SQL SECURITY INVOKER clause will be added. This ensures that the access permissions of the account querying or calling these objects are applied, instead of those of the user that created them. This should be sufficient for most users, but if your database security model requires that views and routines have more privileges than their invoker, you will need to manually modify the schema before loading it.
Please refer to the MySQL manual for details about DEFINER and SQL SECURITY.
strip_invalid_grants - Strips grant statements which would fail when users are loaded, i.e. grants referring to a specific routine which does not exist.
strip_restricted_grants - Certain privileges are restricted in the MySQL HeatWave Service. Attempting to create users granting these privileges would fail, so this option allows dumped GRANT statements to be stripped of these privileges.
strip_tablespaces - Tablespaces have some restrictions in the MySQL HeatWave Service. If you'd like to have tables created in their default tablespaces, this option will strip the TABLESPACE= option from CREATE TABLE statements.
Additionally, the following changes will always be made to DDL scripts when the ocimds option is enabled:
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability where the instance has a version older than 8.0.32, all tables at the source server need to have Primary Keys. This needs to be fixed manually before running the dump. Starting with MySQL 8.0.23, invisible columns may be used to add Primary Keys without changing the schema compatibility; for more information see: https://dev.mysql.com/doc/refman/en/invisible-columns.html.
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, please see https://docs.oracle.com/en-us/iaas/mysql-database/doc/creating-replication-channel.html.
In order to use a MySQL HeatWave Service DB System instance with High Availability, all tables must have a Primary Key. This can be fixed automatically using the create_invisible_pks compatibility value.
Please refer to the MySQL HeatWave Service documentation for more information about restrictions and compatibility.
Dumping to a Bucket in the OCI Object Storage
If the osBucketName option is used, the dump is stored in the specified OCI bucket; the connection is established using the local OCI profile. The directory structure is simulated within the object name.
The osNamespace, ociConfigFile and ociProfile options cannot be used if the osBucketName option is set to an empty string.
The osNamespace option overrides the OCI namespace obtained based on the tenancy ID from the local OCI profile.
Enabling dump loading using pre-authenticated requests
The loadDump utility supports loading a dump using a pre-authenticated request (PAR). The simplest way to do this is by providing a PAR to the location of the dump in a bucket; the PAR must be created with the following permissions:
The generated URL can be used to load the dump, see \? loadDump for more details.
Another way to enable loading a dump without requiring an OCI profile is to execute the dump operation with the ociParManifest option enabled, which causes the dump operation to automatically generate a PAR for every file in the dump and store them as part of the dump in a file named "@.manifest.json". The manifest is updated as the dump operation progresses.
Using a PAR with permissions to read the manifest is another option to load the dump using PAR.
The ociParManifest option cannot be used if osBucketName is not set.
When creating PARs, an expiration time is required; it can be defined through the ociParExpireTime option. If the option is not used, a predefined expiration time equivalent to a week after the dump operation started is used. The values assigned to this option should conform to RFC 3339.
The ociParExpireTime option cannot be used if the ociParManifest option is not enabled.
Dumping to a Bucket in the AWS S3 Object Storage
If the s3BucketName option is used, the dump is stored in the specified AWS S3 bucket. Connection is established using default local AWS configuration paths and profiles, unless overridden. The directory structure is simulated within the object name.
The s3CredentialsFile, s3ConfigFile, s3Profile, s3Region and s3EndpointOverride options cannot be used if the s3BucketName option is not set or set to an empty string.
Handling of the AWS settings
The AWS options are evaluated in the order of precedence; the first available value is used.
The AWS credentials are fetched from the following providers, in the order of precedence:
The items specified above correspond to the following credentials:
The process/command line specified by the credential_process setting must write a JSON object to the standard output in the following form:
The Expiration key, if given, specifies when the credentials are going to expire; they will be automatically refreshed before this happens.
The following credential handling rules apply:
Dumping to a Container in the Azure Blob Storage
If the azureContainerName option is used, the dump is stored in the specified Azure container. Connection is established using the configuration in the local Azure configuration file. The directory structure is simulated within the blob name.
The azureConfigFile option cannot be used if the azureContainerName option is not set or set to an empty string.
Handling of the Azure settings
If a connection string is defined, either in the environment variable or in the configuration option, the individual configuration values for account and key will be ignored.
If a SAS Token is defined, it will be used for the authorization (ignoring any defined account key).
The default Azure Blob Endpoint to be used in the operations is https://<account>.blob.core.windows.net, unless a different endpoint is defined in the connection string.
Throws ArgumentError in the following scenarios:
Throws RuntimeError in the following scenarios: