MySQL Shell API 9.2.0
Unified development interface for MySQL Products
Global object that groups miscellaneous tools, such as the upgrade checker and JSON import.
Functions

None check_for_server_upgrade(ConnectionData connectionData, dict options)
    Performs a series of tests on the specified MySQL server to check if the upgrade process will succeed.
None import_json(str file, dict options)
    Imports JSON documents from a file to a collection or table in MySQL Server using an X Protocol session.
None import_table(list urls, dict options)
    Imports a table dump stored in files to a target table using LOAD DATA LOCAL INFILE calls in parallel connections.
None load_dump(str url, dict options)
    Loads database dumps created by MySQL Shell.
None export_table(str table, str outputUrl, dict options)
    Exports the specified table to the data dump file.
None dump_tables(str schema, list tables, str outputUrl, dict options)
    Dumps the specified tables or views from the given schema to files in the target directory.
None dump_schemas(list schemas, str outputUrl, dict options)
    Dumps the specified schemas to files in the output directory.
None dump_instance(str outputUrl, dict options)
    Dumps the whole database to files in the output directory.
None copy_instance(ConnectionData connectionData, dict options)
    Copies a source instance to the target instance. Requires an open global Shell session to the source instance; if there is none, an exception is raised.
None copy_schemas(list schemas, ConnectionData connectionData, dict options)
    Copies schemas from the source instance to the target instance. Requires an open global Shell session to the source instance; if there is none, an exception is raised.
None copy_tables(str schema, list tables, ConnectionData connectionData, dict options)
    Copies tables and views from a schema in the source instance to the target instance. Requires an open global Shell session to the source instance; if there is none, an exception is raised.
None dump_binlogs(str outputUrl, dict options)
    Dumps binary logs generated since a specific point in time to the given local or remote directory.
None load_binlogs(str url, dict options)
    Loads binary log dumps created by MySQL Shell from a local or remote directory.
Global object that groups miscellaneous tools, such as the upgrade checker and JSON import.
None check_for_server_upgrade(ConnectionData connectionData, dict options)
Performs a series of tests on the specified MySQL server to check if the upgrade process will succeed.
connectionData: Optional connection data of the server to be checked.
options: Optional dictionary of options to modify the tool's behaviour.
If no connectionData is specified, the tool will try to establish a connection using data from the current session.
Connection Data
The connection data may be specified in the following formats:
A basic URI string has the following format:
[scheme://][user[:password]@]<host[:port]|socket>[/schema][?option=value&option=value...]
For additional information about MySQL connection data, see Connection Data.
The tool's behaviour can be modified with the following options:
If targetVersion is not specified, the current Shell version will be used as target version.
Limitations
When running this tool with a server older than 8.0, some checks have additional requirements:
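A typical invocation can be sketched as follows. This is run inside MySQL Shell's Python mode; the connection URI and the option values shown are illustrative assumptions, not defaults:

```python
# Options for the upgrade check; the values below are example choices.
options = {
    "targetVersion": "9.2.0",  # version to validate the upgrade against
    "outputFormat": "JSON",    # machine-readable report instead of text
}

# Inside MySQL Shell (Python mode) this would run the checks:
# util.check_for_server_upgrade("root@localhost:3306", options)
# With no connectionData, the current global session's connection is used:
# util.check_for_server_upgrade()
```

If targetVersion is omitted, the current Shell version is used, as noted above.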
None import_json(str file, dict options)
Imports JSON documents from a file to a collection or table in MySQL Server using an X Protocol session.
file: Path to the JSON documents file.
options: Optional dictionary with import options.
This function reads standard JSON documents from a file; it also supports converting BSON data types represented using MongoDB Extended JSON (strict mode) into MySQL values.
The options dictionary supports the following options:
The following options are valid only when convertBsonTypes is enabled. These are all boolean flags. The ignoreRegexOptions option is enabled by default; the rest are disabled by default.
If the schema is not provided, an active schema on the global session, if set, will be used.
The collection and table options cannot be combined. If neither is provided, the basename of the file without its extension will be used as the target collection name.
If the target collection or table does not exist, they are created, otherwise the data is inserted into the existing collection or table.
The tableColumn option implies the use of the table option and cannot be combined with the collection option.
BSON Data Type Processing
If only convertBsonOid is enabled, no conversion will be done on the rest of the BSON Data Types.
To use extractOidTime, set it to a name that will be used to insert an additional field into the main document. The value of the new field will be the timestamp obtained from the ObjectID value. Note that this is done only for an ObjectID value associated with the '_id' field of the main document.
NumberLong and NumberInt values will be converted to integer values.
NumberDecimal values are imported as strings, unless decimalAsDouble is enabled.
Regex values will be converted to strings containing the regular expression. The regular expression options are ignored unless ignoreRegexOptions is disabled. When ignoreRegexOptions is disabled, the regular expression will be converted to the form: /<regex>/<options>.
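As a sketch of how these options combine (inside MySQL Shell's Python mode; the schema, collection and field names are hypothetical):

```python
# Import a MongoDB export into a collection, converting Extended JSON values.
options = {
    "schema": "shop",
    "collection": "products",
    "convertBsonTypes": True,    # translate BSON types (ObjectID, NumberLong, ...)
    "extractOidTime": "created", # add a 'created' field from the _id ObjectID
}

# Inside MySQL Shell (Python mode), connected via an X Protocol session:
# util.import_json("/tmp/products.json", options)
```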
None import_table(list urls, dict options)
Imports a table dump stored in files to a target table using LOAD DATA LOCAL INFILE calls in parallel connections.
urls: URL or list of URLs of files with user data. A URL can contain a glob pattern with the wildcards '*' and/or '?'. All selected files must be chunks of the same target table.
options: Optional dictionary with import options.
The urls parameter is a string or list of strings which specifies the files to be imported. Allowed values:
For additional information on remote storage support, see Remote Storage Options.
Options dictionary:
The dialect option predefines the following set of options: fieldsTerminatedBy (FT), fieldsEnclosedBy (FE), fieldsOptionallyEnclosed (FOE), fieldsEscapedBy (FESC) and linesTerminatedBy (LT), in the following manner:
If the schema is not provided, an active schema on the global session, if set, will be used.
If the input values are not necessarily enclosed within fieldsEnclosedBy, set fieldsOptionallyEnclosed to true.
If you specify one separator that is the same as or a prefix of another, LOAD DATA INFILE cannot interpret the input properly.
Connection options set in the global session, such as compression, ssl-mode, etc. are used in parallel connections.
Each parallel connection sets the following session variables:
Note: because of storage engine limitations, table locks held by MyISAM will cause imports of such tables to be sequential, regardless of the number of threads used.
Example input data for dialects:
Examples of decodeColumns usage:
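A typical parallel import can be sketched as follows (inside MySQL Shell's Python mode; the schema, table, file paths and the decodeColumns mapping are all illustrative assumptions):

```python
# Load CSV chunks of one table in parallel; values below are example choices.
options = {
    "schema": "sakila",
    "table": "actor",
    "dialect": "csv-unix",  # predefines field/line separators for Unix CSV
    "threads": 8,           # number of parallel LOAD DATA connections
    "skipRows": 1,          # skip the header line in each file
    # Illustrative decodeColumns usage: capture a column into a user variable
    # and transform it with an SQL expression before insertion.
    "columns": ["actor_id", "first_name", "last_name", "@ts"],
    "decodeColumns": {"last_update": "FROM_UNIXTIME(@ts)"},
}

# Inside MySQL Shell (Python mode), with local_infile enabled on the server:
# util.import_table(["/data/actor.*.csv"], options)
```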
None load_dump(str url, dict options)
Loads database dumps created by MySQL Shell.
url: Defines the location of the dump to be loaded.
options: Optional dictionary with load options.
The url parameter identifies the location of the dump to be loaded. Allowed values:
For additional information on remote storage support, see Remote Storage Options.
load_dump() will load a dump from the specified path. It transparently handles compressed files and directly streams data when loading from remote storage. If the 'waitDumpTimeout' option is set, it will load a dump on-the-fly, loading table data chunks as the dumper produces them.
Table data will be loaded in parallel using the configured number of threads (4 by default). Multiple threads per table can be used if the dump was created with table chunking enabled. Data loads are scheduled across threads in a way that tries to maximize parallelism, while also minimizing lock contention from concurrent loads to the same table. If there are more tables than threads, different tables will be loaded per thread, larger tables first. If there are more threads than tables, then chunks from larger tables will be proportionally assigned more threads.
LOAD DATA LOCAL INFILE is used to load table data and thus, the 'local_infile' MySQL global setting must be enabled.
If the target MySQL server supports BULK LOAD, the load operation for compatible tables can be offloaded to the target server, which parallelizes it and loads data directly from Cloud storage.
Resuming
The load command will store progress information into a file for each step of the loading process, including successfully completed and interrupted/failed ones. If that file already exists, its contents will be used to skip steps that have already been completed and retry those that failed or didn't start yet.
When resuming, table chunks that have started being loaded but didn't finish are loaded again. Duplicate rows are discarded by the server. Tables that do not have unique keys are truncated before the load is resumed.
IMPORTANT: Resuming assumes that no changes have been made to the partially loaded data between the failure and the retry. Resuming after external changes has undefined behavior and may lead to data loss.
The progress state file has a default name of load-progress.<server_uuid>.json and is written to the same location as the dump. If 'progressFile' is specified, progress will be written to either a local file at the given path, or, if the HTTP(S) scheme is used, to a remote file using HTTP PUT requests. Setting it to an empty string will disable progress tracking and resuming.
If the 'resetProgress' option is enabled, progress information from previous load attempts of the dump to the destination server is discarded and the load is restarted. You may use this option to retry loading the whole dump from the beginning. However, changes made to the database are not reverted, so previously loaded objects should be manually dropped first.
Options dictionary:
Connection options set in the global session, such as compression, ssl-mode, etc. are inherited by load sessions.
Examples:
Loading a dump using Pre-authenticated Requests (PAR)
When a dump is created in OCI Object Storage, it is possible to load it using a single pre-authenticated request which gives access to the location of the dump. The requirements for this PAR include:
Given a dump located at a bucket root and a PAR created for the bucket, the dump can be loaded by providing the PAR as the URL parameter:
Example:
Given a dump located at some directory within a bucket and a PAR created for the given directory, the dump can be loaded by providing the PAR and the prefix as the URL parameter:
Example:
In both of the above cases, the load is done using pure HTTP GET requests, and the progressFile option is mandatory.
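A load can be sketched as follows (inside MySQL Shell's Python mode; the paths and option values are illustrative, and the PAR URL is a placeholder, not a real token):

```python
# Load options; values below are example choices.
options = {
    "threads": 8,            # parallel load connections
    "resetProgress": False,  # resume from the progress file if one exists
    # Mandatory when loading through a PAR; optional otherwise:
    "progressFile": "/tmp/load-progress.json",
}

# Inside MySQL Shell (Python mode), with local_infile enabled on the target:
# Local or remote dump directory:
# util.load_dump("/backups/mydump", options)
# OCI Object Storage via a pre-authenticated request (placeholder URL):
# util.load_dump("https://objectstorage.../p/<par-token>/n/ns/b/bucket/o/", options)
```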
None export_table(str table, str outputUrl, dict options)
Exports the specified table to the data dump file.
table: Name of the table to be exported.
outputUrl: Target file to store the data.
options: Optional dictionary with the export options.
The value of the table parameter should be in the form table or schema.table, quoted using backtick characters when required. If the schema is omitted, the active schema on the global Shell session is used. If there is none, an exception is raised.
The outputUrl specifies the URL of the file where the exported data will be stored. The parent directory of the output file must exist. If the output file exists, it is going to be overwritten. The output file is created with the following access rights (on operating systems which support them): rw-r-----. Allowed values:
For additional information on remote storage support, see Remote Storage Options.
The following options are supported:
Requirements
Details
This operation writes the table data dump to the files specified by the user.
Requires an open, global Shell session, and uses its connection options, such as compression, ssl-mode, etc., to establish additional connections.
Options
The dialect option predefines the set of options fieldsTerminatedBy (FT), fieldsEnclosedBy (FE), fieldsOptionallyEnclosed (FOE), fieldsEscapedBy (FESC) and linesTerminatedBy (LT) in the following manner:
The maxRate option supports unit suffixes:
e.g. maxRate="2k" limits throughput to 2000 bytes per second.
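A typical export can be sketched as follows (inside MySQL Shell's Python mode; the table name, output path and option values are illustrative assumptions):

```python
# Export one table to a compressed TSV file; values below are example choices.
options = {
    "dialect": "default",  # tab-separated fields, LF-terminated lines
    "maxRate": "50M",      # throttle throughput per thread
    "compression": "zstd", # compress the output file
}

# Inside MySQL Shell (Python mode), with a global session open:
# util.export_table("sakila.actor", "/backups/actor.tsv.zst", options)
```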
None dump_tables(str schema, list tables, str outputUrl, dict options)
Dumps the specified tables or views from the given schema to the files in the target directory.
schema: Name of the schema that contains the tables/views to be dumped.
tables: List of tables/views to be dumped.
outputUrl: Target directory to store the dump files.
options: Optional dictionary with the dump options.
The tables parameter cannot be an empty list.
The outputUrl specifies the URL of the directory where the dump will be stored. If the output directory does not exist but its parent does, it is created. If the output directory exists, it must be empty. All directories are created with the following access rights (on operating systems which support them): rwxr-x---. All files are created with the following access rights (on operating systems which support them): rw-r-----. Allowed values:
For additional information on remote storage support, see Remote Storage Options.
The following options are supported:
Requirements
Details
This operation writes SQL files for each table and view dumped, along with some global SQL files. Information about the source schema is also saved, meaning that when the util.load_dump() function is used to load the dump, the schema is automatically recreated. Alternatively, the dump can be loaded into another existing schema using the schema option.
Table data dumps are written to text files using the specified file format, optionally splitting them into multiple chunk files.
Requires an open, global Shell session, and uses its connection options, such as compression, ssl-mode, etc., to establish additional connections.
Options
If the all option is set to true and the tables parameter is set to an empty array, all views and tables from the specified schema are dumped. If the all option is set to true but the tables parameter is not an empty array, an exception is thrown.
The names given in the exclude{object}, include{object}, where or partitions options should be valid MySQL identifiers, quoted using backtick characters when required.
If the exclude{object}, include{object}, where or partitions options contain an object which does not exist, or an object which belongs to a schema which does not exist, it is ignored.
The tzUtc option allows dumping TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones.
If the consistent option is set to true, a global read lock is set using the FLUSH TABLES WITH READ LOCK statement, all threads establish connections with the server and start transactions using:
Once all the threads start transactions, the instance is locked for backup and the global read lock is released.
If the account used for the dump does not have enough privileges to execute FLUSH TABLES, LOCK TABLES will be used as a fallback instead. All tables being dumped, in addition to DDL- and GRANT-related tables in the mysql schema, will be temporarily locked.
The ddlOnly and dataOnly options cannot both be set to true at the same time.
The chunking option causes the data from each table to be split and written to multiple chunk files. If this option is set to false, table data is written to a single file.
If the chunking option is set to true, but a table to be dumped cannot be chunked (for example if it does not contain a primary key or a unique index), data is dumped to multiple files using a single thread.
The value of the threads option must be a positive number.
The dialect option predefines the set of options fieldsTerminatedBy (FT), fieldsEnclosedBy (FE), fieldsOptionallyEnclosed (FOE), fieldsEscapedBy (FESC) and linesTerminatedBy (LT) in the following manner:
Both the bytesPerChunk and maxRate options support unit suffixes:
e.g. maxRate="2k" limits throughput to 2000 bytes per second.
The value of the bytesPerChunk option cannot be smaller than "128k".
MySQL HeatWave Service Compatibility
The MySQL HeatWave Service has a few security-related restrictions that are not present in a regular, on-premises instance of MySQL. To make it easier to load existing databases into the Service, the dump commands in MySQL Shell have options to detect potential issues and, in some cases, to automatically adjust your schema definition to be compliant. For best results, always use the latest available version of MySQL Shell.
The ocimds option, when set to true, will perform schema checks for most of these issues and abort the dump if any are found. The load_dump() command will also only allow loading dumps that have been created with the "ocimds" option enabled.
Some issues found by the ocimds option may require you to manually make changes to your database schema before it can be loaded into the MySQL HeatWave Service. However, the compatibility option can be used to automatically modify the dumped schema SQL scripts, resolving some of these compatibility issues. You may pass one or more of the following values to the "compatibility" option.
create_invisible_pks - Each table which does not have a Primary Key will have one created when the dump is loaded. The following Primary Key is added to the table:
Dumps created with this value can be used with Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, as long as the target instance is version 8.0.32 or newer. Mutually exclusive with the ignore_missing_pks value.
force_innodb - The MySQL HeatWave Service requires use of the InnoDB storage engine. This option will modify the ENGINE= clause of CREATE TABLE statements that use incompatible storage engines and replace them with InnoDB. It will also remove the ROW_FORMAT=FIXED option, as it is not supported by the InnoDB storage engine.
force_non_standard_fks - In MySQL 8.4.0, a new system variable, restrict_fk_on_non_standard_key, was added which, when enabled, prohibits the creation of non-standard foreign keys (ones that reference non-unique keys or partial fields of composite keys). MySQL HeatWave Service instances have this variable enabled by default, which causes dumps containing such tables to fail to load. This option disables checks for non-standard foreign keys and causes the loader to set the session value of the restrict_fk_on_non_standard_key variable to OFF. Creation of foreign keys with non-standard keys may break replication.
ignore_missing_pks - Ignore errors caused by tables which do not have Primary Keys. Dumps created with this value cannot be used in a MySQL HeatWave Service DB System instance with High Availability. Mutually exclusive with the create_invisible_pks value.
ignore_wildcard_grants - Ignore errors from grants on schemas with wildcards, which are interpreted differently in systems where partial_revokes system variable is enabled. When this variable is enabled, the _ and % characters are treated as literals, which could lead to unexpected results. Before using this compatibility option, each such grant should be carefully reviewed.
skip_invalid_accounts - Skips accounts which do not have a password or use authentication methods (plugins) not supported by the MySQL HeatWave Service.
strip_definers - This option should not be used if the destination MySQL HeatWave Service DB System instance is version 8.2.0 or newer. In that case, the administrator role is granted the SET_ANY_DEFINER privilege. Users who have this privilege are able to specify any valid authentication ID in the DEFINER clause.
Strips the "DEFINER=account" clause from views, routines, events and triggers. The MySQL HeatWave Service requires special privileges to create these objects with a definer other than the user loading the schema. When the DEFINER clause is stripped, these objects are created with the default definer. Views and routines will additionally have their SQL SECURITY clause changed from DEFINER to INVOKER. If this characteristic is missing, an SQL SECURITY INVOKER clause will be added. This ensures that the access permissions of the account querying or calling these objects are applied, instead of those of the user that created them. This should be sufficient for most users, but if your database security model requires that views and routines have more privileges than their invoker, you will need to manually modify the schema before loading it.
Please refer to the MySQL manual for details about DEFINER and SQL SECURITY.
strip_invalid_grants - Strips grant statements which would fail when users are loaded, e.g. grants referring to a specific routine which does not exist.
strip_restricted_grants - Certain privileges are restricted in the MySQL HeatWave Service. Attempting to create users granting these privileges would fail, so this option allows dumped GRANT statements to be stripped of these privileges. If the destination MySQL version supports the SET_ANY_DEFINER privilege, the SET_USER_ID privilege is replaced with SET_ANY_DEFINER instead of being stripped.
strip_tablespaces - Tablespaces have some restrictions in the MySQL HeatWave Service. If you'd like to have tables created in their default tablespaces, this option will strip the TABLESPACE= option from CREATE TABLE statements.
unescape_wildcard_grants - Fixes grants on schemas with wildcards, replacing escaped \_ and \% wildcards in schema names with _ and % wildcard characters. When the partial_revokes system variable is enabled, the \ character is treated as a literal, which could lead to unexpected results. Before using this compatibility option, each such grant should be carefully reviewed.
Additionally, the following changes will always be made to DDL scripts when the ocimds option is enabled:
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability where the instance version is older than 8.0.32, all tables on the source server need to have Primary Keys. This needs to be fixed manually before running the dump. Starting with MySQL 8.0.23, invisible columns may be used to add Primary Keys without changing the schema compatibility; for more information see: https://dev.mysql.com/doc/refman/en/invisible-columns.html.
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, please see https://docs.oracle.com/en-us/iaas/mysql-database/doc/creating-replication-channel.html.
In order to use a MySQL HeatWave Service DB System instance with High Availability, all tables must have a Primary Key. This can be fixed automatically using the create_invisible_pks compatibility value.
Please refer to the MySQL HeatWave Service documentation for more information about restrictions and compatibility.
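A chunked, HeatWave-checked dump can be sketched as follows (inside MySQL Shell's Python mode; the schema, table names, output path and compatibility choices are illustrative assumptions):

```python
# Dump two tables with chunking and HeatWave Service compatibility handling.
options = {
    "chunking": True,
    "bytesPerChunk": "128M",  # must not be smaller than "128k"
    "threads": 4,
    "ocimds": True,           # check for MySQL HeatWave Service issues
    "compatibility": ["strip_definers", "force_innodb"],
}

# Inside MySQL Shell (Python mode), with a global session open:
# util.dump_tables("sakila", ["actor", "film"], "/backups/sakila_tables", options)
```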
None dump_schemas(list schemas, str outputUrl, dict options)
Dumps the specified schemas to the files in the output directory.
schemas: List of schemas to be dumped.
outputUrl: Target directory to store the dump files.
options: Optional dictionary with the dump options.
The schemas parameter cannot be an empty list.
The outputUrl specifies the URL of the directory where the dump will be stored. If the output directory does not exist but its parent does, it is created. If the output directory exists, it must be empty. All directories are created with the following access rights (on operating systems which support them): rwxr-x---. All files are created with the following access rights (on operating systems which support them): rw-r-----. Allowed values:
For additional information on remote storage support, see Remote Storage Options.
The following options are supported:
Requirements
Details
This operation writes SQL files for each schema, table and view dumped, along with some global SQL files.
Table data dumps are written to text files using the specified file format, optionally splitting them into multiple chunk files.
Requires an open, global Shell session, and uses its connection options, such as compression, ssl-mode, etc., to establish additional connections.
Data dumps cannot be created for the following tables:
Options
The names given in the exclude{object}, include{object}, where or partitions options should be valid MySQL identifiers, quoted using backtick characters when required.
If the exclude{object}, include{object}, where or partitions options contain an object which does not exist, or an object which belongs to a schema which does not exist, it is ignored.
The tzUtc option allows dumping TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones.
If the consistent option is set to true, a global read lock is set using the FLUSH TABLES WITH READ LOCK statement, all threads establish connections with the server and start transactions using:
Once all the threads start transactions, the instance is locked for backup and the global read lock is released.
If the account used for the dump does not have enough privileges to execute FLUSH TABLES, LOCK TABLES will be used as a fallback instead. All tables being dumped, in addition to DDL- and GRANT-related tables in the mysql schema, will be temporarily locked.
The ddlOnly and dataOnly options cannot both be set to true at the same time.
The chunking option causes the data from each table to be split and written to multiple chunk files. If this option is set to false, table data is written to a single file.
If the chunking option is set to true, but a table to be dumped cannot be chunked (for example if it does not contain a primary key or a unique index), data is dumped to multiple files using a single thread.
The value of the threads option must be a positive number.
The dialect option predefines the set of options fieldsTerminatedBy (FT), fieldsEnclosedBy (FE), fieldsOptionallyEnclosed (FOE), fieldsEscapedBy (FESC) and linesTerminatedBy (LT) in the following manner:
Both the bytesPerChunk and maxRate options support unit suffixes:
e.g. maxRate="2k" limits throughput to 2000 bytes per second.
The value of the bytesPerChunk option cannot be smaller than "128k".
MySQL HeatWave Service Compatibility
The MySQL HeatWave Service has a few security-related restrictions that are not present in a regular, on-premises instance of MySQL. To make it easier to load existing databases into the Service, the dump commands in MySQL Shell have options to detect potential issues and, in some cases, to automatically adjust your schema definition to be compliant. For best results, always use the latest available version of MySQL Shell.
The ocimds option, when set to true, will perform schema checks for most of these issues and abort the dump if any are found. The load_dump() command will also only allow loading dumps that have been created with the "ocimds" option enabled.
Some issues found by the ocimds option may require you to manually make changes to your database schema before it can be loaded into the MySQL HeatWave Service. However, the compatibility option can be used to automatically modify the dumped schema SQL scripts, resolving some of these compatibility issues. You may pass one or more of the following values to the "compatibility" option.
create_invisible_pks - Each table which does not have a Primary Key will have one created when the dump is loaded. The following Primary Key is added to the table:
Dumps created with this value can be used with Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, as long as the target instance is version 8.0.32 or newer. Mutually exclusive with the ignore_missing_pks value.
force_innodb - The MySQL HeatWave Service requires use of the InnoDB storage engine. This option will modify the ENGINE= clause of CREATE TABLE statements that use incompatible storage engines and replace them with InnoDB. It will also remove the ROW_FORMAT=FIXED option, as it is not supported by the InnoDB storage engine.
force_non_standard_fks - In MySQL 8.4.0, a new system variable, restrict_fk_on_non_standard_key, was added which, when enabled, prohibits the creation of non-standard foreign keys (ones that reference non-unique keys or partial fields of composite keys). MySQL HeatWave Service instances have this variable enabled by default, which causes dumps containing such tables to fail to load. This option disables checks for non-standard foreign keys and causes the loader to set the session value of the restrict_fk_on_non_standard_key variable to OFF. Creation of foreign keys with non-standard keys may break replication.
ignore_missing_pks - Ignore errors caused by tables which do not have Primary Keys. Dumps created with this value cannot be used in a MySQL HeatWave Service DB System instance with High Availability. Mutually exclusive with the create_invisible_pks value.
ignore_wildcard_grants - Ignore errors from grants on schemas with wildcards, which are interpreted differently in systems where partial_revokes system variable is enabled. When this variable is enabled, the _ and % characters are treated as literals, which could lead to unexpected results. Before using this compatibility option, each such grant should be carefully reviewed.
skip_invalid_accounts - Skips accounts which do not have a password or use authentication methods (plugins) not supported by the MySQL HeatWave Service.
strip_definers - This option should not be used if the destination MySQL HeatWave Service DB System instance is version 8.2.0 or newer. In that case, the administrator role is granted the SET_ANY_DEFINER privilege. Users who have this privilege are able to specify any valid authentication ID in the DEFINER clause.
Strips the "DEFINER=account" clause from views, routines, events and triggers. The MySQL HeatWave Service requires special privileges to create these objects with a definer other than the user loading the schema. When the DEFINER clause is stripped, these objects are created with the default definer. Views and routines will additionally have their SQL SECURITY clause changed from DEFINER to INVOKER. If this characteristic is missing, an SQL SECURITY INVOKER clause will be added. This ensures that the access permissions of the account querying or calling these objects are applied, instead of those of the user that created them. This should be sufficient for most users, but if your database security model requires that views and routines have more privileges than their invoker, you will need to manually modify the schema before loading it.
Please refer to the MySQL manual for details about DEFINER and SQL SECURITY.
strip_invalid_grants - Strips grant statements which would fail when users are loaded, e.g. grants referring to a specific routine which does not exist.
strip_restricted_grants - Certain privileges are restricted in the MySQL HeatWave Service. Attempting to create users granting these privileges would fail, so this option allows dumped GRANT statements to be stripped of these privileges. If the destination MySQL version supports the SET_ANY_DEFINER privilege, the SET_USER_ID privilege is replaced with SET_ANY_DEFINER instead of being stripped.
strip_tablespaces - Tablespaces have some restrictions in the MySQL HeatWave Service. If you'd like to have tables created in their default tablespaces, this option will strip the TABLESPACE= option from CREATE TABLE statements.
unescape_wildcard_grants - Fixes grants on schemas with wildcards, replacing escaped \_ and \% wildcards in schema names with _ and % wildcard characters. When the partial_revokes system variable is enabled, the \ character is treated as a literal, which could lead to unexpected results. Before using this compatibility option, each such grant should be carefully reviewed.
Additionally, the following changes will always be made to DDL scripts when the ocimds option is enabled:
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability where the instance version is older than 8.0.32, all tables on the source server need to have Primary Keys. This needs to be fixed manually before running the dump. Starting with MySQL 8.0.23, invisible columns may be used to add Primary Keys without changing the schema compatibility; for more information see: https://dev.mysql.com/doc/refman/en/invisible-columns.html.
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, please see https://docs.oracle.com/en-us/iaas/mysql-database/doc/creating-replication-channel.html.
In order to use a MySQL HeatWave Service DB System instance with High Availability, all tables must have a Primary Key. This can be fixed automatically using the create_invisible_pks compatibility value.
Please refer to the MySQL HeatWave Service documentation for more information about restrictions and compatibility.
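To make the discussion concrete, the compatibility values above might be combined as follows. This is a minimal sketch; the schema names, output path, and chosen compatibility values are assumptions for illustration, not recommendations.

```python
# Illustrative dump options for loading into MySQL HeatWave Service.
# The compatibility values chosen here are examples only.
options = {
    "ocimds": True,  # check for HeatWave Service compatibility issues
    "compatibility": [
        "strip_definers",           # remove DEFINER clauses
        "strip_restricted_grants",  # drop privileges restricted in the Service
        "strip_tablespaces",        # remove TABLESPACE= from CREATE TABLE
    ],
}
# Inside MySQL Shell (\py mode), with an open global session to the source:
# util.dump_schemas(["sales", "inventory"], "/backups/sales-dump", options)
```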
None dump_instance | ( | str | outputUrl, |
dict | options | ||
) |
Dumps the whole database to files in the output directory.
outputUrl | Target directory to store the dump files. |
options | Optional dictionary with the dump options. |
The outputUrl specifies the URL of a directory where the dump is going to be stored. If the output directory does not exist but its parent does, it is created. If the output directory exists, it must be empty. All directories are created with the following access rights (on operating systems which support them): rwxr-x---. All files are created with the following access rights (on operating systems which support them): rw-r-----. Allowed values:
For additional information on remote storage support, see Remote Storage Options.
The following options are supported:
Requirements
Details
This operation writes an SQL file for each schema, table and view dumped, along with some global SQL files.
Table data dumps are written to text files using the specified file format, optionally splitting them into multiple chunk files.
Requires an open, global Shell session, and uses its connection options, such as compression, ssl-mode, etc., to establish additional connections.
Data dumps cannot be created for the following tables:
Dumps cannot be created for the following schemas:
Options
If the excludeSchemas or includeSchemas options contain a schema which is not included in the dump or does not exist, it is ignored.
The names given in the exclude{object}, include{object}, where or partitions options should be valid MySQL identifiers, quoted using backtick characters when required.
If the exclude{object}, include{object}, where or partitions options contain an object which does not exist, or an object which belongs to a schema which does not exist, it is ignored.
The tzUtc option allows dumping TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones.
If the consistent option is set to true, a global read lock is set using the FLUSH TABLES WITH READ LOCK statement, then all threads establish connections with the server and start transactions using the SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ and START TRANSACTION WITH CONSISTENT SNAPSHOT statements.
Once all the threads start transactions, the instance is locked for backup and the global read lock is released.
If the account used for the dump does not have enough privileges to execute FLUSH TABLES, LOCK TABLES will be used as a fallback instead. All tables being dumped, as well as the DDL- and GRANT-related tables in the mysql schema, are temporarily locked.
The ddlOnly and dataOnly options cannot both be set to true at the same time.
The chunking option causes the data from each table to be split and written to multiple chunk files. If this option is set to false, table data is written to a single file.
If the chunking option is set to true, but a table to be dumped cannot be chunked (for example if it does not contain a primary key or a unique index), data is dumped to multiple files using a single thread.
The value of the threads option must be a positive number.
The dialect option predefines the set of options fieldsTerminatedBy (FT), fieldsEnclosedBy (FE), fieldsOptionallyEnclosed (FOE), fieldsEscapedBy (FESC) and linesTerminatedBy (LT) in the following manner:
Both the bytesPerChunk and maxRate options support unit suffixes:
e.g. maxRate="2k" - limits throughput to 2000 bytes per second.
The value of the bytesPerChunk option cannot be smaller than "128k".
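For example, the chunking and throughput options described above could be combined as follows. This is a sketch; the output path and the specific values are illustrative assumptions.

```python
# Illustrative tuning options for util.dump_instance().
options = {
    "threads": 8,             # must be a positive number
    "chunking": True,         # split each table's data into chunk files
    "bytesPerChunk": "256M",  # supports unit suffixes; minimum is "128k"
    "maxRate": "100M",        # throughput limit; "2k" = 2000 bytes per second
}
# Inside MySQL Shell (\py mode), with an open global session:
# util.dump_instance("/backups/full-dump", options)
```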
MySQL HeatWave Service Compatibility
The MySQL HeatWave Service has a few security-related restrictions that are not present in a regular, on-premises instance of MySQL. To make it easier to load existing databases into the Service, the dump commands in MySQL Shell have options to detect potential issues and, in some cases, to automatically adjust your schema definition to be compliant. For best results, always use the latest available version of MySQL Shell.
The ocimds option, when set to true, performs schema checks for most of these issues and aborts the dump if any are found. The load_dump() command will likewise only allow loading dumps that have been created with the ocimds option enabled.
Some issues found by the ocimds option may require you to manually make changes to your database schema before it can be loaded into the MySQL HeatWave Service. However, the compatibility option can be used to automatically modify the dumped schema SQL scripts, resolving some of these compatibility issues. You may pass one or more of the following values to the "compatibility" option.
create_invisible_pks - Each table which does not have a Primary Key will have one created when the dump is loaded. The following Primary Key is added to the table: `my_row_id` BIGINT UNSIGNED AUTO_INCREMENT INVISIBLE PRIMARY KEY.
Dumps created with this value can be used with Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, as long as the target instance has version 8.0.32 or newer. Mutually exclusive with the ignore_missing_pks value.
force_innodb - The MySQL HeatWave Service requires use of the InnoDB storage engine. This option will modify the ENGINE= clause of CREATE TABLE statements that use incompatible storage engines and replace them with InnoDB. It will also remove the ROW_FORMAT=FIXED option, as it is not supported by the InnoDB storage engine.
force_non_standard_fks - MySQL 8.4.0 added the restrict_fk_on_non_standard_key system variable, which, when enabled, prohibits the creation of non-standard foreign keys (ones that reference non-unique keys or partial fields of composite keys). MySQL HeatWave Service instances have this variable enabled by default, which causes dumps containing such tables to fail to load. This option disables checks for non-standard foreign keys and causes the loader to set the session value of the restrict_fk_on_non_standard_key variable to OFF. Creation of foreign keys with non-standard keys may break replication.
ignore_missing_pks - Ignore errors caused by tables which do not have Primary Keys. Dumps created with this value cannot be used in MySQL HeatWave Service DB System instance with High Availability. Mutually exclusive with the create_invisible_pks value.
ignore_wildcard_grants - Ignores errors from grants on schemas with wildcards, which are interpreted differently on systems where the partial_revokes system variable is enabled. When this variable is enabled, the _ and % characters are treated as literals, which could lead to unexpected results. Before using this compatibility option, each such grant should be carefully reviewed.
skip_invalid_accounts - Skips accounts which do not have a password or use authentication methods (plugins) not supported by the MySQL HeatWave Service.
strip_definers - This option should not be used if the destination MySQL HeatWave Service DB System instance has version 8.2.0 or newer. In that case, the administrator role is granted the SET_ANY_DEFINER privilege, and users with this privilege are able to specify any valid authentication ID in the DEFINER clause.
Strips the "DEFINER=account" clause from views, routines, events and triggers. The MySQL HeatWave Service requires special privileges to create these objects with a definer other than the user loading the schema. By stripping the DEFINER clause, these objects will be created with the default definer. Views and routines will additionally have their SQL SECURITY clause changed from DEFINER to INVOKER. If this clause is missing, an SQL SECURITY INVOKER clause will be added. This ensures that the access permissions of the account querying or calling these objects are applied, instead of those of the user that created them. This should be sufficient for most users, but if your database security model requires that views and routines have more privileges than their invoker, you will need to modify the schema manually before loading it.
Please refer to the MySQL manual for details about DEFINER and SQL SECURITY.
strip_invalid_grants - Strips grant statements which would fail when users are loaded, e.g. grants referring to a specific routine that does not exist.
strip_restricted_grants - Certain privileges are restricted in the MySQL HeatWave Service. Attempting to create users granting these privileges would fail, so this option allows dumped GRANT statements to be stripped of these privileges. If the destination MySQL version supports the SET_ANY_DEFINER privilege, the SET_USER_ID privilege is replaced with SET_ANY_DEFINER instead of being stripped.
strip_tablespaces - Tablespaces have some restrictions in the MySQL HeatWave Service. If you'd like to have tables created in their default tablespaces, this option will strip the TABLESPACE= option from CREATE TABLE statements.
unescape_wildcard_grants - Fixes grants on schemas with wildcards, replacing escaped \_ and \% wildcards in schema names with _ and % wildcard characters. When the partial_revokes system variable is enabled, the \ character is treated as a literal, which could lead to unexpected results. Before using this compatibility option, each such grant should be carefully reviewed.
Additionally, the following changes will always be made to DDL scripts when the ocimds option is enabled:
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability where the instance version is older than 8.0.32, all tables on the source server need to have Primary Keys. This needs to be fixed manually before running the dump. Starting with MySQL 8.0.23, invisible columns may be used to add Primary Keys without changing the schema compatibility; for more information see: https://dev.mysql.com/doc/refman/en/invisible-columns.html.
In order to use Inbound Replication into a MySQL HeatWave Service DB System instance with High Availability, please see https://docs.oracle.com/en-us/iaas/mysql-database/doc/creating-replication-channel.html.
In order to use a MySQL HeatWave Service DB System instance with High Availability, all tables must have a Primary Key. This can be fixed automatically using the create_invisible_pks compatibility value.
Please refer to the MySQL HeatWave Service documentation for more information about restrictions and compatibility.
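Putting the above together, a dump_instance() call targeting the MySQL HeatWave Service might look like this. This is a sketch; the output path and the chosen compatibility values are illustrative assumptions.

```python
# Illustrative dump_instance options for a HeatWave Service migration.
# Note: create_invisible_pks and ignore_missing_pks are mutually exclusive.
options = {
    "ocimds": True,
    "compatibility": [
        "create_invisible_pks",    # add invisible PKs to tables lacking one
        "strip_definers",
        "strip_restricted_grants",
    ],
}
# Inside MySQL Shell (\py mode), with an open global session:
# util.dump_instance("/backups/mds-dump", options)
```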
None copy_instance | ( | ConnectionData | connectionData, |
dict | options | ||
) |
Copies a source instance to the target instance. Requires an open global Shell session to the source instance; if there is none, an exception is raised.
connectionData | Specifies the connection information required to establish a connection to the target instance. |
options | Optional dictionary with the copy options. |
Runs simultaneous dump and load operations, while storing the dump artifacts in memory.
If the target is a MySQL HeatWave Service DB System instance, compatibility with it is checked automatically.
Connection Data
The connection data may be specified in the following formats:
A basic URI string has the following format:
[scheme://][user[:password]@]<host[:port]|socket>[/schema][?option=value&option=value...]
For additional information about MySQL connection data, see Connection Data.
The following options are supported:
For discussion of all options see: dump_instance() and load_dump().
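A minimal copy_instance() sketch follows; the target connection data and option values are assumptions for illustration.

```python
# Illustrative target connection data, in dictionary form.
target = {
    "host": "target.example.com",
    "port": 3306,
    "user": "admin",
}
options = {"threads": 4}  # see dump_instance() and load_dump() for options
# Inside MySQL Shell (\py mode), with an open global session to the SOURCE:
# util.copy_instance(target, options)
```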
None copy_schemas | ( | list | schemas, |
ConnectionData | connectionData, | ||
dict | options | ||
) |
Copies schemas from the source instance to the target instance. Requires an open global Shell session to the source instance; if there is none, an exception is raised.
schemas | List of strings with names of schemas to be copied. |
connectionData | Specifies the connection information required to establish a connection to the target instance. |
options | Optional dictionary with the copy options. |
Runs simultaneous dump and load operations, while storing the dump artifacts in memory.
If the target is a MySQL HeatWave Service DB System instance, compatibility with it is checked automatically.
Connection Data
The connection data may be specified in the following formats:
A basic URI string has the following format:
[scheme://][user[:password]@]<host[:port]|socket>[/schema][?option=value&option=value...]
For additional information about MySQL connection data, see Connection Data.
The following options are supported:
For discussion of all options see: dump_schemas() and load_dump().
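A minimal copy_schemas() sketch follows; the schema names and the target URI are assumptions for illustration.

```python
# Schemas to copy and the target instance, given as a basic URI string.
schemas = ["sales", "inventory"]
target = "admin@target.example.com:3306"
options = {"threads": 4}  # see dump_schemas() and load_dump() for options
# Inside MySQL Shell (\py mode), with an open global session to the SOURCE:
# util.copy_schemas(schemas, target, options)
```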
None copy_tables | ( | str | schema, |
list | tables, | ||
ConnectionData | connectionData, | ||
dict | options | ||
) |
Copies tables and views from a schema in the source instance to the target instance. Requires an open global Shell session to the source instance; if there is none, an exception is raised.
schema | Name of the schema that contains tables and views to be copied. |
tables | List of strings with names of tables and views to be copied. |
connectionData | Specifies the connection information required to establish a connection to the target instance. |
options | Optional dictionary with the copy options. |
Runs simultaneous dump and load operations, while storing the dump artifacts in memory.
If the target is a MySQL HeatWave Service DB System instance, compatibility with it is checked automatically.
Connection Data
The connection data may be specified in the following formats:
A basic URI string has the following format:
[scheme://][user[:password]@]<host[:port]|socket>[/schema][?option=value&option=value...]
For additional information about MySQL connection data, see Connection Data.
The following options are supported:
For discussion of all options see: dump_tables() and load_dump().
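A minimal copy_tables() sketch follows; the schema, table names, and target URI are assumptions for illustration.

```python
# Tables and views to copy from a single schema on the source instance.
schema = "sales"
tables = ["orders", "order_items"]        # backtick-quoted in SQL if needed
target = "admin@target.example.com:3306"  # basic URI string form
# Inside MySQL Shell (\py mode), with an open global session to the SOURCE:
# util.copy_tables(schema, tables, target, {"threads": 4})
```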
None dump_binlogs | ( | str | outputUrl, |
dict | options | ||
) |
Dumps binary logs generated since a specific point in time to the given local or remote directory.
outputUrl | Target directory to store the dump files. |
options | Optional dictionary with the dump options. |
The outputUrl specifies the URL of a directory where the dump is going to be stored. Allowed values:
For additional information on remote storage support, see Remote Storage Options.
The options dictionary supports the following options:
Details
The starting point can be determined automatically from a previous dump: either by reusing an output directory previously created by util.dump_binlogs(), or by setting the since option to a directory containing a dump created by util.dump_instance() or util.dump_binlogs(). Alternatively, it can be specified explicitly as a binary log file and position in the startFrom option.
Use util.load_binlogs() to apply binary log dumps created with this command.
The Shell must be connected to the server from which the binary logs will be dumped.
If the directory specified by the outputUrl parameter does not exist, either the since or the startFrom option must also be given.
If the directory specified by the outputUrl parameter exists, neither the since nor the startFrom option may be given, as the starting point is determined automatically.
By default, the since option assumes the same storage location as outputUrl:
In order to use different storage, prefix the option value with file:// for local storage, or use a PAR URL.
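The two starting-point rules above can be sketched as follows; the directory and the binary log coordinates are assumptions for illustration.

```python
# First dump into a new directory: the starting point must be given,
# here as an explicit binary log file and position via startFrom.
first_run = {"startFrom": "binlog.000042:4"}  # "<file>:<position>", assumed
# Inside MySQL Shell (\py mode), connected to the source server:
# util.dump_binlogs("/backups/binlogs", first_run)
#
# Subsequent dumps into the SAME directory take no starting point;
# it is determined automatically from the previous dump:
# util.dump_binlogs("/backups/binlogs")
```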
None load_binlogs | ( | str | url, |
dict | options | ||
) |
Loads binary log dumps created by MySQL Shell from a local or remote directory.
url | URL to a local or remote directory containing dump created by util.dump_binlogs(). |
options | Optional dictionary with the load options. |
The url parameter specifies the location of the dump to be loaded. Allowed values:
For additional information on remote storage support, see Remote Storage Options.
The options dictionary supports the following options:
Details
The Shell must be connected to the server to which the binary logs will be loaded.
The binary logs to be loaded are automatically selected based on the contents of the dump and the current state of the target instance.
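A minimal load_binlogs() sketch follows; the dump directory is an assumption for illustration.

```python
# Directory containing a dump created by util.dump_binlogs().
url = "/backups/binlogs"
# Inside MySQL Shell (\py mode), connected to the TARGET server:
# util.load_binlogs(url)
```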