This section describes the MySQL Shell copy utilities:
The copy utilities enable you to copy DDL and data between MySQL instances, without the need for intermediate storage. The data is streamed from source to destination.
Approximately 32MB of memory is pre-allocated to store metadata files; these are discarded as they are read and the copy is processed.
It is possible to copy from a source to a HeatWave Service DB System. If you define a DB System as the target, the utility detects this and enables HeatWave Service compatibility checks by default. See Section 11.5, “Instance Dump Utility, Schema Dump Utility, and Table Dump Utility” for more information on these checks.
The copy utilities combine dump and load utilities into a single operation, for ease of use. The majority of the options available to the load and dump utilities are also available to the copy utilities and are documented in the following sections.
- The copy utilities use LOAD DATA LOCAL INFILE statements to upload data, so the local_infile system variable must be set to ON on the target server. You can do this by issuing the following statement on the target instance before running the copy utility:

  SET GLOBAL local_infile = 1;

  To avoid a known potential security issue with LOAD DATA LOCAL, when the MySQL server replies to the utility's LOAD DATA requests with file transfer requests, the utility only sends the predetermined data chunks, and ignores any specific requests attempted by the server. For more information, see Security Considerations for LOAD DATA LOCAL.

- The copy utilities only support General Availability (GA) releases of MySQL Server versions.

- MySQL 5.7 or later is required for the destination MySQL instance where the copy will be loaded.
- Object names in the instance or schema must be in the latin1 or utf8 character set.

- Data consistency is guaranteed only for tables that use the InnoDB storage engine.
- The minimum required set of privileges that the user account used to run the utility must have on all the schemas involved is as follows: EVENT, RELOAD, SELECT, SHOW VIEW, and TRIGGER.

  If the consistent option is set to true, which is the default, the LOCK TABLES privilege on all copied tables can substitute for the RELOAD privilege if the latter is not available.

  If the user account does not have the BACKUP_ADMIN privilege and LOCK INSTANCE FOR BACKUP cannot be executed, the utilities make an extra consistency check during the copy. If this check fails, an instance copy is stopped, but a schema copy or a table copy continues and returns an error message to alert the user that the consistency check failed.

  If the consistent option is set to false, the BACKUP_ADMIN and RELOAD privileges are not required.
- The user account used to run the utility needs the REPLICATION CLIENT privilege in order for the utility to be able to include the binary log file name and position in the metadata. If the user account does not have that privilege, the copy continues but does not include the binary log information. The binary log information can be used after loading the copied data into the replica server to set up replication with a non-GTID source server, using the ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS option of the CHANGE REPLICATION SOURCE TO statement.

- The utilities convert columns with data types that are not safe to be stored in text form (such as BLOB) to Base64. The size of these columns therefore must not exceed approximately 0.74 times the value of the max_allowed_packet system variable (in bytes) that is configured on the target MySQL instance.

- For compatibility with HeatWave Service, all tables must use the InnoDB storage engine. If you define a DB System as the target, the utility detects this, enables HeatWave Service compatibility checks by default, and checks for any exceptions found in the source; the compatibility option alters the copy to replace other storage engines with InnoDB.

- For the instance and schema copy utilities, for compatibility with HeatWave Service, all tables in the instance or schema must be located in the MySQL data directory and must use the default schema encryption.
- HeatWave Service uses partial_revokes=ON, which means database-level user grants on schemas which contain wildcards, such as _ or %, are reported as errors. You can also use the compatibility options ignore_wildcard_grants and strip_invalid_grants. See Options for HeatWave Service and Oracle Cloud Infrastructure for more information.
- A number of other security-related restrictions and requirements apply to items such as tablespaces and privileges for compatibility with HeatWave Service. The compatibility option automatically alters the copy to resolve some of the compatibility issues. You might need (or prefer) to make some changes manually. For more details, see the description for the compatibility option.

- For HeatWave Service High Availability, which uses Group Replication, primary keys are required on every table. MySQL Shell checks and reports an error for any tables in the copy that are missing primary keys. The compatibility option can be set to ignore missing primary keys if you do not need them, or to add primary keys in invisible columns where they are not present. For details, see the description for the compatibility option. If possible, instead of managing this in the utility, consider creating primary keys in the tables on the source server before copying them.

- If the source is MySQL 5.7 and the target is a DB System, util.checkForServerUpgrade() is run automatically. Pre-upgrade checks are run depending on the type of objects included in the copy.

- Progress resumption is not supported by the copy utilities.
The copy instance, copy schema, and copy table utilities use the MySQL Shell global session to obtain the connection details of the MySQL server from which the copy is carried out. You must open the global session (which can have an X Protocol connection or a classic MySQL protocol connection) before running one of the utilities. The utilities open their own sessions for each thread, copying options such as connection compression and SSL options from the global session, and do not make any further use of the global session.
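For example, you might open the global session and then invoke a utility from MySQL Shell; this is a minimal sketch, and the user and host names are placeholder values:

JS> \connect user@sourcehost:3306
JS> util.copyInstance('user@targethost:3306')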
- util.copyInstance(connectionData[, options]): Enables copying of an entire instance to another server.

  connectionData: Defines the connection details for the destination server you want to copy to. This can be one of the following:

  - A simple user@host string.

  - A connection URI, such as mysql://user@host:port?option=value,option=value.

  - A connection dictionary, such as { "scheme": "mysql", "user": "u", "host": "h", "port": 1234, "option": "value" }.
- util.copySchemas(schemaList, connectionData[, options]): Enables copying of one or more schemas to another server.

  schemaList: Defines the list of schemas to copy from the current server to the destination server.
- util.copyTables(schemaName, tablesList, connectionData[, options]): Enables copying of one or more tables from a schema to another server.

  schemaName: Defines the name of the schema from which to copy tables.

  tablesList: Defines the names of the tables from the named schema to copy to the destination server.
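For example, the following sketches show one call to each utility, with dryRun: true (described below) so that nothing is actually copied; the schema and table names and the destination user@targethost:3306 are placeholder values:

JS> util.copyInstance('user@targethost:3306', {dryRun: true})
JS> util.copySchemas(['world'], 'user@targethost:3306', {dryRun: true})
JS> util.copyTables('world', ['city', 'country'], 'user@targethost:3306', {dryRun: true})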
The following options are available for the copy utilities:

- dryRun: [ true | false ]

  Displays information about the copy with the specified set of options, and about the results of HeatWave Service compatibility checks, but does not proceed with the copy. Setting this option enables you to list all of the compatibility issues before starting the copy. The default is false.
- showProgress: [ true | false ]

  Display (true) or hide (false) progress information for the copy. The default is true if stdout is a terminal (tty), such as when MySQL Shell is in interactive mode, and false otherwise. The progress information includes the estimated total number of rows to be copied, the number of rows copied so far, the percentage complete, and the throughput in rows and bytes per second.
- threads: int

  The number of parallel threads to use to copy chunks of data from the MySQL instance. Each thread has its own connection to the MySQL instance. The default is 4.

  The copy utilities require twice the configured number of threads: for each thread that copies data from the source, another thread writes it to the destination. If threads is set to N, 2N threads are used.
- maxRate: "string"

  The maximum number of bytes per second per thread for data read throughput during the copy. The unit suffixes k (kilobytes), M (megabytes), and G (gigabytes) can be used; for example, setting 100M limits throughput to 100 megabytes per second per thread. Setting 0 (which is the default value), or setting the option to an empty string, means no limit is set.
- defaultCharacterSet: "string"

  The character set to be used during the session connections that are opened by MySQL Shell to the target server. The default is utf8mb4. The session values of the system variables character_set_client, character_set_connection, and character_set_results are set to this value for each connection. The character set must be permitted by the character_set_client system variable and supported by the MySQL instance.
- checksum: [ true | false ]

  If enabled, a metadata file, @.checksums.json, is generated with the copy. This file contains the checksum data for the copy, enabling data verification.

  The following conditions apply if checksum: true during the copy process:

  - If ddlOnly: false and chunking: true, a checksum is generated for each copied table and partition chunk.

  - If ddlOnly: false and chunking: false, a checksum is generated for each copied table and table partition.

  - If ddlOnly: true, a checksum is generated for each copied table and table partition.

  The utility checks the generated checksum data after the corresponding data is loaded. The verification is limited to data which was dumped, ignoring generated data such as invisible primary keys. Errors are returned if a checksum does not match or if a table is missing and cannot be verified.

  If checksum: true but no data was loaded, either due to loadData: false or no data being dumped, the utility verifies the dump's checksum information against the current contents of the affected tables. If a table does not exist, an error is displayed for each missing table.

  If checksum: true and dryRun: true, the checksum is not verified. A message is displayed stating that no verification took place.
- consistent: [ true | false ]

  Enable (true) or disable (false) consistent data copies by locking the instance for backup during the copy. The default is true.

  When true is set, the utility sets a global read lock using the FLUSH TABLES WITH READ LOCK statement (if the user ID used to run the utility has the RELOAD privilege), or a series of table locks using LOCK TABLES statements (if the user ID does not have the RELOAD privilege but does have LOCK TABLES). The transaction for each thread is started using the statements SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ and START TRANSACTION WITH CONSISTENT SNAPSHOT. When all threads have started their transactions, the instance is locked for backup (as described in LOCK INSTANCE FOR BACKUP and UNLOCK INSTANCE Statements) and the global read lock is released.

  If the user account does not have the BACKUP_ADMIN privilege and LOCK INSTANCE FOR BACKUP cannot be executed, the utilities make an extra consistency check during the copy. If this check fails, an instance copy is stopped, but a schema or table copy continues and returns an error message to alert the user that the consistency check failed.
- skipConsistencyChecks: [ true | false ]

  Enable (true) or disable (false) the extra consistency check performed when consistent: true. The default is false.

  This option is ignored if consistent: false.
- schema: "string"

  The target schema into which the contents of the copied schema are loaded.

  If the schema does not exist, it is created, and the copied schema is loaded into that new schema. If the new schema name differs from the schema name in the copy, the copy is loaded into the new schema, but no changes are made to the loaded data; that is, any reference to the old schema name remains in the data. All stored procedures, views, and so on, refer to the original schema, not the new one.

  This load option is supported for single schema copies, or for filtering options which result in a single schema. That is, if you are using copyInstance to copy data to a new instance, you can copy all the data to a single schema if the source contains only one schema, or if the defined filters result in a single schema being copied to the destination.
- skipBinlog: [ true | false ]

  Skips binary logging on the target MySQL instance for the sessions used by the utility during the course of the copy, by issuing a SET sql_log_bin=0 statement. The default is false, so binary logging is active by default. For HeatWave Service DB Systems, this option is not used, and the import stops with an error if you attempt to set it to true. For other MySQL instances, always set skipBinlog to true if you are applying the gtid_executed GTID set from the source MySQL instance on the target MySQL instance, either using the updateGtidSet option or manually. When GTIDs are in use on the target MySQL instance (gtid_mode=ON), setting this option to true prevents new GTIDs from being generated and assigned as the import is being carried out, so that the original GTID set from the source server can be used. The user account must have the required permissions to set the sql_log_bin system variable.
- ignoreVersion: [ true | false ]

  Copy even if the major version number of the source from which the data was copied is non-consecutive to the major version number of the destination, such as 5.6 to 8.1. The default is false, meaning that an error is issued and the copy stops if the major version numbers are non-consecutive. When this option is set to true, a warning is issued and the copy proceeds. Note that the copy will only be successful if the copied schemas have no compatibility issues with the new major version.

  Note: ignoreVersion is not required for copying between consecutive major versions, such as 5.7 to 8.1.

  Before attempting a copy using the ignoreVersion option, use MySQL Shell's upgrade checker utility checkForServerUpgrade() to check the source instance, and fix any compatibility issues identified by the utility before attempting to copy.
- ignoreExistingObjects: [ true | false ]

  Copy even if the copy contains objects that already exist in the target instance. The default is false, meaning that an error is issued and the copy stops when a duplicate object is found. When this option is set to true, duplicate objects are reported but no error is generated and the copy proceeds. This option should be used with caution, because the utility does not check whether the contents of the object on the source and on the target MySQL instance are different, so it is possible for the resulting copy to contain incorrect or invalid data. An alternative strategy is to use the excludeTables option to exclude tables that you have already copied, where you have verified that the object on the source is identical with the object in the target MySQL instance. The safest choice is to remove duplicate objects from the target MySQL instance before restarting the copy.
- handleGrantErrors: [ abort | drop_account | ignore ]

  The action taken in the event of errors related to GRANT or REVOKE statements:

  - abort: (default) stops the copy process and displays an error.

  - drop_account: deletes the account and continues the copy process.

  - ignore: ignores the error and continues the copy process.
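For example, a sketch of an instance copy that drops any account whose GRANT statements fail on the target; the connection string is a placeholder:

JS> util.copyInstance('user@targethost:3306', {handleGrantErrors: "drop_account"})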
- maxBytesPerTransaction: number

  The maximum number of bytes that can be copied from a data chunk in a single LOAD DATA statement. If a data file exceeds the maxBytesPerTransaction value, multiple LOAD DATA statements load data from the file in chunks less than or equal to the maxBytesPerTransaction value.

  The unit suffixes k (kilobytes), M (megabytes), and G (gigabytes) can be used. The minimum value is 4096 bytes. If a lesser value is specified, an exception is thrown. If the maxBytesPerTransaction option is unset, the bytesPerChunk value is used instead.

  If a data file contains a row that is larger than the maxBytesPerTransaction setting, the row's data is requested in a single LOAD DATA statement. A warning is emitted for the first row encountered that exceeds the maxBytesPerTransaction setting.

  An intended use for this option is to load data in smaller chunks when a data file is too large for the target server's limits, such as the limits defined by the server's group_replication_transaction_size_limit or max_binlog_cache_size settings. For example, if you receive the error "MySQL Error 1197 (HY000): Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage" when loading data, set maxBytesPerTransaction to a value less than or equal to the server instance's max_binlog_cache_size setting.
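For example, a sketch that caps each load transaction at 4 megabytes, which can help when the target's max_binlog_cache_size or group_replication_transaction_size_limit is small; the schema, table, and connection values are placeholders:

JS> util.copyTables('world', ['city'], 'user@targethost:3306', {maxBytesPerTransaction: "4M"})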
- sessionInitSql: list of strings

  A list of SQL statements to run at the start of each client session used for copying data into the target MySQL instance. You can use this option to change session variables. For example, the following statements skip binary logging on the target MySQL instance for the sessions used by the utility during the course of the import, and increase the number of threads available for index creation:

  sessionInitSql: ["SET SESSION sql_log_bin=0;", "SET SESSION innodb_ddl_threads=8;"]

  If an error occurs while running the SQL statements, the copy stops and returns an error message.
- tzUtc: [ true | false ]

  Include a statement at the start of the copy to set the time zone to UTC. All timestamp data in the output is converted to this time zone. The default is true. Setting the time zone to UTC facilitates moving data between servers with different time zones, or handling a set of data that has multiple time zones. Set this option to false to keep the original timestamps if preferred.
- chunking: [ true | false ]

  Enable (true) or disable (false) chunking for table data, which splits the data for each table into multiple files. The default is true. Use bytesPerChunk to specify the chunk size. If you set the chunking option to false, chunking does not take place and the utility creates one data file for each table.

  If a table has no primary key or unique index, chunking is done based on the number of rows in the table, the average row length, and the bytesPerChunk value.
- bytesPerChunk: "string"

  Sets the approximate number of bytes to be written to each data file when chunking is enabled. The unit suffixes k (kilobytes), M (megabytes), and G (gigabytes) can be used. The default is 64 MB (64M), and the minimum is 128 KB (128k). Specifying this option sets chunking to true implicitly.
- loadIndexes: [ true | false ]

  Create (true) or do not create (false) secondary indexes for tables. The default is true. When this option is set to false, secondary indexes are not created during the import, and you must create them afterwards. This can be useful if you are loading the DDL files and data files separately, and if you want to make changes to the table structure after loading the DDL files. Afterwards, you can create the secondary indexes by running the dump loading utility again with loadIndexes set to true and deferTableIndexes set to all.

  MySQL Shell utilizes MySQL Server's parallel index creation; all indexes in a table are added simultaneously. See Configuring Parallel Threads for Online DDL Operations for restrictions and configuration.
- deferTableIndexes: [ off | fulltext | all ]

  Defer the creation of secondary indexes until after the table data is loaded. This can reduce loading times. off means all indexes are created during the table load. The default setting fulltext defers full-text indexes only. all defers all secondary indexes and creates only primary indexes during the table load; indexes defined on columns containing auto-increment values are also created during the load.
- analyzeTables: [ off | on | histogram ]

  Execute ANALYZE TABLE for tables when they have been loaded. on analyzes all tables, and histogram analyzes only tables that have histogram information stored in the dump. The default is off. You can run the utility with this option to analyze the tables even if the data has already been loaded.
- updateGtidSet: [ off | append | replace ]

  Apply the gtid_executed GTID set from the source MySQL instance, as recorded in the dump metadata, to the gtid_purged GTID set on the target MySQL instance. The gtid_purged GTID set holds the GTIDs of all transactions that have been applied on the server, but do not exist on any binary log file on the server. The default is off, meaning that the GTID set is not applied.

  Do not use this option when Group Replication is running on the target MySQL instance.

  For MySQL instances that are not HeatWave Service DB System instances, when you set append or replace to update the GTID set, also set the skipBinlog option to true. This ensures the GTIDs on the source server match the GTIDs on the target server. For HeatWave Service DB System instances, this option is not used.

  For a target MySQL instance from MySQL 8.0, you can set the option to append, which appends the gtid_executed GTID set from the source MySQL instance to the gtid_purged GTID set on the target MySQL instance. The gtid_executed GTID set to be applied, which is shown in the gtidExecuted field in the @.json dump file, must not intersect with the gtid_executed set already on the target MySQL instance. For example, you can use this option when importing a schema from a different source MySQL instance to a target MySQL instance that already has schemas from other source servers.

  You can also use replace for a target MySQL instance from MySQL 8.0, to replace the gtid_purged GTID set on the target MySQL instance with the gtid_executed GTID set from the source MySQL instance. To do this, the gtid_executed GTID set from the source MySQL instance must be a superset of the gtid_purged GTID set on the target MySQL instance, and must not intersect with the set of transactions in the target's gtid_executed GTID set that are not in its gtid_purged GTID set.

  For a target MySQL instance at MySQL 5.7, set the option to replace, which replaces the gtid_purged GTID set on the target MySQL instance with the gtid_executed GTID set from the source MySQL instance. In MySQL 5.7, to do this the gtid_executed and gtid_purged GTID sets on the target MySQL instance must be empty, so the instance must be unused, with no previously imported GTID sets.

  To apply the GTID set, after the import, use MySQL Shell's \sql command (or enter SQL mode) to issue the following statement on the connected MySQL instance, copying the gtid_executed GTID set from the gtidExecuted field in the @.json dump file in the dump metadata:

  shell-js> \sql SET @@GLOBAL.gtid_purged= "+gtidExecuted_set";

  This statement, which works from MySQL 8.0, adds the source MySQL Server instance's gtid_executed GTID set to the target MySQL instance's gtid_purged GTID set. For MySQL 5.7, the plus sign (+) must be omitted, and the gtid_executed and gtid_purged GTID sets on the target MySQL instance must be empty. For more details, see the description of the gtid_purged system variable in the release of the target MySQL instance.
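For example, a sketch of an instance copy to a MySQL 8.0 replica that appends the source's GTID set and, as recommended above, skips binary logging; the connection string is a placeholder:

JS> util.copyInstance('user@replicahost:3306', {updateGtidSet: "append", skipBinlog: true})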
- compatibility: array of strings

  Apply the specified requirements for compatibility with HeatWave Service for all tables in the copy, altering the copy as necessary.

  The following modifications can be specified as an array of strings:

  - force_innodb

    Change CREATE TABLE statements to use the InnoDB storage engine for any tables that do not already use it.

  - skip_invalid_accounts

    Remove user accounts created with external authentication plugins that are not supported in HeatWave Service. This option also removes user accounts that do not have passwords set, except where an account with no password is identified as a role, in which case it is copied using the CREATE ROLE statement.

  - strip_definers

    Remove the DEFINER clause from views, routines, events, and triggers, so these objects are created with the default definer (the user invoking the schema), and change the SQL SECURITY clause for views and routines to specify INVOKER instead of DEFINER. HeatWave Service requires special privileges to create these objects with a definer other than the user loading the schema. If your security model requires that views and routines have more privileges than the account querying or calling them, you must manually modify the schema before copying it.

  - strip_restricted_grants

    Remove specific privileges that are restricted by HeatWave Service from GRANT statements, so users and their roles cannot be given these privileges (which would cause user creation to fail). This option also removes REVOKE statements for system schemas (mysql and sys) if the administrative user account on an Oracle Cloud Infrastructure Compute instance does not itself have the relevant privileges, so cannot remove them.

  - strip_tablespaces

    Remove the TABLESPACE clause from CREATE TABLE statements, so all tables are created in their default tablespaces. HeatWave Service has some restrictions on tablespaces.

  - ignore_missing_pks

    Make the instance, schema, or table copy utility ignore any missing primary keys when the copy is carried out. Copies created with this modification cannot be loaded into a HeatWave Service High Availability instance, because primary keys are required for HeatWave Service High Availability, which uses Group Replication. To add missing primary keys automatically, use the create_invisible_pks modification, or consider creating primary keys in the tables on the source server.

  - ignore_wildcard_grants

    If enabled, ignores errors from grants on schemas with wildcards, which are interpreted differently in systems where the partial_revokes system variable is enabled.

  - strip_invalid_grants

    If enabled, strips grant statements which would fail when users are copied, such as grants referring to a specific routine which does not exist.

  - create_invisible_pks

    Adds primary keys in invisible columns for each table that does not contain a primary key. This modification enables a copy where some tables lack primary keys to be loaded into a HeatWave Service High Availability instance. Primary keys are required for HeatWave Service High Availability, which uses Group Replication.

    The data is unchanged by this modification, as the tables do not contain the invisible columns until they have been processed by the copy utility. The invisible columns (which are named my_row_id) have no impact on applications that use the uploaded tables.
- where: {"schemaName.tableName": "string"}

  A key-value pair comprising a valid table identifier, of the form schemaName.tableName, and a valid SQL condition expression used to filter the data being copied.

  Note: The SQL condition is validated only when it is executed. If you are copying many tables, any SQL-syntax-related issues will only be seen late in the process. As such, it is recommended that you test your SQL condition before using it in a long-running copy process.
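For example, a sketch that copies only the rows of one table that match a condition; the schema, table, condition, and connection values are placeholders:

JS> util.copyTables('sakila', ['actor'], 'user@targethost:3306', {where: {"sakila.actor": "actor_id > 100"}})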
- partitions: {"schemaName.tableName": ["string", "string", ...]}

  A key-value pair comprising a valid table identifier, of the form schemaName.tableName, and a list of valid partitions.

  For example, to copy only the partitions named p1 and p2 from the table schema.table:

  partitions: {'schema.table': ["p1", "p2"]}
- ddlOnly: [ true | false ]

  Setting this option to true includes only the DDL files for the items in the copy, and does not copy the data. The default is false.
- dataOnly: [ true | false ]

  Setting this option to true includes only the data files for the items in the copy, and does not include DDL files. The default is false.
- users: [ true | false ]

  (Instance copy utility only) Include (true) or exclude (false) users and their roles and grants in the copy. The default is true. The schema and table copy utilities do not include users, roles, or grants in a copy.

  You can use the excludeUsers or includeUsers option to specify individual user accounts to be excluded from or included in the copy.

  Note: If copying users from a MySQL 5.6 instance, the user performing the copy must have the SUPER privilege.
- excludeUsers: array of strings

  (Instance copy utility only) Exclude the named user accounts from the copy. Use this option to exclude user accounts that are not accepted for import to a HeatWave Service DB System, or that already exist or are not wanted on the target MySQL instance. Specify each user account string in the format "'user_name'@'host_name'" for an account that is defined with a user name and host name, or "'user_name'" for an account that is defined with a user name only. If you do not supply a host name, all accounts with that user name are excluded.
'"-
includeUsers:
array of strings
(Instance copy utility only) Include only the named user accounts in the copy. Specify each user account string as for the
excludeUsers
option. Use as an alternative toexcludeUsers
if only a few user accounts are required in the copy. You can also specify both options to include some accounts and exclude others.-
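For example, a sketch of an instance copy that includes users but skips two placeholder accounts, one defined with a host name and one without:

JS> util.copyInstance('user@targethost:3306', {users: true, excludeUsers: ["'appuser'@'localhost'", "'reporting'"]})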
- excludeSchemas: array of strings

  (Instance copy utility only) Exclude the named schemas from the copy. Note that the information_schema, mysql, ndbinfo, performance_schema, and sys schemas are always excluded from an instance copy.
- includeSchemas: array of strings

  (Instance copy utility only) Include only the named schemas in the copy. You cannot include the information_schema, mysql, ndbinfo, performance_schema, or sys schemas by naming them on this option. If you want to copy one or more of these schemas, you can do this using the schema copy utility util.copySchemas().
- excludeTables: array of strings

  (Instance and schema copy utilities only) Exclude the named tables (DDL and data) from the copy. Table names must be qualified with a valid schema name, and quoted with the backtick character if needed. Note that the data for the mysql.apply_status, mysql.general_log, mysql.schema, and mysql.slow_log tables is always excluded from a schema copy, although their DDL statements are included, and you cannot include that data by naming the table in another option or utility.
- includeTables: array of strings

  (Instance and schema copy utilities only) Include only the named tables in the copy. Table names must be qualified with a valid schema name, and quoted with the backtick character if needed.
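For example, a sketch of a schema copy restricted to two placeholder tables, with the names qualified by the schema name as required:

JS> util.copySchemas(['sakila'], 'user@targethost:3306', {includeTables: ["sakila.actor", "sakila.film"]})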
- events: [ true | false ]

  (Instance and schema copy utilities only) Include (true) or exclude (false) events for each schema in the copy. The default is true.
- excludeEvents: array of strings

  (Instance and schema copy utilities only) Exclude the named events from the copy. Names of events must be qualified with a valid schema name, and quoted with the backtick character if needed.
- includeEvents: array of strings

  (Instance and schema copy utilities only) Include only the named events in the copy. Event names must be qualified with a valid schema name, and quoted with the backtick character if needed.
- routines: [ true | false ]

  (Instance and schema copy utilities only) Include (true) or exclude (false) functions and stored procedures for each schema in the copy. The default is true. Note that user-defined functions are not included, even when routines is set to true.
- excludeRoutines: array of strings

  (Instance and schema copy utilities only) Exclude the named functions and stored procedures from the copy. Names of routines must be qualified with a valid schema name, and quoted with the backtick character if needed.
- includeRoutines: array of strings

  (Instance and schema copy utilities only) Include only the named functions and stored procedures in the copy. Names of routines must be qualified with a valid schema name, and quoted with the backtick character if needed.
- all: [ true | false ]

  (Table copy utility only) Setting this option to true includes all views and tables from the specified schema in the copy. The default is false. When you use this option, set the tablesList parameter to an empty array.
- triggers: [ true | false ]

  (All copy utilities) Include (true) or exclude (false) triggers for each table in the copy. The default is true.
- excludeTriggers: array of strings

  (All copy utilities) Exclude the named triggers from the copy. Names of triggers must be qualified with a valid schema name and table name (schema.table.trigger), and quoted with the backtick character if needed. You can exclude all triggers for a specific table by specifying a schema name and table name with this option (schema.table).
- includeTriggers: array of strings

  (All copy utilities) Include only the named triggers in the copy. Names of triggers must be qualified with a valid schema name and table name (schema.table.trigger), and quoted with the backtick character if needed. You can include all triggers for a specific table by specifying a schema name and table name with this option (schema.table).
The following examples show how to use the copy utilities:
- Copying a local instance to a HeatWave Service High Availability DB System:

  JS> util.copyInstance('mysql://User001@DBSystemIPAddress',{threads: 6, deferTableIndexes: "all", compatibility: ["strip_restricted_grants", "strip_definers", "create_invisible_pks"]})

  This example copies an instance to a DB System, with the user User001 and a series of compatibility options which make the instance compatible with a DB System. create_invisible_pks is included because a High Availability DB System uses Group Replication, which requires that each table have a primary key. This option adds an invisible primary key to each table.
- Copying a schema to the target instance and renaming the schema:

  util.copySchemas(['sakila'], 'user@localhost:4101',{schema: "mySakilaSchema"})

  This example copies the contents of the sakila schema from the source to a schema named mySakilaSchema on the destination, localhost:4101.
- Copying a table from a schema to another schema on the destination:

  util.copyTables('sakila', ['actor'], 'root@localhost:4101',{schema: "mySakilaSchema"})

  This example copies the actor table from the sakila schema to the mySakilaSchema schema on the destination, localhost:4101.