MySQL Shell's dump loading utility
util.loadDump() supports the import into a
MySQL HeatWave Service DB System or a MySQL Server instance of schemas or tables
dumped using MySQL Shell's instance dump utility, schema dump utility, or table
dump utility (see Section 12.5, “Instance Dump Utility, Schema Dump Utility, and Table Dump Utility”). The
dump loading utility provides data streaming from remote storage,
parallel loading of tables or table chunks, progress state
tracking, resume and reset capability, and the option of
concurrent loading while the dump is still taking place. To get
the best functionality, always use the most recent version
available of MySQL Shell's dump and dump loading utilities.
For import into a MySQL HeatWave Service DB System, MySQL Shell must be
installed on an Oracle Cloud Infrastructure Compute instance that has access to the
MySQL HeatWave Service DB System. If the dump files are in an Oracle Cloud Infrastructure Object
Storage bucket, you can access the Object Storage bucket from
the Compute instance. If the dump files are on your local
system, you need to transfer them to the Oracle Cloud Infrastructure Compute instance
using the copy utility of your choice, depending on the
operating system you chose for your Compute instance. Ensure the
dump was created with the ocimds option set
to true in MySQL Shell's instance dump
utility or schema dump utility, for compatibility with MySQL HeatWave Service.
MySQL Shell's table dump utility does not use this option.
For output produced by the instance dump utility or schema dump
utility, MySQL Shell's dump loading utility uses the DDL files
and tab-separated .tsv data files to set up
the server instance or schema in the target MySQL instance, then
loads the data. Dumps containing only the DDL files or only the
data files can be used to perform these tasks separately. The
dump loading utility also lets you separately apply the DDL
files and data files from a regular dump that contains both
sorts of files.
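For example, the DDL files can be applied first and the data loaded in a second run, using the loadData and loadDdl options (the path here is illustrative):
shell-js> util.loadDump("/mnt/data/worlddump", {loadData: false})
shell-js> util.loadDump("/mnt/data/worlddump", {loadDdl: false})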
You can use options for the utility to include or exclude specified schemas and tables, users and their roles and grants, events, routines, and triggers from the import. Note that users and their roles and grants are excluded from the load by default. If you specify conflicting include and exclude options, or name an object that is not in the dump files, an error is reported and the load stops so that you can correct the options.
For output produced by MySQL Shell's table dump utility, the
dump contains the information required to set up the schema that
originally contained the table. By default,
the schema is recreated in the target MySQL instance if it does
not already exist. Alternatively, you can specify the
schema option in the dump loading utility to
load the table into an alternative schema in the target MySQL
instance, which must exist there.
You can carry out a dry run with your chosen set of dump loading options to show what actions would be performed when you run the utility for real with those options.
The waitDumpTimeout option lets you apply a
dump that is still in the process of being created. Tables are
loaded as they become available, and the utility waits for the
specified number of seconds after new data stops arriving in the
dump location. When the timeout elapses, the utility assumes the
dump is complete and stops importing.
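For example, in MySQL Shell's JavaScript mode, the following call (with an illustrative path) loads a dump that is still being created, waiting up to 300 seconds for further data after the last available chunk has been processed:
shell-js> util.loadDump("/mnt/data/worlddump", {waitDumpTimeout: 300})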
Progress state for an import is stored in a persistent progress
state file, which records steps successfully completed and steps
that were interrupted or failed. By default, the progress state
file is named load-progress-server_uuid.json
(where server_uuid identifies the target MySQL instance)
and created in the dump directory, but you can choose a
different name and location. The dump loading utility references
the progress state file when you resume or retry the import for
a dump, and skips completed steps. Deduplication is
automatically managed for tables that were partially loaded. If
you interrupt a dump in progress by using Ctrl +
C, on the first use of that key combination, no new
tasks are started by the utility but existing tasks continue.
Pressing Ctrl + C again stops existing tasks,
resulting in error messages. In either case, the utility can
still resume the import from where it stopped.
You can choose to reset the progress state and start the import
for a dump again from the beginning, but in this case the
utility does not skip objects that were already created and does
not manage deduplication. If you do this, to ensure a correct
import, you must manually remove from the target MySQL instance
all previously loaded objects from that dump, including schemas,
tables, users, views, triggers, routines, and events. Otherwise,
the import stops with an error if an object in the dump files
already exists in the target MySQL instance. With appropriate
caution, you may use the
ignoreExistingObjects option to make the
utility report duplicate objects but skip them and continue with
the import. Note that the utility does not check whether the
contents of the object in the target MySQL instance and in the
dump files are different, so it is possible for the resulting
import to contain incorrect or invalid data.
Do not change the data in the dump files between a dump
stopping and a dump resuming. Resuming a dump after changing
the data has undefined behavior and can lead to data
inconsistency and data loss. If you need to change the data
after partially loading a dump, manually drop all objects that
were created during the partial import (as listed in the
progress state file), then run the dump loading utility with
the resetProgress option to start again
from the beginning.
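For example, after manually dropping the objects listed in the progress state file, the import can be restarted from the beginning (the path is illustrative):
shell-js> util.loadDump("/mnt/data/worlddump", {resetProgress: true})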
If you need to modify any data in the dump’s data files before
importing it to the target MySQL instance, you can do this by
combining MySQL Shell’s parallel table import utility
util.importTable() with the dump loading
utility. To do this, first use the dump loading utility to load
only the DDL for the selected table, to create the table on the
target server. Then use the parallel table import utility to
capture and transform data from the output files for the table,
and import it to the target table. Repeat that process as
necessary for any other tables where you want to modify the
data. Finally, use the dump loading utility to load the DDL and
data for any remaining tables that you do not want to modify,
excluding the tables that you did modify. For a description of
the procedure, see
Modifying Dumped Data.
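The procedure can be sketched as follows in MySQL Shell's JavaScript mode. The paths, table name, and transformation shown here are illustrative only, and the final run may need its own progress state handling, depending on your settings:
shell-js> util.loadDump("/mnt/data/worlddump", {includeTables: ["world.city"], loadData: false})
shell-js> util.importTable("/mnt/data/worlddump/world@city@@0.tsv.zst", {
            schema: "world", table: "city",
            columns: ["ID", "Name", "CountryCode", "District", "@1"],
            decodeColumns: {"Population": "@1 * 2"}})
shell-js> util.loadDump("/mnt/data/worlddump", {excludeTables: ["world.city"]})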
The tables in a dump are loaded in parallel by the number of
threads you specify using the threads option,
which defaults to 4. If table data was chunked when the dump was
created, multiple threads can be used for a table, otherwise
each thread loads one table at a time. The dump loading utility
schedules data imports across threads to maximize parallelism. A
pool of background threads is used to fetch the contents of
files. If the dump files were compressed by MySQL Shell's dump
utilities, the dump loading utility handles decompression for
them.
By default, fulltext indexes for a table are created only after the table is completely loaded, which speeds up the import. You can choose to defer all index creation (except the primary index) until each table is completely loaded. You can also opt to create all indexes during the table import. You can also choose to disable index creation during the import, and create the indexes afterwards, for example if you want to make changes to the table structure after loading.
For an additional improvement to data loading performance, you
can disable the InnoDB redo log on
the target MySQL instance during the import. Note that this
should only be done on a new MySQL Server instance (not a
production system), and this feature is not available on MySQL
DB System. For more information, see
Disabling Redo Logging.
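For example, on a self-managed target running MySQL Server 8.0.21 or later (never a production system), you can disable redo logging around the import from MySQL Shell; the statements require the INNODB_REDO_LOG_ENABLE privilege, and the path is illustrative:
shell-js> \sql ALTER INSTANCE DISABLE INNODB REDO_LOG;
shell-js> util.loadDump("/mnt/data/worlddump")
shell-js> \sql ALTER INSTANCE ENABLE INNODB REDO_LOG;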
MySQL 5.7 or later is required for the destination MySQL instance where the dump is loaded.
MySQL Shell's dump loading utility in versions of MySQL Shell earlier than 8.0.27 cannot load dumps that are created using the dump utilities in MySQL Shell 8.0.27 or later. This is because from MySQL Shell 8.0.27, information about the features used in creating the dump is included in the dump metadata. This feature list is not backward compatible, but it supports backward compatibility when new features are added in future releases. To get the best functionality, always use the most recent version available of MySQL Shell's dump and dump loading utilities.
The dump loading utility uses the LOAD DATA LOCAL INFILE statement, so the global setting of the local_infile system variable on the target MySQL instance must be ON for the duration of the import. By default, this system variable is set to ON in a standard MySQL HeatWave Service DB System configuration.
The LOAD DATA LOCAL INFILE statement uses nonrestrictive data interpretation, which turns errors into warnings and continues with the load operation. This process can include assigning default values and implicit default values to fields, and converting invalid values to the closest valid value for the column data type. For details of the statement's behavior, see LOAD DATA.
On the target MySQL instance, the dump loading utility checks whether the sql_require_primary_key system variable is set to ON, and if it is, returns an error if there is a table in the dump files with no primary key. By default, this system variable is set to OFF in a standard MySQL HeatWave Service DB System configuration.
The dump loading utility does not automatically apply the gtid_executed GTID set from the source MySQL instance on the target MySQL instance. The GTID set is included in the dump metadata from MySQL Shell's instance dump utility, schema dump utility, or table dump utility, as the gtidExecuted field in the @.json dump file. To apply these GTIDs on the target MySQL instance for use with replication, use the updateGtidSet option or import them manually, depending on the release of the target MySQL instance and the MySQL Shell release. This is also supported on MySQL HeatWave Service DB System instances. See the description of the updateGtidSet option for details.
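To satisfy the local_infile requirement on a self-managed target instance where the variable is disabled, a user with the SYSTEM_VARIABLES_ADMIN (or SUPER) privilege can enable it before the import:
shell-js> \sql SET GLOBAL local_infile = ON;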
MySQL Shell supports loading dump files from an Object Storage bucket using a pre-authenticated request (PAR). PARs provide a way to let users access a bucket or an object without having their own credentials.
Before using this access method, assess the business requirement for and the security ramifications of pre-authenticated access to a bucket or objects in a bucket. A PAR gives anyone who has the PAR access to the targets identified in the request. Carefully manage the distribution of PARs.
MySQL Shell supports using a read access PAR (an Object Read
PAR) for all objects in a bucket or objects in a bucket with a
specific prefix. For information about creating bucket PARs and
prefix PARs, see
Using
Pre-Authenticated Requests. When using a bucket PAR or
prefix PAR, the dump loading utility requires a local progress
state file. The content of the file is in JSON format, so a text
file with a .json extension is appropriate
(for example, progress.json). The following
example shows the syntax for loading dump files using a PAR
created for all objects in a bucket:
shell-js> util.loadDump("BucketPARURL", {progressFile: "progress.json"})
The same syntax is used to load objects in a bucket with a specific prefix, but in this case, the PAR URL includes the prefix:
shell-js> util.loadDump("PrefixPARURL", {progressFile: "progress.json"})
The dump loading utility uses the MySQL Shell global session to obtain the connection details of the target MySQL instance to which the dump is to be imported. You must open the global session (which can have an X Protocol connection or a classic MySQL protocol connection) before running the utility. The utility opens its own sessions for each thread, copying options such as connection compression and SSL options from the global session, and does not make any further use of the global session.
In the MySQL Shell API, the dump loading utility is a function
of the util global object, and has the
following signature:
util.loadDump(url[, options])
options is a dictionary of options that can
be omitted if it is empty. The options are listed in the
remaining sections in this topic.
If you are importing a dump that is located in the Oracle Cloud
Infrastructure Compute instance's filesystem where you are
running the utility, url is a string
specifying the path to a local directory containing the dump
files. You can prefix a local directory path with the
file:// scheme. In this example in
MySQL Shell's JavaScript mode, a dry run is carried out to
check that there will be no issues when the dump files are
loaded from a local directory into the connected MySQL instance:
shell-js> util.loadDump("/mnt/data/worlddump", {dryRun: true})
If you are importing a dump from an Oracle Cloud Infrastructure Object Storage bucket,
url is the path prefix that the dump files
have in the bucket, which was assigned using the
outputUrl parameter when the dump was
created. Use the osBucketName option to
provide the name of the Object Storage bucket, and the
osNamespace option to identify the namespace
for the bucket. In this example in MySQL Shell's JavaScript
mode, the dump prefixed worlddump is loaded
from an Object Storage bucket into the connected MySQL HeatWave Service DB System
using 8 threads:
shell-js> util.loadDump("worlddump", {
threads: 8, osBucketName: "hanna-bucket", osNamespace: "idx28w1ckztq"})
The namespace for an Object Storage bucket is displayed in the
Bucket Information tab of the bucket
details page in the Oracle Cloud Infrastructure console, or can be obtained using the
Oracle Cloud Infrastructure command line interface. A connection is established to the
Object Storage bucket using the default profile in the default
Oracle Cloud Infrastructure CLI configuration file, or
alternative details that you specify using the
ociConfigFile and
ociProfile options. For instructions to set
up a CLI configuration file, see
SDK
and CLI Configuration File.
-
dryRun: [ true | false ] -
Display information about what actions would be performed given the specified options and dump files, including any errors that would be returned based on the dump contents, but do not proceed with the import. The default is false.
-
waitDumpTimeout: number -
Setting this option to a value greater than 0 activates concurrent loading of the dump while it is still being produced. The value is a timeout (in seconds) for which the utility waits for further data after all uploaded data chunks in the dump location have been processed. This allows the utility to import the dump while it is still in the process of being created. Data is processed as it becomes available, and the import stops when the timeout is exceeded with no further data appearing in the dump location. The default setting, 0, means that the utility marks the dump as complete when all uploaded data chunks have been processed and does not wait for more data. With the default setting, concurrent loading is disabled.
-
schema: "string" -
The target schema into which a dump produced by MySQL Shell's dump utilities must be loaded.
If the schema does not exist, it is created, and the dump is loaded to that new schema. If the new schema name differs from the schema name in the dump, the dump is loaded to the new schema, but no changes are made to the loaded data. That is, any reference to the old schema name remains in the data. All stored procedures, views, and so on, refer to the original schema, not the new one.
This load option is supported for single-schema dumps, or for dumps filtered with options that result in a single schema.
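For example, to load a single-schema dump into a schema named world_copy (the path and names are illustrative):
shell-js> util.loadDump("/mnt/data/worlddump", {schema: "world_copy"})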
-
threads: number -
The number of parallel threads to use to upload chunks of data to the target MySQL instance. Each thread has its own connection to the MySQL instance. The default is 4. If the dump was created with chunking enabled (which is the default), the utility can use multiple threads to load data for a table; otherwise a thread is only used for one table.
-
backgroundThreads: number -
The number of threads in the pool of background threads used to fetch the contents of files. The default is the value of the threads option for a dump loaded from the local server, or four times the value of the threads option for a dump loaded from a non-local server.
-
progressFile: "string" -
Specifies the path to a local progress state file for tracking load progress. Other values are permitted depending on the type of load operation:
When loading a dump from local storage:
The progressFile option may be omitted. In this case, a progress state file named load-progress-server_uuid.json is automatically created in the dump directory.
The progressFile option can be set to an empty string to disable progress state tracking, which means that the dump loading utility cannot resume a partially completed import.
When loading a dump from OCI Object Storage using a pre-authenticated request (PAR), the progressFile option is mandatory.
If the load operation is performed using a bucket or prefix PAR, set the progressFile option to the path of a local progress state file.
If the load operation is performed using a manifest file PAR, set the progressFile option to the path of a local progress state file or specify a write PAR for a progress state file residing in the same location as the manifest file.
If a local progress state file or a valid write PAR is specified but the progress state file does not exist, the file will be created.
-
showProgress: [ true | false ] -
Display (true) or hide (false) progress information for the import. The default is true if stdout is a terminal (tty), such as when MySQL Shell is in interactive mode, and false otherwise. The progress information includes the number of active threads and their actions, the amount of data loaded so far, the percentage complete and the rate of throughput. When the progress information is not displayed, progress state is still recorded in the dump loading utility's progress state file.
-
resetProgress: [ true | false ] -
Setting this option to true resets the progress state and starts the import again from the beginning. The default is false. Note that with this option, the dump loading utility does not skip objects that were already created and does not manage deduplication. If you want to use this option, to ensure a correct import, you must first manually remove from the target MySQL instance all previously loaded objects from that dump, including schemas, tables, users, views, triggers, routines, and events. Otherwise, the import stops with an error if an object in the dump files already exists in the target MySQL instance. With appropriate caution, you may use the ignoreExistingObjects option to make the utility report duplicate objects but skip them and continue with the import.
-
skipBinlog: [ true | false ] -
Skips binary logging on the target MySQL instance for the sessions used by the utility during the course of the import, by issuing a SET sql_log_bin=0 statement. The default is false, so binary logging is active by default. For MySQL HeatWave Service DB System, this option is not used, and the import stops with an error if you attempt to set it to true. For other MySQL instances, always set skipBinlog to true if you are applying the gtid_executed GTID set from the source MySQL instance on the target MySQL instance, either using the updateGtidSet option or manually. When GTIDs are in use on the target MySQL instance (gtid_mode=ON), setting this option to true prevents new GTIDs from being generated and assigned as the import is being carried out, so that the original GTID set from the source server can be used. The user account must have the required permissions to set the sql_log_bin system variable.
-
ignoreVersion: [ true | false ] -
Import the dump even if the major version number of the MySQL instance from which the data was dumped is non-consecutive to the major version number of the MySQL instance to which the data will be uploaded. The default is false, meaning that an error is issued and the import does not proceed if the major version number is non-consecutive. When this option is set to true, a warning is issued and the import proceeds. Note that the import will only be successful if the schemas in the dump files have no compatibility issues with the new major version.
Note: ignoreVersion is not required for copying between consecutive major versions, such as 8.0 to 8.1.
Before attempting an import using the ignoreVersion option, use MySQL Shell's upgrade checker utility checkForServerUpgrade() to check the schemas on the source MySQL instance. Fix any compatibility issues identified by the utility before dumping the schemas and importing them to the target MySQL instance.
-
dropExistingObjects: [ true | false ] -
The default is false. Import the dump even if it contains user accounts or DDL objects that already exist in the target database. If this option is set to false, any existing object results in an error. Setting it to true drops existing user accounts and objects before creating them, and the import proceeds.
Note: Schemas are not dropped.
Note: It is not possible to enable dropExistingObjects if ignoreExistingObjects is enabled, or if loadDdl is disabled.
-
ignoreExistingObjects: [ true | false ] -
Import the dump even if it contains objects that already exist in the target schema in the MySQL instance. The default is false, meaning that an error is issued and the import stops when a duplicate object is found, unless the import is being resumed from a previous attempt using a progress state file, in which case the check is skipped. When this option is set to true, duplicate objects are reported but no error is generated and the import proceeds. This option should be used with caution, because the utility does not check whether the contents of the object in the target MySQL instance and in the dump files are different, so it is possible for the resulting import to contain incorrect or invalid data. An alternative strategy is to use the excludeTables option to exclude tables that you have already loaded where you have verified the object in the dump files is identical with the imported object in the target MySQL instance. The safest choice is to remove duplicate objects from the target MySQL instance before restarting the dump.
This option also permits the import of a dump created without the use of the ocimds option into a MySQL HeatWave Service instance.
Note: It is not possible to enable ignoreExistingObjects if dropExistingObjects is enabled.
-
handleGrantErrors: abort|drop_account|ignore -
The action taken in the event of GRANT or REVOKE errors:
abort: (default) stops the load process and displays an error.
drop_account: deletes the account and continues the load process.
ignore: ignores the error and continues the load process.
-
characterSet: "string" -
The character set to be used for the import to the target MySQL instance, for example in the CHARACTER SET option of the LOAD DATA statement. The default is the character set given in the dump metadata that was used when the dump was created by MySQL Shell's instance dump utility, schema dump utility, or table dump utility, which default to using utf8mb4. The character set must be permitted by the character_set_client system variable and supported by the MySQL instance.
-
maxBytesPerTransaction: number -
The maximum number of bytes that can be loaded from a data file in a single LOAD DATA statement. If a data file exceeds the maxBytesPerTransaction value, multiple LOAD DATA statements load data from the file in chunks less than or equal to the maxBytesPerTransaction value.
The unit suffixes k for kilobytes, M for megabytes, and G for gigabytes can be used. The minimum value is 4096 bytes. If a lesser value is specified, the 4096 byte minimum is used implicitly. If the maxBytesPerTransaction option is unset, the bytesPerChunk value used to dump the data is used as the default setting for files larger than 1.5 times the bytesPerChunk value. If the maxBytesPerTransaction option is unset and the data file is less than 1.5 times the bytesPerChunk value, the data is requested in a single LOAD DATA statement.
If a data file contains a row that is larger than the maxBytesPerTransaction setting, the row's data is requested in a single LOAD DATA statement. A warning is emitted for the first row encountered that exceeds the maxBytesPerTransaction setting.
If a load operation with a configured maxBytesPerTransaction setting is interrupted and resumes execution, chunks that were already loaded are skipped. The resumed load operation uses the current maxBytesPerTransaction setting. The setting used before the operation was interrupted is not saved to the progress state file.
An intended use for this option is to load data in smaller chunks when a data file is too large for the target server's limits, such as the limits defined by the server's group_replication_transaction_size_limit or max_binlog_cache_size settings. For example, if you receive the error "MySQL Error 1197 (HY000): Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage" when loading data, set maxBytesPerTransaction to a value less than or equal to the server instance's max_binlog_cache_size setting.
-
sessionInitSql: list of strings -
A list of SQL statements to run at the start of each client session used for loading data into the target MySQL instance. You can use this option to change session variables. For example, the following statements skip binary logging on the target MySQL instance for the sessions used by the utility during the course of the import, and increase the number of threads available for index creation:
sessionInitSql: ["SET SESSION sql_log_bin=0;", "SET SESSION innodb_ddl_threads=8;"]
If an error occurs while running the SQL statements, the import stops and returns an error message.
-
loadIndexes: [ true | false ] -
Create (true) or do not create (false) secondary indexes for tables. The default is true. When this option is set to false, secondary indexes are not created during the import, and you must create them afterwards. This can be useful if you are loading the DDL files and data files separately, and if you want to make changes to the table structure after loading the DDL files. Afterwards, you can create the secondary indexes by running the dump loading utility again with loadIndexes set to true and deferTableIndexes set to all.
MySQL Shell utilizes MySQL Server's parallel index creation: all indexes in a table are added simultaneously, rather than sequentially one at a time. See Configuring Parallel Threads for Online DDL Operations for restrictions and configuration.
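For example, one possible two-run sequence (with an illustrative path) loads the tables without secondary indexes, allowing structure changes in between, and then creates the indexes; with the same progress state file, data that is already loaded is skipped on the second run:
shell-js> util.loadDump("/mnt/data/worlddump", {loadIndexes: false})
shell-js> util.loadDump("/mnt/data/worlddump", {loadIndexes: true, deferTableIndexes: "all"})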
-
deferTableIndexes: [ off | fulltext | all ] -
Defer the creation of secondary indexes until after the table data is loaded. This can reduce loading times. off means all indexes are created during the table load. The default setting fulltext defers full-text indexes only. all defers all secondary indexes and only creates primary indexes during the table load, and also creates indexes defined on columns containing auto-increment values.
-
analyzeTables: [ off | on | histogram ] -
Execute ANALYZE TABLE for tables when they have been loaded. on analyzes all tables, and histogram analyzes only tables that have histogram information stored in the dump. The default is off. You can run the dump loading utility with this option to analyze the tables even if the data has already been loaded.
-
checksum: [ true | false ] -
If enabled, the dump loading utility checks the checksum data generated by the dump utility after the corresponding data is loaded. The verification is limited to data which was dumped, ignoring generated data such as invisible primary keys added by the dump loading utility. Errors are returned if a checksum does not match or if a table is missing and cannot be verified.
If checksum: true but no data was loaded, either due to loadData: false or no data being dumped, the utility verifies the dump's checksum information against the current contents of the affected tables. If a table does not exist, an error is displayed for each missing table.
If checksum: true and dryRun: true, the checksum is not verified. A message is displayed stating that no verification took place.
-
showMetadata: [ true | false ] -
Prints the gtid_executed GTID set and the binary log file name and position from the source instance, taken from the dump metadata included with dumps produced by MySQL Shell's instance dump utility, schema dump utility, or table dump utility. The metadata is printed in YAML format.
The gtid_executed GTID set is always included in the dump as the gtidExecuted field in the @.json dump file. The dump loading utility does not automatically apply the gtid_executed GTID set from the source MySQL instance on the target MySQL instance. To apply these GTIDs on the target MySQL instance for use with replication, use the updateGtidSet option or import them manually, depending on the release of the target MySQL instance. This is also supported on MySQL HeatWave Service DB System instances. See the description of the updateGtidSet option for details.
The binary log file name and position are included provided that the user account used to run the dump utility had the REPLICATION CLIENT privilege. The binary log file name and position can be used to set up replication from a source server that does not have GTIDs enabled and does not use GTID-based replication, to a replica that has GTIDs enabled, using the ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS option of the CHANGE REPLICATION SOURCE TO statement (which is available from MySQL Server 8.0.23).
-
updateGtidSet: [ off | append | replace ] -
Apply the gtid_executed GTID set from the source MySQL instance, as recorded in the dump metadata, to the gtid_purged GTID set on the target MySQL instance. The gtid_purged GTID set holds the GTIDs of all transactions that have been applied on the server, but do not exist on any binary log file on the server. The default is off, meaning that the GTID set is not applied.
Do not use this option for a dump produced by MySQL Shell's table dump utility, only for dumps produced by MySQL Shell's instance dump utility or schema dump utility. Also, do not use this option when Group Replication is running on the target MySQL instance.
For MySQL instances that are not MySQL HeatWave Service DB System instances, when you set append or replace to update the GTID set, also set the skipBinlog option to true. This ensures the GTIDs on the source server match the GTIDs on the target server. For MySQL HeatWave Service DB System instances, this option is not used.
For a target MySQL instance from MySQL 8.0, you can set the option to append, which appends the gtid_executed GTID set from the source MySQL instance to the gtid_purged GTID set on the target MySQL instance. The gtid_executed GTID set to be applied, which is shown in the gtidExecuted field in the @.json dump file, must not intersect with the gtid_executed set already on the target MySQL instance. For example, you can use this option when importing a schema from a different source MySQL instance to a target MySQL instance that already has schemas from other source servers.
You can also use replace for a target MySQL instance from MySQL 8.0, to replace the gtid_purged GTID set on the target MySQL instance with the gtid_executed GTID set from the source MySQL instance. To do this, the gtid_executed GTID set from the source MySQL instance must be a superset of the gtid_purged GTID set on the target MySQL instance, and must not intersect with the set of transactions in the target's gtid_executed GTID set that are not in its gtid_purged GTID set.
For a target MySQL instance at MySQL 5.7, set the option to replace, which replaces the gtid_purged GTID set on the target MySQL instance with the gtid_executed GTID set from the source MySQL instance. In MySQL 5.7, to do this the gtid_executed and gtid_purged GTID sets on the target MySQL instance must be empty, so the instance must be unused with no previously imported GTID sets.
For MySQL HeatWave Service DB System, this method is not supported. To apply the GTID set, after the import, use MySQL Shell's \sql command (or enter SQL mode) to issue the following statement on the connected MySQL instance, copying the gtid_executed GTID set from the gtidExecuted field in the @.json dump file in the dump metadata:
shell-js> \sql SET @@GLOBAL.gtid_purged= "+gtidExecuted_set";
This statement, which works from MySQL 8.0, adds the source MySQL Server instance's gtid_executed GTID set to the target MySQL instance's gtid_purged GTID set. For MySQL 5.7, the plus sign (+) must be omitted, and the gtid_executed and gtid_purged GTID sets on the target MySQL instance must be empty. For more details, see the description of the gtid_purged system variable in the release of the target MySQL instance.
-
createInvisiblePKs: [ true | false ] -
Add primary keys in invisible columns for each table in the dump that does not contain a primary key. The true setting is applied automatically if the dump was created with the create_invisible_pks option by MySQL Shell's instance dump utility util.dumpInstance(), schema dump utility util.dumpSchemas(), or table dump utility util.dumpTables(). The primary keys are only added if the DDL for the dump is loaded (loadDdl: true). The invisible columns (which are named "my_row_id") have no impact on applications that use the uploaded tables.
When createInvisiblePKs is set to true, the target MySQL instance must be MySQL Server 8.0.24 or later, or the load fails. Invisible columns are available from MySQL Server 8.0.23, but a limitation on them in that release prevents the use of this function.
Adding primary keys in this way does not yet enable inbound replication of the modified tables to a MySQL HeatWave Service High Availability DB System, as that feature currently requires the primary keys to exist in both the source server and the replica server. If possible, instead of using this option, consider creating primary keys in the tables on the source server, before dumping them again. From MySQL 8.0.23, you can do this with no impact to applications by using invisible columns to hold the primary keys. This is a best practice for performance and usability, and helps the dumped database to work seamlessly with MySQL HeatWave Service.
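The version and DDL requirements above can be expressed as a small pre-flight check. This sketch is illustrative and not part of the utility's API; the function name and simplified version comparison are assumptions:

```javascript
// Sketch of a pre-flight check mirroring the rules above: createInvisiblePKs
// needs a MySQL 8.0.24+ target, and has no effect unless DDL is loaded.
function checkInvisiblePkOptions(options, targetVersion) {
  const warnings = [];
  if (options.createInvisiblePKs) {
    const [major, minor, patch] = targetVersion.split(".").map(Number);
    const num = major * 10000 + minor * 100 + patch;
    if (num < 80024) {
      warnings.push("createInvisiblePKs requires MySQL Server 8.0.24 or later");
    }
    if (options.loadDdl === false) {
      warnings.push("createInvisiblePKs has no effect when loadDdl is false");
    }
  }
  return warnings;
}
```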
-
osBucketName: "string" The name of the Oracle Cloud Infrastructure Object Storage bucket where the dump files are located. By default, the [DEFAULT] profile in the Oracle Cloud Infrastructure CLI configuration file located at ~/.oci/config is used to establish a connection to the bucket. You can substitute an alternative profile to be used for the connection with the ociConfigFile and ociProfile options. For instructions to set up a CLI configuration file, see SDK and CLI Configuration File.
-
osNamespace: "string" The Oracle Cloud Infrastructure namespace where the Object Storage bucket named by osBucketName is located. The namespace for an Object Storage bucket is displayed in the Bucket Information tab of the bucket details page in the Oracle Cloud Infrastructure console, or can be obtained using the Oracle Cloud Infrastructure command line interface.
-
ociConfigFile: "string" An Oracle Cloud Infrastructure CLI configuration file that contains the profile to use for the connection, instead of the one in the default location ~/.oci/config.
-
ociProfile: "string" The profile name of the Oracle Cloud Infrastructure profile to use for the connection, instead of the [DEFAULT] profile in the Oracle Cloud Infrastructure CLI configuration file used for the connection.
-
ociAuth: "string" -
The authentication method to use when connecting to Oracle Cloud Infrastructure. This option requires that osBucketName is configured with a valid value. The following options are available:
-
api_key: OCI connections use the OCI configuration file. See Section 4.8.1, “Oracle Cloud Infrastructure Object Storage”. If osBucketName is defined with a valid value but ociAuth is not defined, api_key is the default value used.
-
instance_principal: OCI connections use instance principal authentication. See Instance Principal Authentication. This option cannot be used if ociConfigFile or ociProfile are defined.
-
resource_principal: OCI connections use resource principal authentication. See Resource Principal Authentication. This option cannot be used if ociConfigFile or ociProfile are defined.
-
security_token: OCI connections use a temporary, generated session token. See Session Token-Based Authentication.
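The constraints above (ociAuth requires osBucketName, and the principal-based methods cannot be combined with ociConfigFile or ociProfile) can be sketched as a validation helper. This function is illustrative only and not part of MySQL Shell:

```javascript
// Sketch of the ociAuth constraints described in this section.
// Returns a list of error strings; an empty list means the options pass.
function validateOciAuth(options) {
  if (options.ociAuth === undefined) return [];
  const errors = [];
  if (!options.osBucketName) {
    errors.push("ociAuth requires osBucketName to be set");
  }
  if (["instance_principal", "resource_principal"].includes(options.ociAuth) &&
      (options.ociConfigFile || options.ociProfile)) {
    errors.push(options.ociAuth + " cannot be used with ociConfigFile or ociProfile");
  }
  return errors;
}
```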
-
MySQL Shell supports loading dumps stored in S3-compatible buckets, such as Amazon Web Services (AWS) S3.
MySQL Shell supports AWS S3 configuration in command line options, environment variables, and configuration files. Command line options override environment variables, configuration files, and default options.
For information on configuration requirements, see Section 4.8, “Cloud Service Configuration”.
-
s3BucketName: "string" The name of the S3 bucket where the dump files are located. By default, the default profile in the Amazon Web Services (AWS) CLI config and credentials files located at ~/.aws/ are used to establish a connection to the S3 bucket. You can substitute alternative configurations and credentials for the connection with the s3ConfigFile and s3CredentialsFile options. For instructions on installing and configuring the AWS CLI, see Getting started with the AWS CLI.
-
s3CredentialsFile: "string" A credentials file that contains the user's credentials to use for the connection, instead of the one in the default location, ~/.aws/credentials. Typically, the credentials file contains the aws_access_key_id and aws_secret_access_key to use for the connection.
-
s3ConfigFile: "string" An AWS CLI configuration file that contains the profile to use for the connection, instead of the one in the default location ~/.aws/config. Typically, the config file contains the region and output type to use for the connection.
-
s3Profile: "string" The profile name of the s3 CLI profile to use for the connection, instead of the default profile in the AWS CLI configuration file used for the connection.
-
s3Region: "string" The name of the region to use for the connection.
-
s3EndpointOverride: "string" -
The URL of the endpoint to use instead of the default.
When connecting to the Oracle Cloud Infrastructure S3 compatibility API, the endpoint takes the following format: https://namespace.compat.objectstorage.region.oraclecloud.com. Replace namespace with the Object Storage namespace and region with your region identifier. For example, the region identifier for the US East (Ashburn) region is us-ashburn-1. For a namespace named axaxnpcrorw5 in the US East (Ashburn) region:
https://axaxnpcrorw5.compat.objectstorage.us-ashburn-1.oraclecloud.com.
The following example shows the load of a MySQL dump from a
folder, test, in an AWS S3 bucket,
Bucket001:
util.loadDump("test",{s3BucketName: "Bucket001", threads: 4})
The following example shows the load of a MySQL dump from a
prefix, test, in an Object Storage bucket,
Bucket001, using a configuration profile,
oci, and the
s3EndpointOverride to direct the connection
to the OCI endpoint of the required tenancy and region:
util.loadDump("test",{s3BucketName: "Bucket001",
s3EndpointOverride: "https://axaxnpcrorw5.compat.objectstorage.us-ashburn-1.oraclecloud.com",
s3Profile: "oci", threads: 4})
MySQL Shell supports loading from Microsoft Azure Blob Storage.
MySQL Shell supports Microsoft Azure Blob Storage configuration in command line options, environment variables, and configuration files. Command line options override environment variables and configuration files.
For information on configuration requirements and the order of precedence of the configuration types, see Section 4.8, “Cloud Service Configuration”.
-
azureContainerName: "string" Mandatory. The name of the Azure container from which the dump is to be loaded. The container must exist.
-
azureConfigFile: "string" -
Optional. A configuration file that contains the storage connection parameters, instead of the one in the default location, such as ~/.azure/config. If this is not defined, the default configuration file is used. azureContainerName must be defined, and must not be empty.
-
azureStorageAccount: "string" Optional. The name of the Azure storage account to use for the operation.
-
azureStorageSasToken: "string" Optional. Azure Shared Access Signature (SAS) token to be used for the authentication of the operation, instead of a key.
In the following example, the configuration uses a connection string for the connection parameters, which means the loadDump command only requires the azureContainerName.
Example config file:
[cloud]
name = AzureCloud
[storage]
connection_string=alphanumericConnectionString
Example loadDump command, which imports the
contents of a folder named prefix1, in a
container named mysqlshellazure, to the
connected MySQL instance:
util.loadDump("prefix1", {azureContainerName: "mysqlshellazure", threads: 4})
-
loadDdl: [ true | false ] Setting this option to false excludes the DDL files in the dump from the load. The default is true, meaning that the DDL files are loaded.
-
loadData: [ true | false ] Setting this option to false excludes the data files in the dump from the load. The default is true, meaning that the data files are loaded.
-
loadUsers: [ true | false ] -
Import (true) or do not import (false) users and their roles and grants into the target MySQL instance. The default is false, so users are not imported by default. Statements for the current user are skipped. If a user already exists in the target MySQL instance, an error is returned and the user's grants from the dump files are not applied. You can use the excludeUsers or includeUsers option in the dump loading utility to specify user accounts to be excluded or included in the import.
MySQL Shell's schema dump utility and table dump utility do not include users, roles, and grants in a dump, but the instance dump utility can, and does by default. The excludeUsers and includeUsers options can also be used in the instance dump utility to exclude or include named user accounts from the dump files.
If you specify true but the supplied dump files do not contain user accounts, the utility returns a warning and continues.
-
excludeUsers: array of strings Exclude the named user accounts from the import. You can use it to exclude user accounts that are not accepted for import to a MySQL HeatWave Service DB System, or that already exist or are not wanted on the target MySQL instance. Specify each user account string in the format "'user_name'@'host_name'" for an account that is defined with a user name and host name, or "'user_name'" for an account that is defined with a user name only. If you do not supply a host name, all accounts with that user name are excluded.
-
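As an illustration of the account-string formats just described, the following sketch implements the stated matching rule: a host-qualified entry matches one account, while a user-only entry matches that user on every host. The function and parsing are simplified assumptions, not MySQL Shell's implementation:

```javascript
// Decide whether a user@host account is covered by an excludeUsers list.
// Entries look like "'admin'@'localhost'" or "'legacy_app'" (no host part).
function accountExcluded(excludeUsers, userName, hostName) {
  return excludeUsers.some((entry) => {
    const m = entry.match(/^'([^']*)'(?:@'([^']*)')?$/);
    if (!m) return false; // malformed entry: ignore in this sketch
    const [, user, host] = m;
    if (user !== userName) return false;
    // No host part means the entry matches the user on every host.
    return host === undefined || host === hostName;
  });
}
```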
includeUsers: array of strings Include only the named user accounts in the import. Specify each user account string as for the excludeUsers option. You can use it as an alternative to excludeUsers if only a few user accounts are required in the target MySQL instance. You can also specify both options to include some accounts and exclude others.
-
excludeSchemas: array of strings Exclude the named schemas from the import. Note that the information_schema, mysql, ndbinfo, performance_schema, and sys schemas are always excluded from a dump that is created by MySQL Shell's instance dump utility.
-
includeSchemas: array of strings Load only the named schemas from the dump files. You can specify both options to include some schemas and exclude others.
-
excludeTables: array of strings Exclude the named tables from the import, so that they are not uploaded to the target MySQL instance. Table names must be qualified with a valid schema name, and quoted with the backtick character if needed. Note that the data for the mysql.apply_status, mysql.general_log, mysql.schema, and mysql.slow_log tables is always excluded from a dump created by MySQL Shell's schema dump utility, although their DDL statements are included.
-
includeTables: array of strings Load only the named tables from the dump files. Table names must be qualified with a valid schema name, and quoted with the backtick character if needed. You can specify both options to include some tables and exclude others.
-
excludeEvents: array of strings Exclude the named events from the import. Names of events must be qualified with a valid schema name, and quoted with the backtick character if needed.
-
includeEvents: array of strings Load only the named events from the dump files. Event names must be qualified with a valid schema name, and quoted with the backtick character if needed.
-
excludeRoutines: array of strings Exclude the named functions and stored procedures from the import. Names of routines must be qualified with a valid schema name, and quoted with the backtick character if needed.
-
includeRoutines: array of strings Load only the named functions and stored procedures from the dump files. Names of routines must be qualified with a valid schema name, and quoted with the backtick character if needed.
-
excludeTriggers: array of strings Exclude the named triggers from the import. Names of triggers must be qualified with a valid schema name and table name (schema.table.trigger), and quoted with the backtick character if needed. You can exclude all triggers for a specific table by specifying a schema name and table name with this option (schema.table).
-
includeTriggers: array of strings Load only the named triggers from the dump files. Names of triggers must be qualified with a valid schema name and table name (schema.table.trigger), and quoted with the backtick character if needed. You can include all triggers for a specific table by specifying a schema name and table name with this option (schema.table).
-
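A hypothetical options object combining the schema, table, and trigger filters described in this section might look as follows; all schema, table, and trigger names are invented for illustration:

```javascript
// Example filter set for util.loadDump(): load only the "sales" schema,
// skip one table (backtick-quoted because its name contains a space),
// and skip one specific trigger plus every trigger on another table.
const filterOptions = {
  includeSchemas: ["sales"],
  excludeTables: ["sales.`archive 2020`"],
  excludeTriggers: [
    "sales.orders.before_insert_check", // one specific trigger
    "sales.audit",                      // all triggers on sales.audit
  ],
};
// In MySQL Shell's JS mode:
//   util.loadDump("/mnt/data/proddump", filterOptions);
```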
includeLibraries: array of strings -
List of library objects to be loaded from the dump, in the format schema.library. By default, all library objects are loaded. For example:
"includeLibraries": [ "`sakila`.`library1`", "`sakila`.`library2`" ]
See Using JavaScript Libraries for information on libraries.
-
excludeLibraries: array of strings -
List of library objects to be excluded from the load, in the format schema.library. For example:
"excludeLibraries": [ "`sakila`.`library1`", "`sakila`.`library2`" ]
MySQL Server 8.0.30 introduced GIPK mode,
Generated Invisible Primary Keys. When running in this mode,
for any InnoDB table that is created without an explicit primary
key, the MySQL server automatically adds a generated invisible
primary key (GIPK) to the table. This mode is enabled by setting
sql_generate_invisible_primary_key
to ON.
MySQL Shell's load utility option
createInvisiblePKs uses the server's GIPK
mode to generate invisible primary keys for tables which do not
have primary keys.
Under certain circumstances, if a user has insufficient privileges to use GIPK mode, MySQL Shell can fall back to the previous method of generating invisible primary keys.
If createInvisiblePKs:false and
sql_generate_invisible_primary_key=OFF,
primary keys are not generated for any table loaded from the
dump.
If createInvisiblePKs:false and
sql_generate_invisible_primary_key=ON,
MySQL Shell attempts to set
sql_generate_invisible_primary_key=OFF.
If the change is successful, primary keys are not generated for
any table loaded from the dump.
If createInvisiblePKs:true and
sql_generate_invisible_primary_key=OFF,
MySQL Shell attempts to set
sql_generate_invisible_primary_key=ON.
If the change is successful, primary keys are generated for
every table without primary keys loaded from the dump.
If createInvisiblePKs:true and
sql_generate_invisible_primary_key=ON,
primary keys are generated for every table loaded using the
MySQL Server GIPK mode.
If the user running the MySQL Shell load utility does not have
the required MySQL Server privileges, the attempt to set
sql_generate_invisible_primary_key
fails. If the attempt fails and
createInvisiblePKs:true, MySQL Shell
generates the invisible primary keys for any table which does
not have them.
If the GIPK server option,
sql_generate_invisible_primary_key
is enabled and the MySQL Shell load utility option
createInvisiblePKs is disabled, and the user
does not have the required privileges to change
sql_generate_invisible_primary_key,
an error is generated and the load fails. It is possible to override this behavior by enabling the MySQL Shell environment variable MYSQLSH_ALLOW_ALWAYS_GIPK. This environment variable overrides createInvisiblePKs:false, enabling invisible primary keys on all loaded tables, if sql_generate_invisible_primary_key is enabled.
If the MySQL Shell load utility option
createInvisiblePKs is enabled, but the GIPK
server option,
sql_generate_invisible_primary_key
is disabled, MySQL Shell attempts to enable
sql_generate_invisible_primary_key.
If the user has the appropriate privileges on the target MySQL
server,
sql_generate_invisible_primary_key
is enabled, and the load utility uses the server's GIPK mode to
create the invisible primary keys on the loaded tables. If the
user does not have the required privileges, such as on MySQL HeatWave Service,
MySQL Shell falls back to the previous behavior and creates the
primary keys on the table without using the server's GIPK mode.
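The interaction between createInvisiblePKs, the server's sql_generate_invisible_primary_key setting, and the user's privileges described above can be summarized as a decision table. This sketch is illustrative only; it simplifies the privilege check and the MYSQLSH_ALLOW_ALWAYS_GIPK override to booleans:

```javascript
// Summary sketch of the behavior described above. Inputs:
//   wantPKs      - the createInvisiblePKs option
//   gipkOn       - server's sql_generate_invisible_primary_key setting
//   canSet       - whether the user may change that server variable
//   allowGipkEnv - whether MYSQLSH_ALLOW_ALWAYS_GIPK is set
// Output: how primary keys are generated ("none", "server-gipk",
// "shell-fallback"), or "error" when the load fails.
function gipkBehavior(wantPKs, gipkOn, canSet, allowGipkEnv) {
  if (!wantPKs && !gipkOn) return "none";
  if (!wantPKs && gipkOn) {
    if (canSet) return "none"; // Shell turns GIPK mode off
    return allowGipkEnv ? "server-gipk" : "error";
  }
  if (wantPKs && gipkOn) return "server-gipk";
  // wantPKs && !gipkOn: Shell tries to enable GIPK mode, falling back to
  // generating the invisible primary keys itself if it lacks privileges.
  return canSet ? "server-gipk" : "shell-fallback";
}
```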
MySQL Shell’s parallel table import utility
util.importTable() can be used in combination
with the dump loading utility util.loadDump()
to modify data in the chunked output files before uploading it
to the target MySQL instance. You can modify the data for one
table at a time by this method.
-
Use the dump loading utility with the loadDdl: true and loadData: false options, to load the DDL file only, and create the selected table on the target MySQL instance with no data.
shell-js> util.loadDump("/mnt/data/proddump", {
       >     includeTables: ["product.pricing"],
       >     loadDdl: true,
       >     loadData: false});
-
Use the parallel table import utility to capture and transform the data for the table, and import it to the empty table on the target MySQL instance. In this example, the data for the pricing table is in multiple compressed files, which are specified using wildcard pattern matching. The values from the id and prodname columns in the dump files are assigned unchanged to the same columns in the target table. The values from the price column in the dump files are captured and assigned to the variable @1. The decodeColumns option is then used to reduce the prices by a standard amount, and the reduced prices are placed in the price column of the target table.
shell-js> util.importTable("/mnt/data/proddump/product@pricing@*.zst", {
       >     schema: "product",
       >     table: "pricing",
       >     columns: ["id", "prodname", 1],
       >     decodeColumns: { "price": "0.8 * @1"}});
Repeat Steps 1 and 2 as needed for any other tables in the dump files where you need to modify the data.
-
When you have finished uploading all the tables and data that needed to be modified, use the dump loading utility to load both the DDL and the data for any remaining tables that you do not need to modify. Be sure to exclude the tables that you did modify in the previous steps.
shell-js> util.loadDump("/mnt/data/proddump", {excludeTables: ["product.pricing"]});
Error numbers in the range 53000-53999 are specific to
MySQL Shell's dump loading utility
util.loadDump(). The following errors might
be returned:
-
Error number: 53000; Symbol: SHERR_LOAD_MANIFEST_EXPIRED_PARS
Message: The PARs in the manifest file have expired, the expiration time was set to: %s
-
Error number: 53001; Symbol: SHERR_LOAD_MANIFEST_PAR_MISMATCH
Message: The provided PAR must be a file on the dump location: '%s'
-
Error number: 53002; Symbol: SHERR_LOAD_SPLITTING_DDL_FAILED
Message: Error splitting DDL script for table %s: %s
-
Error number: 53003; Symbol: SHERR_LOAD_SECONDARY_ENGINE_ERROR
Message: The table %s has a secondary engine set, but not all indexes have been recreated
-
Error number: 53004; Symbol: SHERR_LOAD_FAILED_TO_DISABLE_BINLOG
Message: 'SET sql_log_bin=0' failed with error: %s
-
Error number: 53005; Symbol: SHERR_LOAD_WORKER_THREAD_FATAL_ERROR
Message: Error loading dump
-
Error number: 53006; Symbol: SHERR_LOAD_UNSUPPORTED_DUMP_VERSION
Message: Unsupported dump version
-
Error number: 53007; Symbol: SHERR_LOAD_UNSUPPORTED_DUMP_CAPABILITIES
Message: Unsupported dump capabilities
-
Error number: 53008; Symbol: SHERR_LOAD_INCOMPLETE_DUMP
Message: Incomplete dump
-
Error number: 53009; Symbol: SHERR_LOAD_UNSUPPORTED_SERVER_VERSION
Message: Loading dumps is only supported in MySQL 5.7 or newer
-
Error number: 53010; Symbol: SHERR_LOAD_DUMP_NOT_MDS_COMPATIBLE
Message: Dump is not MDS compatible
-
Error number: 53011; Symbol: SHERR_LOAD_SERVER_VERSION_MISMATCH
Message: MySQL version mismatch
-
Error number: 53012; Symbol: SHERR_LOAD_UPDATE_GTID_GR_IS_RUNNING
Message: The updateGtidSet option cannot be used on server with group replication running.
-
Error number: 53013; Symbol: SHERR_LOAD_UPDATE_GTID_APPEND_NOT_SUPPORTED
Message: Target MySQL server does not support updateGtidSet:'append'.
-
Error number: 53014; Symbol: SHERR_LOAD_UPDATE_GTID_REQUIRES_SKIP_BINLOG
Message: The updateGtidSet option on MySQL 5.7 target server can only be used if the skipBinlog option is enabled.
-
Error number: 53015; Symbol: SHERR_LOAD_UPDATE_GTID_REPLACE_REQUIRES_EMPTY_VARIABLES
Message: The updateGtidSet:'replace' option can be used on target server version only if GTID_PURGED and GTID_EXECUTED are empty, but they are not.
-
Error number: 53016; Symbol: SHERR_LOAD_UPDATE_GTID_REPLACE_SETS_INTERSECT
Message: The updateGtidSet:'replace' option can only be used if gtid_subtract(gtid_executed,gtid_purged) on target server does not intersect with the dumped GTID set.
-
Error number: 53017; Symbol: SHERR_LOAD_UPDATE_GTID_REPLACE_REQUIRES_SUPERSET
Message: The updateGtidSet:'replace' option can only be used if the dumped GTID set is a superset of the current value of gtid_purged on target server.
-
Error number: 53018; Symbol: SHERR_LOAD_UPDATE_GTID_APPEND_SETS_INTERSECT
Message: The updateGtidSet:'append' option can only be used if gtid_executed on target server does not intersect with the dumped GTID set.
-
Error number: 53019; Symbol: SHERR_LOAD_INVISIBLE_PKS_UNSUPPORTED_SERVER_VERSION
Message: The 'createInvisiblePKs' option requires server 8.0.24 or newer.
-
Error number: 53020; Symbol: SHERR_LOAD_REQUIRE_PRIMARY_KEY_ENABLED
Message: sql_require_primary_key enabled at destination server
-
Error number: 53021; Symbol: SHERR_LOAD_DUPLICATE_OBJECTS_FOUND
Message: Duplicate objects found in destination database
-
Error number: 53022; Symbol: SHERR_LOAD_DUMP_WAIT_TIMEOUT
Message: Dump timeout
-
Error number: 53023; Symbol: SHERR_LOAD_INVALID_METADATA_FILE
Message: Invalid metadata file %s
-
Error number: 53024; Symbol: SHERR_LOAD_PARSING_METADATA_FILE_FAILED
Message: Could not parse metadata file %s: %s
-
Error number: 53025; Symbol: SHERR_LOAD_LOCAL_INFILE_DISABLED
Message: local_infile disabled in server
-
Error number: 53026; Symbol: SHERR_LOAD_PROGRESS_FILE_ERROR
Message: Error loading load progress file '%s': %s
-
Error number: 53027; Symbol: SHERR_LOAD_PROGRESS_FILE_UUID_MISMATCH
Message: Progress file was created for a server with UUID %s, while the target server has UUID: %s
-
Error number: 53028; Symbol: SHERR_LOAD_MANIFEST_UNKNOWN_OBJECT
Message: Unknown object in manifest: %s
-
Error number: 53029; Symbol: SHERR_LOAD_CORRUPTED_DUMP_MISSING_METADATA
Message: Dump directory is corrupted, some of the metadata files are missing
-
Error number: 53030; Symbol: SHERR_LOAD_CORRUPTED_DUMP_MISSING_DATA
Message: Dump directory is corrupted, some of the data files are missing
-
Error number: 53031; Symbol: SHERR_LOAD_CHECKSUM_VERIFICATION_FAILED
Message: Checksum verification failed
Error numbers in the range 54000-54999 are for connection and
network errors experienced by MySQL Shell's dump loading
utility util.loadDump(), or by MySQL Shell's
instance dump utility util.dumpInstance(),
schema dump utility util.dumpSchemas(), and
table dump utility util.dumpTables(). In most
cases, the error code matches the HTTP error involved – for
example, error 54404 occurs when the target of a URL is not
found (HTTP 404 Not Found). The following errors might be
returned: