11.3 Table Export Utility

MySQL Shell's table export utility util.exportTable(), introduced in MySQL Shell 8.0.22, exports a MySQL relational table into a data file, either on the local server or in an Oracle Cloud Infrastructure Object Storage bucket. The data can then be uploaded into a table on a target MySQL server using MySQL Shell's parallel table import utility util.importTable() (see Section 11.4, “Parallel Table Import Utility”), which uses parallel connections to provide rapid data import for large data files. The data file can also be used to import data to a different application, or as a lightweight logical backup for a single data table.
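
The exported file can later be loaded with the parallel table import utility. The following is a minimal sketch in MySQL Shell's JavaScript mode; the schema, table, and file path are illustrative only, and the default dialect is assumed for both steps:

shell-js> util.exportTable("hr.employees", "/tmp/employees.txt")
shell-js> util.importTable("/tmp/employees.txt", { schema: "hr", table: "employees" })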

About the Utility

By default, the table export utility produces a data file in the default format for MySQL Shell's parallel table import utility. Preset options are available to export CSV files for either DOS or UNIX systems, and TSV files. The table export utility cannot produce JSON data. You can also set field- and line-handling options as for the SELECT...INTO OUTFILE statement to create data files in arbitrary formats.

util.exportTable() can be used with partitioned and subpartitioned tables, but does not perform any special handling of these. The utility always creates one file per table, regardless of the MySQL Shell release version.

When choosing a destination for the table export file, note that for import into a MySQL DB System, the MySQL Shell instance where you run the parallel table import utility must be installed on an Oracle Cloud Infrastructure Compute instance that has access to the MySQL DB System. If you export the table to a file in an Object Storage bucket, you can access the Object Storage bucket from the Compute instance. If you create the table export file on your local system, you need to transfer it to the Oracle Cloud Infrastructure Compute instance using the copy utility of your choice, depending on the operating system you chose for your Compute instance.

Requirements and Restrictions

The following requirements apply to exports using the table export utility:

  • MySQL 5.7 or later is required for the source MySQL instance and the destination MySQL instance.

  • The upload method used to transfer files to an Oracle Cloud Infrastructure Object Storage bucket has a file size limit of 1.2 TiB.

Running the Utility

The table export utility uses the MySQL Shell global session to obtain the connection details of the target MySQL server from which the export is carried out. You must open the global session (which can have an X Protocol connection or a classic MySQL protocol connection) before running the utility. The utility opens its own session for each thread, copying options such as connection compression and SSL options from the global session, and does not make any further use of the global session. You can limit the maximum rate of data transfer to balance the load on the network.
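
For example, assuming a classic MySQL protocol connection to a local server (the connection details here are placeholders), you might open the global session and then run the export in JavaScript mode as follows:

shell-js> shell.connect("user@localhost:3306")
shell-js> util.exportTable("hr.employees", "/tmp/employees.txt")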

In the MySQL Shell API, the table export utility is a function of the util global object, and has the following signature:

util.exportTable(table, outputUrl[, options])

table is the name of the relational data table to be exported to the data file. The table name can be qualified with a valid schema name, and quoted with the backtick character if needed. If the schema is omitted, the active schema for the MySQL Shell global session is used.
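
For example, a table whose name contains a space can be exported by backtick-quoting the qualified name (the schema and table names here are hypothetical):

shell-js> util.exportTable("`hr`.`employee records`", "/home/hanna/exports/employee_records.txt")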

options is a dictionary of options that can be omitted if it is empty. The options are listed in the final section of this topic.

If you are exporting the data to the local filesystem, outputUrl is a string specifying the path to the exported data file, and the file name itself, with an appropriate extension. You can specify an absolute path or a path relative to the current working directory. You can prefix a local directory path with the file:// scheme. In this example in MySQL Shell's JavaScript mode, the user exports the employees table from the hr schema using the default dialect. The file is written to the exports directory in the user's home directory, and is given a .txt extension that is appropriate for a file in this format:

shell-js> util.exportTable("hr.employees", "file:///home/hanna/exports/employees.txt")

The target directory must exist before the export takes place, but it does not have to be empty. If the exported data file already exists there, it is overwritten. For an export to a local directory, the data file is created with the access permissions rw-r----- (on operating systems where these are supported). The owner of the file is the user account that is running MySQL Shell.

If you are exporting the data to an Oracle Cloud Infrastructure Object Storage bucket, or to S3-compatible storage, outputUrl is the name for the data file in the bucket, including a suitable file extension. You can include directory separators to simulate a directory structure. Use the osBucketName option to provide the name of the Object Storage bucket, and the osNamespace option to identify the namespace for the bucket. In this example in MySQL Shell's Python mode, the user exports the employees table from the hr schema as a file in TSV format to the Object Storage bucket hanna-bucket:

shell-py> util.export_table("hr.employees", "dump/employees.tsv", {
        >     "dialect": "tsv", "osBucketName": "hanna-bucket", "osNamespace": "idx28w1ckztq" })

The namespace for an Object Storage bucket is displayed in the Bucket Information tab of the bucket details page in the Oracle Cloud Infrastructure console, or can be obtained using the Oracle Cloud Infrastructure command line interface. A connection is established to the Object Storage bucket using the default profile in the default Oracle Cloud Infrastructure CLI configuration file, or alternative details that you specify using the ociConfigFile and ociProfile options. For instructions to set up a CLI configuration file, see SDK and CLI Configuration File.
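
If your Oracle Cloud Infrastructure connection details are not in the default profile of the default CLI configuration file, you can name an alternative file and profile. In this sketch, the configuration file path and profile name are placeholders; the bucket and namespace are those used in the example above:

        util.exportTable("hr.employees", "dump/employees.tsv", {
            dialect: "tsv", osBucketName: "hanna-bucket", osNamespace: "idx28w1ckztq",
            ociConfigFile: "/home/hanna/.oci/config.exports", ociProfile: "exports" })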

Options

where: "string"

A valid SQL condition expression used to filter the data being exported.

Note

The SQL condition is validated only when it is executed, so any syntax error in the condition is not reported until the export actually runs. It is recommended that you test your SQL condition before using it in a long-running export process.

In the following example, the where option limits the export to those rows of the table sakila.actor where the value of actor_id is greater than 150, writing them to a file named dump.csv:

              util.exportTable("sakila.actor", "dump.csv", {"where" : "actor_id > 150"})

partitions: ["string", "string", ...]

A list of valid partition names which limits the export to the specified partitions.

The following example exports the partitions p1 and p2 from schema.table to a file named dump.csv:

              util.exportTable("schema.table", "dump.csv", {"partitions" : ["p1", "p2"]}
dialect: [default|csv|csv-unix|tsv]

Specify a set of field- and line-handling options for the format of the exported data file. You can use the selected dialect as a base for further customization, by also specifying one or more of the linesTerminatedBy, fieldsTerminatedBy, fieldsEnclosedBy, fieldsOptionallyEnclosed, and fieldsEscapedBy options to change the settings.

The default dialect produces a data file matching what would be created using a SELECT...INTO OUTFILE statement with the default settings for that statement. .txt is an appropriate file extension to assign to these output files. Other dialects are available to export CSV files for either DOS or UNIX systems (.csv), and TSV files (.tsv).

The settings applied for each dialect are as follows:

Table 11.1 Dialect settings for table export utility

dialect    linesTerminatedBy   fieldsTerminatedBy   fieldsEnclosedBy   fieldsOptionallyEnclosed   fieldsEscapedBy
default    [LF]                [TAB]                [empty]            false                      \
csv        [CR][LF]            ,                    "                  true                       \
csv-unix   [LF]                ,                    "                  false                      \
tsv        [CR][LF]            [TAB]                "                  true                       \


Note
  1. The carriage return and line feed values for the dialects are operating system independent.

  2. If you use the linesTerminatedBy, fieldsTerminatedBy, fieldsEnclosedBy, fieldsOptionallyEnclosed, and fieldsEscapedBy options, depending on the escaping conventions of your command interpreter, the backslash character (\) might need to be doubled if you use it in the option values.

  3. Like the MySQL server with the SELECT...INTO OUTFILE statement, MySQL Shell does not validate the field- and line-handling options that you specify. Inaccurate selections for these options can cause data to be exported partially or incorrectly. Always verify your settings before starting the export, and verify the results afterwards.

linesTerminatedBy: "characters"

One or more characters (or an empty string) with which the utility terminates each of the lines in the exported data file. The default is as for the specified dialect, or a linefeed character (\n) if the dialect option is omitted. This option is equivalent to the LINES TERMINATED BY option for the SELECT...INTO OUTFILE statement. Note that the utility does not provide an equivalent for the LINES STARTING BY option for the SELECT...INTO OUTFILE statement, which is set to the empty string.

fieldsTerminatedBy: "characters"

One or more characters (or an empty string) with which the utility terminates each of the fields in the exported data file. The default is as for the specified dialect, or a tab character (\t) if the dialect option is omitted. This option is equivalent to the FIELDS TERMINATED BY option for the SELECT...INTO OUTFILE statement.

fieldsEnclosedBy: "character"

A single character (or an empty string) with which the utility encloses each of the fields in the exported data file. The default is as for the specified dialect, or the empty string if the dialect option is omitted. This option is equivalent to the FIELDS ENCLOSED BY option for the SELECT...INTO OUTFILE statement.

fieldsOptionallyEnclosed: [ true | false ]

Whether the character given for fieldsEnclosedBy is to enclose all of the fields in the exported data file (false), or to enclose a field only if it has a string data type such as CHAR, BINARY, TEXT, or ENUM (true). The default is as for the specified dialect, or false if the dialect option is omitted. This option makes the fieldsEnclosedBy option equivalent to the FIELDS OPTIONALLY ENCLOSED BY option for the SELECT...INTO OUTFILE statement.

fieldsEscapedBy: "character"

The character that is to begin escape sequences in the exported data file. The default is as for the specified dialect, or a backslash (\) if the dialect option is omitted. This option is equivalent to the FIELDS ESCAPED BY option for the SELECT...INTO OUTFILE statement. If you set this option to the empty string, no characters are escaped, which is not recommended because special characters used by SELECT...INTO OUTFILE must be escaped.
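
For example, a dialect can serve as the base for further customization of these field- and line-handling options. In this sketch (the table and file name are placeholders), the csv-unix dialect is used, but fields are separated by semicolons and only string-typed fields are enclosed:

        util.exportTable("hr.employees", "/tmp/employees.csv", {
            dialect: "csv-unix", fieldsTerminatedBy: ";", fieldsOptionallyEnclosed: true })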

maxRate: "string"

The maximum number of bytes per second per thread for data read throughput during the export. The unit suffixes k for kilobytes, M for megabytes, and G for gigabytes can be used (for example, setting 100M limits throughput to 100 megabytes per second per thread). Setting 0 (which is the default value), or setting the option to an empty string, means no limit is set.

showProgress: [ true | false ]

Display (true) or hide (false) progress information for the export. The default is true if stdout is a terminal (tty), such as when MySQL Shell is in interactive mode, and false otherwise. The progress information includes the estimated total number of rows to be exported, the number of rows exported so far, the percentage complete, and the throughput in rows and bytes per second.

compression: "string"

The compression type to use when writing the exported data file. The default is to use no compression (none). The alternatives are to use gzip compression (gzip) or zstd compression (zstd).
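
As an illustration (the table and file name are placeholders, and the .zst extension is simply chosen to match the compression type), the following call compresses the output with zstd, limits each thread to 50 megabytes per second, and forces progress reporting:

        util.exportTable("hr.employees", "/tmp/employees.txt.zst", {
            compression: "zstd", maxRate: "50M", showProgress: true })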

defaultCharacterSet: "string"

The character set to be used during the session connections that are opened by MySQL Shell to the server for the export. The default is utf8mb4. The session values of the character_set_client, character_set_connection, and character_set_results system variables are set to this value for each connection. The character set must be permitted by the character_set_client system variable and supported by the MySQL instance.
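
For example, to use latin1 for the sessions opened during the export (the table and file name are placeholders):

        util.exportTable("hr.employees", "/tmp/employees.txt", { defaultCharacterSet: "latin1" })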

Options for Oracle Cloud Infrastructure

osBucketName: "string"

The name of the Oracle Cloud Infrastructure Object Storage bucket to which the exported data file is to be written. By default, the [DEFAULT] profile in the Oracle Cloud Infrastructure CLI configuration file located at ~/.oci/config is used to establish a connection to the bucket. You can substitute an alternative profile to be used for the connection with the ociConfigFile and ociProfile options. For instructions to set up a CLI configuration file, see SDK and CLI Configuration File.

osNamespace: "string"

The Oracle Cloud Infrastructure namespace where the Object Storage bucket named by osBucketName is located. The namespace for an Object Storage bucket is displayed in the Bucket Information tab of the bucket details page in the Oracle Cloud Infrastructure console, or can be obtained using the Oracle Cloud Infrastructure command line interface.

ociConfigFile: "string"

An Oracle Cloud Infrastructure CLI configuration file that contains the profile to use for the connection, instead of the one in the default location ~/.oci/config.

ociProfile: "string"

The profile name of the Oracle Cloud Infrastructure profile to use for the connection, instead of the [DEFAULT] profile in the Oracle Cloud Infrastructure CLI configuration file used for the connection.

Options for S3-compatible Services

MySQL Shell supports exporting tables to S3-compatible buckets, such as Amazon Web Services (AWS) S3.

Note

MySQL Shell supports AWS S3 configuration in command line options, environment variables, and configuration files. Command line options override environment variables, configuration files, and default options.

For information on configuration requirements, see Section 4.7, “Cloud Service Configuration”.

s3BucketName: "string"

The name of the S3 bucket to which the export is to be written. By default, the default profile in the config and credentials files located at ~/.aws/ is used to establish a connection to the S3 bucket. You can substitute alternative configurations and credentials for the connection with the s3ConfigFile and s3CredentialsFile options. For instructions on installing and configuring the AWS CLI, see Getting started with the AWS CLI.

s3CredentialsFile:"string"

A credentials file that contains the user's credentials to use for the connection, instead of the one in the default location, ~/.aws/credentials. Typically, the credentials file contains the aws_access_key_id and aws_secret_access_key to use for the connection.

s3ConfigFile: "string"

A configuration file that contains the profile to use for the connection, instead of the one in the default location, such as ~/.aws/config. Typically, the config file contains the region and output type to use for the connection.

s3Profile: "string"

The profile name of the S3 CLI profile to use for the connection, instead of the default profile.

s3Region: "string"

The name of the region to use for the connection.

s3EndpointOverride: "string"

The URL of the endpoint to use instead of the default.

When connecting to the Oracle Cloud Infrastructure S3 compatibility API, the endpoint takes the following format: https://namespace.compat.objectstorage.region.oraclecloud.com. Replace namespace with the Object Storage namespace and region with your region identifier. For example, the region identifier for the US East (Ashburn) region is us-ashburn-1.

For a namespace named axaxnpcrorw5 in the US East (Ashburn) region:

https://axaxnpcrorw5.compat.objectstorage.us-ashburn-1.oraclecloud.com
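
As a sketch, an export to the Oracle Cloud Infrastructure S3 compatibility API might combine these options as follows; the bucket name and profile are placeholders, and the endpoint uses the example namespace and region above:

        util.exportTable("hr.employees", "dump/employees.txt", {
            s3BucketName: "hanna-bucket", s3Profile: "exports",
            s3EndpointOverride: "https://axaxnpcrorw5.compat.objectstorage.us-ashburn-1.oraclecloud.com" })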

Options for Microsoft Azure Blob Storage

MySQL Shell supports exporting to Microsoft Azure Blob Storage.

Note

MySQL Shell supports Microsoft Azure Blob Storage configuration in command line options, environment variables, and configuration files. Command line options override environment variables and configuration files.

For information on configuration requirements and the order of precedence of the configuration types, see Section 4.7, “Cloud Service Configuration”.

azureContainerName: "string"

Mandatory. The name of the Azure container to which the export is to be written. The container must exist.

azureConfigFile: "string"

Optional. A configuration file that contains the storage connection parameters, instead of the one in the default location, such as ~/.azure/config. If this is not defined, the default configuration file is used.

azureContainerName must be defined and must not be empty.

azureStorageAccount: "string"

Optional. The name of the Azure storage account to use for the operation.

azureStorageSasToken: "string"

Optional. Azure Shared Access Signature (SAS) token to be used for the authentication of the operation, instead of a key.

In the following example, the configuration file supplies the connection parameters as a connection string, so the exportTable command requires only the azureContainerName.

Example config file:

        [cloud]
         name = AzureCloud

        [storage]
         connection_string=alphanumericConnectionString

Example exportTable command, which exports the sakila.actor table as a TSV file to a container named mysqlshellazure:

        util.exportTable("sakila.actor", "actor.tsv", {dialect: "tsv", azureContainerName: "mysqlshellazure"})