MySQL Shell's instance dump utility
      util.dumpInstance() and schema dump utility
      util.dumpSchemas(), introduced in MySQL Shell
      8.0.21, support the export of all schemas or a selected schema
      from an on-premise MySQL instance into an Oracle Cloud
      Infrastructure Object Storage bucket or a set of local files. The
      table dump utility util.dumpTables(),
      introduced in MySQL Shell 8.0.22, supports the same operations
      for a selection of tables or views from a schema. The exported
      items can then be imported into a MySQL HeatWave Service DB System or a MySQL
      Server instance using the util.loadDump()
      utility (see
      Section 11.6, “Dump Loading Utility”). To get
      the best functionality, always use the most recent version
      available of MySQL Shell's dump and dump loading utilities.
    
MySQL Shell's instance dump utility, schema dump utility, and table dump utility provide Oracle Cloud Infrastructure Object Storage streaming, MySQL HeatWave Service compatibility checks and modifications, parallel dumping with multiple threads, and file compression, which are not provided by mysqldump. Progress information is displayed during the dump. You can also carry out a dry run with your chosen set of dump options, which shows what actions would be performed, what items would be dumped, and (for the instance dump utility and schema dump utility) what MySQL HeatWave Service compatibility issues would need to be fixed, before you run the utility for real with those options.
When choosing a destination for the dump files, note that for import into a MySQL HeatWave Service DB System, the MySQL Shell instance where you run the dump loading utility must be installed on an Oracle Cloud Infrastructure Compute instance that has access to the MySQL HeatWave Service DB System. If you dump the instance, schema, or tables to an Object Storage bucket, you can access the Object Storage bucket from the Compute instance. If you create the dump files on your local system, you need to transfer them to the Oracle Cloud Infrastructure Compute instance using the copy utility of your choice, depending on the operating system you chose for your Compute instance.
        The dumps created by MySQL Shell's instance dump utility,
        schema dump utility, and table dump utility comprise DDL files
        specifying the schema structure, and tab-separated
        .tsv files containing the data. You can also
        choose to produce the DDL files only or the data files only, if
        you want to set up the exported schema as a separate exercise
        from populating it with the exported data. You can choose
        whether or not to lock the instance for backup during the dump
        for data consistency. By default, the dump utilities chunk table
        data into multiple data files and compress the files.
      
        You can use options for the utilities to include or exclude
        specified schemas and tables, users and their roles and grants,
        events, routines, and triggers. If you specify conflicting
        include and exclude options or name an object that is not
        included in the dump anyway, before MySQL Shell 8.0.28 the
        situation is ignored and the dump proceeds without the object.
        From MySQL Shell 8.0.28 an error is reported and the dump stops
        so you can correct the options. If you need to dump the majority
        of the schemas in a MySQL instance, as an alternative strategy,
        you can use the instance dump utility rather than the schema
        dump utility, and specify the excludeSchemas
        option to list those schemas that are not to be dumped.
        Similarly, if you need to dump the majority of the tables in a
        schema, you can use the schema dump utility with the
        excludeTables option rather than the table
        dump utility.
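
        As an illustration of this exclusion strategy in MySQL Shell's
        JavaScript mode (the schema, table, and directory names here are
        examples only), the first command dumps every schema in the
        instance except two, and the second dumps a schema while
        skipping one large table, which must be named in
        schema.table format:

shell-js> util.dumpInstance("instdump", { "excludeSchemas": ["test", "staging"] })
shell-js> util.dumpSchemas(["hr"], "hrdump", { "excludeTables": ["hr.audit_log"] })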
      
        The data for the mysql.apply_status,
        mysql.general_log,
        mysql.schema, and mysql.slow_log
        tables is always excluded from a dump created by
        MySQL Shell's schema dump utility, although their DDL
        statements are included. The
        information_schema, mysql,
        ndbinfo,
        performance_schema, and
        sys schemas are always excluded from an
        instance dump.
      
        By default, the time zone is standardized to UTC in all the
        timestamp data in the dump output, which facilitates moving data
        between servers with different time zones and handling data that
        has multiple time zones. You can use the tzUtc:
        false option to keep the original timestamps if
        preferred.
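
        For example, this command (with an example schema and directory
        name) dumps the hr schema keeping the original
        timestamps rather than converting them to UTC:

shell-js> util.dumpSchemas(["hr"], "hrdump", { "tzUtc": false })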
      
        The MySQL Shell dump loading utility
        util.loadDump() supports loading exported
        instances and schemas from an Object Storage bucket using a
        pre-authenticated request (PAR). From MySQL Shell 8.0.22 to
        8.0.26, instances and schemas must be exported with the
        ociParManifest enabled to permit a load
        operation from Object Storage using a PAR. For details, see the
        description for the ociParManifest option.
        From MySQL Shell 8.0.27, with the introduction of support for
        PARs for all objects in a bucket or objects in a bucket with a
        specific prefix, enabling the ociParManifest
        option when exporting instances and schemas is no longer
        strictly necessary. For information about loading dumps using a
        PAR, see Section 11.6, “Dump Loading Utility”.
      
        Beginning with MySQL Shell 8.0.27, MySQL Shell's instance dump
        utility, schema dump utility, and table dump utility are
        partition aware (see
        Partitioning, in the
        MySQL Manual). When a table being dumped
        is partitioned, each partition is treated as an independent
        table; if the table has subpartitions each subpartition is
        treated as an independent table. This also means that, when
        chunking is enabled, each partition or subpartition of a
        partitioned or subpartitioned table is chunked independently.
        The base names of dump files created for partitioned tables use
        the format
        schema@table@partition, where
        schema and
        table are, respectively, the names of
        the parent schema and table, and
        partition is the URL-encoded name of
        the partition or subpartition.
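
        For example, if a table emp in schema
        hr (example names) has partitions
        p0 and p1, the first data
        chunk for each partition would be written to files named like
        this:

hr@emp@p0@@0.tsv.zst
hr@emp@p1@@0.tsv.zst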
      
        To manage additions of features that are not supported by
        earlier versions of the MySQL Shell utilities, beginning with
        MySQL Shell 8.0.27, util.dumpInstance(),
        util.dumpSchemas(),
        util.dumpTables(), and
        util.loadDump() all write a list of features
        used in creating the dump to the dump metadata file; for each
        such feature, an element is added to the list. When the dump
        loading utility reads the metadata file and finds an unsupported
        feature listed, it reports an error; the error message includes
        a version of MySQL Shell that supports the feature. The
        partition awareness feature introduced in MySQL Shell 8.0.27 is
        the first feature to support feature management.
      
- The instance dump utility, schema dump utility, and table dump utility only support General Availability (GA) releases of MySQL Server versions. 
- MySQL 5.7 or later is required for the destination MySQL instance where the dump will be loaded. 
- For the source MySQL instance, dumping from MySQL 5.7 or later is fully supported in all MySQL Shell releases where the utilities are available. From MySQL Shell 8.0.22 through MySQL Shell 8.0.25, it is possible to dump an instance, schema, or table from a MySQL 5.6 instance and load it into a MySQL 5.7 or later destination, but dumping user accounts from MySQL 5.6 is not supported. From MySQL Shell 8.0.26, dumping user accounts from MySQL 5.6 is supported. 
- MySQL Shell's dump loading utility util.loadDump() from versions of MySQL Shell prior to 8.0.27 cannot load dumps that are created using the dump utilities in MySQL Shell 8.0.27 or later. This is because from MySQL Shell 8.0.27, information is included in the dump metadata about features used in creating the dump. This feature list is not backward compatible, but it supports backward compatibility when new features are added in future releases. To get the best functionality, always use the most recent version available of MySQL Shell's dump and dump loading utilities.
- Object names in the instance or schema must be in the latin1 or utf8 character set.
- Data consistency is guaranteed only for tables that use the InnoDB storage engine.
- The minimum required set of privileges that the user account used to run the utility must have on all the schemas involved is as follows: EVENT, RELOAD, SELECT, SHOW VIEW, and TRIGGER. If the consistent option is set to true, which is the default, the LOCK TABLES privilege on all dumped tables can substitute for the RELOAD privilege if the latter is not available.
- Before MySQL Shell 8.0.29, if the consistent option is set to true, which is the default, the BACKUP_ADMIN privilege is also required. From MySQL Shell 8.0.29, it is not required. If the user account does not have the BACKUP_ADMIN privilege and LOCK INSTANCE FOR BACKUP cannot be executed, the utilities make an extra consistency check during the dump. If this check fails, an instance dump is stopped, but a schema dump or a table dump continues and returns an error message to alert the user that the consistency check failed.
- If the consistent option is set to false, the BACKUP_ADMIN and RELOAD privileges are not required.
- If the dump is from a MySQL 5.6 or MySQL 5.7 instance, the EXECUTE privilege is also required if a view in the dump calls a function to get its data (until MySQL Shell 8.0.27, when it is no longer needed).
- If the dump is from a MySQL 5.6 instance and includes user accounts (which is possible only with the instance dump utility), the SUPER privilege is also required.
- If activate_all_roles_on_login is enabled, the user requires SELECT on mysql.role_edges. If it is not enabled, the user requires SELECT on mysql.default_roles.
 
- From MySQL Shell 8.0.24, the user account used to run the utility needs the REPLICATION CLIENT privilege in order for the utility to be able to include the binary log file name and position in the dump metadata. If the user ID does not have that privilege, the dump continues but does not include the binary log information. The binary log information can be used after loading the dumped data into the replica server to set up replication with a non-GTID source server, using the ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS option of the CHANGE REPLICATION SOURCE TO statement (which is available from MySQL Server 8.0.23).
- The upload method used to transfer files to an Oracle Cloud Infrastructure Object Storage bucket has a file size limit of 1.2 TiB. In MySQL Shell 8.0.21, the multipart size setting means that the numeric limit on multiple file parts applies first, creating a limit of approximately 640 GB. From MySQL Shell 8.0.22, the multipart size setting has been changed to allow the full file size limit. 
- The utilities convert columns with data types that are not safe to be stored in text form (such as BLOB) to Base64. The size of these columns therefore must not exceed approximately 0.74 times the value of the max_allowed_packet system variable (in bytes) that is configured on the target MySQL instance.
- For the table dump utility, exported views and triggers must not use qualified names to reference other views or tables. 
- The table dump utility does not dump routines, so any routines referenced by the dumped objects (for example, by a view that uses a function) must already exist when the dump is loaded. 
- For import into a MySQL HeatWave Service DB System, set the ocimds option to true, to ensure compatibility with MySQL HeatWave Service.
- For compatibility with MySQL HeatWave Service, all tables must use the InnoDB storage engine. The ocimds option checks for any exceptions found in the dump, and the compatibility option alters the dump files to replace other storage engines with InnoDB.
- For the instance dump utility and schema dump utility, for compatibility with MySQL HeatWave Service, all tables in the instance or schema must be located in the MySQL data directory and must use the default schema encryption. The ocimds option alters the dump files to apply these requirements.
- MySQL HeatWave Service uses partial_revokes=ON, which means database-level user grants on schemas which contain wildcards, such as _ or %, are reported as errors. The compatibility option ignore_wildcard_grants was introduced in MySQL Shell 8.0.33, along with strip_invalid_grants. See Options for MySQL HeatWave Service and Oracle Cloud Infrastructure for more information.
- A number of other security related restrictions and requirements apply to items such as tablespaces and privileges for compatibility with MySQL HeatWave Service. The ocimds option checks for any exceptions found during the dump, and the compatibility option automatically alters the dump files to resolve some of the compatibility issues. You might need (or prefer) to make some changes manually. For more details, see the description for the compatibility option.
- For MySQL HeatWave Service High Availability, which uses Group Replication, primary keys are required on every table. From MySQL Shell 8.0.24, the ocimds option checks and reports an error for any tables in the dump that are missing primary keys. The compatibility option can be set to ignore missing primary keys if you do not need them, or to notify MySQL Shell's dump loading utility to add primary keys in invisible columns where they are not present. For details, see the description for the compatibility option. If possible, instead of managing this in the utility, consider creating primary keys in the tables on the source server before dumping them.
- As of MySQL Shell 8.0.30, if any of the dump utilities are run against MySQL 5.7 with "ocimds": true, util.checkForServerUpgrade is run automatically. Pre-upgrade checks are run depending on the type of objects included in the dump.
The instance dump utility, schema dump utility, and table dump utility use the MySQL Shell global session to obtain the connection details of the target MySQL server from which the export is carried out. You must open the global session (which can have an X Protocol connection or a classic MySQL protocol connection) before running one of the utilities. The utilities open their own sessions for each thread, copying options such as connection compression and SSL options from the global session, and do not make any further use of the global session.
        In the MySQL Shell API, the instance dump utility, schema dump
        utility, and table dump utility are functions of the
        util global object, and have the following
        signatures:
      
util.dumpInstance(outputUrl[, options]) 
util.dumpSchemas(schemas, outputUrl[, options])
util.dumpTables(schema, tables, outputUrl[, options])
        options is a dictionary of options that can
        be omitted if it is empty. The available options for the
        instance dump utility, schema dump utility, and table dump
        utility are listed in the remaining sections in this topic.
      
        For the schema dump utility, schemas
        specifies a list of one or more schemas to be dumped from the
        MySQL instance.
      
        For the table dump utility, schema specifies
        the schema that contains the items to be dumped, and
        tables is an array of strings specifying the
        tables or views to be dumped. From MySQL Shell 8.0.23, the
        table dump includes the information required to set up the
        specified schema in the target MySQL instance, although it can
        be loaded into an alternative target schema by using the dump
        loading utility's schema option. In
        MySQL Shell 8.0.22, schema information is not included, so the
        dump files produced by this utility must be loaded into an
        existing target schema.
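
        For example, these commands (with example schema and directory
        names) dump a single table and then load it into a different
        target schema using the dump loading utility's
        schema option:

shell-js> util.dumpTables("hr", [ "employees" ], "empdump")
shell-js> util.loadDump("empdump", { "schema": "hr_copy" })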
      
        The table dump utility can be used to select individual tables
        from a schema, for example if you want to transfer tables
        between schemas. In this example in MySQL Shell's JavaScript
        mode, the tables employees and
        salaries from the hr
        schema are exported to the local directory
        emp, which the utility creates in the current
        working directory:
      
shell-js> util.dumpTables("hr", [ "employees", "salaries" ], "emp")
        To dump all of the views and tables from the specified schema,
        use the all option and set the
        tables parameter to an empty array, as in
        this example:
      
shell-js> util.dumpTables("hr", [], "emp", { "all": true })
        If you are dumping to the local filesystem,
        outputUrl is a string specifying the path to
        a local directory where the dump files are to be placed. You can
        specify an absolute path or a path relative to the current
        working directory. You can prefix a local directory path with
        the file:// scheme. In this example, the
        connected MySQL instance is dumped to a local directory, with
        some modifications made in the dump files for compatibility with
        MySQL HeatWave Service. The user first carries out a dry run to inspect the
        schemas and view the compatibility issues, then runs the dump
        with the appropriate compatibility options applied to remove the
        issues:
      
shell-js> util.dumpInstance("C:/Users/hanna/worlddump", {dryRun: true, ocimds: true})
Checking for compatibility with MySQL HeatWave Service 8.0.21
...
Compatibility issues with MySQL HeatWave Service 8.0.21 were found. Please use the 
'compatibility' option to apply compatibility adaptations to the dumped DDL.
Util.dumpInstance: Compatibility issues were found (RuntimeError)
shell-js> util.dumpInstance("C:/Users/hanna/worlddump", {
        > ocimds: true, compatibility: ["strip_definers", "strip_restricted_grants"]})
        The target directory must be empty before the export takes
        place. If the directory does not yet exist in its parent
        directory, the utility creates it. For an export to a local
        directory, the directories created during the dump are created
        with the access permissions rwxr-x---, and
        the files are created with the access permissions
        rw-r----- (on operating systems where these
        are supported). The owner of the files and directories is the
        user account that is running MySQL Shell.
      
        If you are dumping to an Oracle Cloud Infrastructure Object Storage bucket,
        outputUrl is a path that will be used to
        prefix the dump files in the bucket, to simulate a directory
        structure. Use the osBucketName option to
        provide the name of the Object Storage bucket, and the
        osNamespace option to identify the namespace
        for the bucket. In this example, the user dumps the
        world schema from the connected MySQL
        instance to an Object Storage bucket, with the same
        compatibility modifications as in the previous example:
      
shell-js> util.dumpSchemas(["world"], "worlddump", {
        > "osBucketName": "hanna-bucket", "osNamespace": "idx28w1ckztq", 
        > "ocimds": "true", "compatibility": ["strip_definers", "strip_restricted_grants"]})
        In the Object Storage bucket, the dump files all appear with the
        prefix worlddump, for example:
      
worlddump/@.done.json	
worlddump/@.json	
worlddump/@.post.sql
worlddump/@.sql
worlddump/world.json	
worlddump/world.sql	
worlddump/world@city.json	
worlddump/world@city.sql	
worlddump/world@city@@0.tsv.zst
worlddump/world@city@@0.tsv.zst.idx
...
        The namespace for an Object Storage bucket is displayed in the
        Bucket Information tab of the bucket
        details page in the Oracle Cloud Infrastructure console, or can be obtained using the
        Oracle Cloud Infrastructure command line interface. A connection is established to the
        Object Storage bucket using the default profile in the default
        Oracle Cloud Infrastructure CLI configuration file, or
        alternative details that you specify using the
        ociConfigFile and
        ociProfile options. For instructions to set
        up a CLI configuration file, see
        SDK
        and CLI Configuration File.
      
- 
            dryRun: [ true | false ]
- Display information about what would be dumped with the specified set of options, and about the results of MySQL HeatWave Service compatibility checks (if the ocimds option is specified), but do not proceed with the dump. Setting this option enables you to list out all of the compatibility issues before starting the dump. The default is false.
- 
            showProgress: [ true | false ]
- Display (true) or hide (false) progress information for the dump. The default is true if stdout is a terminal (tty), such as when MySQL Shell is in interactive mode, and false otherwise. The progress information includes the estimated total number of rows to be dumped, the number of rows dumped so far, the percentage complete, and the throughput in rows and bytes per second.
- 
            threads: int
- The number of parallel threads to use to dump chunks of data from the MySQL instance. Each thread has its own connection to the MySQL instance. The default is 4. 
- 
            maxRate: "string"
- The maximum number of bytes per second per thread for data read throughput during the dump. The unit suffixes k for kilobytes, M for megabytes, and G for gigabytes can be used (for example, setting 100M limits throughput to 100 megabytes per second per thread). Setting 0 (which is the default value), or setting the option to an empty string, means no limit is set.
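
        For example, this command (with an example output directory)
        dumps the connected instance using eight parallel threads, each
        limited to 50 megabytes per second of read throughput:

shell-js> util.dumpInstance("instdump", { "threads": 8, "maxRate": "50M" })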
- 
            defaultCharacterSet: "string"
- The character set to be used during the session connections that are opened by MySQL Shell to the server for the dump. The default is utf8mb4. The session values of the system variables character_set_client, character_set_connection, and character_set_results are set to this value for each connection. The character set must be permitted by the character_set_client system variable and supported by the MySQL instance.
- 
            consistent: [ true | false ]
- 
Enable (true) or disable (false) consistent data dumps by locking the instance for backup during the dump. The default is true.

When true is set, the utility sets a global read lock using the FLUSH TABLES WITH READ LOCK statement (if the user ID used to run the utility has the RELOAD privilege), or a series of table locks using LOCK TABLES statements (if the user ID does not have the RELOAD privilege but does have LOCK TABLES). The transaction for each thread is started using the statements SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ and START TRANSACTION WITH CONSISTENT SNAPSHOT. When all threads have started their transactions, the instance is locked for backup (as described in LOCK INSTANCE FOR BACKUP and UNLOCK INSTANCE Statements) and the global read lock is released.

From MySQL Shell 8.0.29, if the user account does not have the BACKUP_ADMIN privilege and LOCK INSTANCE FOR BACKUP cannot be executed, the utilities make an extra consistency check during the dump. If this check fails, an instance dump is stopped, but a schema dump or a table dump continues and returns an error message to alert the user that the consistency check failed.
- 
            skipConsistencyChecks: [ true | false ]
- 
Enable (true) or disable (false) the extra consistency check performed when consistent: true. The default is false. This option is ignored if consistent: false.
- 
            tzUtc: [ true | false ]
- Include a statement at the start of the dump to set the time zone to UTC. All timestamp data in the dump output is converted to this time zone. The default is true, so timestamp data is converted by default. Setting the time zone to UTC facilitates moving data between servers with different time zones, or handling a set of data that has multiple time zones. Set this option to false to keep the original timestamps if preferred.
- 
            compression: "string"
- The compression type to use when writing data files for the dump. The default is to use zstd compression (zstd). The alternatives are to use gzip compression (gzip) or no compression (none).
- 
            chunking: [ true | false ]
- 
Enable (true) or disable (false) chunking for table data, which splits the data for each table into multiple files. The default is true, so chunking is enabled by default. Use bytesPerChunk to specify the chunk size. If you set the chunking option to false, chunking does not take place and the utility creates one data file for each table.

Prior to MySQL Shell 8.0.32, to chunk table data into separate files, a primary key or unique index must be defined for the table, which the utility uses to select an index column to order and chunk the data. If a table does not contain either of these, a warning is displayed and the table data is written to a single file. As of MySQL Shell 8.0.32, if a table has no primary key or unique index, chunking is done based on the number of rows in the table, the average row length, and the bytesPerChunk value.
- 
            bytesPerChunk: "string"
- Sets the approximate number of bytes to be written to each data file when chunking is enabled. The unit suffixes k for kilobytes, M for megabytes, and G for gigabytes can be used. The default is 64 MB (64M) from MySQL Shell 8.0.22 (32 MB in MySQL Shell 8.0.21), and the minimum is 128 KB (128k). Specifying this option sets chunking to true implicitly. The utility aims to chunk the data for each table into files each containing this amount of data before compression is applied. The chunk size is an average and is calculated based on table statistics and explain plan estimates.
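
        For example, this command (with an example schema and directory
        name) chunks table data into files of approximately 128 MB
        before compression, and uses gzip rather than the default zstd:

shell-js> util.dumpSchemas(["hr"], "hrdump", { "bytesPerChunk": "128M", "compression": "gzip" })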
- 
            dialect: [default|csv|csv-unix|tsv]
- 
(Added in MySQL Shell 8.0.32) Specify a set of field- and line-handling options for the format of the exported data file. You can use the selected dialect as a base for further customization, by also specifying one or more of the linesTerminatedBy, fieldsTerminatedBy, fieldsEnclosedBy, fieldsOptionallyEnclosed, and fieldsEscapedBy options to change the settings.

The default dialect produces a data file matching what would be created using a SELECT...INTO OUTFILE statement with the default settings for that statement. .txt is an appropriate file extension to assign to these output files. Other dialects are available to export CSV files for either DOS or UNIX systems (.csv), and TSV files (.tsv).

The settings applied for each dialect are as follows:

Table 11.3 Dialect settings for table export utility

dialect    linesTerminatedBy  fieldsTerminatedBy  fieldsEnclosedBy  fieldsOptionallyEnclosed  fieldsEscapedBy
default    [LF]               [TAB]               [empty]           false                     \
csv        [CR][LF]           ,                   "                 true                      \
csv-unix   [LF]               ,                   "                 false                     \
tsv        [CR][LF]           [TAB]               "                 true                      \

Note
- The carriage return and line feed values for the dialects are operating system independent.
- If you use the linesTerminatedBy, fieldsTerminatedBy, fieldsEnclosedBy, fieldsOptionallyEnclosed, and fieldsEscapedBy options, depending on the escaping conventions of your command interpreter, the backslash character (\) might need to be doubled if you use it in the option values.
- Like the MySQL server with the SELECT...INTO OUTFILE statement, MySQL Shell does not validate the field- and line-handling options that you specify. Inaccurate selections for these options can cause data to be exported partially or incorrectly. Always verify your settings before starting the export, and verify the results afterwards.
 
- 
            linesTerminatedBy: "characters"
- (Added in MySQL Shell 8.0.32) One or more characters (or an empty string) with which the utility terminates each of the lines in the exported data file. The default is as for the specified dialect, or a linefeed character (\n) if the dialect option is omitted. This option is equivalent to the LINES TERMINATED BY option for the SELECT...INTO OUTFILE statement. Note that the utility does not provide an equivalent for the LINES STARTING BY option for the SELECT...INTO OUTFILE statement, which is set to the empty string.
- 
            fieldsTerminatedBy: "characters"
- (Added in MySQL Shell 8.0.32) One or more characters (or an empty string) with which the utility terminates each of the fields in the exported data file. The default is as for the specified dialect, or a tab character (\t) if the dialect option is omitted. This option is equivalent to the FIELDS TERMINATED BY option for the SELECT...INTO OUTFILE statement.
- 
            fieldsEnclosedBy: "character"
- (Added in MySQL Shell 8.0.32) A single character (or an empty string) with which the utility encloses each of the fields in the exported data file. The default is as for the specified dialect, or the empty string if the dialect option is omitted. This option is equivalent to the FIELDS ENCLOSED BY option for the SELECT...INTO OUTFILE statement.
- 
            fieldsOptionallyEnclosed: [ true | false ]
- (Added in MySQL Shell 8.0.32) Whether the character given for fieldsEnclosedBy is to enclose all of the fields in the exported data file (false), or to enclose a field only if it has a string data type such as CHAR, BINARY, TEXT, or ENUM (true). The default is as for the specified dialect, or false if the dialect option is omitted. This option makes the fieldsEnclosedBy option equivalent to the FIELDS OPTIONALLY ENCLOSED BY option for the SELECT...INTO OUTFILE statement.
- 
            fieldsEscapedBy: "character"
- (Added in MySQL Shell 8.0.32) The character that is to begin escape sequences in the exported data file. The default is as for the specified dialect, or a backslash (\) if the dialect option is omitted. This option is equivalent to the FIELDS ESCAPED BY option for the SELECT...INTO OUTFILE statement. If you set this option to the empty string, no characters are escaped, which is not recommended because special characters used by SELECT...INTO OUTFILE must be escaped.
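
        For example, this command (with example schema, table, and
        directory names) exports a table using the csv-unix dialect as
        a base, but overrides the field terminator to produce
        semicolon-separated values:

shell-js> util.dumpTables("hr", [ "employees" ], "empcsv", { "dialect": "csv-unix", "fieldsTerminatedBy": ";" })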
- 
            where:
- 
A key-value pair comprising a valid table identifier, of the form schemaName.tableName, and a valid SQL condition expression used to filter the data being exported.

Note
The SQL condition is validated only when it is executed. If you are exporting many tables, any SQL-syntax-related issues will only be seen late in the process. As such, it is recommended you test your SQL condition before using it in a long-running export process.

In the following example, where exports only those rows of the tables sakila.actor and sakila.actor_info where the value of actor_id is greater than 150, to a local folder named out:

util.dumpTables("sakila", ["actor","actor_info"], "out", {"where" : {"sakila.actor": "actor_id > 150", "sakila.actor_info": "actor_id > 150"}})
- 
            partitions: ["string","string",..]
- 
A list of valid partition names, which limits the export to the specified partitions. For example, to export only the partitions named p1 and p2:

partitions: ["p1", "p2"]

The following example exports the partitions p1 and p2 from table1 and the partition p2 from table2:

util.dumpTables("schema", ["table1","table2"], "out", {"partitions" : { "schema.table1": ["p1", "p2"], "schema.table2": ["p2"]}})
- 
            ddlOnly: [ true | false ]
- Setting this option to true includes only the DDL files for the dumped items in the dump, and does not dump the data. The default is false.
- 
            dataOnly: [ true | false ]
- Setting this option to true includes only the data files for the dumped items in the dump, and does not include DDL files. The default is false.
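
        For example, these commands (with an example schema name and
        output directories) dump the schema structure and the data as
        separate exercises, so the DDL can be loaded and checked before
        the data is imported:

shell-js> util.dumpSchemas(["hr"], "hrddl", { "ddlOnly": true })
shell-js> util.dumpSchemas(["hr"], "hrdata", { "dataOnly": true })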
- 
            users: [ true | false ]
- 
(Instance dump utility only) Include (true) or exclude (false) users and their roles and grants in the dump. The default is true, so users are included by default. The schema dump utility and table dump utility do not include users, roles, and grants in a dump. From MySQL Shell 8.0.22, you can use the excludeUsers or includeUsers option to specify individual user accounts to be excluded from or included in the dump files. These options can also be used with MySQL Shell's dump loading utility util.loadDump() to exclude or include individual user accounts at the point of import, depending on the requirements of the target MySQL instance. Note: Dumping user accounts from a MySQL 5.6 instance is not supported. If you are dumping from this version, set users: false.
- In MySQL Shell 8.0.21, attempting to import users to a MySQL HeatWave Service DB System causes the import to fail if the root user account or another restricted user account name is present in the dump files, so the import of users to a MySQL DB System is not supported in that release.
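For example, when dumping from a MySQL 5.6 instance, where dumping user accounts is not supported, the accounts can be omitted like this (the output folder name out is illustrative):

util.dumpInstance("out", {users: false})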
 
- 
            excludeUsers: array of strings
- (Instance dump utility only) Exclude the named user accounts from the dump files. This option is available from MySQL Shell 8.0.22, and you can use it to exclude user accounts that are not accepted for import to a MySQL HeatWave Service DB System, or that already exist or are not wanted on the target MySQL instance. Specify each user account string in the format "'user_name'@'host_name'" for an account that is defined with a user name and host name, or "'user_name'" for an account that is defined with a user name only. If you do not supply a host name, all accounts with that user name are excluded.
- 
            includeUsers: array of strings
- (Instance dump utility only) Include only the named user accounts in the dump files. Specify each user account string as for the excludeUsers option. This option is available from MySQL Shell 8.0.22, and you can use it as an alternative to excludeUsers if only a few user accounts are required in the dump. You can also specify both options to include some accounts and exclude others.
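As a sketch of the account string formats described above (the account names are hypothetical), the following excludes the account 'admin'@'localhost', plus every account named testuser regardless of host name:

util.dumpInstance("out", {excludeUsers: ["'admin'@'localhost'", "'testuser'"]})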
- 
            excludeSchemas: array of strings
- (Instance dump utility only) Exclude the named schemas from the dump. Note that the information_schema, mysql, ndbinfo, performance_schema, and sys schemas are always excluded from an instance dump.
- 
            includeSchemas: array of strings
- (Instance dump utility only) Include only the named schemas in the dump. This option is available from MySQL Shell 8.0.28. You cannot include the information_schema, mysql, ndbinfo, performance_schema, or sys schemas by naming them on this option. If you want to dump one or more of these schemas, you can do this using the schema dump utility util.dumpSchemas().
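For example, to dump only two schemas from an instance (the schema and output folder names are illustrative):

util.dumpInstance("out", {includeSchemas: ["sakila", "world"]})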
- 
            excludeTables: array of strings
- 
(Instance dump utility and schema dump utility only) Exclude the named tables from the dump. Table names must be qualified with a valid schema name, and quoted with the backtick character if needed. Tables named by the excludeTables option do not have DDL files or data files in the dump. Note that the data for the mysql.apply_status, mysql.general_log, mysql.schema, and mysql.slow_log tables is always excluded from a schema dump, although their DDL statements are included, and you cannot include that data by naming the table in another option or utility. Note: Schema and table names containing multi-byte characters must be surrounded with backticks.
- 
            includeTables: array of strings
- 
(Instance dump utility and schema dump utility only) Include only the named tables in the dump. This option is available from MySQL Shell 8.0.28. Table names must be qualified with a valid schema name, and quoted with the backtick character if needed. Note: Schema and table names containing multi-byte characters must be surrounded with backticks.
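For example, the following sketch dumps two tables from the sakila schema, quoting a hypothetical table name that contains a space with backticks:

util.dumpSchemas(["sakila"], "out", {includeTables: ["sakila.actor", "sakila.`film categories`"]})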
- 
            events: [ true | false ]
- (Instance dump utility and schema dump utility only) Include (true) or exclude (false) events for each schema in the dump. The default is true.
- 
            excludeEvents: array of strings
- (Instance dump utility and schema dump utility only) Exclude the named events from the dump. This option is available from MySQL Shell 8.0.28. Names of events must be qualified with a valid schema name, and quoted with the backtick character if needed.
- 
            includeEvents: array of strings
- (Instance dump utility and schema dump utility only) Include only the named events in the dump. This option is available from MySQL Shell 8.0.28. Event names must be qualified with a valid schema name, and quoted with the backtick character if needed.
- 
            routines: [ true | false ]
- (Instance dump utility and schema dump utility only) Include (true) or exclude (false) functions and stored procedures for each schema in the dump. The default is true. Note that user-defined functions are not included, even when routines is set to true.
- 
            excludeRoutines: array of strings
- (Instance dump utility and schema dump utility only) Exclude the named functions and stored procedures from the dump. This option is available from MySQL Shell 8.0.28. Names of routines must be qualified with a valid schema name, and quoted with the backtick character if needed.
- 
            includeRoutines: array of strings
- (Instance dump utility and schema dump utility only) Include only the named functions and stored procedures in the dump. This option is available from MySQL Shell 8.0.28. Names of routines must be qualified with a valid schema name, and quoted with the backtick character if needed.
- 
            all: [ true | false ]
- 
(Table dump utility only) Setting this option to true includes all views and tables from the specified schema in the dump. The default is false. When you use this option, set the tables parameter to an empty array, for example:

shell-js> util.dumpTables("hr", [], "emp", { "all": true })
- 
            triggers: [ true | false ]
- (All dump utilities) Include (true) or exclude (false) triggers for each table in the dump. The default is true.
- 
            excludeTriggers: array of strings
- (All dump utilities) Exclude the named triggers from the dump. This option is available from MySQL Shell 8.0.28. Names of triggers must be qualified with a valid schema name and table name (schema.table.trigger), and quoted with the backtick character if needed. You can exclude all triggers for a specific table by specifying a schema name and table name with this option (schema.table).
- 
            includeTriggers: array of strings
- (All dump utilities) Include only the named triggers in the dump. This option is available from MySQL Shell 8.0.28. Names of triggers must be qualified with a valid schema name and table name (schema.table.trigger), and quoted with the backtick character if needed. You can include all triggers for a specific table by specifying a schema name and table name with this option (schema.table).
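For example, the following sketch (the trigger name is hypothetical) dumps all triggers for the table sakila.payment, plus one named trigger on sakila.rental:

util.dumpSchemas(["sakila"], "out", {includeTriggers: ["sakila.payment", "sakila.rental.rental_date_bi"]})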
- 
            osBucketName: "string"
- The name of the Oracle Cloud Infrastructure Object Storage bucket to which the dump is to be written. By default, the [DEFAULT] profile in the Oracle Cloud Infrastructure CLI configuration file located at ~/.oci/config is used to establish a connection to the bucket. You can substitute an alternative profile to be used for the connection with the ociConfigFile and ociProfile options. For instructions to set up a CLI configuration file, see SDK and CLI Configuration File.
- 
            osNamespace: "string"
- The Oracle Cloud Infrastructure namespace where the Object Storage bucket named by osBucketName is located. The namespace for an Object Storage bucket is displayed in the Bucket Information tab of the bucket details page in the Oracle Cloud Infrastructure console, or can be obtained using the Oracle Cloud Infrastructure command line interface.
- 
            ociConfigFile: "string"
- An Oracle Cloud Infrastructure CLI configuration file that contains the profile to use for the connection, instead of the one in the default location ~/.oci/config.
- 
            ociProfile: "string"
- The profile name of the Oracle Cloud Infrastructure profile to use for the connection, instead of the [DEFAULT] profile in the Oracle Cloud Infrastructure CLI configuration file used for the connection.
- 
            ocimds: [ true | false ]
- 
Setting this option to true enables checks and modifications for compatibility with MySQL HeatWave Service. The default is false. From MySQL Shell 8.0.23, this option is available for all the utilities; before that release, it is only available for the instance dump utility and schema dump utility.

When this option is set to true, DATA DIRECTORY, INDEX DIRECTORY, and ENCRYPTION options in CREATE TABLE statements are commented out in the DDL files, to ensure that all tables are located in the MySQL data directory and use the default schema encryption. Checks are carried out for any storage engines in CREATE TABLE statements other than InnoDB, for grants of unsuitable privileges to users or roles, and for other compatibility issues. If any non-conforming SQL statement is found, an exception is raised and the dump is halted. Use the dryRun option to list out all of the issues with the items in the dump before the dumping process is started. Use the compatibility option to automatically fix the issues in the dump output.

From MySQL Shell 8.0.22 to MySQL Shell 8.0.26, when this option is set to true and an Object Storage bucket name is supplied using the osBucketName option, the ociParManifest option also defaults to true, meaning that a manifest file is generated that contains pre-authenticated requests (PARs) for all of the files in the dump, and the dump files can only be accessed using these PARs. From MySQL Shell 8.0.27, with the introduction of support for PARs for all objects in a bucket or objects in a bucket with a specific prefix, the ociParManifest option is set to false by default and is only enabled if set to true explicitly.

Note: As of MySQL Shell 8.0.30, if any of the dump utilities are run against MySQL 5.7 with "ocimds": true, util.checkForServerUpgrade is run automatically. Pre-upgrade checks are run depending on the type of objects included in the dump.
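For example, the following dry run lists the compatibility issues for an instance without creating any dump files, so an empty string can be supplied as the output URL:

util.dumpInstance("", {ocimds: true, dryRun: true})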
- 
            compatibility: array of strings
- 
Apply the specified requirements for compatibility with MySQL HeatWave Service for all tables in the dump output, altering the dump files as necessary. From MySQL Shell 8.0.23, this option is available for all the utilities; before that release, it is only available for the instance dump utility and schema dump utility. The following modifications can be specified as an array of strings:
                  force_innodb
- Change CREATE TABLE statements to use the InnoDB storage engine for any tables that do not already use it.
- 
                  skip_invalid_accounts
- Remove user accounts created with external authentication plugins that are not supported in MySQL HeatWave Service. From MySQL Shell 8.0.26, this option also removes user accounts that do not have passwords set, except where an account with no password is identified as a role, in which case it is dumped using the CREATE ROLE statement.
- 
                  strip_definers
- Remove the DEFINER clause from views, routines, events, and triggers, so these objects are created with the default definer (the user invoking the schema), and change the SQL SECURITY clause for views and routines to specify INVOKER instead of DEFINER. MySQL HeatWave Service requires special privileges to create these objects with a definer other than the user loading the schema. If your security model requires that views and routines have more privileges than the account querying or calling them, you must manually modify the schema before loading it.
- 
                  strip_restricted_grants
- Remove specific privileges that are restricted by MySQL HeatWave Service from GRANT statements, so users and their roles cannot be given these privileges (which would cause user creation to fail). From MySQL Shell 8.0.22, this option also removes REVOKE statements for system schemas (mysql and sys) if the administrative user account on an Oracle Cloud Infrastructure Compute instance does not itself have the relevant privileges, so cannot remove them.
- 
                  strip_tablespaces
- Remove the TABLESPACE clause from CREATE TABLE statements, so all tables are created in their default tablespaces. MySQL HeatWave Service has some restrictions on tablespaces.
- 
                  ignore_missing_pks
- Make the instance, schema, or table dump utility ignore any missing primary keys when the dump is carried out, so that the ocimds option can still be used without the dump stopping due to this check. Dumps created with this modification cannot be loaded into a MySQL HeatWave Service High Availability instance, because primary keys are required for MySQL HeatWave Service High Availability, which uses Group Replication. To add the missing primary keys instead, use the create_invisible_pks modification, or consider creating primary keys in the tables on the source server.
- 
                  ignore_wildcard_grants
- If enabled, ignores errors from grants on schemas with wildcards, which are interpreted differently in systems where the partial_revokes system variable is enabled.
- 
                  strip_invalid_grants
- If enabled, strips grant statements that would fail when users are loaded, such as grants referring to a specific routine that does not exist.
- 
                  create_invisible_pks
- 
Add a flag in the dump metadata to notify MySQL Shell's dump loading utility to add primary keys in invisible columns, for each table that does not contain a primary key. This modification enables a dump where some tables lack primary keys to be loaded into a MySQL HeatWave Service High Availability instance. Primary keys are required for MySQL HeatWave Service High Availability, which uses Group Replication. The dump data is unchanged by this modification, as the tables do not contain the invisible columns until they have been processed by the dump loading utility. The invisible columns (which are named "my_row_id") have no impact on applications that use the uploaded tables.

Adding primary keys in this way does not yet enable inbound replication of the modified tables to a High Availability instance, as that feature currently requires the primary keys to exist in both the source server and the replica server. If possible, instead of using this modification, consider creating primary keys in the tables on the source server, before dumping them again. From MySQL 8.0.23, you can do this with no impact to applications by using invisible columns to hold the primary keys. This is a best practice for performance and usability, and helps the dumped database to work seamlessly with MySQL HeatWave Service.

Note: MySQL Shell's dump loading utility can only be used to load dumps created with the create_invisible_pks modification onto a target MySQL instance at MySQL Server 8.0.24 or later, due to a limitation on hidden columns in MySQL 8.0.23. The dump loading utility in versions of MySQL Shell before MySQL Shell 8.0.24 silently ignores the dump metadata flag and does not add the primary keys, so ensure that you use the latest version of the utility.
 
- 
                  
- 
            ociParManifest: [ true | false ]
- 
Setting this option to true generates a PAR for read access (an Object Read PAR) for each item in the dump, and a manifest file listing all the PAR URLs. The PARs expire after a week by default, which you can change using the ociParExpireTime option.

This option is available from MySQL Shell 8.0.22 for the instance dump utility and schema dump utility, and can only be used when exporting to an Object Storage bucket (so with the osBucketName option set). From MySQL Shell 8.0.23, this option is available for all the dump utilities.

From MySQL Shell 8.0.22 to MySQL Shell 8.0.26, when the ocimds option is set to true and an Object Storage bucket name is supplied using the osBucketName option, ociParManifest is set to true by default; otherwise it is set to false by default. From MySQL Shell 8.0.27, with the introduction of support for PARs for all objects in a bucket or objects in a bucket with a specific prefix, ociParManifest is set to false by default and is only enabled if set to true explicitly.

The user named in the Oracle Cloud Infrastructure profile that is used for the connection to the Object Storage bucket (the DEFAULT user or another user as named by the ociProfile option) is the creator for the PARs. This user must have PAR_MANAGE permissions and appropriate permissions for interacting with the objects in the bucket, as described in Using Pre-Authenticated Requests. If there is an issue with creating the PAR for any object, the associated file is deleted and the dump is stopped.

To enable loading of dump files created with the ociParManifest option enabled, create a read-only PAR for the manifest file (@.manifest.json) following the instructions in Using Pre-Authenticated Requests. You can do this while the dump is still in progress if you want to start loading the dump before it completes. You can create this PAR using any user account that has the required permissions. The PAR URL must then be used by the dump loading utility to access the dump files through the manifest file. The URL is only displayed at the time of creation, so copy it to durable storage.

Important: Before using this access method, assess the business requirement for and the security ramifications of pre-authenticated access to a bucket or objects. A PAR gives anyone who has the PAR access to the targets identified in the request. Carefully manage the distribution of PARs.
- 
            ociParExpireTime: "string"
- 
The expiry time for the PARs that are generated when the ociParManifest option is set to true. The default is the current time plus one week, in UTC format. This option is available from MySQL Shell 8.0.22 for the instance dump utility and schema dump utility. From MySQL Shell 8.0.23, this option is available for all the dump utilities. The expiry time must be formatted as an RFC 3339 timestamp, as required by Oracle Cloud Infrastructure when creating a PAR. The format is YYYY-MM-DDTHH:MM:SS, immediately followed by either the letter Z (for UTC time) or the UTC offset for the local time expressed as [+|-]hh:mm, for example 2020-10-01T00:09:51.000+02:00. MySQL Shell does not validate the expiry time, but any formatting error causes the PAR creation to fail for the first file in the dump, which stops the dump.
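As an illustrative sketch (the helper function name is ours, not part of MySQL Shell), a suitable RFC 3339 expiry timestamp can be built in JavaScript, since Date.prototype.toISOString() emits the YYYY-MM-DDTHH:MM:SS.sssZ form with the Z suffix for UTC:

```javascript
// Sketch: build an RFC 3339 timestamp a given number of days in the future,
// in UTC, for use as the ociParExpireTime option value.
function parExpiryTime(daysFromNow) {
  const ms = daysFromNow * 24 * 60 * 60 * 1000;
  // toISOString() produces e.g. "2020-10-01T00:09:51.000Z", which satisfies
  // the RFC 3339 format required by Oracle Cloud Infrastructure.
  return new Date(Date.now() + ms).toISOString();
}
```

The result can then be passed directly, for example ociParExpireTime: parExpiryTime(14) to extend the expiry to two weeks.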
MySQL Shell supports dumping MySQL data to S3-compatible buckets, such as Amazon Web Services (AWS) S3.
MySQL Shell supports AWS S3 configuration in command line options, environment variables, and configuration files. Command line options override environment variables, configuration files, and default options.
For information on configuration requirements, see Section 4.7, “Cloud Service Configuration”.
- 
            s3BucketName: "string"
- The name of the S3 bucket to which the dump is to be written. By default, the default profile of the config and credentials files located at ~/.aws/ is used to establish a connection to the S3 bucket. You can substitute alternative configurations and credentials for the connection with the s3ConfigFile and s3CredentialsFile options. For instructions on installing and configuring the AWS CLI, see Getting started with the AWS CLI.
- 
            s3CredentialsFile: "string"
- A credentials file that contains the user's credentials to use for the connection, instead of the one in the default location, ~/.aws/credentials. Typically, the credentials file contains the aws_access_key_id and aws_secret_access_key to use for the connection.
- 
            s3ConfigFile: "string"
- A configuration file that contains the profile to use for the connection, instead of the one in the default location, such as - ~/.aws/config. Typically, the config file contains the region and output type to use for the connection.
- 
            s3Profile: "string"
- The profile name of the S3 CLI profile to use for the connection, instead of the default profile.
- 
            s3Region: "string"
- The name of the region to use for the connection. 
- 
            s3EndpointOverride: "string"
- 
The URL of the endpoint to use instead of the default. When connecting to the Oracle Cloud Infrastructure S3 compatibility API, the endpoint takes the following format: https://namespace.compat.objectstorage.region.oraclecloud.com. Replace namespace with the Object Storage namespace and region with your region identifier. For example, the region identifier for the US East (Ashburn) region is us-ashburn-1. For a namespace named axaxnpcrorw5 in the US East (Ashburn) region, the endpoint is https://axaxnpcrorw5.compat.objectstorage.us-ashburn-1.oraclecloud.com.
        The following example shows the dump of a MySQL instance to a
        folder, test, in an S3 bucket,
        Bucket001, with some compatibility options:
      
        util.dumpInstance("test",{s3BucketName: "Bucket001", threads: 4, 
        compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks"]})
        The following example shows the dump of a MySQL instance to a
        prefix, test, in an object storage bucket,
        Bucket001, using a configuration profile,
        oci, the
        s3EndpointOverride to direct the connection
        to the OCI endpoint of the required tenancy and region, and some
        compatibility options:
      
        util.dumpInstance("test",{s3BucketName: "Bucket001", 
        s3EndpointOverride: "https://axaxnpcrorw5.compat.objectstorage.us-ashburn-1.oraclecloud.com", 
        s3Profile: "oci", threads: 4, 
        compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks"]})

MySQL Shell supports dumping to Microsoft Azure Blob Storage.
MySQL Shell supports Microsoft Azure Blob Storage configuration in command line options, environment variables, and configuration files. Command line options override environment variables and configuration files.
For information on configuration requirements and the order of precedence of the configuration types, see Section 4.7, “Cloud Service Configuration”.
- 
            azureContainerName: "string"
- Mandatory. The name of the Azure container to which the dump is to be written. The container must exist. 
- 
            azureConfigFile: "string"
- 
Optional. A configuration file that contains the storage connection parameters, instead of the one in the default location, such as ~/.azure/config. If this is not defined, the default configuration file is used. azureContainerName must be defined and must not be empty.
- 
            azureStorageAccount: "string"
- Optional. The name of the Azure storage account to use for the operation. 
- 
            azureStorageSasToken: "string"
- Optional. Azure Shared Access Signature (SAS) token to be used for the authentication of the operation, instead of a key. 
        In the following example, the configuration uses a configuration
        string for the connection parameters, which means the dump
        command only requires the azureContainerName.
      
        Example config file:
      
        [cloud]
         name = AzureCloud
        [storage]
         connection_string=alphanumericConnectionString
        Example dumpInstance command, which exports
        the contents of the instance to a folder named
        prefix1, in a container named
        mysqlshellazure:
      
        util.dumpInstance("prefix1", {azureContainerName: "mysqlshellazure", threads: 4})
        Error numbers in the range 52000-52999 are specific to
        MySQL Shell's instance dump utility
        util.dumpInstance(), schema dump utility
        util.dumpSchemas(), and table dump utility
        util.dumpTables(). The following errors might
        be returned:
      
- 
Error number: 52000; Symbol: SHERR_DUMP_LOCK_TABLES_MISSING_PRIVILEGES; Message: User %s is missing the following privilege(s) for %s: %s.
- 
Error number: 52001; Symbol: SHERR_DUMP_GLOBAL_READ_LOCK_FAILED; Message: Unable to acquire global read lock
- 
Error number: 52002; Symbol: SHERR_DUMP_LOCK_TABLES_FAILED; Message: Unable to lock tables: %s.
- 
Error number: 52003; Symbol: SHERR_DUMP_CONSISTENCY_CHECK_FAILED; Message: Consistency check has failed.
- 
Error number: 52004; Symbol: SHERR_DUMP_COMPATIBILITY_ISSUES_FOUND; Message: Compatibility issues were found
- 
Error number: 52005; Symbol: SHERR_DUMP_COMPATIBILITY_OPTIONS_FAILED; Message: Could not apply some of the compatibility options
- 
Error number: 52006; Symbol: SHERR_DUMP_WORKER_THREAD_FATAL_ERROR; Message: Fatal error during dump
- 
Error number: 52007; Symbol: SHERR_DUMP_MISSING_GLOBAL_PRIVILEGES; Message: User %s is missing the following global privilege(s): %s.
- 
Error number: 52008; Symbol: SHERR_DUMP_MISSING_SCHEMA_PRIVILEGES; Message: User %s is missing the following privilege(s) for schema %s: %s.
- 
Error number: 52009; Symbol: SHERR_DUMP_MISSING_TABLE_PRIVILEGES; Message: User %s is missing the following privilege(s) for table %s: %s.
- 
Error number: 52010; Symbol: SHERR_DUMP_NO_SCHEMAS_SELECTED; Message: Filters for schemas result in an empty set.
- 
Error number: 52011; Symbol: SHERR_DUMP_MANIFEST_PAR_CREATION_FAILED; Message: Failed creating PAR for object '%s': %s
- 
Error number: 52012; Symbol: SHERR_DUMP_DW_WRITE_FAILED; Message: Failed to write %s into file %s
- 
Error number: 52013; Symbol: SHERR_DUMP_IC_FAILED_TO_FETCH_VERSION; Message: Failed to fetch version of the server.
- 
Error number: 52014; Symbol: SHERR_DUMP_SD_CHARSET_NOT_FOUND; Message: Unable to find charset: %s
- 
Error number: 52015; Symbol: SHERR_DUMP_SD_WRITE_FAILED; Message: Got errno %d on write
- 
Error number: 52016; Symbol: SHERR_DUMP_SD_QUERY_FAILED; Message: Could not execute '%s': %s
- 
Error number: 52017; Symbol: SHERR_DUMP_SD_COLLATION_DATABASE_ERROR; Message: Error processing select @@collation_database; results
- 
Error number: 52018; Symbol: SHERR_DUMP_SD_CHARACTER_SET_RESULTS_ERROR; Message: Unable to set character_set_results to: %s
- 
Error number: 52019; Symbol: SHERR_DUMP_SD_CANNOT_CREATE_DELIMITER; Message: Can't create delimiter for event: %s
- 
Error number: 52020; Symbol: SHERR_DUMP_SD_INSUFFICIENT_PRIVILEGE; Message: %s has insufficient privileges to %s!
- 
Error number: 52021; Symbol: SHERR_DUMP_SD_MISSING_TABLE; Message: %s not present in information_schema
- 
Error number: 52022; Symbol: SHERR_DUMP_SD_SHOW_CREATE_TABLE_FAILED; Message: Failed running: show create table %s with error: %s
- 
Error number: 52023; Symbol: SHERR_DUMP_SD_SHOW_CREATE_TABLE_EMPTY; Message: Empty create table for table: %s
- 
Error number: 52024; Symbol: SHERR_DUMP_SD_SHOW_FIELDS_FAILED; Message: SHOW FIELDS FROM failed on view: %s
- 
Error number: 52025; Symbol: SHERR_DUMP_SD_SHOW_KEYS_FAILED; Message: Can't get keys for table %s: %s
- 
Error number: 52026; Symbol: SHERR_DUMP_SD_SHOW_CREATE_VIEW_FAILED; Message: Failed: SHOW CREATE TABLE %s
- 
Error number: 52027; Symbol: SHERR_DUMP_SD_SHOW_CREATE_VIEW_EMPTY; Message: No information about view: %s
- 
Error number: 52028; Symbol: SHERR_DUMP_SD_SCHEMA_DDL_ERROR; Message: Error while dumping DDL for schema '%s': %s
- 
Error number: 52029; Symbol: SHERR_DUMP_SD_TABLE_DDL_ERROR; Message: Error while dumping DDL for table '%s'.'%s': %s
- 
Error number: 52030; Symbol: SHERR_DUMP_SD_VIEW_TEMPORARY_DDL_ERROR; Message: Error while dumping temporary DDL for view '%s'.'%s': %s
- 
Error number: 52031; Symbol: SHERR_DUMP_SD_VIEW_DDL_ERROR; Message: Error while dumping DDL for view '%s'.'%s': %s
- 
Error number: 52032; Symbol: SHERR_DUMP_SD_TRIGGER_COUNT_ERROR; Message: Unable to check trigger count for table: '%s'.'%s'
- 
Error number: 52033; Symbol: SHERR_DUMP_SD_TRIGGER_DDL_ERROR; Message: Error while dumping triggers for table '%s'.'%s': %s
- 
Error number: 52034; Symbol: SHERR_DUMP_SD_EVENT_DDL_ERROR; Message: Error while dumping events for schema '%s': %s
- 
Error number: 52035; Symbol: SHERR_DUMP_SD_ROUTINE_DDL_ERROR; Message: Error while dumping routines for schema '%s': %s
- 
Error number: 52036; Symbol: SHERR_DUMP_ACCOUNT_WITH_APOSTROPHE; Message: Account %s contains the ' character, which is not supported
- 
Error number: 52037; Symbol: SHERR_DUMP_USERS_MARIA_DB_NOT_SUPPORTED; Message: Dumping user accounts is currently not supported in MariaDB. Set the 'users' option to false to continue.
- 
Error number: 52038; Symbol: SHERR_DUMP_INVALID_GRANT_STATEMENT; Message: Dump contains an invalid grant statement. Use the 'strip_invalid_grants' compatibility option to fix this.
        Error numbers in the range 54000-54999 are for connection and
        network errors experienced by MySQL Shell's dump loading
        utility util.loadDump(), or by MySQL Shell's
        instance dump utility util.dumpInstance(),
        schema dump utility util.dumpSchemas(), and
        table dump utility util.dumpTables(). In most
        cases, the error code matches the HTTP error involved – for
        example, error 54404 occurs when the target of a URL is not
        found (HTTP 404 Not Found). The following errors might be
        returned: