7.2.1.2 Importing Data Using the Data Import Feature
Use the data import feature in the HeatWave Console to import data from an Amazon S3 bucket to a DB System in the same region.
This task requires the following:
- Data in an Amazon S3 bucket that you want to import. If your data resides in a MySQL instance running on premises, in another cloud vendor's managed or unmanaged service, or in another HeatWave on AWS instance, you must first export the data to an Amazon S3 bucket.
- An IAM policy to access the Amazon S3 bucket that contains the data you want to import. See Creating an IAM Policy to Access an Amazon S3 Bucket.
- An IAM role ARN that grants the DB System access to the Amazon S3 bucket that contains the data you want to import. See Creating an IAM Role to Access an Amazon S3 Bucket.
- An established connection from the HeatWave Console to the DB System into which you want to import data.
Follow these steps to import the data:
- In the HeatWave Console, configure the DB System into which you want to import data with the IAM role ARN. See Editing a DB System.
- Click the Workspace tab, and then click the Data Imports tab.
- Click the Import Data button.
- In the Import data into DB System pane, enter the following details:
- Basic information:
- Display name: Specify a name for the data import operation. By default, a name is generated for you in the format dataImportYYYYMMDDHHMMSS.
- Description: (Optional) Specify a description for the data import operation.
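The default display name simply encodes a timestamp. As a minimal sketch, a name in the documented dataImportYYYYMMDDHHMMSS format could be built like this (the helper function is illustrative, not part of the product):

```python
from datetime import datetime, timezone

def default_import_name(now: datetime) -> str:
    # Illustrative only: formats a timestamp into the documented
    # dataImportYYYYMMDDHHMMSS default-name pattern.
    return "dataImport" + now.strftime("%Y%m%d%H%M%S")

name = default_import_name(datetime(2024, 5, 1, 9, 30, 0, tzinfo=timezone.utc))
print(name)  # dataImport20240501093000
```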
- Source:
- Select Bring your own data (unless you want to Import sample data instead).
- S3 URI: Specify the Amazon S3 URI. The Amazon S3 bucket and DB System must be in the same region.
Note:
You are responsible for managing the Amazon S3 bucket. Once you finish the data import, it is recommended that you review the Amazon S3 bucket's access permissions and remove access by the DB System if necessary.
- Authentication method: Select one of the following authentication methods by clicking its radio button:
- IAM role: Use the displayed data import role ARN, which was specified while creating or editing the DB System. See Creating a DB System and Editing a DB System.
- User access key: Enter the Access key ID and Secret access key. See Managing Access Keys.
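The S3 URI follows the standard s3://bucket/key-prefix form. A small sketch of splitting such a URI into its bucket and object prefix, assuming only that standard form (the helper name is my own):

```python
from urllib.parse import urlparse

def split_s3_uri(uri: str) -> tuple[str, str]:
    # Illustrative helper: splits an s3://bucket/key-prefix URI
    # into (bucket, key prefix); rejects non-S3 schemes.
    parts = urlparse(uri)
    if parts.scheme != "s3":
        raise ValueError(f"not an S3 URI: {uri}")
    return parts.netloc, parts.path.lstrip("/")

bucket, prefix = split_s3_uri("s3://my-import-bucket/backup/replica/2021/")
print(bucket, prefix)  # my-import-bucket backup/replica/2021/
```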
- File type: Select either of the following file types:
- MySQL dump files: Specify the File parsing settings:
- Character set: Enter the character set of the import data. The default character set is utf8mb4. See Character Sets and Collations in MySQL.
- Update GTID set: For data that contain GTIDs, to enable GTID-based replication, apply the gtid_executed GTID set from the source MySQL instance, as recorded in the dump metadata, to the gtid_purged GTID set on the target MySQL instance using one of the following options:
- OFF: (Default) Do not update the GTID set. This is also the option for data that do not contain GTIDs.
- APPEND: Append the gtid_executed GTID set from the source MySQL instance to the gtid_purged GTID set on the target MySQL instance.
- REPLACE: Replace the gtid_purged GTID set on the target MySQL instance with the gtid_executed GTID set from the source MySQL instance.
See the description for gtid_purged for more information.
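The three Update GTID set options can be pictured as set operations on the target's gtid_purged value. The sketch below is a simplified model: real GTID sets are interval-based strings such as "uuid:1-100", and plain Python sets stand in for them here.

```python
# Simplified model of the Update GTID set options (OFF / APPEND / REPLACE).
def update_gtid_purged(target_purged: set, source_executed: set, mode: str) -> set:
    if mode == "OFF":      # default: leave the target's GTID set unchanged
        return set(target_purged)
    if mode == "APPEND":   # add the source's executed GTIDs to the target's set
        return target_purged | source_executed
    if mode == "REPLACE":  # discard the target's set, keep only the source's
        return set(source_executed)
    raise ValueError(f"unknown mode: {mode}")

target = {"a:1", "a:2"}
source = {"b:1", "b:2"}
print(update_gtid_purged(target, source, "APPEND"))   # union of both sets
print(update_gtid_purged(target, source, "REPLACE"))  # only the source set
```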
- Text files:
- Names of data files in S3 bucket: Specify the names of data files in the Amazon S3 bucket. In addition to plain text files, you can specify text files in compressed formats such as gzip (.gz) and zstd (.zst). You can specify ranges of files using wildcard pattern matching: to match any single character, use ?, and to match any sequence of characters, use *. For example, data_c? and /backup/replica/2021/*.tsv.
- File parsing settings:
- Character set: Enter the character set to import data to the target MySQL instance. It must correspond to the character set given in the dump metadata that was used when the MySQL dump was created by the MySQL Shell instance dump utility, schema dump utility, or table dump utility. The character set must be permitted by the character_set_client system variable and supported by the MySQL instance. The default character set is utf8mb4. See Character Sets and Collations in MySQL.
- Dialect: Select the dialect of the imported data file. You can select CSV (Unix), CSV, or TSV. The default dialect is CSV (Unix). The dialect selects the default values of the following parsing settings: Fields terminated by, Fields enclosed by, Fields escaped by, and Fields optionally enclosed. To view the default settings per dialect, see Dialect settings.
- Skip rows: Specify the number of rows to skip at the beginning of the imported data file or, in the case of multiple import files, at the beginning of every file included in the file list. By default, no rows are skipped.
- Lines terminated by: Specify one or more characters (or an empty string) with which each line is terminated in the imported data file. For example, \r\n. The default is as per the specified dialect.
- Fields terminated by: Specify one or more characters (or an empty string) with which each field is terminated in the imported data file. For example, \t. The default is as per the specified dialect.
- Fields enclosed by: Specify a single character (or an empty string) with which the utility encloses each field in the imported data file. For example, ". The default is as per the specified dialect.
- Fields escaped by: Specify the character that begins escape sequences in the imported data file. For example, \. The default is as per the specified dialect.
- Fields optionally enclosed: Select the option to enclose a field only if it has a string data type such as CHAR, BINARY, TEXT, or ENUM; deselect the option to enclose all of the fields in the imported data file.
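These parsing settings map closely onto the options of a generic CSV reader. As a sketch, Python's csv module can mimic a TSV-like configuration (Fields terminated by \t, Fields enclosed by ", Fields escaped by \, and one skipped header row); the sample data is made up:

```python
import csv
import io

# Sample data: one header row to skip, tab-separated fields,
# values optionally enclosed in double quotes.
raw = 'id\tname\n1\t"Ada Lovelace"\n2\t"Alan\tTuring"\n'

reader = csv.reader(
    io.StringIO(raw),
    delimiter="\t",    # Fields terminated by: \t
    quotechar='"',     # Fields enclosed by: "
    escapechar="\\",   # Fields escaped by: \
)
next(reader)           # Skip rows: 1 (drop the header line)
rows = list(reader)
print(rows)  # [['1', 'Ada Lovelace'], ['2', 'Alan\tTuring']]
```

Note how the tab inside the quoted field "Alan\tTuring" survives as part of the value instead of splitting the field, which is what the enclosure setting exists for.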
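The ? and * wildcards in the file-name field behave like shell-style globbing; Python's fnmatch illustrates how the two example patterns from above match (the concrete file names are made up):

```python
from fnmatch import fnmatchcase

# ? matches exactly one character; * matches any sequence of characters.
print(fnmatchcase("data_c1", "data_c?"))   # True
print(fnmatchcase("data_c12", "data_c?"))  # False: two characters after data_c
print(fnmatchcase("/backup/replica/2021/day01.tsv",
                  "/backup/replica/2021/*.tsv"))  # True
```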
- Destination: Select the Schema and the Table in the DB System to which you want to import data.
- Click Import. The data import operation begins.
The Data Imports tab is displayed, where you can view the details of your import operation.