MySQL 5.0 Reference Manual  /  ...  /  LOAD DATA INFILE Syntax

13.2.6 LOAD DATA INFILE Syntax

LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [CHARACTER SET charset_name]
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
    [IGNORE number LINES]
    [(col_name_or_user_var,...)]
    [SET col_name = expr,...]

The LOAD DATA INFILE statement reads rows from a text file into a table at a very high speed. LOAD DATA INFILE is the complement of SELECT ... INTO OUTFILE. (See “SELECT ... INTO Syntax”.) To write data from a table to a file, use SELECT ... INTO OUTFILE. To read the file back into a table, use LOAD DATA INFILE. The syntax of the FIELDS and LINES clauses is the same for both statements. Both clauses are optional, but FIELDS must precede LINES if both are specified.

You can also load data files by using the mysqlimport utility; it operates by sending a LOAD DATA INFILE statement to the server. The --local option causes mysqlimport to read data files from the client host. You can specify the --compress option to get better performance over slow networks if the client and server support the compressed protocol. See Section 4.5.5, “mysqlimport — A Data Import Program”.

For more information about the efficiency of INSERT versus LOAD DATA INFILE and speeding up LOAD DATA INFILE, see “Speed of INSERT Statements”.

The file name must be given as a literal string. On Windows, specify backslashes in path names as forward slashes or doubled backslashes. As of MySQL 5.0.19, the character_set_filesystem system variable controls the interpretation of the file name.

The server uses the character set indicated by the character_set_database system variable to interpret the information in the file. SET NAMES and the setting of character_set_client do not affect interpretation of input. If the contents of the input file use a character set that differs from the default, it is usually preferable to specify the character set of the file by using the CHARACTER SET clause, which is available as of MySQL 5.0.38. A character set of binary specifies no conversion.

LOAD DATA INFILE interprets all fields in the file as having the same character set, regardless of the data types of the columns into which field values are loaded. For proper interpretation of file contents, you must ensure that it was written with the correct character set. For example, if you write a data file with mysqldump -T or by issuing a SELECT ... INTO OUTFILE statement in mysql, be sure to use a --default-character-set option so that output is written in the character set to be used when the file is loaded with LOAD DATA INFILE.

Note that it is currently not possible to load data files that use the ucs2 character set.

If you use LOW_PRIORITY, execution of the LOAD DATA statement is delayed until no other clients are reading from the table. This affects only storage engines that use only table-level locking (such as MyISAM, MEMORY, and MERGE).

If you specify CONCURRENT with a MyISAM table that satisfies the condition for concurrent inserts (that is, it contains no free blocks in the middle), other threads can retrieve data from the table while LOAD DATA is executing. This option affects the performance of LOAD DATA a bit, even if no other thread is using the table at the same time.

CONCURRENT is not replicated. See “Replication and LOAD Operations” for more information.

The LOCAL keyword affects expected location of the file and error handling, as described later. LOCAL works only if your server and your client both have been configured to permit it. For example, if mysqld was started with --local-infile=0, LOCAL does not work. See Section 6.1.6, “Security Issues with LOAD DATA LOCAL”.

The LOCAL keyword affects where the file is expected to be found:

  • If LOCAL is specified, the file is read by the client program on the client host and sent to the server. The file can be given as a full path name to specify its exact location. If given as a relative path name, the name is interpreted relative to the directory in which the client program was started.

    When using LOCAL with LOAD DATA, a copy of the file is created in the server's temporary directory. This is not the directory determined by the value of tmpdir or slave_load_tmpdir, but rather the operating system's temporary directory, and is not configurable in the MySQL Server. (Typically the system temporary directory is /tmp on Linux systems and C:\WINDOWS\TEMP on Windows.) Lack of sufficient space for the copy in this directory can cause the LOAD DATA LOCAL statement to fail.

  • If LOCAL is not specified, the file must be located on the server host and is read directly by the server. The server uses the following rules to locate the file:

    • If the file name is an absolute path name, the server uses it as given.

    • If the file name is a relative path name with one or more leading components, the server searches for the file relative to the server's data directory.

    • If a file name with no leading components is given, the server looks for the file in the database directory of the default database.

In the non-LOCAL case, these rules mean that a file named as ./myfile.txt is read from the server's data directory, whereas the file named as myfile.txt is read from the database directory of the default database. For example, if db1 is the default database, the following LOAD DATA statement reads the file data.txt from the database directory for db1, even though the statement explicitly loads the file into a table in the db2 database:

LOAD DATA INFILE 'data.txt' INTO TABLE db2.my_table;
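The non-LOCAL lookup rules can be sketched in a few lines of Python (a toy model with Unix-style paths; `datadir` and `default_db` stand in for the server's data directory and the default database, and are assumptions of this sketch, not server variables):

```python
def resolve_server_path(file_name, datadir, default_db):
    """Sketch of the three non-LOCAL file lookup rules described above."""
    if file_name.startswith('/'):
        return file_name                                  # absolute: used as given
    if '/' in file_name:
        return datadir + '/' + file_name                  # relative to the data directory
    return datadir + '/' + default_db + '/' + file_name   # database directory of default db

print(resolve_server_path('./myfile.txt', '/var/lib/mysql', 'db1'))
# /var/lib/mysql/./myfile.txt
print(resolve_server_path('myfile.txt', '/var/lib/mysql', 'db1'))
# /var/lib/mysql/db1/myfile.txt
```

The distinction between `./myfile.txt` and `myfile.txt` in the paragraph above falls out of the second and third rules.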

For security reasons, when reading text files located on the server, the files must either reside in the database directory or be readable by all. Also, to use LOAD DATA INFILE on server files, you must have the FILE privilege. See Section 6.2.1, “Privileges Provided by MySQL”. For non-LOCAL load operations, if the secure_file_priv system variable is set to a nonempty directory name, the file to be loaded must be located in that directory.

Using LOCAL is a bit slower than letting the server access the files directly, because the contents of the file must be sent over the connection by the client to the server. On the other hand, you do not need the FILE privilege to load local files.

LOCAL also affects error handling:

  • With LOAD DATA INFILE, data-interpretation and duplicate-key errors terminate the operation.

  • With LOAD DATA LOCAL INFILE, data-interpretation and duplicate-key errors become warnings and the operation continues because the server has no way to stop transmission of the file in the middle of the operation. For duplicate-key errors, this is the same as if IGNORE is specified. IGNORE is explained further later in this section.

The REPLACE and IGNORE keywords control handling of input rows that duplicate existing rows on unique key values:

  • If you specify REPLACE, input rows replace existing rows; that is, a row that has the same value for a primary key or unique index as an existing row replaces that row. See Section 13.2.7, “REPLACE Syntax”.

  • If you specify IGNORE, rows that duplicate an existing row on a unique key value are discarded.

  • If you do not specify either option, the behavior depends on whether the LOCAL keyword is specified. Without LOCAL, an error occurs when a duplicate key value is found, and the rest of the text file is ignored. With LOCAL, the default behavior is the same as if IGNORE is specified; this is because the server has no way to stop transmission of the file in the middle of the operation.
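The three behaviors can be modeled with a toy loader keyed on a unique column (illustrative only, not the server's implementation):

```python
def load_rows(table, rows, key, mode=None):
    """Toy model of duplicate-key handling on a unique key.
    `table` maps key value -> row; `mode` is 'REPLACE', 'IGNORE', or None,
    where None mimics the non-LOCAL default: stop at the first duplicate."""
    for row in rows:
        k = row[key]
        if k in table:
            if mode == 'REPLACE':
                table[k] = row          # new row wins
            elif mode == 'IGNORE':
                continue                # keep the existing row
            else:
                raise ValueError('Duplicate entry %r' % (k,))
        else:
            table[k] = row
    return table
```

With `mode='REPLACE'` the last duplicate survives; with `mode='IGNORE'` the first does; with neither, loading aborts at the duplicate, as described above.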

To ignore foreign key constraints during the load operation, issue a SET foreign_key_checks = 0 statement before executing LOAD DATA.

If you use LOAD DATA INFILE on an empty MyISAM table, all nonunique indexes are created in a separate batch (as for REPAIR TABLE). Normally, this makes LOAD DATA INFILE much faster when you have many indexes. In some extreme cases, you can create the indexes even faster by turning them off with ALTER TABLE ... DISABLE KEYS before loading the file into the table and using ALTER TABLE ... ENABLE KEYS to re-create the indexes after loading the file. See “Speed of INSERT Statements”.

For both the LOAD DATA INFILE and SELECT ... INTO OUTFILE statements, the syntax of the FIELDS and LINES clauses is the same. Both clauses are optional, but FIELDS must precede LINES if both are specified.

If you specify a FIELDS clause, each of its subclauses (TERMINATED BY, [OPTIONALLY] ENCLOSED BY, and ESCAPED BY) is also optional, except that you must specify at least one of them.

If you specify no FIELDS or LINES clause, the defaults are the same as if you had written this:

FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '\n' STARTING BY ''
(Backslash is the MySQL escape character within strings in SQL statements, so to specify a literal backslash, you must specify two backslashes for the value to be interpreted as a single backslash. The escape sequences '\t' and '\n' specify tab and newline characters, respectively.)

In other words, the defaults cause LOAD DATA INFILE to act as follows when reading input:

  • Look for line boundaries at newlines.

  • Do not skip over any line prefix.

  • Break lines into fields at tabs.

  • Do not expect fields to be enclosed within any quoting characters.

  • Interpret characters preceded by the escape character \ as escape sequences. For example, \t, \n, and \\ signify tab, newline, and backslash, respectively. See the discussion of FIELDS ESCAPED BY later for the full list of escape sequences.

Conversely, the defaults cause SELECT ... INTO OUTFILE to act as follows when writing output:

  • Write tabs between fields.

  • Do not enclose fields within any quoting characters.

  • Use \ to escape instances of tab, newline, or \ that occur within field values.

  • Write newlines at the ends of lines.
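At the level of a single field, these default escaping rules can be sketched in Python (a simplified model, not the server's code):

```python
ESC = '\\'  # the default FIELDS ESCAPED BY character

def encode_field(value):
    """Write-side sketch: prefix \, tab (the field terminator), and newline
    (the line terminator) with the escape character; ASCII NUL becomes \0."""
    out = []
    for ch in value:
        if ch == '\x00':
            out.append(ESC + '0')
        elif ch in (ESC, '\t', '\n'):
            out.append(ESC + ch)
        else:
            out.append(ch)
    return ''.join(out)

def decode_field(text):
    """Read-side sketch: strip the escape character and take the next
    character literally; \0 maps back to NUL. (The other letter escapes
    listed later in this section are omitted for brevity.)"""
    out, i = [], 0
    while i < len(text):
        if text[i] == ESC and i + 1 < len(text):
            nxt = text[i + 1]
            out.append('\x00' if nxt == '0' else nxt)
            i += 2
        else:
            out.append(text[i])
            i += 1
    return ''.join(out)
```

Encoding followed by decoding round-trips any field value, which is why matched defaults let SELECT ... INTO OUTFILE output be reloaded with LOAD DATA INFILE.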


If you have generated the text file on a Windows system, you might have to use LINES TERMINATED BY '\r\n' to read the file properly, because Windows programs typically use two characters as a line terminator. Some programs, such as WordPad, might use \r as a line terminator when writing files. To read such files, use LINES TERMINATED BY '\r'.

If all the lines you want to read in have a common prefix that you want to ignore, you can use LINES STARTING BY 'prefix_string' to skip over the prefix, and anything before it. If a line does not include the prefix, the entire line is skipped. Suppose that you issue the following statement:

LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test
  FIELDS TERMINATED BY ','  LINES STARTING BY 'xxx';

If the data file looks like this:

xxx"abc",1
something xxx"def",2
"ghi",3

The resulting rows will be ("abc",1) and ("def",2). The third row in the file is skipped because it does not contain the prefix.
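The LINES STARTING BY behavior just described can be modeled in a few lines (a sketch, not the server's parser):

```python
def extract_lines(text, prefix):
    """LINES STARTING BY: keep only lines containing `prefix`, dropping
    the prefix and everything before it; skip lines without it."""
    rows = []
    for line in text.splitlines():
        pos = line.find(prefix)
        if pos == -1:
            continue                          # no prefix: whole line skipped
        rows.append(line[pos + len(prefix):])
    return rows

data = 'xxx"abc",1\nsomething xxx"def",2\n"ghi",3\n'
print(extract_lines(data, 'xxx'))   # ['"abc",1', '"def",2']
```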

The IGNORE number LINES option can be used to ignore lines at the start of the file. For example, you can use IGNORE 1 LINES to skip over an initial header line containing column names:

LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test IGNORE 1 LINES;
When you use SELECT ... INTO OUTFILE in tandem with LOAD DATA INFILE to write data from a database into a file and then read the file back into the database later, the field- and line-handling options for both statements must match. Otherwise, LOAD DATA INFILE will not interpret the contents of the file properly. Suppose that you use SELECT ... INTO OUTFILE to write a file with fields delimited by commas:

SELECT * INTO OUTFILE 'data.txt'
  FIELDS TERMINATED BY ','
  FROM table2;

To read the comma-delimited file back in, the correct statement would be:

LOAD DATA INFILE 'data.txt' INTO TABLE table2
  FIELDS TERMINATED BY ',';
If instead you tried to read in the file with the statement shown following, it wouldn't work because it instructs LOAD DATA INFILE to look for tabs between fields:

LOAD DATA INFILE 'data.txt' INTO TABLE table2
  FIELDS TERMINATED BY '\t';
The likely result is that each input line would be interpreted as a single field.

LOAD DATA INFILE can be used to read files obtained from external sources. For example, many programs can export data in comma-separated values (CSV) format, such that lines have fields separated by commas and enclosed within double quotation marks, with an initial line of column names. If the lines in such a file are terminated by carriage return/newline pairs, the statement shown here illustrates the field- and line-handling options you would use to load the file:

LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\r\n'
  IGNORE 1 LINES;

If the input values are not necessarily enclosed within quotation marks, use OPTIONALLY before the ENCLOSED BY keywords.

Any of the field- or line-handling options can specify an empty string (''). If not empty, the FIELDS [OPTIONALLY] ENCLOSED BY and FIELDS ESCAPED BY values must be a single character. The FIELDS TERMINATED BY, LINES STARTING BY, and LINES TERMINATED BY values can be more than one character. For example, to write lines that are terminated by carriage return/linefeed pairs, or to read a file containing such lines, specify a LINES TERMINATED BY '\r\n' clause.

To read a file containing jokes that are separated by lines consisting of %%, you can do this:

CREATE TABLE jokes
  (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  joke TEXT NOT NULL);
LOAD DATA INFILE '/tmp/jokes.txt' INTO TABLE jokes
  FIELDS TERMINATED BY ''
  LINES TERMINATED BY '\n%%\n' (joke);

FIELDS [OPTIONALLY] ENCLOSED BY controls quoting of fields. For output (SELECT ... INTO OUTFILE), if you omit the word OPTIONALLY, all fields are enclosed by the ENCLOSED BY character. An example of such output (using a comma as the field delimiter) is shown here:

"1","a string","100.20"
"2","a string containing a , comma","102.20"
"3","a string containing a \" quote","102.20"
"4","a string containing a \", quote and comma","102.20"

If you specify OPTIONALLY, the ENCLOSED BY character is used only to enclose values from columns that have a string data type (such as CHAR, BINARY, TEXT, or ENUM):

1,"a string",100.20
2,"a string containing a , comma",102.20
3,"a string containing a \" quote",102.20
4,"a string containing a \", quote and comma",102.20

Occurrences of the ENCLOSED BY character within a field value are escaped by prefixing them with the ESCAPED BY character. Also note that if you specify an empty ESCAPED BY value, it is possible to inadvertently generate output that cannot be read properly by LOAD DATA INFILE. For example, the preceding output just shown would appear as follows if the escape character is empty. Observe that the second field in the fourth line contains a comma following the quote, which (erroneously) appears to terminate the field:

1,"a string",100.20
2,"a string containing a , comma",102.20
3,"a string containing a " quote",102.20
4,"a string containing a ", quote and comma",102.20

For input, the ENCLOSED BY character, if present, is stripped from the ends of field values. (This is true regardless of whether OPTIONALLY is specified; OPTIONALLY has no effect on input interpretation.) Occurrences of the ENCLOSED BY character preceded by the ESCAPED BY character are interpreted as part of the current field value.

If the field begins with the ENCLOSED BY character, instances of that character are recognized as terminating a field value only if followed by the field or line TERMINATED BY sequence. To avoid ambiguity, occurrences of the ENCLOSED BY character within a field value can be doubled and are interpreted as a single instance of the character. For example, if ENCLOSED BY '"' is specified, quotation marks are handled as shown here:

"The ""BIG"" boss"  -> The "BIG" boss
The "BIG" boss      -> The "BIG" boss
The ""BIG"" boss    -> The ""BIG"" boss
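Stripping and un-doubling the ENCLOSED BY character for a single, already-isolated field can be sketched as follows (a simplified model that ignores the surrounding TERMINATED BY context):

```python
def unquote(field, quote='"', esc='\\'):
    """Input-side sketch of ENCLOSED BY handling: strip the enclosing
    quotes, then collapse escaped and doubled quote characters."""
    if len(field) >= 2 and field[0] == quote and field[-1] == quote:
        body = field[1:-1]
        body = body.replace(esc + quote, quote)    # \" -> "
        body = body.replace(quote + quote, quote)  # "" -> "
        return body
    return field                                   # not enclosed: as-is

print(unquote('"The ""BIG"" boss"'))   # The "BIG" boss
print(unquote('The ""BIG"" boss'))     # The ""BIG"" boss
```

Note how a field that does not begin with the quote character is left untouched, matching the third row of the table above.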

FIELDS ESCAPED BY controls how to read or write special characters:

  • For input, if the FIELDS ESCAPED BY character is not empty, occurrences of that character are stripped and the following character is taken literally as part of a field value. Exceptions are the two-character sequences shown in the following table, where the first character is the escape character (\ is used here). The rules for NULL handling are described later in this section.

    Escape Sequence   Character
    \0                An ASCII NUL (X'00') character
    \b                A backspace character
    \n                A newline (linefeed) character
    \r                A carriage return character
    \t                A tab character
    \Z                ASCII 26 (Control+Z)
    \N                NULL
    For more information about \-escape syntax, see Section 9.1.1, “String Literals”.

    If the FIELDS ESCAPED BY character is empty, escape-sequence interpretation does not occur.

  • For output, if the FIELDS ESCAPED BY character is not empty, it is used to prefix the following characters on output:

    • The FIELDS ESCAPED BY character

    • The FIELDS [OPTIONALLY] ENCLOSED BY character

    • The first character of the FIELDS TERMINATED BY and LINES TERMINATED BY values

    • ASCII 0 (what is actually written following the escape character is ASCII "0", not a zero-valued byte)

    If the FIELDS ESCAPED BY character is empty, no characters are escaped and NULL is output as NULL, not \N. It is probably not a good idea to specify an empty escape character, particularly if field values in your data contain any of the characters in the list just given.

In certain cases, field- and line-handling options interact:

  • If LINES TERMINATED BY is an empty string and FIELDS TERMINATED BY is nonempty, lines are also terminated with FIELDS TERMINATED BY.

  • If the FIELDS TERMINATED BY and FIELDS ENCLOSED BY values are both empty (''), a fixed-row (nondelimited) format is used. With fixed-row format, no delimiters are used between fields (but you can still have a line terminator). Instead, column values are read and written using a field width wide enough to hold all values in the field. For TINYINT, SMALLINT, MEDIUMINT, INT, and BIGINT, the field widths are 4, 6, 8, 11, and 20, respectively, no matter what the declared display width is.

    LINES TERMINATED BY is still used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values. If you do not have a line terminator, you should set this to ''. In this case, the text file must contain all fields for each row.

    Fixed-row format also affects handling of NULL values, as described later.


    Fixed-size format does not work if you are using a multibyte character set.


    Before MySQL 5.0.6, fixed-row format used the display width of the column. For example, INT(4) was read or written using a field with a width of 4. However, if the column contained wider values, they were dumped to their full width, leading to the possibility of a ragged field holding values of different widths. Using a field wide enough to hold all values in the field prevents this problem. However, data files written before this change was made might not be reloaded correctly with LOAD DATA INFILE for MySQL 5.0.6 and up. This change also affects data files read by mysqlimport and written by mysqldump --tab, which use LOAD DATA INFILE and SELECT ... INTO OUTFILE.
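The stated integer field widths can be captured in a small sketch (a toy reader for integer columns only; function and parameter names are illustrative):

```python
# Fixed-row field widths for the integer types, as stated above
# (independent of any declared display width such as INT(4)).
FIXED_WIDTHS = {'TINYINT': 4, 'SMALLINT': 6, 'MEDIUMINT': 8,
                'INT': 11, 'BIGINT': 20}

def split_fixed_row(line, col_types):
    """Cut one fixed-row line into per-column strings and strip the
    space padding."""
    fields, pos = [], 0
    for t in col_types:
        width = FIXED_WIDTHS[t]
        fields.append(line[pos:pos + width].strip())
        pos += width
    return fields

# A row holding (INT 42, SMALLINT -3) is 11 + 6 = 17 characters wide.
row = '42'.ljust(11) + '-3'.ljust(6)
print(split_fixed_row(row, ['INT', 'SMALLINT']))  # ['42', '-3']
```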

Handling of NULL values varies according to the FIELDS and LINES options in use:

  • For the default FIELDS and LINES values, NULL is written as a field value of \N for output, and a field value of \N is read as NULL for input (assuming that the ESCAPED BY character is \).

  • If FIELDS ENCLOSED BY is not empty, a field containing the literal word NULL as its value is read as a NULL value. This differs from the word NULL enclosed within FIELDS ENCLOSED BY characters, which is read as the string 'NULL'.

  • If FIELDS ESCAPED BY is empty, NULL is written as the word NULL.

  • With fixed-row format (which is used when FIELDS TERMINATED BY and FIELDS ENCLOSED BY are both empty), NULL is written as an empty string. This causes both NULL values and empty strings in the table to be indistinguishable when written to the file because both are written as empty strings. If you need to be able to tell the two apart when reading the file back in, you should not use fixed-row format.
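The output-side rules above can be condensed into a sketch (parameter names are illustrative, not server variables):

```python
def encode_null(enclosed_by, escaped_by, terminated_by):
    """How a NULL value is written on output under the FIELDS options
    above (a simplified sketch)."""
    if terminated_by == '' and enclosed_by == '':
        return ''                      # fixed-row format: empty string
    if escaped_by == '':
        return 'NULL'                  # no escape character: the word NULL
    return escaped_by + 'N'            # default: \N

print(encode_null('', '\\', '\t'))     # \N
print(encode_null('', '', '\t'))       # NULL
```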

An attempt to load NULL into a NOT NULL column causes assignment of the implicit default value for the column's data type and a warning, or an error in strict SQL mode. Implicit default values are discussed in Section 11.6, “Data Type Default Values”.

Some cases are not supported by LOAD DATA INFILE:

  • Fixed-size rows (FIELDS TERMINATED BY and FIELDS ENCLOSED BY both empty) and BLOB or TEXT columns.

  • If you specify one separator that is the same as or a prefix of another, LOAD DATA INFILE cannot interpret the input properly. For example, the following FIELDS clause would cause problems:
FIELDS TERMINATED BY '"' ENCLOSED BY '"'
  • If FIELDS ESCAPED BY is empty, a field value that contains an occurrence of FIELDS ENCLOSED BY or LINES TERMINATED BY followed by the FIELDS TERMINATED BY value causes LOAD DATA INFILE to stop reading a field or line too early. This happens because LOAD DATA INFILE cannot properly determine where the field or line value ends.

The following example loads all columns of the persondata table:

LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata;

By default, when no column list is provided at the end of the LOAD DATA INFILE statement, input lines are expected to contain a field for each table column. If you want to load only some of a table's columns, specify a column list:

LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata (col1,col2,...);

You must also specify a column list if the order of the fields in the input file differs from the order of the columns in the table. Otherwise, MySQL cannot tell how to match input fields with table columns.

Before MySQL 5.0.3, the column list must contain only names of columns in the table being loaded, and the SET clause is not supported. As of MySQL 5.0.3, the column list can contain either column names or user variables. With user variables, the SET clause enables you to perform transformations on their values before assigning the result to columns.

User variables in the SET clause can be used in several ways. The following example uses the first input column directly for the value of t1.column1, and assigns the second input column to a user variable that is subjected to a division operation before being used for the value of t1.column2:

LOAD DATA INFILE 'file.txt'
  INTO TABLE t1
  (column1, @var1)
  SET column2 = @var1/100;

The SET clause can be used to supply values not derived from the input file. The following statement sets column3 to the current date and time:

LOAD DATA INFILE 'file.txt'
  INTO TABLE t1
  (column1, column2)
  SET column3 = CURRENT_TIMESTAMP;

You can also discard an input value by assigning it to a user variable and not assigning the variable to a table column:

LOAD DATA INFILE 'file.txt'
  INTO TABLE t1
  (column1, @dummy, column2, @dummy, column3);

Use of the column/variable list and SET clause is subject to the following restrictions:

  • Assignments in the SET clause should have only column names on the left hand side of assignment operators.

  • You can use subqueries in the right hand side of SET assignments. A subquery that returns a value to be assigned to a column may be a scalar subquery only. Also, you cannot use a subquery to select from the table that is being loaded.

  • Lines ignored by an IGNORE clause are not processed for the column/variable list or SET clause.

  • User variables cannot be used when loading data with fixed-row format because user variables do not have a display width.

When processing an input line, LOAD DATA splits it into fields and uses the values according to the column/variable list and the SET clause, if they are present. Then the resulting row is inserted into the table. If there are BEFORE INSERT or AFTER INSERT triggers for the table, they are activated before or after inserting the row, respectively.

If an input line has too many fields, the extra fields are ignored and the number of warnings is incremented.

If an input line has too few fields, the table columns for which input fields are missing are set to their default values. Default value assignment is described in Section 11.6, “Data Type Default Values”.

An empty field value is interpreted differently from a missing field:

  • For string types, the column is set to the empty string.

  • For numeric types, the column is set to 0.

  • For date and time types, the column is set to the appropriate zero value for the type. See Section 11.3, “Date and Time Types”.

These are the same values that result if you assign an empty string explicitly to a string, numeric, or date or time type in an INSERT or UPDATE statement.
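As a sketch, the empty-field defaults map out like this (type names are illustrative; zero values are as in MySQL 5.0):

```python
def empty_field_default(col_type):
    """Value assigned when an input field is present but empty,
    per the list above."""
    if col_type in ('CHAR', 'VARCHAR', 'TEXT'):
        return ''                        # string types: empty string
    if col_type in ('INT', 'DECIMAL', 'FLOAT', 'DOUBLE'):
        return 0                         # numeric types: 0
    if col_type == 'DATE':
        return '0000-00-00'              # zero date
    if col_type == 'DATETIME':
        return '0000-00-00 00:00:00'     # zero datetime
    raise ValueError('unhandled type: %s' % col_type)
```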

Treatment of empty or incorrect field values differs from that just described if the SQL mode is set to a restrictive value. For example, if sql_mode is set to TRADITIONAL, conversion of an empty value or a value such as 'x' for a numeric column results in an error, not conversion to 0. (With LOCAL or IGNORE, warnings occur rather than errors, even with a restrictive sql_mode value, and the row is inserted using the same closest-value behavior used for nonrestrictive SQL modes. This occurs because the server has no way to stop transmission of the file in the middle of the operation.)

TIMESTAMP columns are set to the current date and time only if there is a NULL value for the column (that is, \N) and the column is not declared to permit NULL values, or if the TIMESTAMP column's default value is the current timestamp and it is omitted from the field list when a field list is specified.

LOAD DATA INFILE regards all input as strings, so you cannot use numeric values for ENUM or SET columns the way you can with INSERT statements. All ENUM and SET values must be specified as strings.

BIT values cannot be loaded using binary notation (for example, b'011010'). To work around this, specify the values as regular integers and use the SET clause to convert them so that MySQL performs a numeric type conversion and loads them into the BIT column properly:

shell> cat /tmp/bit_test.txt
2
127
shell> mysql test
mysql> CREATE TABLE bit_test (b BIT(8));
Query OK, 0 rows affected (0.00 sec)

mysql> LOAD DATA INFILE '/tmp/bit_test.txt'
    -> INTO TABLE bit_test (@var1) SET b = CAST(@var1 AS UNSIGNED);
Query OK, 2 rows affected (0.00 sec)
Records: 2  Deleted: 0  Skipped: 0  Warnings: 0

mysql> SELECT BIN(b+0) FROM bit_test;
+----------+
| BIN(b+0) |
+----------+
| 10       |
| 1111111  |
+----------+
2 rows in set (0.00 sec)

On Unix, if you need LOAD DATA to read from a pipe, you can use the following technique (the example loads a listing of the / directory into the table db1.t1):

mkfifo /mysql/data/db1/ls.dat
chmod 666 /mysql/data/db1/ls.dat
find / -ls > /mysql/data/db1/ls.dat &
mysql -e "LOAD DATA INFILE 'ls.dat' INTO TABLE t1" db1

Here you must run the command that generates the data to be loaded and the mysql commands either on separate terminals, or run the data generation process in the background (as shown in the preceding example). If you do not do this, the pipe will block until data is read by the mysql process.

When the LOAD DATA INFILE statement finishes, it returns an information string in the following format:

Records: 1  Deleted: 0  Skipped: 0  Warnings: 0

Warnings occur under the same circumstances as when values are inserted using the INSERT statement (see Section 13.2.5, “INSERT Syntax”), except that LOAD DATA INFILE also generates warnings when there are too few or too many fields in the input row.

You can use SHOW WARNINGS to get a list of the first max_error_count warnings as information about what went wrong. See “SHOW WARNINGS Syntax”.

If you are using the C API, you can get information about the statement by calling the mysql_info() function. See “mysql_info()”.

User Comments
  Posted by Ramam Pullella on August 20, 2007
Converting strings into dates while loading data using LOAD DATA INFILE:

In the following example, we are trying to convert the data in the file for date columns col3, col4 in formats 'mm/dd/yyyy', 'dd/mm/yyyy' into MySQL standard YYYY-mm-dd respectively.

load data infile '/tmp/xxx.dat'
into table xxx
fields terminated by '|'
lines terminated by '\n'
(col1, col2, @col3, @col4)
set
col3 = str_to_date(@col3, '%m/%d/%Y'),
col4 = str_to_date(@col4, '%d/%m/%Y');

You could convert into any format you want by using the date_format function around the str_to_date().

set col2 = date_format(str_to_date(@col2, 'format'), 'your format')

  Posted by Jeremy Krieg on December 4, 2007
I just ran into the same problem that the first two posters (Nathan Nuebner and Robert Lee) had with fixed-width imports. I suspect that the reason for this behaviour derives from the following statement from the above documentation:

'...column values are read and written using a field width wide enough to hold all values in the field.'

If you have a VARCHAR(20) column in a multi-byte character set (eg, UTF8), then the "field width wide enough to hold all values" in this field, measured in bytes, will actually be somewhat greater than 20. The two workarounds above worked because they both specified character sets which allocate one byte per character (latin1 and binary).

Specifying the character set in the LOAD DATA INFILE statement does not seem to work around the problem - that seems only to affect the incoming conversion from bytes to characters, it doesn't affect the number of bytes read.

The Latin1/binary examples above worked because they weren't trying to load multi-byte characters, however for someone who was trying to import multi-byte characters (or more specifically, to import character sets like UTF8 that use variable-width encoding for the characters) it would not work. There doesn't appear to be an easy workaround that I can see except to write an import utility in another programming language like Perl, Java or C.
  Posted by Max Floden on February 29, 2008
Step by step guide on how to import Excel data using LOAD DATA:
  Posted by Ryan Neve on July 18, 2008
To load a text file with fixed width columns, I read each line into a user variable and cut it apart with SUBSTR in the SET clause, using the form:
LOAD DATA INFILE ... INTO TABLE ... (@var1)
SET Date=str_to_date(SUBSTR(@var1,3,10),'%m/%d/%Y'),
    ...
  Posted by lvaro G. Vicario on January 28, 2013
Importing floating point numbers that use comma as decimal separator requires the same trick than dates:

LOAD DATA LOCAL INFILE 'C:/path/to/mytable.txt' IGNORE
INTO TABLE mytable
(int_col, @float_col)
SET float_col = replace(@float_col, ',', '.');

MySQL casts the value into the column type when it reads it from the file, *before* applying the transformations described in the SET clause. If you instruct it to read it into a variable the value is handled as string from the beginning.
  Posted by Clive le Roux on February 2, 2009
If you get "Skipped records" using "LOAD DATA LOCAL INFILE" copy the data file to the actual database server and do the load without the "LOCAL" keyword.
This will then stop when an error occurs, 9 times out of 10 it will be index issues and you will know why there are skipped records.

e.g. LOAD DATA LOCAL INFILE 'myinfile.txt';
Query OK, 288168 rows affected (1 min 44.49 sec)
Records: 494522 Deleted: 0 Skipped: 206354 Warnings: 0

LOAD DATA INFILE '/data/input/myinfile.txt';
Query OK, 252243 rows affected (0.02 sec)
ERROR 1062 (23000): Duplicate entry '5935009001-2008-08-03 04:19:18' for key 1
  Posted by Mattias Andersson on April 7, 2009
I've looked for a way to conditionally ignore rows in an INFILE for a long time. There may be an obvious way to do this but I have never seen one.
Today I discovered a very neat way to achieve this.
Assume that you have an INFILE containing names and genders of people. In my case, the INFILE has fixed fields. The first column is either 1 (=Male) or 2 (=Female) and columns 2-18 contain the name of the person.

Now, if you would want to load the males (row[0] == 1) only, create the following table:

CREATE TABLE Names (
  name varchar(255),
  gender tinyint
)
PARTITION BY LIST (gender) (
  PARTITION Male VALUES IN (1)
  -- ,PARTITION Female VALUES IN (2)
);
Note that the Female partition is commented out.
Now load the data normally, but be sure to specify the IGNORE keyword.

LOAD DATA INFILE '/tmp/names.dmp' IGNORE INTO TABLE Names (@var)
  SET gender = SUBSTR(@var, 1, 1), name = SUBSTR(@var, 2, 17);

The IGNORE keyword prevents MySQL from aborting the import mid-file due to the missing partition.
Hope this is helpful to someone.

  Posted by Adrian Singer on April 7, 2009
Here's a great way to use LOAD DATA for UPDATEs, reaping the performance benefits of the super fast LOAD DATA when doing massive data updates.
  Posted by Chris Johnson on February 15, 2010
When the situation warrants, I will use LOAD DATA INFILE because of its speed. In other situations I'll resort to another method of loading the data. I wish LOAD DATA had the ability to output a .bad file of rejected records, as Oracle's SQL*Loader does. Without this, I must resort to writing code (PHP) to output bad records to a file.
I have an article about different methods of getting data into MySQL; because LOAD DATA lacks .bad files, I prefer PHP's MySQLi prepared statements.
  Posted by Mike H on April 26, 2010
It seems you need to remove the byte order mark (BOM) from Unicode input files; otherwise the BOM bytes are read in as part of the first field of the first row.
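A shell sketch of stripping a UTF-8 BOM (bytes EF BB BF) before loading; the file names here are invented, and `tail -c +4` simply drops the first three bytes, hence the guard that checks for the BOM first:

```shell
# Create a sample CSV that starts with a UTF-8 BOM (octal 357 273 277).
printf '\357\273\277id,name\n1,alice\n' > /tmp/with_bom.csv

# If the first three bytes are the BOM, copy the file without them.
if [ "$(head -c 3 /tmp/with_bom.csv)" = "$(printf '\357\273\277')" ]; then
    tail -c +4 /tmp/with_bom.csv > /tmp/no_bom.csv
else
    cp /tmp/with_bom.csv /tmp/no_bom.csv
fi

head -1 /tmp/no_bom.csv
```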
  Posted by Patrick Zahra on April 29, 2010
You can load a CSV file on a hosted server where you have no write access.

First, create a temporary table with one LongText field.

INSERT your CSV into that table as one single cell.

Then do a SELECT INTO DUMPFILE to save the file somewhere on the server accessible to MySQL.

LOAD DATA INFILE will now be able to find your CSV.

You will likely be unable to delete this file. Note also that SELECT ... INTO DUMPFILE will not overwrite an existing file (it fails with "File already exists"), so to reuse this technique you will need a fresh file name each time.
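The steps above can be sketched as follows, assuming a server-writable path and illustrative table and column names (the MySQL user needs the FILE privilege for both INTO DUMPFILE and LOAD DATA INFILE):

```sql
-- 1. Stage the CSV text in a one-column table, as a single value.
CREATE TEMPORARY TABLE csv_stage (doc LONGTEXT);
INSERT INTO csv_stage VALUES ('a,1\nb,2\nc,3\n');

-- 2. Write the value out verbatim as a server-side file.
SELECT doc INTO DUMPFILE '/tmp/staged.csv' FROM csv_stage;

-- 3. Load the file like any other server-side CSV.
LOAD DATA INFILE '/tmp/staged.csv'
INTO TABLE target_table
FIELDS TERMINATED BY ',';
```

INTO DUMPFILE writes the row with no column or line terminators and no escaping, so the newlines embedded in the staged string come through literally.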
  Posted by Aissam Bazzaoui on September 20, 2010
For people who have encoding problems while loading a file, try converting the file to a different encoding.

I used a tool called enca on my Linux box (apt-get install enca) and converted the file to Latin-1.

Just to check the file's encoding:

# enca import.csv
Universal transformation format 8 bits; UTF-8

To convert it to latin1:

# enconv -x latin1 import.csv

Then I ran the following queries to populate my table (encoded in utf8_general_ci):

SET collation_connection = 'utf8_general_ci';

load data local infile '/home/aissam/Documents/data/import.csv' INTO TABLE cscripts FIELDS TERMINATED BY ';';

Good luck!
  Posted by Aaron Cohen on October 1, 2010
Further to Mattias Andersson's comment, above, there *is* a convenient way to restrict the rows that end up in your table *if* your table has a unique index on a column of interest.

In my particular case, I have a CSV of all US Zipcodes. A number of entries in the CSV duplicate other entries. But, only one entry is marked as the "Preferred" entry, and that's the canonical entry that I want to have end up in my table. Unfortunately, that entry could appear anywhere in the file - not necessarily first (so I could use IGNORE) or last (so I could use REPLACE).

Since my table has a unique index on zip, the following allowed me to exclude all non-preferred rows from being imported:

LOAD DATA INFILE '...' IGNORE
INTO TABLE zipcodes
(city, state_code, @zip, area_code, county_fips, county_name, @preferred, timezone, dst, lat,
lon, msa, pmsa, @city_abbreviation, ma, zip_type)
SET allow_registrations = 1, zip = IF(@preferred='P', @zip, NULL)

(Note that the above is a string in a Ruby script that is programmatically executed.)

Since the IGNORE keyword is specified, when zip is set to NULL the row is ignored *assuming* the DBMS is configured to not auto-convert NULL to the empty string.

If instead the DBMS is configured to auto-convert NULL to the empty string, the single junk row must be deleted after the above completes.

DELETE FROM zipcodes WHERE zip = '';
  Posted by Mike Laird on October 13, 2010
Loading utf8 character encoded files that are comma separated into MySQL is a technology mine field for many people. The steps below are very specific to transferring data from Excel 2007 to MySQL in a 1 for 1 way, i.e., 4 columns in Excel 2007 into 4 fields in MySQL in the same column order. Other transfers are possible, but require fiddling with step h. below via information in the Help tab of MySQL Query Browser. This is not the only way to do it, but it works and the load is extremely fast.
a. Before starting the csv transformation and transfer, make sure the source data is “cleaned up”, including: sort by the first column and remove duplicate rows/tuples, remove all hyperlinks from copying from web sites, etc. Only use Excel 2007 (or later) because earlier Excel versions do not save data in utf8 character encoding. Some Microsoft manager of updates who is "standards resistant" says 'gotcha' every time you use an earlier version of Excel and try to get data into MySQL.
b. Get or make a filename.xlsx file in Excel 2007. This will store the text in utf8 character encoding. Do not have field name headings at the top of the columns.
c. Upload the filename.xlsx to Google Documents as a spreadsheet. If asked, check the box to put the spreadsheet into Google’s format.
d. Open the file in Google Documents. Check in the Google Docs spreadsheet that only text and/or numbers are in each cell (no hyperlinks, etc.). In the File command, download/export the file and select .csv as the file type. Download it to your own PC. Google Docs ‘magically’ cleans up Microsoft’s mess.
e. Save the filename.csv file on your own PC. Then change the file type to .txt You may change the filename, if you want to.
f. Move this new file, e.g., filename.txt, into the MySQL data directory that has the target schema of interest. An example is: MySQL Server 5.1/data/schema_name
g. Open MySQL Query Browser from Tools in the command bar in MySQL Administrator. MySQL Query Browser is in the MySQL Tools directory. Navigate to your schema_name
h. Enter the following commands into the top command entry bar – or double click the Data Manipulation Statements in the lower right corner, and double click LOAD DATA INFILE; then delete unnecessary lines and all square brackets [ ]. The numbers below should not be entered.
i. LOAD DATA INFILE 'filename.txt'
ii. REPLACE (this will eliminate duplicates in the new data)
iii. INTO TABLE schema_name.table
iv. CHARACTER SET utf8 (optional if your table is configured for utf8; the data is already utf8, but it won't hurt)
v. FIELDS TERMINATED BY ','
vi. OPTIONALLY ENCLOSED BY '"' (handles phrases that contain commas)
i. Press the Execute button. You may get a message saying no result set returned. This is OK, if there is no error statement and number.
j. Drag the table icon into the command entry bar, and a SELECT statement appears. Execute it. The data table will appear.
k. Compare the SELECT result set to the original Excel 2007 data file to find errors, e.g., erroneous fields that you didn’t catch up front.
l. Use the MySQL Query Browser to remove row/tuple errors
i. Indicate row by clicking on it
ii. Click Edit at the bottom
iii. Right click and select delete row
iv. Click Apply Changes
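Assembled into a single statement, step h amounts to something like the following (schema, table, and file names are the placeholders used above; the FIELDS clauses assume a comma-separated file with double-quoted values, which is what is usually wanted for fields containing commas):

```sql
LOAD DATA INFILE 'filename.txt'
REPLACE
INTO TABLE schema_name.tbl_name
CHARACTER SET utf8
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"';
```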

  Posted by M Locherer on May 5, 2011
Given a pipe-delimited extract like:

1|Proposed|2011-02-10 00:00:00
1|Scoped|2011-02-10 00:00:00
1|Proposed|2011-02-10 00:00:00
1|Scoped Out|2011-02-10 00:00:00
1|Proposed|2011-02-10 00:15:00

load data infile '/home/milo/ISExtract.txt'
into table ISExtract
fields terminated by '|'
lines terminated by '\r\n'
ignore 1 lines
(@ic, Status, @et)
set ItemCount = convert(@ic, signed),
    ExtractTime = str_to_date(@et, '%Y-%m-%d %H:%i:%s');

Don't forget apparmor if you are running Ubuntu - edit /etc/apparmor.d/usr.sbin.mysqld and add /home/milo to the accessible paths for the mysqld process.
lines terminated by '\r\n' --> because the file comes from a Windows source
Hope that helps!

  Posted by John Swapceinski on September 5, 2011
Create a table using the .csv file's header:

# pass in the file name as an argument: ./mktable filename.csv
# (the table name is the file name minus its .csv extension)
echo "create table $(basename "$1" .csv) ( "
head -1 "$1" | sed -e 's/,/ varchar(255),\n/g'
echo " varchar(255) );"
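A quick demonstration of what this emits (the sample file and names are invented; the `\n` in the sed replacement relies on GNU sed):

```shell
# Build a sample CSV with a header row.
printf 'id,name,email\n1,alice,a@example.com\n' > /tmp/people.csv

# Same transformation as the mktable script, inlined.
{
  echo "create table $(basename /tmp/people.csv .csv) ( "
  head -1 /tmp/people.csv | sed -e 's/,/ varchar(255),\n/g'
  echo " varchar(255) );"
} > /tmp/people.sql

cat /tmp/people.sql
```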

  Posted by Marc MAURICE on November 13, 2011
A little script I made to import CSV to MySQL. Thanks to John's comment.
  Posted by Barbara Snyder on October 16, 2012
After only about two frustrating hours (I am a total novice), I FINALLY got this to work on Mac by saving as comma-delimited CSV. The trick was to use


along with (of course)


  Posted by Tamara Bookal on October 29, 2012
A sample use of LOAD DATA INFILE to parse and load data into a MySQL database table from a PHP script:

"LOAD DATA INFILE '$myFile'" .
" INTO TABLE ... " .
" (@col1, @col2, @col3, @col4, @col5, @col6, @col7, @col8, @col9, @col10, @col11, @col12, @col13, @col14, @col15, @col16) " .
" SET sdate = STR_TO_DATE(@col1,'%m/%d/%Y'),
acq = @col2,
iss = @col3,
tdate = STR_TO_DATE(@col4,'%m/%d/%Y'),
msg = @col5,
rsp = @col6,
rev = @col7,
tid = @col8,
trace = @col9,
ldate = STR_TO_DATE(@col10,'%m/%d/%Y'),
ltime = @col11,
Cnum = @col12,
amount = REPLACE(REPLACE(TRIM(@col13), '$', ''), ',','') ,
txn = @col14,
code = @col15,
anum = @col16,
upload_date = Now(),
upload_source = '".$_FILES['filename']['name']."'");

This is a combination of different samples pieced together. The @colN variables represent the data elements in the file '$myFile'; the table fields are assigned from these variables.

Hope this is useful to someone else.
  Posted by Wilzon Mariño Bueno on April 8, 2015
Given a CSV file such as:

product 01,15,active,fruit
product 02,10,active,fruit
product 03,11,active,vegetables
product 04,15,active,fruit
product 05,19,active,fruit

LOAD DATA INFILE '/var/home/wilzonmb/data/product.csv'
INTO TABLE product
FIELDS TERMINATED BY ','
(product_name, product_price, product_status, category);

In this case, data is inserted only into the columns listed; columns that are not mentioned, such as id and stock, keep their default values.
  Posted by gere dses on October 11, 2015
You can use the LOAD DATA INFILE command to import a CSV file into a table.

First create a table whose columns match the fields of the file (col1, col2, col3, col4, col5, ...), for example:

`projectId` bigint(20) NOT NULL,
`surveyId` bigint(20) NOT NULL,
`views` bigint(20) NOT NULL,
`dateTime` datetime NOT NULL

If everything is fine, execute the following query to load the data from the CSV file (be sure to give the absolute path of your CSV file):

LOAD DATA INFILE '/var/www/csv/data.csv'

If everything went well, you have imported the data from the CSV file into the table successfully.