MySQL 5.5 Reference Manual  /  ...  /  mysqldump — A Database Backup Program

### 4.5.4 mysqldump — A Database Backup Program

The mysqldump client utility performs logical backups, producing a set of SQL statements that can be executed to reproduce the original database object definitions and table data. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.

mysqldump requires at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, and LOCK TABLES if the --single-transaction option is not used. Certain options might require other privileges as noted in the option descriptions.

To reload a dump file, you must have the privileges required to execute the statements that it contains, such as the appropriate CREATE privileges for objects created by those statements.

mysqldump output can include ALTER DATABASE statements that change the database collation. These may be used when dumping stored programs to preserve their character encodings. To reload a dump file containing such statements, the ALTER privilege for the affected database is required.

If you are performing a backup on the server and your tables all are MyISAM tables, you can also use mysqlhotcopy for this purpose.

Note

A dump made using PowerShell on Windows with output redirection creates a file that has UTF-16 encoding:

shell> mysqldump [options] > dump.sql

However, UTF-16 is not permitted as a connection character set (see Section 10.1.4, “Connection Character Sets and Collations”), so the dump file will not load correctly. To work around this issue, use the --result-file option, which creates the output in ASCII format:

shell> mysqldump [options] --result-file=dump.sql

#### Performance and Scalability Considerations

mysqldump advantages include the convenience and flexibility of viewing or even editing the output before restoring. You can clone databases for development and DBA work, or produce slight variations of an existing database for testing. However, it is not intended as a fast or scalable solution for backing up substantial amounts of data. With large data sizes, even if the backup step takes a reasonable time, restoring the data can be very slow because replaying the SQL statements involves disk I/O for insertion, index creation, and so on.

For large-scale backup and restore, a physical backup is more appropriate; it copies the data files in their original format so that they can be restored quickly:

• If your tables are primarily InnoDB tables, or if you have a mix of InnoDB and MyISAM tables, consider using the mysqlbackup command of the MySQL Enterprise Backup product. (Available as part of the Enterprise subscription.) It provides the best performance for InnoDB backups with minimal disruption; it can also back up tables from MyISAM and other storage engines; and it provides a number of convenient options to accommodate different backup scenarios. See Section 25.2, “MySQL Enterprise Backup Overview”.

• If your tables are primarily MyISAM tables, consider using mysqlhotcopy instead; it offers better backup and restore performance than mysqldump. See Section 4.6.9, “mysqlhotcopy — A Database Backup Program”.

mysqldump can retrieve and dump table contents row by row, or it can retrieve the entire content from a table and buffer it in memory before dumping it. Buffering in memory can be a problem if you are dumping large tables. To dump tables row by row, use the --quick option (or --opt, which enables --quick). The --opt option (and hence --quick) is enabled by default, so to enable memory buffering, use --skip-quick.

If you are using a recent version of mysqldump to generate a dump to be reloaded into a very old MySQL server, use the --skip-opt option instead of the --opt or --extended-insert option.

For additional information about mysqldump, see Section 7.4, “Using mysqldump for Backups”.

#### Invocation Syntax

There are in general three ways to use mysqldump: to dump a set of one or more tables, a set of one or more complete databases, or an entire MySQL server, as shown here:

shell> mysqldump [options] db_name [tbl_name ...]
shell> mysqldump [options] --databases db_name ...
shell> mysqldump [options] --all-databases

To dump entire databases, do not name any tables following db_name, or use the --databases or --all-databases option.

mysqldump does not dump the INFORMATION_SCHEMA database by default. As of MySQL 5.5, mysqldump dumps INFORMATION_SCHEMA if you name it explicitly on the command line, although you must also use the --skip-lock-tables option. Before MySQL 5.5, mysqldump silently ignores INFORMATION_SCHEMA even if you name it explicitly on the command line.
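As a sketch (the output file name is arbitrary, and a reachable server with your default credentials is assumed), dumping INFORMATION_SCHEMA on MySQL 5.5 looks like this:

```shell
# Dump the INFORMATION_SCHEMA database; --skip-lock-tables is required here.
mysqldump --skip-lock-tables information_schema > information_schema.sql
```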

mysqldump does not dump the performance_schema database.

Before MySQL 5.5.25, mysqldump does not dump the general_log or slow_query_log tables for dumps of the mysql database. As of 5.5.25, the dump includes statements to recreate those tables so that they are not missing after reloading the dump file. Log table contents are not dumped.

mysqldump also does not dump the NDB Cluster ndbinfo information database.

To see a list of the options your version of mysqldump supports, execute mysqldump --help.

Some mysqldump options are shorthand for groups of other options:

Use of --opt is the same as specifying --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset. All of the options that --opt stands for also are on by default because --opt is on by default.

Use of --compact is the same as specifying --skip-add-drop-table, --skip-add-locks, --skip-comments, --skip-disable-keys, and --skip-set-charset.

To reverse the effect of a group option, use its --skip-xxx form (--skip-opt or --skip-compact). It is also possible to select only part of the effect of a group option by following it with options that enable or disable specific features. Here are some examples:

To select the effect of --opt except for some features, use the --skip option for each feature. For example, to disable extended inserts and memory buffering, use --opt --skip-extended-insert --skip-quick. (Actually, --skip-extended-insert --skip-quick is sufficient because --opt is on by default.)

To reverse --opt for all features except index disabling and table locking, use --skip-opt --disable-keys --lock-tables.

When you selectively enable or disable the effect of a group option, order is important because options are processed first to last. For example, --disable-keys --lock-tables --skip-opt would not have the intended effect; it is the same as --skip-opt by itself.
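To make the ordering concrete, here is a sketch (db_name and the output file names are placeholders): the first command keeps the effect of --opt except for extended inserts, while the second ends with --skip-opt and is therefore equivalent to --skip-opt alone.

```shell
# Keep --opt behavior but write one INSERT statement per row.
mysqldump --opt --skip-extended-insert db_name > dump1.sql

# Options are processed first to last, so the final --skip-opt cancels
# the earlier --disable-keys and --lock-tables.
mysqldump --disable-keys --lock-tables --skip-opt db_name > dump2.sql
```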


#### Option Syntax - Alphabetical Summary

mysqldump supports the following options, which can be specified on the command line or in the [mysqldump] and [client] groups of an option file. For information about option files used by MySQL programs, see Section 4.2.6, “Using Option Files”.

Table 4.11 mysqldump Options

| Format | Description | Introduced |
|---|---|---|
| --add-locks | Surround each table dump with LOCK TABLES and UNLOCK TABLES statements | |
| --all-databases | Dump all tables in all databases | |
| --allow-keywords | Allow creation of column names that are keywords | |
| --apply-slave-statements | Include STOP SLAVE prior to CHANGE MASTER statement and START SLAVE at end of output | 5.5.3 |
| --bind-address | Use specified network interface to connect to MySQL Server | 5.5.8 |
| --character-sets-dir | Directory where character sets are installed | |
| --compact | Produce more compact output | |
| --compatible | Produce output that is more compatible with other database systems or with older MySQL servers | |
| --complete-insert | Use complete INSERT statements that include column names | |
| --compress | Compress all information sent between client and server | |
| --create-options | Include all MySQL-specific table options in CREATE TABLE statements | |
| --databases | Interpret all name arguments as database names | |
| --debug | Write debugging log | |
| --debug-check | Print debugging information when program exits | |
| --debug-info | Print debugging information, memory, and CPU statistics when program exits | |
| --default-auth | Authentication plugin to use | 5.5.9 |
| --default-character-set | Specify default character set | |
| --defaults-group-suffix | Option group suffix value | |
| --delayed-insert | Write INSERT DELAYED statements rather than INSERT statements | |
| --delete-master-logs | On a master replication server, delete the binary logs after performing the dump operation | |
| --disable-keys | For each table, surround INSERT statements with statements to disable and enable keys | |
| --dump-date | Include dump date as "Dump completed on" comment if --comments is given | |
| --dump-slave | Include CHANGE MASTER statement that lists binary log coordinates of slave's master | 5.5.3 |
| --enable-cleartext-plugin | Enable cleartext authentication plugin | 5.5.47 |
| --events | Dump events from dumped databases | |
| --extended-insert | Use multiple-row INSERT syntax | |
| --fields-enclosed-by | Used with the --tab option; has the same meaning as the corresponding clause for LOAD DATA INFILE | |
| --fields-escaped-by | Used with the --tab option; has the same meaning as the corresponding clause for LOAD DATA INFILE | |
| --fields-optionally-enclosed-by | Used with the --tab option; has the same meaning as the corresponding clause for LOAD DATA INFILE | |
| --fields-terminated-by | Used with the --tab option; has the same meaning as the corresponding clause for LOAD DATA INFILE | |
| --flush-logs | Flush MySQL server log files before starting dump | |
| --flush-privileges | Emit a FLUSH PRIVILEGES statement after dumping mysql database | |
| --force | Continue even if an SQL error occurs during a table dump | |
| --help | Display help message and exit | |
| --hex-blob | Dump binary columns using hexadecimal notation | |
| --host | Host to connect to (IP address or hostname) | |
| --ignore-table | Do not dump given table | |
| --include-master-host-port | Include MASTER_HOST/MASTER_PORT options in CHANGE MASTER statement produced with --dump-slave | 5.5.3 |
| --insert-ignore | Write INSERT IGNORE rather than INSERT statements | |
| --lines-terminated-by | Used with the --tab option; has the same meaning as the corresponding clause for LOAD DATA INFILE | |
| --lock-all-tables | Lock all tables across all databases | |
| --lock-tables | Lock all tables before dumping them | |
| --log-error | Append warnings and errors to named file | |
| --master-data | Write the binary log file name and position to the output | |
| --max_allowed_packet | Maximum packet length to send to or receive from server | |
| --net_buffer_length | Buffer size for TCP/IP and socket communication | |
| --no-autocommit | Enclose the INSERT statements for each dumped table within SET autocommit = 0 and COMMIT statements | |
| --no-create-db | Do not write CREATE DATABASE statements | |
| --no-create-info | Do not write CREATE TABLE statements that re-create each dumped table | |
| --no-set-names | Same as --skip-set-charset | |
| --no-tablespaces | Do not write any CREATE LOGFILE GROUP or CREATE TABLESPACE statements in output | |
| --order-by-primary | Dump each table's rows sorted by its primary key, or by its first unique index | |
| --pipe | On Windows, connect to server using named pipe | |
| --plugin-dir | Directory where plugins are installed | 5.5.9 |
| --port | TCP/IP port number to use for connection | |
| --print-defaults | Print default options | |
| --protocol | Connection protocol to use | |
| --quick | Retrieve rows for a table from the server a row at a time | |
| --quote-names | Quote identifiers within backtick characters | |
| --replace | Write REPLACE statements rather than INSERT statements | |
| --result-file | Direct output to a given file | |
| --routines | Dump stored routines (procedures and functions) from dumped databases | |
| --set-charset | Add SET NAMES default_character_set to output | |
| --shared-memory-base-name | The name of shared memory to use for shared-memory connections | |
| --single-transaction | Issue a BEGIN SQL statement before dumping data from server | |
| --skip-add-drop-table | Do not add a DROP TABLE statement before each CREATE TABLE statement | |
| --skip-compact | Do not produce more compact output | |
| --skip-disable-keys | Do not disable keys | |
| --skip-extended-insert | Turn off extended-insert | |
| --skip-opt | Turn off options set by --opt | |
| --skip-quick | Do not retrieve rows for a table from the server a row at a time | |
| --skip-quote-names | Do not quote identifiers | |
| --skip-set-charset | Do not write SET NAMES statement | |
| --skip-triggers | Do not dump triggers | |
| --skip-tz-utc | Turn off tz-utc | |
| --socket | For connections to localhost, the Unix socket file to use | |
| --ssl | Enable encrypted connection | |
| --ssl-ca | Path of file that contains list of trusted SSL CAs | |
| --ssl-capath | Path of directory that contains trusted SSL CA certificates in PEM format | |
| --ssl-cert | Path of file that contains X509 certificate in PEM format | |
| --ssl-cipher | List of permitted ciphers to use for connection encryption | |
| --ssl-key | Path of file that contains X509 key in PEM format | |
| --ssl-mode | Security state of connection to server | 5.5.49 |
| --ssl-verify-server-cert | Verify server certificate Common Name value against host name used when connecting to server | |
| --tab | Produce tab-separated data files | |
| --tables | Override --databases or -B option | |
| --triggers | Dump triggers for each dumped table | |
| --tz-utc | Add SET TIME_ZONE='+00:00' to dump file | |
| --user | MySQL user name to use when connecting to server | |
| --verbose | Verbose mode | |
| --version | Display version information and exit | |
| --where | Dump only rows selected by given WHERE condition | |
| --xml | Produce XML output | |

You can also set the following variables by using --var_name=value syntax:

• max_allowed_packet: The maximum size of the buffer for client/server communication.

• net_buffer_length: The initial size of the buffer for client/server communication. When creating multiple-row INSERT statements (as with the --extended-insert or --opt option), mysqldump creates rows up to net_buffer_length bytes long.

A common use of mysqldump is for making a backup of an entire database:

shell> mysqldump db_name > backup-file.sql

You can load the dump file back into the server like this:

shell> mysql db_name < backup-file.sql

Or like this:

shell> mysql -e "source /path-to-backup/backup-file.sql" db_name

mysqldump is also very useful for populating databases by copying data from one MySQL server to another:

shell> mysqldump --opt db_name | mysql --host=remote_host -C db_name

It is possible to dump several databases with one command:

shell> mysqldump --databases db_name1 [db_name2 ...] > my_databases.sql

To dump all databases, use the --all-databases option:

shell> mysqldump --all-databases > all_databases.sql

For InnoDB tables, mysqldump provides a way of making an online backup:

shell> mysqldump --all-databases --master-data --single-transaction > all_databases.sql

This backup acquires a global read lock on all tables (using FLUSH TABLES WITH READ LOCK) at the beginning of the dump. As soon as this lock has been acquired, the binary log coordinates are read and the lock is released. If long updating statements are running when the FLUSH statement is issued, the MySQL server may get stalled until those statements finish. After that, the dump becomes lock free and does not disturb reads and writes on the tables. If the update statements that the MySQL server receives are short (in terms of execution time), the initial lock period should not be noticeable, even with many updates.

For point-in-time recovery (also known as roll-forward, when you need to restore an old backup and replay the changes that happened since that backup), it is often useful to rotate the binary log (see Section 5.4.4, “The Binary Log”) or at least know the binary log coordinates to which the dump corresponds:

shell> mysqldump --all-databases --master-data=2 > all_databases.sql

Or:

shell> mysqldump --all-databases --flush-logs --master-data=2 > all_databases.sql
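Once the coordinates are known, a later roll-forward can replay the binary logs written after the dump with mysqlbinlog. This is a sketch; the binary log file names are placeholders standing in for the files that follow the coordinates recorded in the dump:

```shell
# Restore the snapshot, then replay subsequent binary logs in order.
mysql < all_databases.sql
mysqlbinlog mysql-bin.000042 mysql-bin.000043 | mysql
```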

The --master-data and --single-transaction options can be used simultaneously, which provides a convenient way to make an online backup suitable for use prior to point-in-time recovery if tables are stored using the InnoDB storage engine.

For more information on making backups, see Section 7.2, “Database Backup Methods”, and Section 7.3, “Example Backup and Recovery Strategy”.

If you encounter problems backing up views due to insufficient privileges, see Section C.5, “Restrictions on Views”, which describes a workaround.

Posted by Jason Dias on February 16, 2004
For a faster import of mysqldump output for InnoDB tables:

Open the dump file and put this statement at the beginning of the SQL dump text file:

SET FOREIGN_KEY_CHECKS=0;

Very fast.
Posted by Tim To on February 24, 2004
After adding "SET FOREIGN_KEY_CHECKS=0;", remember to append "SET FOREIGN_KEY_CHECKS=1;" at the end of the import file. The potential problem is that any data inconsistency that would have made a foreign key check fail during import will have made it into the database even after the foreign keys are turned back on. This is especially likely if the foreign keys are not turned back on for a long period of time, which can happen if "SET FOREIGN_KEY_CHECKS=1;" was not appended to the import file in the first place.
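The wrapping can be scripted instead of hand-editing the file. The following sketch builds a small stand-in dump file (placeholder content, not real mysqldump output) and then sandwiches it between the two statements; file names are arbitrary:

```shell
# Stand-in for real mysqldump output (placeholder content for illustration).
printf 'CREATE TABLE t (i INT);\nINSERT INTO t VALUES (1);\n' > dump.sql

# Disable foreign key checks for the import, and re-enable them at the end.
printf 'SET FOREIGN_KEY_CHECKS=0;\n' > dump_wrapped.sql
cat dump.sql >> dump_wrapped.sql
printf 'SET FOREIGN_KEY_CHECKS=1;\n' >> dump_wrapped.sql
```

The wrapped file is then loaded as usual, for example with mysql < dump_wrapped.sql.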

Posted by Christopher Huhn on July 1, 2004
You can even do your mysqldump backups with logrotate.
Simply put something like this into /etc/logrotate.conf:

/var/backups/mysql/dump.sql {
daily
rotate 14
missingok
compress
postrotate
/usr/bin/mysqldump --defaults-extra-file=/.../backup-credentials.cnf --opt --flush-logs --all-databases > /var/backups/mysql/dump.sql
endscript
}

Posted by Vesa Kivistö on August 26, 2004
The following mysqldump import example for InnoDB tables is at least 100x faster than the previous examples.

Edit the dump file and put these lines at the beginning:

SET AUTOCOMMIT = 0;
SET FOREIGN_KEY_CHECKS=0;

Then put these lines at the end:

SET FOREIGN_KEY_CHECKS = 1;
COMMIT;
SET AUTOCOMMIT = 1;

Posted by Corey Tisdale on February 23, 2005
If you want to schedule a task on windows to backup and move your data somewhere, the lack of documentation and command-line tools in windows can make it a real beast. I hope this helps you keep your data safe.

First off, you will need a command line file compressor (or you should use one, anyway). I like GNU gzip. You can get it for Windows here: http://gnuwin32.sourceforge.net/packages/gzip.htm

Secondly, you will need to use Windows FTP via the command line. It took me all day to find documentation on this guy, so I hope this saves some time for somebody.

Anyway, you need two files -- the batch file and a script for your ftp client. The Batch file should look like this guy (it uses random numbers in the file name so that multiple backups are not overwritten):

@ECHO OFF

@REM Set dir variables. Use ~1 format in win2k
SET basedir=C:\BACKUP~1
SET workdir=c:\TEMP
SET mysqldir=c:\mysql\bin
SET gzipdir=c:\PROGRA~1\GnuWin32\bin
SET mysqluser=myrootuser
SET mysqlpassword=myrootpassword

@REM Change to mysqldir
CD %mysqldir%

@REM dump database. This is all one line
mysqldump -u %mysqluser% -p%mysqlpassword% --all-databases >%workdir%\backup.sql

@REM Change to workdir
CD %workdir%

@REM Zip up database
%gzipdir%\gzip.exe backup.sql

@REM Move to random file name
MOVE backup.sql.gz backup.%random%.gz

@REM FTP file to repository
FTP -n -s:%basedir%\ftp-commands.txt

@REM Remove old backup files
del backup.sql
del backup.*.gz

@REM Change back to base dir
CD %basedir%

And your ftp script should look like this guy (and be named ftp-commands.txt so the above script can find it)

open
ftp.mybackuplocation.com
user
bin
put backup.*.gz
quit

Make sure both of the above files are in whatever directory you set up as %basedir% and test it out and make sure everything works for you. Then schedule it to run every day to protect your data!
Posted by Lon B on March 1, 2005
Corey's example is helpful, but I don't care for the random file name. Here is the manual script I use on Windows for kicking off a MYSQL backup.

You could easily add all the other bells and whistles of ZIP, FTP, and scheduling should you need it. Note that I didn't use a password or many of the other args for mysqldump, you can add those if ya need 'em.

@ECHO OFF
for /f "tokens=1-4 delims=/ " %%a in ('date/t') do (
set dw=%%a
set mm=%%b
set dd=%%c
set yy=%%d
)

SET bkupdir=C:\path\to\where\you\want\backups
SET mysqldir=D:\path\to\mysql
SET dbname=this_is_the_name_of_my_database
SET dbuser=this_is_my_user_name

@ECHO Beginning backup of %dbname%...

%mysqldir%\bin\mysqldump -B %dbname% -u %dbuser% > %bkupdir%\dbBkup_%dbname%_%yy%%mm%%dd%.sql
@ECHO Done! New File: dbBkup_%dbname%_%yy%%mm%%dd%.sql
pause
Posted by David Tonhofer on March 4, 2005
A little reformulation of the actions that occur during an online dump with log-point registration, i.e. a dump that does not unduly disturb clients using the database during the dump (N.B.: only from 4.1.8 on!) and that can be used to start a slave server from the correct point in the logs.

Use these options:

--single-transaction
--flush-logs
--master-data=1
--delete-master-logs

If you have several databases that are binary-logged and you want to keep a consistent binary log you may have to include all the databases instead of just some (is that really so?):

--all-databases

Now, these are the actions performed by the master server:

1) Acquire global read lock using FLUSH TABLES WITH READ LOCK. This also flushes the query cache and the query result cache. Caused by option --single-transaction.
2) All running and outstanding transactions terminate. MySQL server stalls for further updates.
3) Read lock on all tables acquired.
4) All the logs are flushed, in particular the binary log is closed and a new generation binary log is opened. Caused by option --flush-logs
5) Binary lock coordinates are read and written out so that the slave can position correctly in the binary log. Caused by --master-data=1
6) Read lock is released, MySQL server can proceed with updates. These updates will also go to the binary log and can thus be replayed by the slave. Meanwhile, the InnoDB tables are dumped in a consistent state, which is the state they were in in step 5. (Not guaranteed for MyISAM tables)
7) Dump terminates after a possibly long time.
8) Any old binary log files are deleted. Caused by --delete-master-logs.

--extended-insert: use multiple-row insert statements
--quick: do not do buffering of row data, good if tables are large

And there are format-influencing options:

--hex-blob: dump binary columns in hex
--complete-insert: use complete insert statements that include column names works nicely with --extended-insert

Posted by Mike Ng on May 16, 2005

You can pipe it to gzip to compress in windows. I didn't think it would work on windows, but apparently it does.

@ECHO Beginning backup of %dbname%...

%mysqldir%\bin\mysqldump -B %dbname% -u %dbuser% | gzip > %bkupdir%\dbBkup_%dbname%_%yy%%mm%%dd%.sql.gz

Of course, you need GNU gzip in your path or directory.
Posted by Eric Martel on January 16, 2006
When using mysqldump on a replication master, if you want the slave(s) to follow, you may want to avoid the --delete-master-logs option, because it can delete binary logs before the "CHANGE MASTER" statement is read by the slaves, therefore breaking replication (then you have to issue the "CHANGE MASTER" manually on the slave(s)). If you want to get rid of old and useless binary logs, it is better to issue a "PURGE MASTER LOGS" SQL statement on the master after the mysqldump.
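For reference, the full statement name is PURGE MASTER LOGS. A sketch (the log file name is a placeholder; check each slave's position with SHOW SLAVE STATUS first so that no slave still needs the logs being removed):

```shell
# On the master, remove binary logs strictly older than the named file.
mysql -e "PURGE MASTER LOGS TO 'mysql-bin.000010';"
```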
Posted by Wade Hedgren on January 22, 2006
I moved my MySQL installation from Linux to Windows 2003 and had to create a new backup script. I was using mysqlhotcopy, but on Windows it's not available.

So, inspired by Lon B and Corey Tisdale (above), I created a batch file that will create a gzipped mysqldump file for each database and put them into separate folders. It also creates a log file. You will have to set the vars at the top to match your system.

You will also need GZip to do the compression...

It could still use some work (like no error trapping etc...) but it's in production for me now.

I used a utility "commail.exe" to send the log file to me after the backup is complete.

//--- Begin Batch File ---//
@echo off

:: Set some variables
set bkupdir=E:\MySQL\backup
set mysqldir=E:\MySQL
set logdir=E:\MySQL\logs
set zip=C:\GZip\bin\gzip.exe
:: database credentials used by mysqldump below (set these to match your server)
set dbuser=your_db_user
set dbpass=your_db_password
set endtime=0

:GETTIME

:: get the date and then parse it into variables
for /F "tokens=2-4 delims=/ " %%i in ('date /t') do (
set mm=%%i
set dd=%%j
set yy=%%k
)

:: get the time and then parse it into variables
for /F "tokens=5-8 delims=:. " %%i in ('echo.^| time ^| find "current" ') do (
set hh=%%i
set ii=%%j
set ss=%%k
)

:: If this is the second time through then go to the end of the file
if "%endtime%"=="1" goto END

:: Create the filename suffix
set fn=_%yy%%mm%%dd%_%hh%%ii%%ss%

:: Switch to the data directory to enumerate the folders
:: (assumes the data directory is %mysqldir%\data; adjust to match your system)
pushd %mysqldir%\data

:: Write to the log file
echo Beginning MySQLDump Process > %logdir%\LOG%fn%.txt
echo Start Time = %yy%-%mm%-%dd% %hh%:%ii%:%ss% >> %logdir%\LOG%fn%.txt
echo --------------------------- >> %logdir%\LOG%fn%.txt
echo. >> %logdir%\LOG%fn%.txt

:: Loop through the data structure in the data dir to get the database names
for /d %%f in (*) do (

:: Create the backup sub-directory if it does not exist
if not exist %bkupdir%\%%f\ (
echo Making Directory %%f
echo Making Directory %%f >> %logdir%\LOG%fn%.txt
mkdir %bkupdir%\%%f
) else (
echo Directory %%f Exists
echo Directory %%f Exists >> %logdir%\LOG%fn%.txt
)

:: Run mysqldump on each database and compress the data by piping through gZip
echo Backing up database %%f%fn%.sql.gz
echo Backing up database %%f%fn%.sql.gz >> %logdir%\LOG%fn%.txt
%mysqldir%\bin\mysqldump --user=%dbuser% --password=%dbpass% --databases %%f --opt --quote-names --allow-keywords --complete-insert | %zip% > %bkupdir%\%%f\%%f%fn%.sql.gz
echo Done...
echo Done... >> %logdir%\LOG%fn%.txt
)

:: Go back and get the end time for the script
set endtime=1
goto :GETTIME

:END
:: Write to the log file
echo. >> %logdir%\LOG%fn%.txt
echo --------------------------- >> %logdir%\LOG%fn%.txt
echo MySQLDump Process Finished >> %logdir%\LOG%fn%.txt
echo End Time = %yy%-%mm%-%dd% %hh%:%ii%:%ss% >> %logdir%\LOG%fn%.txt
echo. >> %logdir%\LOG%fn%.txt

popd

:: Send the log file in an e-mail
c:\commail\commail -host=smtp.yourcompany.com -from="server <server@yourcompany.com>" -to=serveradmins@yourcompany.com -subject="MySQL Backup" -msg=%logdir%\LOG%fn%.txt

//--- End Batch File ---//

Posted by Philip Sbrogna on February 2, 2006
Here's a bash wrapper for mysqldump I cron'd to run at night. It's not the sexiest thing but it's reliable.

It creates a folder for each day, a folder for each db & single bzip2'd files for each table. There are provisions for exclusions. See below where it skips the entire tmp & test db's and in all db's, tables tbl_session & tbl_parameter. It also cleans up files older than 5 days (by that time they've gone to tape).

Be sure to update <user> & <pwd>. Ideally these would be in constants but I couldn't get the bash escaping to work.

# setup
suffix=`date +%Y%m%d`
dest=/mirror/mysqldumps
cmd='/usr/bin/mysqldump'

databases=(`echo 'show databases;' | mysql -u <user> --password='<pwd>' | grep -v ^Database$`)
for d in "${databases[@]}"; do
  if [[ $d != 'tmp' && $d != 'test' ]]
  then
    echo "DATABASE ${d}"
    s="use ${d}; show tables;"
    tables=(`echo ${s} | mysql -u <user> --password='<pwd>' | grep -v '^Tables_in_'`)
    for t in "${tables[@]}"; do
      if [[ $t != 'tbl_parameter' && $t != 'tbl_session' ]]
      then
        echo "  TABLE ${t}"
        path="${dest}/${suffix}/${d}"
        mkdir -p ${path}
        ${cmd} --user=<user> --password='<pwd>' --quick --add-drop-table --all ${d} ${t} | bzip2 -c > ${path}/${t}.sql.bz2
      fi
    done
  fi
done

# delete old dumps (retain 5 days)
find ${dest} -mtime +5 -exec rm {} \;

Posted by Julien GARINO on December 17, 2009
You always wanted to back up your most important database somewhere on your Linux system, as well as send the dump by email, so that you can recover the entire content if the system crashes. You can use these 2 scripts.

First step:
- Install the mutt client that will transfer emails on the command line: "apt-get install mutt" or "yum install mutt"
- Create the backup directory: "mkdir /home/backups"

Second step:
- Copy these 2 scripts to your root directory or your user directory:

#!/bin/sh
# Script name : auto_mysql_dump.sh
# Backs up the dbname database
dir=`date +%Y-%m-%d`
dbname=mybase
if [ -d /home/backups ]; then
mkdir /home/backups/$dir
mysqldump -B --user=user_of_my_base --password=pwd_of_my_base --host=host_of_my_base $dbname > /home/backups/$dir/$dbname.sql
if [ $? = 0 ]; then
#Bzip2 the dump.sql
bzip2 -z9v /home/backups/$dir/$dbname.sql
#Remove the dump.sql from disk
rm -f /home/backups/$dir/$dbname.sql
fi
fi
# End of script auto_mysql_dump.sh

#!/bin/sh
# Script Name : auto_mail_dump.sh
# Sends an email with the dump realized before
dir=`date +%Y-%m-%d`
dbname=mybase
mutt -s "Today backup" -a /home/backups/$dir/$dbname.sql.bz2 user@tosend.com < /dev/null
# End of script auto_mail_dump.sh

"chmod 700 auto_mysql_dump.sh"
"chmod 700 auto_mail_dump.sh"

Third step:
-Edit the CronTab to schedule the execution of the two scripts.
"crontab -e" (you will use the vi editor)
We consider that the 2 scripts are in the /root directory
-I want the dump to be executed at 8.30 everyday
-I want the mail to be sent at 9.00 everyday
Thus I add these 2 rows after the existing lines :
Hit the "i" to insert new characters...

30 8 * * * /root/auto_mysql_dump.sh > /dev/null
00 9 * * * /root/auto_mail_dump.sh > /dev/null

Save the crontab by hitting : "Esc" + ":wq" (means Write and Quit)

What you should do now :
Once you've written the scripts, test them!

Enjoy the automatic backup from now on :-)
Posted by on March 22, 2006
When you need to import the data from a mysqldump file, using "mysql> source dump.sql" is much better than using "shell> mysql < dump.sql".

This way of importing the data avoids problems with language-specific characters being garbled.
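A related precaution when loading from the shell is to force the client connection character set to match the dump; a sketch, assuming the dump was written as utf8 and db_name is a placeholder:

```shell
# Load the dump with an explicit connection character set.
mysql --default-character-set=utf8 db_name < dump.sql
```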
Posted by Mathieu van Loon on June 29, 2006
Here's a DOS script that will back up all your databases to a separate file in a new folder, zip the folder, encrypt the zip, and email the encrypted zip to one or more addresses. If the backup is larger than a specified limit, only the logfile is emailed. The unencrypted zipfile is left on your local machine.

The script is also available at http://www.jijenik.com/projects/mysqlbackup/

Many thanks to Wade Hedgren whose script formed the basis for this version.

//--- Begin Batch File ---//
::
:: Creates a backup of all databases in MySQL.
:: Zip, encrypts and emails the backup file.
::
:: Each database is saved to a separate file in a new folder.
:: The folder is zipped and then deleted.
:: the zipped backup is encrypted and then emailed, unless the file exceeds the maximum filesize
:: In all cases the logfile is emailed.
:: The encrypted backup is deleted, leaving the unencrypted zipfile on your local machine.
::
:: Version 1.1
::
:: Changes in version 1.1 (released June 29th, 2006)
:: - backups are now sent to the address specified by the mailto variable
::
:: The initial version 1.0 was released on May 27th, 2006
::
::
:: This version of the script was written by Mathieu van Loon (mathieu-public@jijenik.com)
:: It is based heavily on the script by Wade Hedgren (see comments at http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html)
::
:: This script requires several freeware libraries:
:: - zipgenius (a compression tool), www.zipgenius.it
:: - blat (an emailer tool), www.blat.net
:: - doff (extracts datetime, ignores regional formatting), www.jfitz.com/dos/index.html
::
:: Some areas where this script could be improved:
:: - include error trapping and handling
:: - make steps such as encryption and email optional
:: - allow the user to specify a single database on the command line
::
@echo off

::
:: Configuration options
::

:: The threshold for emailing the backup file. If the backup is larger
:: it will not be emailed (the logfile is always sent).
set maxmailsize=10000000

:: The passphrase used to encrypt the zipfile. Longer is more secure.
set passphrase=secret

:: Name of the database user
set dbuser=root

:: Password for the database user
set dbpass=secret
:: Recipients of database backup, comma seperated, enclosed in quotes
set mailto="backups@example.com,backups2@example.com"

:: Email server
set mailsmtp=localhost

:: Email subject
set mailsubject="MySQL Backup"

:: directory where logfiles are stored
set logdir=C:\DatabaseBackups\logs

:: directory where backup files are stored
set bkupdir=C:\DatabaseBackups

:: Install folder of MySQL
set mysqldir=C:\Program Files (x86)\MySQL\MySQL Server 4.1

:: Data directory of MySQL (only used to enumerate databases, we use mysqldump for backup)
set datadir=C:\Program Files (x86)\MySQL\MySQL Server 4.1\data

:: Path of zipgenius compression tool
set zip=C:\Program Files (x86)\ZipGenius 6\zg.exe

:: Path of blat mail tool
set mail=C:\DatabaseBackups\Backupscript\libraries\Blat250\full\blat.exe

:: Path of doff date tool (specify only the folder not the exe)
set doff=C:\DatabaseBackups\Backupscript\libraries\doff10

::
::
:: NO NEED TO CHANGE ANYTHING BELOW
::
::

:: get the date and then parse it into variables
pushd %doff%
for /f %%i in ('doff.exe yyyymmdd_hhmiss') do set fn=%%i
for /f %%i in ('doff.exe dd-mm-yyyy hh:mi:ss') do set nicedate=%%i
popd

set logfile="%logdir%\%fn%_Backuplog.txt"

:: Switch to the data directory to enumerate the folders
pushd "%datadir%"

:: Write to the log file
echo Beginning MySQLDump Process > %logfile%
echo Start Time = %nicedate% >> %logfile%
echo --------------------------- >> %logfile%
echo. >> %logfile%

:: Create the backup folder
if not exist "%bkupdir%\%fn%\" (
echo Making Directory %fn%
echo Making Directory %fn% >> %logfile%

mkdir "%bkupdir%\%fn%"
)

:: Loop through the data structure in the data dir to get the database names
for /d %%f in (*) do (

:: Run mysqldump on each database and compress the data by piping through gZip
echo Backing up database %fn%_%%f.sql
echo Backing up database %fn%_%%f.sql >> %logfile%
"%mysqldir%\bin\mysqldump" --user=%dbuser% --password=%dbpass% --databases %%f --opt --quote-names --allow-keywords --complete-insert > "%bkupdir%\%fn%\%fn%_%%f.sql"
echo Done... >> %logfile%
)

:: return from data dir
popd

pushd %bkupdir%

echo Zipping databases
echo Zipping databases >> %logfile%
REM C9 : maximum compression
REM AM : Delete source files
REM F1 : Store relative path
REM R1 : include subfolders
REM K0 : Do not display progress

"%zip%" -add "%fn%_MySQLBackup.zip" C9 AM F1 R1 K0 +"%bkupdir%\%fn%"

echo Crypting zipfile
echo Crypting zipfile >> %logfile%

REM C : Create non-executable zip
REM S : Do not delete after x tries
REM 3 : Use AES encryption
"%zip%" -encrypt "%fn%_MySQLBackup.zip" C S 3 "%passphrase%" %mailfrom%

echo Deleting directory %fn%
echo Deleting directory %fn% >> %logfile%

rmdir /s /q "%bkupdir%\%fn%"

:: Go back and get the end time for the script
set endtime=1

:: return from backup dir
popd

:: update the nicedate for the log
pushd %doff%
for /f %%i in ('doff.exe dd-mm-yyyy hh:mi:ss') do set nicedate=%%i
popd

:: Write to the log file
echo. >> %logfile%
echo --------------------------- >> %logfile%
echo MySQLDump Process Finished >> %logfile%
echo End Time = %nicedate% >> %logfile%
echo. >> %logfile%

:: Send the log file in an e-mail, include the backup file if it is not too large
:: We use the CALL Trick to enable determination of the filesize (type CALL /? at prompt for info)
:: note that you _must_ specify the full filename as the argument

pushd %bkupdir%
Call :MAILFILE "%bkupdir%\%fn%_MySQLBackup.czip"
echo Backup completed
goto :EOF

:MAILFILE

if /i %~z1 LSS %maxmailsize% (
echo Emailing backup file
"%mail%" %logfile% -q -attach %1 -serverSMTP %mailsmtp% -f %mailfrom% -to %mailto% -subject %mailsubject%
) ELSE (
echo Size of backup file %~z1 B exceeds configured email size %maxmailsize% B.
echo Emailing logfile only
echo Size of backup file %~z1 B exceeds configured email size %maxmailsize% B. only emailing logfile. >> %logfile%

"%mail%" %logfile% -q -serverSMTP %mailsmtp% -f %mailfrom% -to %mailto% -subject %mailsubject%
)

echo Deleting encrypted backup file
del %1

popd
//--- End Batch File ---//
Posted by Don Gilman on July 25, 2006
RE: Mathieu van Loon

Excellent, I had this installed and configured in about 10 minutes. I do have one minor fix however.

You aren't getting the time portion of the DOFF command captured into your variable. It appears that the output formatting string MUST NOT CONTAIN ANY BLANKS so I changed mine to:

for /f %%i in ('doff.exe dd-mm-yyyy_at_hh:mi:ss') do set nicedate=%%i

This is terrific, wish I had found it 10 hrs ago (darn mySQL Administrator Backup - such a waste!!!)
***
Now the problem is that my backups won't restore.... I am backing up multiple instances of MediaWiki, Mantis, and Joomla. I'm playing around with
--max_allowed_packet=nnn and that should fix it based on manual backups working. Now is that nnn bytes or an abbreviation? Hmmm.
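On the units question above: the value is taken as a number of bytes, and MySQL's option parsing also accepts a K, M, or G suffix, so the two invocations below should be equivalent (the database name is a stand-in):

```shell
# 67108864 bytes and 64M are the same limit; pick whichever reads better.
mysqldump --max_allowed_packet=67108864 mydb > dump.sql
mysqldump --max_allowed_packet=64M mydb > dump.sql
```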

Posted by Enrico Modanese on July 13, 2006
I often get errors [MySQL 4.* and 5.*] on reloading a dump of databases containing big blobs. I found that disabling --extended-insert (which comes inside the multiple option --opt, enabled by default) with --skip-extended-insert solves it. I think this way is safer, but it is also much slower.
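A sketch of the command line described above (database name and credentials are placeholders); --skip-extended-insert overrides the --extended-insert that --opt turns on:

```shell
# One row per INSERT statement: slower to reload, but avoids the
# oversized statements that rows with big BLOBs can produce.
mysqldump --opt --skip-extended-insert --user=youruser --password blob_db > blob_db.sql
```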
Posted by Roddi Walker on October 8, 2006
Here's a python script that does rolling WinRAR'd backups on Windows. It should be trivial to change to Linux, or another compression program.
1) this was a quick hack, so please test thoroughly before using in production. Still, I hope it will be a useful basis for your own script.
2) the --single-transaction switch is used as I am backing up InnoDB tables.
3) mysqldump is run with the root user. It would be A Good Thing to make this more secure - eg. create a backup user with read-only permissions to the tables.
4) Indentation is significant in Python.

import glob
import os
import time

# configuration
baseBackupFileName = "backupName"
maxBackups = 3
mySqlDumpCommand = "d:\\programs\\mysql\\bin\\mysqldump --user=root --password=rootpass --single-transaction DBName Table1Name Table2Name Table3Name"
winRarPath = "\"c:\\Program Files\\WinRAR\\WinRAR.exe\"" # path is quoted as it contains spaces

print "--- START ---"

# create new backup
newBackupFileName = baseBackupFileName + time.strftime("_%Y%m%d_%H%M%S", time.localtime())
os.system(mySqlDumpCommand+" > "+newBackupFileName+".sql")

# compress new backup
os.system(winRarPath+" a "+newBackupFileName+" "+newBackupFileName+".sql")
os.remove(newBackupFileName+".sql")
print "Created new backup \""+newBackupFileName+".rar\""

# delete old backups
oldBackupFileNames = glob.glob(baseBackupFileName+"_*_*.rar")
oldBackupFileNames.sort()
if len(oldBackupFileNames) > maxBackups:
    for fileName in oldBackupFileNames[0:len(oldBackupFileNames)-maxBackups]:
        os.remove(fileName)
        print "Deleted old backup \""+fileName+"\""

print "--- END ---"

Posted by Nick Littlestone on June 20, 2007
It seems one needs to be careful when using --skip-opt with databases containing non-ascii latin1 characters, especially if one has not been paying much attention to character sets.

I am just using default character sets - normally latin1. However, the dump produced by mysqldump is, perhaps surprisingly, in utf8. This seems fine, but leads to trouble with the --skip-opt option to mysqldump, which turns off --set-charset but leaves the dump in utf8.
This seems to lead to a dump that will be silently incorrectly reloaded if strings in the database contain non-ascii latin1 characters.
(Is this a documentation flaw, a design flaw or a bug??)
Perhaps the fact that mysqldump uses utf8 by default, and the importance of the --set-charset option, should be more prominently documented (see the documentation for the --default-character-set option for the current mention of the use of utf8).
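One way to make the character-set handling explicit on both sides of the round trip is to name the character set in both commands rather than relying on defaults (a hedged sketch; the database name is a placeholder):

```shell
# Dump with an explicit connection character set and keep the SET NAMES
# statement in the output (--set-charset is on by default, but --skip-opt
# turns it off, as described above).
mysqldump --default-character-set=utf8 --set-charset mydb > dump.sql
# Reload with the same explicit character set.
mysql --default-character-set=utf8 mydb < dump.sql
```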
Posted by Ryan Haynes on July 11, 2007
I am fairly new to bash scripting, however I encountered the problem of all the databases going into one *.sql file. If you have a large number of databases to back up, it can take forever to restore your backup. This is a script I wrote to accomplish what I felt was needed. It grabs the name of each database and puts them in separate *.sql.bz2 files with a corresponding timestamp. Please let me know if this helps and perhaps if I can make it more elegant.

#!/bin/sh

TIME_1=$(date +%s)

cd /backup/mysql

DBS="$(mysql --user=youruser --password=yourpass -Bse 'show databases')"
for db in ${DBS[@]}
do
echo ${db}-$(date +%m-%d-%y).sql.bz2 is being saved in /backup/mysql
# remember to add the options you need with your backups here.
mysqldump --user=youruser --password=yourpass $db --single-transaction -R | bzip2 -c > ${db}-$(date +%m-%d-%y).sql.bz2
done

TIME_2=$(date +%s)
elapsed_time=$(( ( $TIME_2 - $TIME_1 ) / 60 ))

## just a sanity check to make sure i am not running a dump for 4 hours

echo "This mysql dump ran for a total of $elapsed_time minutes." > mysql_dump_runtime.txt

# delete old backups. I have it set up on a daily cron, so
# anything older than 60 minutes is fine.
for del in $(find /backup/mysql -name '*.sql.bz2' -mmin +60)
do
echo This backup is more than one day old and it is being removed: $del
rm $del
done
Posted by Bill Hernandez on August 22, 2007
#!/bin/bash

# SEE : http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
# SEE : http://safari.oreilly.com/0596526784/date_and_time_string_formatting_with_strftime
#
# Improved by Bill Hernandez (Plano, Texas) on Tuesday, August 21, 2007 (12:55 AM)
# ( 1 ) Backs up all info to time stamped individual directories, which makes it easier to track
# ( 2 ) Now maintains a single log that contains additional information
# ( 3 ) Includes a file comment header inside each compressed file
# ( 4 ) Used more variables instead of hard-code to make routine easier to use for something else
# ( 5 ) Where I have mysql5, you may have to replace it with mysql
#
# Posted by Ryan Haynes on July 11 2007 6:29pm

# DO NOT DELETE AUTOMATICALLY FOR NOW, MAYBE LATER

DELETE_EXPIRED_AUTOMATICALLY="TRUE"

# DELETE EXPIRED BACKUPS THAT ARE MORE THAN
# expire_minutes=$(( 1 * 30 ))       # 30 minutes old
# expire_minutes=$(( 60 * 24 ))      # 1 day old
# expire_minutes=$(( 60 * 24 * 7 ))  # 7 days old
# expire_minutes=$(( 60 * 24 * 30 )) # 30 days old

expire_minutes=$(( 60 * 24 * 7 ))    # 7 days old

if [ $expire_minutes -gt 1440 ]; then
expire_days=$(( $expire_minutes / 1440 ))
else
expire_days=0
fi

function pause(){
read -p "$*"
}

# pause "HIT RETURN, and then enter your sudo password..."
echo "Please enter your sudo password..."
sudo echo

mysql_username="your_mysql_username"
mysql_password="your_mysql_password"

current_dir=`pwd`
echo -n "Current working directory is : "
echo $current_dir
echo "------------------------------------------------------------------------"

TIME_1=$(date +%s)
TS=$(date +%Y.%m.%d\-%I.%M.%p)
BASE_DIR=/backups/mysql
BACKUP_DIR=${BASE_DIR}/$TS
BACKUP_LOG_NAME=mysql_dump_runtime.log
BACKUP_LOG=${BASE_DIR}/${BACKUP_LOG_NAME}

sudo mkdir -p $BACKUP_DIR
sudo chown mysql:admin $BACKUP_DIR
sudo chmod 775 $BASE_DIR
sudo chmod -R 777 $BACKUP_DIR

cd $BACKUP_DIR
echo -n "Changed working directory to : "
pwd

echo "Saving the following backups..."
echo "------------------------------------------------------------------------"

DBS="$(mysql5 --user=${mysql_username} --password=${mysql_password} -Bse 'show databases')"
for db in ${DBS[@]}
do
normal_output_filename=${db}.sql
compressed_output_filename=${normal_output_filename}.bz2
echo $compressed_output_filename
# remember to add the options you need with your backups here.
echo "-- $compressed_output_filename - $TS" > $normal_output_filename
echo "-- Logname : `logname`" >> $normal_output_filename
# mysqldump5 --user=${mysql_username} --password=${mysql_password} $db --single-transaction -R | bzip2 -c > $compressed_output_filename
mysqldump5 --user=${mysql_username} --password=${mysql_password} $db --single-transaction -R >> $normal_output_filename
bzip2 -c $normal_output_filename > $compressed_output_filename
rm $normal_output_filename
done
echo "------------------------------------------------------------------------"

TIME_2=$(date +%s)

elapsed_seconds=$(( $TIME_2 - $TIME_1 ))
elapsed_minutes=$(( ( $TIME_2 - $TIME_1 ) / 60 ))

# just a sanity check to make sure i am not running a dump for 4 hours

cd $BASE_DIR
echo -n "Changed working directory to : "
pwd
echo "Making log entries..."

if [ ! -f $BACKUP_LOG ]; then
echo "------------------------------------------------------------------------" > ${BACKUP_LOG_NAME}
echo "THIS IS A LOG OF THE MYSQL DUMPS..." >> ${BACKUP_LOG_NAME}
echo "DATE STARTED : [${TS}]" >> ${BACKUP_LOG_NAME}
echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}
echo "[BACKUP DIRECTORY ] [ELAPSED TIME]" >> ${BACKUP_LOG_NAME}
echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}
fi

echo "[${TS}] This mysql dump ran for a total of $elapsed_seconds seconds." >> ${BACKUP_LOG_NAME}
echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}

# delete old backups. I have it set up on a daily cron, so anything older than 60 minutes is fine
if [ $DELETE_EXPIRED_AUTOMATICALLY == "TRUE" ]; then
counter=0
for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
do
counter=$(( counter + 1 ))
echo "[${TS}] [Expired Backup - Deleted] $del" >> ${BACKUP_LOG_NAME}
done
echo "------------------------------------------------------------------------"
if [ $counter -lt 1 ]; then
if [ $expire_days -gt 0 ]; then
echo There were no backup directories that were more than ${expire_days} days old:
else
echo There were no backup directories that were more than ${expire_minutes} minutes old:
fi
else
echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}
if [ $expire_days -gt 0 ]; then
echo These directories are more than ${expire_days} days old and they are being removed:
else
echo These directories are more than ${expire_minutes} minutes old and they are being removed:
fi
echo "------------------------------------------------------------------------"
echo "\${expire_minutes} = ${expire_minutes} minutes"
counter=0
for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
do
counter=$(( counter + 1 ))
echo $del
rm -R $del
done
fi
fi
echo "------------------------------------------------------------------------"
cd $current_dir
echo -n "Restored working directory to : "
pwd

Posted by Ludovico Caldara on October 29, 2007
If you want to check the result of mysqldump in a shell, you should pay attention. This will not work:

mysqldump --all-databases | gzip -c > dumpfile.sql.gz
if [ $? -eq 0 ]
...
fi

since with "| gzip" the $? is the exit status of the last command in the pipeline (gzip), which is almost always 0 even if mysqldump fails. Instead you should use a named pipe:

### create a pipe named "pipe"
mkfifo pipe
### compress the pipe in background
gzip < pipe > dumpfile.sql.gz &
### write directly to the pipe
mysqldump --all-databases > pipe
### get the real return code of mysqldump
result=$?
### wait until the gzip completes
wait
### now it is safe to remove the pipe
rm pipe

kind regards....
--
Ludovico
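In bash specifically (not POSIX sh), the named pipe can be avoided: the PIPESTATUS array records the exit status of every member of the last pipeline. A runnable sketch, with `false` standing in for a failing mysqldump:

```shell
# "false" simulates a mysqldump that fails partway through.
false | gzip -c > dumpfile.sql.gz
result=${PIPESTATUS[0]}   # exit status of the producer, not of gzip
echo "producer exit status: $result"
```

With a real dump you would test $result and discard the incomplete archive on failure.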
Posted by Lance Keay on November 13, 2007
Hi all,
And thanks for the great script examples - I've taken a bit of the batch files and made a perl script for backing up mysql databases - it's pretty crude, but it's what I'm using right now to back up the servers nightly.

It's done on a win2003 server box that has perl and gzip (http://gnuwin32.sourceforge.net/packages.html) installed.

I chose perl 'cause if I need to port the script to 'nix it's trivial. (well, so's the script, really) And it's easy to bang out small scripts in!

Hope this helps someone!

=============

#!/usr/bin/perl

## DB info struct
@DBsToBackup = (
    ## host 1
    ## (each entry is [host, db_name, user, password]; the original post
    ## left its entries blank, so fill in your own)

    ## host 2
);

$backupdir = 'D:\\Database\\DBdumps';
print "Starting dump of databases...\n\n";

foreach $dbinfo (@DBsToBackup){
    ($Second, $Minute, $Hour, $Day, $Month, $Year, $WeekDay, $DayOfYear, $IsDST) = localtime(time);
    $Year += 1900;
    $Month += 1;
    $outputFilename = sprintf("%04d%02d%02d[%02d%02d%02d]", $Year, $Month, $Day, $Hour, $Minute, $Second) . "_$$dbinfo[1]_.sql.gz";
    print "dumping: $$dbinfo[0] / $$dbinfo[1]...\n";
    $ex = "\"C:\\Program Files\\MySQL\\MySQL Server 5.0\\bin\\mysqldump\" --user=$$dbinfo[2] --password=$$dbinfo[3] --host=$$dbinfo[0] --add-drop-database --add-drop-table $$dbinfo[1] | gzip > ${backupdir}\\$outputFilename";
    print $ex . "\n\n";
    system($ex);
}

print "DONE dumping\n";

print "deleting dumps older than 14 days...\n";
system("forfiles /P $backupdir /D -14 /C \"cmd /c del \@path\\\@file\" /S");
print "script finished\n";

==============

You can find this script and more at http://www.lancetek.com
Thanks, Lance

Posted by Dave Walker on December 18, 2007
Host on a shared server? Then you will most probably not be able to create or retrieve a MySQL dump. For a solution try this: http://www.dwalker.co.uk/phpmysqlautobackup/ It will export your database and email it to you. It's GPL open source.

Posted by Bret Warren on March 2, 2008
This is a variation on the example above to deal with huge dump files that are too big to edit:

1. mysqldump --opt --user=username --password database > dumpfile.sql

2. Create a file to execute prior to the dump file:

echo 'SET AUTOCOMMIT = 0; SET FOREIGN_KEY_CHECKS=0;' > pre.sql

3. Create a file to execute after the dump file is imported:

echo 'SET FOREIGN_KEY_CHECKS = 1; COMMIT; SET AUTOCOMMIT = 1;' > post.sql

4. cat pre.sql dumpfile.sql post.sql | mysql --user=username --password database

Posted by Steve Hand on March 7, 2008
In case this helps anyone. This backs up all databases and tables and keeps all files for a week, placing them in a directory structure of the format:

/backup_dir/db_name/day/table.sql.gz

Useful if you want to restore a particular day's data. It checks that new backups differ from the last before overwriting files. This helps if you are rsyncing your filesystems, as normally mysql writes a date into the dump, so the files always appear to differ even if the data is the same. It also saves a directory with just your schema in, checks and repairs tables where necessary, and defrags tables on a Sunday.
<?php
// Change Me
$username = "root";
$password = "password";
$backup_dir = "/mnt/backup";
$dump = "/usr/bin/mysqldump";
$grep = "/bin/grep";
$gzip = "/bin/gzip";

// This should not need changing from here
function sql_dict($sql){
    $x = mysql_query($sql);
    if ($x) return mysql_fetch_assoc($x);
}

function cleanup($dir){
    $d = dir($dir);
    echo "Path: " . $d->path . "\n";
    while (false !== ($entry = $d->read())) {
        if ($entry=="." or $entry=="..") continue;
        $e = str_replace(".sql.gz","",$entry);
        $x = sql_dict("describe $e");
        if (!$x) {
            print "Removing old backup file [$entry]\n";
            unlink("$dir/$entry");
        }
    }
    $d->close();
}

function crc32_file($filename)
{
    global $gzip;
    $x = exec("$gzip --list --verbose $filename");
    $x = explode(" ",$x);
    return $x[1];
}

if (mysql_connect("localhost",$username,$password)) print "Connected.\n";
else die("Failed to connect to database.");

$dbs = mysql_query("show databases");
if ($dbs) while ($db = mysql_fetch_array($dbs, MYSQL_ASSOC)) {
    $db = $db['Database'];
    if ($db=="information_schema") continue;
    if (mysql_select_db($db)) print "Selected [$db]\n";
    else die("Failed to select db [$db]");
    foreach (array("schema","data") as $pass){
        $sql = mysql_query("show tables");
        $day = date("l");
        if ($pass=="schema") $dir = "/$backup_dir/$db/schema";
        else $dir = "/$backup_dir/$db/$day";
        if (!file_exists($dir)) system("mkdir -p $dir");
        if (!file_exists($dir)) die("Couldn't Create $dir");
        if ($pass=="data"){
            $latest = "/$backup_dir/$db/latest";
            unlink($latest);
            system("/bin/ln -s \"$dir\" \"$latest\"");
        }
        cleanup($dir);
        if ($sql) while ($s = mysql_fetch_assoc($sql)) {
            if (!isset($s["Tables_in_{$db}"])) {
                print "no result";
                print_r($sql);
                die();
            }
            $t = $s["Tables_in_{$db}"];
            if ($pass=="schema") $data = "--no-data";
            else $data = "--lock-tables";
            $tab = $t;
            $lim = 30;
            if (strlen($tab)>$lim) $tab = substr($tab,0,$lim-3)."...";
            while (strlen($tab)<30) $tab .= " ";
            print "BACKUP: $pass : $day : $db : $tab : ";
            if ($pass=="data"){
                print "Check : ";
                $check = sql_dict("check table $t");
                $check = $check['Msg_text'];
                print "$check : ";
                if ($check != "OK") {
                    print "Repair";
                    $repair = sql_dict("repair table $t");
                    $repair = $repair['Msg_text'];
                    print " : $repair : ";
                }
                if ($day=="Sunday"){
                    // optimize
                    print "Optimize : ";
                    $type = sql_dict("show table status like '$t'");
                    $type = $type['Engine'];
                    if ($type=="MyISAM") mysql_query("optimize table $t");
                    if ($type=="InnoDB") mysql_query("alter table $t engine='InnoDB'");
                }
            }
            if (isset($argv[1])){
                print "Skipping dump\n";
            } else {
                $temp = "/tmp/backup.$t.sql.gz";
                $out = "$dir/$t.sql.gz";
                print "Dump : ";
                $cmd = "$dump -u $username -p$password $data --quick --add-drop-table $db $t | $grep -v 'Dump completed' | $gzip -n > $temp";
                system($cmd);
                print "CRC32 : ";
                if (!file_exists($out)){
                    print "Saving : ";
                    $cmd = "/bin/mv $temp $out";
                    system($cmd);
                } else {
                    $md5  = crc32_file($temp);
                    $nmd5 = crc32_file($out);
                    if ($md5 != $nmd5) {
                        print "Saving : ";
                        $cmd = "/bin/mv $temp $out";
                        system($cmd);
                    } else {
                        print "Skipped : ";
                        unlink($temp);
                    }
                }
                $size = filesize($out);
                print "[$size]\n";
            }
        }
    }
}
?>

Posted by Alexander Driantsov on June 16, 2008
Here is an example of using mysqldump with rdiff-backup on one server, with synchronization to a remote server using rsync:

#!/bin/sh
# Incremental backup script using rdiff
# Author: Driantsov Alexander
# Requirements:
# * rdiff-backup - http://www.nongnu.org/rdiff-backup/
# * rsync
# * ssh ;)
####
BACKUP_ADMIN_EMAIL="yourmail@yourserver.com"
BACKUP_MAIL_SUBJECT="`hostname`: SQL Backup Synchronization Result"
BACKUP_LOG="/var/log/backup-sql"
BACKUP_PRIO="20" # Priority for the MySQL dump and rdiff-backup Min: 20 Max: -20
BACKUP_TMP_DIR="/var/backup/mysql_tmp" # New dumps will be stored here
BACKUP_DIFF_DIR="/var/backup/hosting/mysql" # Diffs of dumps will be stored here
SYNC_SRV="BAC.KUP.SER.VER" # Remote server for backup storage
SYNC_USER="backup_user" # User at remote storage
SYNC_SPEED="200" # Limit synchronization bandwidth to this number of KB/s
SYNC_DIR="/backup/hosting/mysql" # Directory on remote server to synchronize backups in
MYSQL_USER="admin" # MySQL user
MYSQL_PASSWD=`cat /etc/psa/.psa.shadow` # Password for MySQL. You may obtain the password from /etc/psa/.psa.shadow if you are using Plesk on your server.

# Implementation
RSCONSTR="$SYNC_USER@$SYNC_SRV"

# Dump
echo "Backup started at `date`" > $BACKUP_LOG
load_average=`uptime | awk '{print $10" "$11" "$12}'`
echo "Load average at start: $load_average" >> $BACKUP_LOG
echo "\nBacking up MySQL:" >> $BACKUP_LOG
for i in `mysql -u$MYSQL_USER -p$MYSQL_PASSWD -Bse "show databases"`; do
echo "Backing up database $i..." >> $BACKUP_LOG
nice -n $BACKUP_PRIO mysqldump --single-transaction --quick --skip-extended-insert -u$MYSQL_USER -p$MYSQL_PASSWD $i > $BACKUP_TMP_DIR/$i.dump
done

echo "MySQL dump completed at `date`" >> $BACKUP_LOG
load_average=`uptime | awk '{print $10" "$11" "$12}'`
echo "\nLoad average after MySQL dumps: $load_average\n" >> $BACKUP_LOG

# Diff
echo "Looking for difference in database" >> $BACKUP_LOG
nice -n $BACKUP_PRIO rdiff-backup $BACKUP_TMP_DIR $BACKUP_DIFF_DIR/$HOSTNAME >> $BACKUP_LOG 2>&1
echo "Done" >> $BACKUP_LOG
echo "rdiff-backup completed at `date`" >> $BACKUP_LOG
load_average=`uptime | awk '{print $10" "$11" "$12}'`
echo "\nLoad average after rdiff-backup: $load_average\n" >> $BACKUP_LOG

# Synchronize
sql_result=`rsync -avz --bwlimit $SYNC_SPEED $BACKUP_DIFF_DIR $RSCONSTR:$SYNC_DIR | tail -n 2`
free_space_info=`ssh $RSCONSTR df -h --sync -t ext3`

echo -en " MySQL backup synchronization:\n $sql_result \nSynchronization completed at `date` \n\nInformation on free space on remote backup server: \n$free_space_info \n\n Backup Log: \n\n`cat $BACKUP_LOG`" | mail -s "$BACKUP_MAIL_SUBJECT" $BACKUP_ADMIN_EMAIL

Posted by Bradford Mitchell on July 18, 2008
This is an example of a Windows batch script that implements a rotating archive of backups on a daily, weekly, and monthly basis. It also provides an installation option for the creation of the backup directories, and an option to add a scheduled task to the system to run the batch file. I started with what Lon B posted, and many edits/revisions later this was produced. I hope you find it as useful as we have.

~~~ BEGIN FILE ~~~
@ECHO OFF
SET VERSIONMAJOR=10
SET VERSIONMINOR=6
FOR /f "tokens=1-4 delims=/ " %%a IN ('date/t') DO (
SET dw=%%a
SET mm=%%b
SET dd=%%c
SET yy=%%d
)
REM *** VERIFY AND UPDATE THESE SETTINGS BEFORE INITIAL RUN ***
REM *** mysqldir must point to the \bin directory! ***
SET bkupdir=C:\MySQL-Backups
SET mysqldir=C:\wamp\bin\mysql\mysql5.0.51b\bin
SET dbhost=localhost
SET dbuser=
SET dbpass=
REM *** END USER CONFIGURABLE SETTINGS ***
IF /i "%1" == "--INSTALL" GOTO INSTALLER
IF /i "%1" == "--CREATEDIRS" GOTO CREATEDIRS
IF /i "%1" == "--ADDSCHEDULEDTASK" GOTO TASKSCHED
IF ""%1"" == """" GOTO ALLDB
IF /i "%1" == "--ALL" GOTO ALLDB
IF /i "%1:~0,2%" == "--" GOTO PARAMERROR
SET ALLDBS=0
SET dbnames=%1
SET dbnamesf=%1
SHIFT
:setArgs
IF ""%1""=="""" GOTO BKUP
SET dbnames=%dbnames% %1
SET dbnamesf=%dbnamesf%_%1
SHIFT
GOTO setArgs
:ALLDB
SET ALLDBS=1
SET dbnames=ALL DATABASES
SET dbnamesf=ALL_DATABASES
:BKUP
@ECHO MySQLdump script for Windows v%VERSIONMAJOR%.%VERSIONMINOR% > %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO MySQLdump script for Windows v%VERSIONMAJOR%.%VERSIONMINOR%
@ECHO.
IF NOT EXIST %bkupdir%\INSTALLED.OK (
@ECHO DIRECTORY STRUCTURE NOT IN PLACE. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO PLEASE RUN %0 --INSTALL OR %0 --CREATEDIRS >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO FAILED TO BACKUP DATABASES. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO DIRECTORY STRUCTURE NOT IN PLACE.
@ECHO PLEASE RUN %0 --INSTALL OR %0 --CREATEDIRS
@ECHO FAILED TO BACKUP DATABASES.
GOTO BOTTOM
)
@ECHO Beginning backup of %dbnames%... >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Beginning backup of %dbnames%...
IF %ALLDBS% == 1 (
SET dumpparams=--host=%dbhost% -u %dbuser% -p%dbpass% -A -f -x -q --create-options --flush-privileges -r %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql
) ELSE (
SET dumpparams=--host=%dbhost% -u %dbuser% -p%dbpass% -f -x -q --create-options --flush-privileges -r %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql --databases %dbnames%
)
%mysqldir%\mysqldump %dumpparams% >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Done! New File: dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Done! New File: dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql
COPY /Y %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql /A %bkupdir%\Daily\dbBkup_%dbnamesf%_%dw%.sql /A > NUL
@ECHO Created Daily Backup: Daily\dbBkup_%dbnamesf%_%dw%.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Created Daily Backup: Daily\dbBkup_%dbnamesf%_%dw%.sql
REM Check to see if it's time for the Weekend backup
IF /i "%dw%" NEQ "Sat" GOTO SKIPWKBK
IF EXIST %bkupdir%\Weekly\safety_%dbnamesf%_%yy%%mm%%dd%.txt GOTO WKCUR
IF NOT EXIST %bkupdir%\Weekly\dbBkup_%dbnamesf%_Current.sql GOTO WKCUR
IF NOT EXIST %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous.sql GOTO WKPRE
IF NOT EXIST %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_2.sql GOTO WKPR2
MOVE /Y %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_2.sql %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_3.sql > NUL
:WKPR2
MOVE /Y %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous.sql %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_2.sql > NUL
:WKPRE
MOVE /Y %bkupdir%\Weekly\dbBkup_%dbnamesf%_Current.sql %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous.sql > NUL
:WKCUR
COPY /Y %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql /A %bkupdir%\Weekly\dbBkup_%dbnamesf%_Current.sql /A > NUL
@ECHO. > %bkupdir%\Weekly\safety_%dbnamesf%_%yy%%mm%%dd%.txt
@ECHO Created Weekly Backup: Weekly\dbBkup_%dbnamesf%_Current.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Created Weekly Backup: Weekly\dbBkup_%dbnamesf%_Current.sql
:SKIPWKBK
REM if (day >= 28) write EoM backup
IF %dd% GEQ 28 (
COPY /Y %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql /A %bkupdir%\Monthly\dbBkup_%dbnamesf%_%mm%.sql /A > NUL
@ECHO Created End of Month Backup: Monthly\dbBkup_%dbnamesf%_%mm%.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Created End of Month Backup: Monthly\dbBkup_%dbnamesf%_%mm%.sql
)
DEL /q /f %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql
@ECHO Backup stored in rotating archives. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO End MySQLdump Script >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Backup stored in rotating archives.
@ECHO.
@ECHO End MySQLdump Script
GOTO BOTTOM
:INSTALLER
@ECHO VERIFY: Path to mysqldump: %mysqldir%
@ECHO VERIFY: Path to backups: %bkupdir%
@ECHO VERIFY: MySQL User: %dbuser%
@ECHO VERIFY: MySQL Pass: %dbpass%
@ECHO VERIFY: MySQL Host: %dbhost%
IF NOT EXIST %bkupdir%\INSTALLED.OK (
@ECHO ALERT: Backup directory does not exist. Create base directory and subdirectories?
SET /p domkdir=[Y/N]:
IF /i "%domkdir%" == "N" (
@ECHO ALERT: CANNOT CONTINUE WITHOUT DIRECTORIES IN PLACE.
GOTO BOTTOM
)
)
:CREATEDIRS
IF NOT EXIST %bkupdir%\INSTALLED.OK (
MD "%bkupdir%" > NUL
MD "%bkupdir%\Daily" > NUL
MD "%bkupdir%\Weekly" > NUL
MD "%bkupdir%\Monthly" > NUL
@ECHO INSTALLED CORRECTLY > %bkupdir%\INSTALLED.OK
)
GOTO BOTTOM
:TASKSCHED
@ECHO Preparing to add Scheduled Task...
:STUPIDUSER1
SET /p taskuser=Domain\User to run task:
IF /i ""%taskuser%"" == """" GOTO STUPIDUSER1
:STUPIDUSER2
SET /p taskpwd1=Password:
SET /p taskpwd2=Confirm Password:
IF %taskpwd1% NEQ %taskpwd2% GOTO STUPIDUSER2
:STUPIDUSER3
SET /p taskname=Task name:
IF /i ""%taskname%"" == """" GOTO STUPIDUSER3
SET /p taskparam=Parameters to pass to batch file:
SCHTASKS /Create /SC DAILY /ST 04:00:00 /TN "%taskname%" /TR "%~f0 %taskparam%" /RU "%taskuser%" /RP %taskpwd1%
GOTO BOTTOM
:PARAMERROR
@ECHO ERROR: Unknown Parameter Passed.
@ECHO Current supported parameters:
@ECHO --ALL - Backup all databases, same as passing nothing to batch file
@ECHO --ADDSCHEDULEDTASK - Adds a scheduled task for this process
@ECHO --CREATEDIRS - Creates Directory Structure
@ECHO --INSTALL - Creates directory structure and outputs configuration settings that need verification
:BOTTOM
~~~ END FILE ~~~

Posted by Agusti Sanchez on October 28, 2008
To export a limited number of records, you may use the following:

mysqldump -u [username] -p [dbname] {--opt} --where="true limit 100" > dump.sql

This adds the clause "where true limit 100" when creating the SELECT statement. (Adding "true" is a workaround: I do not know whether there is a way to add "limit 100" without adding the WHERE keyword.)

Original post: http://www.geeknewz.com/board/index.php?automodule=blog&blogid=76&showentry=434

Posted by Chris Salch on February 22, 2009
I wrote a perl script that uses mysqldump to back up and compress an arbitrary list of databases. You can find the script and a rough explanation here: http://kc5vzm.com/cms/wiki/view/MySQL+-+Simple+Backup+Script

Posted by Jonathan Watt on May 28, 2009
To drop ALL tables in your database (fill out the first line as appropriate):

MYSQL="mysql -h HOST -u USERNAME -pPASSWORD -D DB_NAME"
$MYSQL -BNe "show tables" | awk '{print "set foreign_key_checks=0; drop table " $1 ";"}' | $MYSQL
unset MYSQL

This can be useful if you need to empty a database in order to restore a backup made by mysqldump, but you couldn't use --add-drop-database because you don't have CREATE DATABASE privileges on the command line (e.g. you're on shared hosting). mysqldump adds DROP TABLE by default, but if tables may have been added or renamed since the time of your backup (e.g. by some sort of update process that you're trying to revert from), failing to drop those tables will likely cause serious headaches later on.

Of course, this raises the question of why MySQL doesn't support "DROP TABLE *;" (in which case mysqldump could simply emit that).
Posted by Meg Liebenstein on August 21, 2009
We can also write SQL that generates the SQL to drop the tables:

select concat('drop table ',TABLE_SCHEMA,'.',TABLE_NAME,';')
from information_schema.tables
where TABLE_SCHEMA = 'db'  -- or whatever filter you want!
Posted by Joseph Edmonds on February 11, 2010
If you need to split a dumpfile back into its component parts, for example to retrieve a particular table, you might find this little PHP script handy:

http://www.edmondscommerce.co.uk/blog/mysql/updated-mysql-dump-splitter-and-cleaner/
Posted by kedar vaijanapurkar on March 11, 2010
Here is a shell script for extracting tables from a mysqldump file: "mydumpsplitter".
This shell script grabs the tables you want and writes each one to tablename.sql.
It understands regular expressions, as I've added the sed -r option.
MyDumpSplitter can also split the dump into individual per-table dumps.

http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script/
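The core of such a splitter can be sketched as a one-line awk filter, assuming the `-- Table structure for table `name`` section headers that mysqldump writes (the sample file and the table name foo below are placeholders for demonstration):

```shell
# Create a tiny stand-in dump file so the filter can be demonstrated offline.
cat > /tmp/sample_dump.sql <<'EOF'
-- Table structure for table `foo`
CREATE TABLE `foo` (id INT);
INSERT INTO `foo` VALUES (1);
-- Table structure for table `bar`
CREATE TABLE `bar` (id INT);
EOF

# Print lines from foo's section header up to (not including) the next header.
awk '/^-- Table structure for table/ { p = ($0 ~ /`foo`/) } p' /tmp/sample_dump.sql
```

On a real dump, redirect the output to foo.sql to obtain a single-table dump.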
Posted by Boyd Hemphill on March 15, 2010
There is an irony of SQL injection here but this is handy when I am building small files to represent tables for unit tests:

mysqldump --skip-opt exp --tables foo --where "foo_parent_id = 72 limit 100" > ~/foo.dmp

(note the injection of the limit clause in the where)

I now have 100 insert statements. Foo itself has several million rows, so this is quite handy.

Posted by Vlatko Šurlan on July 4, 2010
There is an interesting script sample here: http://www.docplanet.org/linux/backing-up-linux-web-server-live-via-ssh/
showing a way to pipe mysqldump output directly into gzip and then over an ssh connection, producing a gzipped dump archive that never resides on the server's hard drive. This is a handy way to ensure that the backup does not fill up the server's disk.
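The pipeline shape being described is producer | gzip | ssh. A minimal local sketch of the same shape, with printf standing in for mysqldump and a local file standing in for the remote end (user@backuphost and all paths below are placeholders):

```shell
# Real pattern (placeholders; needs a live server and an ssh target):
#   mysqldump --all-databases | gzip | ssh user@backuphost 'cat > /backups/dump.sql.gz'
# Local stand-in showing that the data is compressed in-stream,
# never written uncompressed:
printf 'CREATE TABLE t (id INT);\n' | gzip > /tmp/dump.sql.gz
gunzip -c /tmp/dump.sql.gz
```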
Posted by Alasdair Barclay on September 6, 2010
The post by Bradford Mitchell on July 18 2008 7:11pm gave a nice script with, among other functions, the ability to automatically create a Windows Scheduled Task. However, if the name of the script contains spaces, the creation of the scheduled task will fail.

To get it to work, amend the following line:

SCHTASKS /Create /SC DAILY /ST 04:00:00 /TN "%taskname%" /TR "%~f0 %taskparam%" /RU "%taskuser%" /RP %taskpwd1%

as follows:

SCHTASKS /Create /SC DAILY /ST 04:00:00 /TN "%taskname%" /TR "\"%~f0\" %taskparam%" /RU "%taskuser%" /RP %taskpwd1%
The escaped quote characters around the script filename (%~f0) will now allow this to work.
Posted by David Bennett on September 15, 2010
Extract a single table from mysqldump output.

This Perl script will extract a single table definition and its data from a large mysqldump file. A very nice tool for quick restorations from backups.

#!/usr/bin/perl -w
use strict;

# Usage: perl extract_table_from_mysqldump.pl {table name} < {full mysqldump} > {single table dump}

my $regex = '^\-\- Table structure for table `'.$ARGV[0].'`';
my $output = 0;
while (<STDIN>) {
    if ($output && m/^\-\- Table structure/) { exit; }
    if (!$output && m/$regex/) { $output = 1; }
    if ($output) { print; }
}

--Dave Bennett (dbennett at bensoft com)
Posted by Siva Kranthi Kumar on October 29, 2010
Batch file commands to take a MySQL database backup with compression

With MKS Toolkit (http://www.mkssoftware.com/products/tk/)
-----------------------------------------------------------

set DateTime=%date:~7,2%-%date:~4,2%-%date:~-4%
set WorkingDirectory=D:\Backup Directory\
set MySQLDirectory=D:\mysqldump.exe Directory\

"%MySQLDirectory%mysqldump" --host="127.0.0.1" --user="root" --password="" --routines --triggers --events dbname > "%WorkingDirectory%DBName %DateTime%.sql"

zip -m "%WorkingDirectory%DBName %DateTime%" "DBName %DateTime%.sql"

-----------------------------------------------

set DateTime=%date:~7,2%-%date:~4,2%-%date:~-4%
set WorkingDirectory=D:\Backup Directory\
set MySQLDirectory=D:\mysqldump.exe Directory\

"%MySQLDirectory%mysqldump" --host="127.0.0.1" --user="root" --password="" --routines --triggers --events dbname > "%WorkingDirectory%DBName %DateTime%.sql"

7z a -tzip "%WorkingDirectory%DBName %DateTime%" "DBName %DateTime%.sql"

if ERRORLEVEL 1 goto :EOF
del "%WorkingDirectory%DBName %DateTime%.sql"
:EOF
Posted by Milo de Vries on December 1, 2010
Creating a backup scheduled task on Windows with a one-liner:

schtasks /create /sc daily /st 04:30 /ru SYSTEM /tn MySQL_backup /tr "\"C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqldump.exe\" -B <DB_NAME> -u <USER_NAME> -p<PASSWORD> -r C:\MySQL_backup\<DB_NAME>_%date:~0,2%.sql"

This creates a daily task that runs at 04:30, producing a separate backup file for each day of the week, named for the locale's weekday abbreviation (<DB_NAME>_ma.sql, <DB_NAME>_di.sql, <DB_NAME>_wo.sql, and so on).
Posted by Ilan Hazan on April 6, 2011
Restoring a dumped table onto the MySQL master server can lead to serious replication delay.
To overcome the replication delay caused by restoring the dump on the master, you need to spread the massive inserts out over time. This can be done with the MySQL SLEEP() function.
See http://www.mysqldiary.com/as-restoring-a-dump-table-into-the-mysql-master-you-better-get-some-sleep/
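One way to sketch that throttling is to post-process the dump, injecting a SLEEP call after every N statements. This is a sketch assuming one INSERT per line (as with --skip-extended-insert); the chunk size of 2 is only to keep the demo small, and printf stands in for a real dump file:

```shell
# Inject "SELECT SLEEP(1);" after every 2nd INSERT statement.
# A real run would read dump.sql and use a much larger chunk size.
printf 'INSERT INTO t VALUES (1);\nINSERT INTO t VALUES (2);\nINSERT INTO t VALUES (3);\n' |
awk '{ print } /^INSERT INTO/ && ++n % 2 == 0 { print "SELECT SLEEP(1);" }'
```

Replaying the resulting file pauses periodically, giving the replicas a chance to catch up.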

Posted by Arbaoui Mehdi on May 15, 2011
Posted by Alejandro Arauz on October 10, 2011
The mysqldump command is very helpful, but sometimes you don't have the necessary permissions on the server to run it (as in shared environments).

I had this problem before, and while searching for backup tools I found MySqlBackupFTP (http://mysqlbackupftp.com). It is easy to use and it has a free version that allows you to connect to a remote phpMyAdmin instance.

I hope it helps others with the same problem.
Posted by DBA Lead on June 29, 2012
Fast, parallel restore from SQL dumps (mysqldump) for MySQL. The technique is very simple and straightforward.

http://www.geeksww.com/tutorials/database_management_systems/mysql/tips_and_tricks/fast_parallel_restore_from_sql_dumps_mysqldump_for_mysql.php
Posted by Bill Plimpton on March 14, 2014
To get the exit status of mysqldump when its output is piped through gzip, use bash's PIPESTATUS array:

mysqldump $MYSQL_OPTS $database $tables | gzip -1 > $path
if [ ${PIPESTATUS[0]} != 0 ]; then
    echo " ## Backup of $database tables $tables FAILED! ##"
else
    echo "Backup of $database tables $tables successful!"
fi
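PIPESTATUS is a bash-only array holding the exit status of every stage of the most recent pipeline, whereas $? reports only the last stage. A minimal demonstration that needs no MySQL server:

```shell
#!/bin/bash
# The first stage fails (exit 1) while the second succeeds (exit 0);
# PIPESTATUS records both statuses, so a failure in mysqldump is not
# masked by gzip succeeding.
false | true
echo "stage0=${PIPESTATUS[0]} stage1=${PIPESTATUS[1]}"   # prints: stage0=1 stage1=0
```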