MySQL database backup methods

  • 2020-05-06 11:46:06
  • OfStack

It is important to back up your databases so that you can recover from a lost or corrupted database table. If a system crash occurs, you want to be able to restore your tables to the state they were in at the time of the crash, with as little data loss as possible. Sometimes it is the MySQL administrator who causes the damage, for example by trying to edit a table directly with an editor such as vi or Emacs, which is never a good thing to do to a table!  
The two main ways to back up a database are to use the mysqldump program or to copy the database files directly (for example with cp, cpio, or tar). Each approach has its advantages and disadvantages:  
mysqldump works in conjunction with the MySQL server. Direct copying takes place outside the server, and you must take steps to ensure that no clients are modifying the tables while you copy them. The same problem arises if you use a file system backup to back up the database: if a table is modified during the file system backup, the table file contents that get backed up will be inconsistent, and the backup will be meaningless for restoring the table later. The difference between a file system backup and a direct copy is that with the latter you have complete control over the backup process, so you can take steps to ensure that the server leaves the tables alone.  
mysqldump is slower than direct copy.  
mysqldump generates text files that can be ported to other machines, even machines with a different hardware architecture. Directly copied files cannot be moved to another machine unless the tables you are copying are in MyISAM storage format; ISAM tables can only be copied between machines with similar hardware architectures. The MyISAM storage format introduced in MySQL 3.23 solves this problem because it is machine independent, so directly copied files can be moved to a machine with a different hardware architecture, as long as two conditions are met: the other machine must also run MySQL 3.23 or later, and the tables must be in MyISAM format rather than ISAM format.  
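If you are not sure which format a table uses, one quick check (assuming the standard file layout of this era, where each table has a .frm definition file plus data and index files) is to look at the file extensions in the database directory: MyISAM tables use .MYD and .MYI files, while older ISAM tables use .ISD and .ISM files. A hypothetical listing might look like this:  
% ls DATADIR/samp_db  
member.frm  member.MYD  member.MYI      (a MyISAM table)  
event.frm   event.ISD   event.ISM       (an ISAM table)  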


Regardless of which backup method you use, if you need to restore your database, there are a few principles you should follow to ensure the best results:  
Implement backups regularly. Establish a plan and stick to it.  
Have the server maintain an update log (in this era of MySQL, by starting mysqld with the --log-update option). The update log will help you when you need to recover data after a crash. After you restore the data to its state as of the backup, you can re-apply the changes made since the backup by running the queries recorded in the update log, which brings the tables in the database back to their state at the time of the crash.  
In file system backup terms, the database backup files represent a full dump, while the update logs represent incremental dumps.  
Use a uniform and understandable naming scheme for backup files. Names such as backup1 or backup2 are not particularly meaningful; when it comes time to restore, you will waste time figuring out what is in each file. You may find it useful to build backup file names from the database name and the date. For example:  
%mysqldump samp_db > /usr/archives/mysql/samp_db.1999-10-02  
%mysqldump menagerie > /usr/archives/mysql/menagerie.1999-10-02  
You may want to compress backups after generating them, since backups are usually large! You should also expire your backup files so they do not fill up your disk, just as you expire your log files (a short sketch of such a routine appears after these principles).  
Back up your backup files using file system backups. If you suffer a total crash that wipes out not only your data directory but also the disk drive holding your database backups, you are in real trouble.  
Also back up your update log.  
Put your backup files on a different file system than the one that holds your databases. This reduces the likelihood of filling up the file system containing the data directory as a result of generating backups.  
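As a rough illustration of several of these principles, the following sketch makes a date-stamped dump, compresses it, and removes compressed backups older than two weeks; the archive path and the 14-day retention period are just assumptions to adapt to your own site:  
% mysqldump samp_db > /usr/archives/mysql/samp_db.`date +%Y-%m-%d`  
% gzip /usr/archives/mysql/samp_db.`date +%Y-%m-%d`  
% find /usr/archives/mysql -name 'samp_db.*.gz' -mtime +14 -exec rm {} \;  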

The same techniques used to create backups are also useful for copying the database to another machine. Most commonly, a database is moved to a server running on another host, but you can also move data to another server on the same host.  
1. Backing up and copying databases with mysqldump
When you use the mysqldump program to generate a database backup file, by default the file contains the CREATE TABLE statements that create the dumped tables and the INSERT statements containing the row data for those tables. In other words, the output produced by mysqldump can be used later as input to mysql to rebuild the database.  
You can dump the entire database into a single text file, as follows:  
%mysqldump samp_db > /usr/archives/mysql/samp_db.1999-10-02  
The beginning of the output file looks like this:  
# MySQL Dump 6.0
#
# Host: localhost    Database: samp_db
#--------------------------------------------------------
# Server version    3.23.2-alpha-log
#
# Table structure for table 'absence'
#
CREATE TABLE absence
(
  student_id int(10) unsigned DEFAULT '0' NOT NULL,
  date date DEFAULT '0000-00-00' NOT NULL,
  PRIMARY KEY (student_id,date)
);
#
# Dumping data for table 'absence'
#
INSERT INTO absence VALUES (3,'1999-09-03');
INSERT INTO absence VALUES (5,'1999-09-03');
INSERT INTO absence VALUES (10,'1999-09-08');
......

The rest of the file consists of more INSERT and CREATE TABLE statements.  
If you want to compress the backup, use the following command:  
%mysqldump samp_db | gzip > /usr/archives/mysql/samp_db.1999-10-02.gz  
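To load such a compressed dump back in later, you don't need to unpack it first; you can decompress it on the fly and pipe it straight into mysql, for example:  
% gunzip < /usr/archives/mysql/samp_db.1999-10-02.gz | mysql samp_db  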
If you have a large database, the output file will also be large and may be difficult to manage. If you wish, you can list individual table names after the database name on the mysqldump command line to dump only their contents, which breaks the dump into smaller, more manageable files. The following example shows how to dump some of the tables from the samp_db database into separate files:  
%mysqldump samp_db student score event absence > gradebook.sql  
%mysqldump samp_db member president > hist-league.sql  
If you generate backup files intended to refresh the contents of another database on a regular basis, you may want to use the --add-drop-table option. This tells mysqldump to write DROP TABLE IF EXISTS statements to the backup file, so that when you later load the backup file into the second database, you won't get an error if the tables already exist.  
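With --add-drop-table, each table's section of the dump begins with a statement that removes any existing copy of the table, so (following the absence table from the earlier dump output) the file would look roughly like this:  
DROP TABLE IF EXISTS absence;  
CREATE TABLE absence  
(  
  student_id int(10) unsigned DEFAULT '0' NOT NULL,  
  date date DEFAULT '0000-00-00' NOT NULL,  
  PRIMARY KEY (student_id,date)  
);  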
If you are dumping a database in order to move it to another server, you don't even have to create a backup file. Make sure the database exists on the other host, then pipe the dump so that mysql reads mysqldump's output directly. For example, to copy the database samp_db from host pit-viper.snake.net to boa.snake.net, you can do it easily like this:  
%mysqladmin -h boa.snake.net create samp_db  
%mysqldump samp_db | mysql -h boa.snake.net samp_db  
Later, if you want to refresh the database on boa.snake.net again, skip the mysqladmin command, but add --add-drop-table to mysqldump to avoid errors about tables that already exist:  
%mysqldump --add-drop-table samp_db | mysql -h boa.snake.net samp_db  
Other useful mysqldump options include the following:  
The combination of --flush-logs and --lock-tables helps you checkpoint your database. --lock-tables locks all of the tables you are dumping, while --flush-logs closes and reopens the update log file, so the new update log contains only the queries that modify the database after the backup point. This sets your update log checkpoint to the time of the backup. However, if you have clients that need to perform updates, locking all of the tables against client access during the backup is not such a good thing.  
If you use --flush-logs to checkpoint to the time of the backup, it is probably best to dump the entire database.  
If you dump individual files, it is harder to synchronize the update log checkpoint with the backup files. During recovery you typically extract update log contents on a per-database basis; there is no option for extracting updates for individual tables, so you would have to extract them yourself.  
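For example, a checkpointed full dump along these lines might look like this (the archive path simply follows the earlier examples):  
%mysqldump --lock-tables --flush-logs samp_db > /usr/archives/mysql/samp_db.1999-10-02  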
By default, mysqldump reads a table's entire contents into memory before writing it out. This is usually unnecessary and in fact is almost guaranteed to fail for large tables. You can use the --quick option to tell mysqldump to write each row as soon as it has been retrieved. To further optimize the dump process, use --opt instead of --quick. The --opt option turns on other options that speed up dumping the data and reading it back in.  
Backing up with --opt is probably the most common method because of its speed advantage. Be warned, however, that the --opt option has a cost: --opt optimizes your backup procedure, not other clients' access to the database, because it prevents anyone from updating any of the tables you are dumping by locking all of the tables at once. You can easily see the effect this has on general database access: just try making a backup at the time of day when your database is normally used most heavily.  
An option that has the opposite effect of --opt is --delayed. This option causes mysqldump to write INSERT DELAYED statements rather than INSERT statements. --delayed is helpful if you will load the data file into another database and want the operation to have as little impact as possible on the queries running against that database.  
The --compress option is helpful when you copy a database to another machine because it reduces the number of bytes traveling over the network. Here is an example; note that --compress is given for the program that communicates with the server on the remote host, not for the one that talks to the local host:  
%mysqldump --opt samp_db | mysql --compress -h boa.snake.net samp_db  

mysqldump has many other options; see the MySQL Reference Manual for details.  
2. Backing up and copying databases by direct copy

Another backup method, which does not involve mysqldump, is to copy the database table files directly. Typically this is done with utilities such as cp, tar, or cpio. The examples in this article use cp.  
When you use a direct-copy backup method, you must ensure that the tables are not in use. If the server is changing a table while you are copying it, the copy will be meaningless.  
The best way to ensure the integrity of your copies is to shut down the server, copy the files, and then restart the server. If you do not want to shut down the server, at least lock the tables you are copying. The constraints that apply when copying files while the server is running are the same as those for performing table checks, and you should use the same locking protocol to "quiet down" the server.  
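For example, a minimal locking sequence for copying the member and score tables might look like this (run the SQL statements from the mysql client, and do the copy from a shell while the locks are held):  
mysql> LOCK TABLES member READ, score READ;  
mysql> FLUSH TABLES;  
    (from a shell: %cp member.* score.* /usr/archive/mysql/samp_db)  
mysql> UNLOCK TABLES;  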
Assuming that either the server is down or you have locked the tables you want to copy, the following shows how to back up the entire samp_db database to a backup directory (DATADIR represents the server's data directory):  
%cd DATADIR  
%cp -r samp_db /usr/archive/mysql  

A single table can be backed up as follows:  
%cd DATADIR/samp_db  
%cp member.* /usr/archive/mysql/samp_db  
%cp score.* /usr/archive/mysql/samp_db  
....  

When you are done, you can restart the server (if you shut it down) or release the locks placed on the tables (if you left the server running).  
To copy a database from one machine to another by direct copy, just copy the files into the appropriate data directory on the other server host. Make sure the files are for tables in MyISAM format, or that both machines have the same hardware architecture; otherwise, your database will have strange contents on the other host. You should also make sure that the server on the other machine does not access the tables while you are installing them.  
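For example, assuming ssh is available between the two hosts and DATADIR again stands for each server's data directory, you could stream the table files across with tar (this is just one possible approach):  
% cd DATADIR  
% tar cf - samp_db | ssh boa.snake.net "cd DATADIR && tar xf -"  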

3. Replicating databases (Replication)

Replication is similar to copying a database to another server, but its exact meaning is that the two databases are kept completely synchronized in real time. Replication is scheduled to appear in version 3.23, and it is not yet mature enough to be covered in detail in this article.  
4. Restoring data using backup files
Database corruption happens for many reasons and varies in extent. If you are lucky, you may have damaged only one or two tables (for example, from a power outage); if you are unlucky, you may have to replace the entire data directory (for example, after a disk failure). Recovery is also needed in other situations, such as when a user mistakenly drops a database or a table. Whatever the cause of these unfortunate events, you will need to perform some sort of recovery.  
If the tables are damaged but not lost, try to fix them with myisamchk or isamchk. If such damage can be fixed with a repair program, you may not need to use backup files at all. For the table repair process, see database maintenance and repair.  
The recovery procedure involves two sources of information: your backup files and your update logs. The backup files restore the tables to the state they were in when the backup was made, but the tables have typically been modified between the time of the backup and the time the problem occurred; the update logs contain the queries that made those changes. You can repeat those queries by using the log files as input to mysql. This is precisely why you should enable update logging.  
The recovery process varies depending on how much information you have to recover. In fact, it is easier to recover the entire database than a single table because it is easier to use the update log for the database than for a single table.  
4.1 Restoring the entire database

First, if the database you want to recover is the mysql database that contains the grant tables, you need to run the server with the --skip-grant-tables option; otherwise it will complain that it cannot find the grant tables. After you have restored the tables, run mysqladmin flush-privileges to tell the server to load the grant tables and start using them.  
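The sequence might look roughly like this; safe_mysqld was the usual server startup wrapper in this era of MySQL, and it passes extra options through to mysqld:  
% safe_mysqld --skip-grant-tables &  
    (restore the tables of the mysql database as described below)  
% mysqladmin flush-privileges  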
Copy the contents of the database directory somewhere else, in case you need them later.  
Reload the database from your most recent backup files. If you are using files generated by mysqldump, use them as input to mysql. If you are using files copied directly from the database, copy them directly back into the database directory; in this case, however, you should shut down the server before copying the files and restart it afterwards.  
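For example, to reload the dump file created earlier:  
% mysql samp_db < /usr/archives/mysql/samp_db.1999-10-02  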
Use the update logs to repeat the queries that modified the database tables after the backup was made. For any applicable update log, use it as input to mysql. Specify the --one-database option so that mysql executes queries only for the database you are interested in recovering. If you know you need to apply all of the update log files, you can use this command in the directory containing the logs:  

% ls -t -r -1 update.[0-9]* | xargs cat | mysql --one-database db_name  

The ls command generates a single-column list of the update log files, sorted in the order in which the server created them (note that if you modify any of these files, you change their modification times and hence the sort order, which causes the update logs to be applied in the wrong order).  
More likely, you will only need to apply a few update logs. For example, if the update logs generated since your backup are named update.392, update.393, and so on, you can rerun them like this:  
%mysql --one-database db_name < update.392  
%mysql --one-database db_name < update.393  
.....  
If you are performing a recovery and using the update logs to restore information that was lost due to a mistakenly issued DROP DATABASE, DROP TABLE, or DELETE statement, be sure to remove those statements from the update log before replaying it.  
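One rough way to do this, assuming the offending statement occupies a single line in the update log (editing the log with a text editor works just as well), is to filter it out before replaying the log:  
% grep -v 'DROP TABLE' update.392 | mysql --one-database db_name  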
4.2 Restoring a single table

Restoring a single table is more complicated. If you have a backup file generated by mysqldump that does not contain data only for the table you are interested in, you need to extract the relevant lines and use them as input to mysql; that is the easy part. The hard part is extracting the fragments of the update logs that apply only to that table. You may find the mysql_find_rows utility helpful for this; it can extract multi-line queries from update logs.  
Another possibility is to restore the entire database onto another server, then copy the table files you want back to the original database. This can actually be quite easy! Just make sure the original database's server is shut down when you copy the files back into its database directory.
