Using pipes to solve the Oracle exp/imp mass data processing problem

  • 2020-06-15 10:25:04
  • OfStack

exp unloads data from the database to the file system, producing a .dmp file; imp later loads that data back into the database when needed. For a small or medium-sized database, the .dmp file produced by exp may well be under 2 GB, but for a large database the data produced by exp can run from tens to hundreds of GB. Most operating systems at the time were 32-bit, where the maximum file size allowed by the file system is 2 GB, so the file system obviously cannot be used to store the data generated by exp directly. That is one part of the problem.

Using UNIX as an example, let's first look at how to get past the 2 GB limit by leveraging common UNIX facilities: named pipes, the data copy tool dd, and the data compression tool compress. Each of these is discussed below.

· A pipe is a pseudo-file. It exists in memory and is used for fast I/O. The pipe's buffer is FIFO (first in, first out): the writing process writes to the head of the buffer and the reading process reads from its tail.
· dd copies data from one device or file to another.
· compress is a UNIX data compression tool.
Before running exp for real, we can use a pipe to check how large the generated dmp file would be.
1. Create the pipe:
$ mknod /tmp/mypipe p
2. Send the exp output to the pipe:
$ exp file=/tmp/mypipe <other exp options> &
3. Read the pipe and discard the data, just to measure the volume:
$ dd if=/tmp/mypipe of=/dev/null
The result reports the size of the exp (.dmp) data in 512-byte blocks.
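The three steps above can be sketched as a self-contained script. Since exp needs a live Oracle instance, a stand-in writer (`head -c` on /dev/zero, an assumption of this sketch) plays the role of exp; everything else works exactly as described.

```shell
#!/bin/sh
# Sketch: measuring the volume of an exp dump through a named pipe without
# writing it to disk. The real writer would be:
#   exp file=/tmp/mypipe <other exp options> &
set -e
rm -f /tmp/mypipe
mkfifo /tmp/mypipe                  # equivalent to: mknod /tmp/mypipe p

# Stand-in for exp: write exactly 1 MiB (1048576 bytes) into the pipe.
head -c 1048576 /dev/zero > /tmp/mypipe &

# dd drains the pipe into /dev/null; its summary (printed on stderr)
# reports the volume of data that flowed through, in 512-byte blocks.
dd if=/tmp/mypipe of=/dev/null bs=512 2> /tmp/dd_summary.txt
cat /tmp/dd_summary.txt

rm -f /tmp/mypipe
```

Because the pipe lives in memory, no 2 GB file is ever created, no matter how much data flows through it.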
Now we can look at the ways to work around the limit:
1. File compression
2. exp directly to tape
3. A combined approach
First, the file compression method. By compressing on the fly, we can usually keep the dmp file under the 2 GB limit.
1. Create the pipe, then start a compress process that reads from it and writes to a disk file:
$ mknod /tmp/exp_pipe p
$ compress < /tmp/exp_pipe > export.dmp.Z &
2. exp to the pipe:
$ exp file=/tmp/exp_pipe ... &
The situation is similar for imp.
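The imp side reverses the arrows: a decompressor feeds the pipe and imp reads from it. In the sketch below, gzip/gunzip stand in for compress/uncompress (compress is absent on many modern systems), and `cat` stands in for imp so the pipeline runs without an Oracle instance; both substitutions are assumptions of the sketch.

```shell
#!/bin/sh
# Sketch of the imp side of the compression method. A real run would end
# with:  imp file=/tmp/imp_pipe <other imp options>
set -e
rm -f /tmp/imp_pipe
mkfifo /tmp/imp_pipe                # equivalent to: mknod /tmp/imp_pipe p

# A small compressed "dump" to feed the pipeline.
printf 'dump-data\n' | gzip > /tmp/export.dmp.gz

# The decompressor writes into the pipe in the background...
gunzip < /tmp/export.dmp.gz > /tmp/imp_pipe &

# ...while the consumer (imp's stand-in) reads from it.
cat /tmp/imp_pipe > /tmp/imported.txt
cat /tmp/imported.txt

rm -f /tmp/imp_pipe /tmp/export.dmp.gz
```

The compressed file on disk never has to be expanded into a full-size dmp file; it is decompressed in memory as imp consumes it.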
Next, look at the direct exp-to-tape method:
$ exp file=/dev/rmt/0m volsize=4G
$ imp file=/dev/rmt/0m volsize=4G
Finally, take a look at exp to a raw device:
$ exp file=/dev/rdsk/c0t3d0s0 volsize=4G
imp is similar. All of the above methods enable us to avoid the 2GB restriction.
Now let's discuss how to shorten the time exp takes. Most users exp straight to tape because there is not enough free disk space, and much of the total exp time is then spent writing the tape. With the UNIX tools discussed above, we can exp to a pipe first and compress the pipe's data before it goes to tape; compression greatly reduces the data volume and so shortens the tape-writing time. From another angle, starting with version 7.3, Oracle allows users to do a direct path export, which skips Oracle's SQL evaluation layer and reads data blocks directly, making the export itself faster. Oracle also provides single-task versions of exp and imp, built as follows:
$ make -f $ORACLE_HOME/rdbms/lib/ expst
$ make -f $ORACLE_HOME/rdbms/lib/ impst
The steps above build the executables expst (single-task exp) and impst (single-task imp); their usage is exactly the same as exp/imp, and using them can save up to 30% of the time. But note one point: when using expst/impst you must ensure that no other users are using the Oracle database, otherwise the database may suffer unpredictable consequences or even be destroyed.
Appendix: Yesterday a friend asked me online how to import a file that had been exported and compressed through a pipe, so I ran an experiment.
Oracle's exp and imp cannot read from or write to standard input and output directly, but this can be achieved through a UNIX pipe, and pipes can also speed up exp/imp.
Exporting data through the pipe:
1. Create the pipe with mknod:
mknod /home/exppipe p -- create a pipe under the directory /home
2. Export data through exp into the pipe while gzip compresses it:
exp test/test@orcl file=/home/exppipe &
gzip < /home/exppipe > exp.dmp.gz
3. Delete the pipe after the export completes successfully:
rm -rf /home/exppipe
In my tests, the compressed export file was only about 1/10 of the normal size.
Importing the generated file through the pipe:
1. Create the pipe:
mknod /home/exppipe p
2. Import the generated compressed file:
imp system/passwd@orcl file=/home/exppipe fromuser=test touser=macro &
gunzip < exp.dmp.gz > /home/exppipe
3. After the import succeeds, delete the pipe:
rm -rf /home/exppipe
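The whole experiment can be replayed end to end without a database. In this sketch, `seq` stands in for exp and `wc -l` stands in for imp (both assumptions); the pipes, gzip, and gunzip work exactly as in the steps above, and the line count verifies the round trip.

```shell
#!/bin/sh
# End-to-end sketch of the export/import pipeline: data flows through one
# pipe into gzip on the way out, and back through a second pipe via gunzip
# on the way in. A real run would put exp and imp on the ends of the pipes.
set -e
EXPPIPE=/tmp/exppipe_demo
IMPPIPE=/tmp/imppipe_demo
rm -f "$EXPPIPE" "$IMPPIPE"
mkfifo "$EXPPIPE" "$IMPPIPE"

# "Export": the producer writes into the pipe while gzip compresses to disk.
seq 1 10000 > "$EXPPIPE" &
gzip < "$EXPPIPE" > /tmp/exp.dmp.gz

# "Import": gunzip feeds the second pipe while the consumer reads it.
gunzip < /tmp/exp.dmp.gz > "$IMPPIPE" &
wc -l < "$IMPPIPE" > /tmp/roundtrip_count.txt
cat /tmp/roundtrip_count.txt

rm -f "$EXPPIPE" "$IMPPIPE" /tmp/exp.dmp.gz
```

All 10000 lines come back out of the second pipe, confirming that nothing is lost by compressing and decompressing through pipes.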

To sum up, pipes are a great way both to speed up exp/imp and to keep any single file from exceeding the 2 GB limit.
