Oracle database exp export and imp import explained

  • 2020-06-07 05:28:07
  • OfStack

buffer: size of the data buffer used for the export, in bytes; the default is operating-system dependent
consistent: keep the exported data read-consistent for the duration of the export; defaults to n
direct: use direct-path export; defaults to n
feedback: display a progress marker every N records processed; defaults to 0, i.e. no progress display
file: output file; defaults to expdat.dmp
filesize: maximum size of each output file; defaults to the operating-system maximum
indexes: whether to export index definitions; exp never exports index data, only the definitions
log: log file; defaults to none, with messages shown on standard output
owner: the user(s) whose objects are exported
query: a WHERE clause selecting a subset of each table's rows
rows: whether to export table rows
tables: list of table names to export
Export the entire instance
exp dbuser/oracle file=oradb.dmp log=oradb.log full=y consistent=y direct=y
The user must have DBA privileges.
Export all objects for a user
exp dbuser/oracle file=dbuser.dmp log=dbuser.log owner=dbuser buffer=4096000 feedback=10000
Export 1 or more tables
exp dbuser/oracle file=dbuser.dmp log=dbuser.log tables=table1,table2 buffer=4096000 feedback=10000
Export a portion of the data from a table
exp dbuser/oracle file=dbuser.dmp log=dbuser.log tables=table1 buffer=4096000 feedback=10000 query=\"where col1=\'...\' and col2 \<...\"
This cannot be used with nested tables.
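Escaping the query string for the shell, as above, is error-prone. Both exp and imp also accept a parfile= option naming a plain-text parameter file, inside which the query needs no shell escaping. A minimal sketch; the file name, table, and predicate below are illustrative placeholders, not values from the article:

```shell
# Write the export options to a parameter file; inside a parfile,
# the query string needs only exp's own quoting, no shell escapes.
cat > exp_subset.par <<'EOF'
file=dbuser.dmp
log=dbuser.log
tables=table1
buffer=4096000
feedback=10000
query="where col1='A' and col2 < 100"
EOF

# The export would then be run as (requires an Oracle client):
# exp dbuser/oracle parfile=exp_subset.par
grep -c '=' exp_subset.par   # sanity check: six name=value lines
```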
Export a table as multiple fixed-size files
exp dbuser/oracle file=1.dmp,2.dmp,3.dmp,... filesize=1000m tables=emp buffer=4096000 feedback=10000
This approach is typically used when a table holds so much data that a single dump file would exceed the filesystem's file-size limit.
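As a quick sizing sketch, the number of file= entries needed can be estimated from the table's segment size; the 7 GB figure below is an assumed example, not from the article:

```shell
# Hypothetical sizing: how many 1000m pieces a 7 GB table needs.
segment_bytes=$((7 * 1024 * 1024 * 1024))   # assumed table size: 7 GB
piece_bytes=$((1000 * 1024 * 1024))         # filesize=1000m
pieces=$(( (segment_bytes + piece_bytes - 1) / piece_bytes ))  # ceiling division
echo "$pieces"   # supply at least this many names in file=
```

If exp exhausts the supplied list it prompts for further file names, so listing a spare name or two keeps scripted runs from stalling.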
Direct path mode
With direct=y, the buffer option does not apply (direct-path export bypasses the SQL evaluation buffer), and the query option cannot be used.
Direct path substantially speeds up the export.
The consistent option
With consistent=y, changes made by other sessions to the exported objects after the export starts are not reflected in the dump, so the exported data is consistent. The export should not run for too long, however, or rollback segments and online redo logs may be exhausted.
imp
Imports into the database a dump file produced by exp.
buffer: size of the data buffer used for the import, in bytes; the default is operating-system dependent
commit: whether to commit after each buffer of rows is imported
feedback: display a progress marker every N records processed; defaults to 0, i.e. no progress display
file: input file; defaults to expdat.dmp
filesize: size of each input file; defaults to the operating-system maximum
fromuser: the source user whose objects are read from the dump file
ignore: whether to ignore object-creation errors; defaults to n. Since the objects usually already exist in the target database before the import, setting this to y is recommended
indexes: whether to import index definitions; the dump contains no index data, so imp rebuilds the indexes from the imported rows. If the indexes are created along with the tables, this option has no effect even when set to n
log: log file; defaults to none, with messages shown on standard output
rows: whether to import table rows
tables: list of table names to import
touser: the target user into which the objects are imported
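With ignore=n, pre-existing objects produce IMP-00015 errors in the log, so scanning the log (written via the log= option) for the IMP- prefix is a simple post-run check. A sketch using a fabricated log file as a stand-in for a real imp log:

```shell
# Stand-in log content; a real run writes this via imp's log= option.
cat > dbuser_imp.log <<'EOF'
. importing DBUSER's objects into DBUSER2
IMP-00015: following statement failed because the object already exists
EOF

# Count IMP- error lines; a nonzero count means something needs a look.
errors=$(grep -c '^IMP-' dbuser_imp.log)
echo "$errors"
```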
Import the entire instance
imp dbuser/oracle file=oradb.dmp log=oradb.log full=y buffer=4096000 commit=y ignore=y feedback=10000
Import all objects for a user
imp dbuser/oracle file=dbuser.dmp log=dbuser.log fromuser=dbuser touser=dbuser2 buffer=2048000 commit=y ignore=y feedback=10000
Import one or more tables
imp dbuser2/oracle file=user.dmp log=user.log tables=table1,table2 fromuser=dbuser touser=dbuser2 buffer=2048000 commit=y ignore=y feedback=10000
Import a table as multiple fixed-size files
imp dbuser/oracle file=\(1.dmp,2.dmp,3.dmp,...\) filesize=1000m fromuser=dbuser touser=dbuser2 buffer=4096000 ignore=y feedback=10000
