What Is Import and Export
Oracle's export (exp) and import (imp) utilities are used to perform logical database backup and recovery.
When exporting, database objects are dumped to a binary file which can then be imported into another
Oracle database.
These utilities can be used to move data between different machines, databases, or schemas. However, as
they use a proprietary binary file format, they can only be used between Oracle databases. One cannot
export data and expect to import it into a non-Oracle database.
Various parameters are available to control what objects are exported or imported. To get a list of
available parameters, run the exp or imp utilities with the help=yes parameter.
The export/import utilities are commonly used to perform the following tasks:
Backup and recovery (small databases only, say less than 50 GB; for bigger databases, use RMAN instead)
Move data between Oracle databases on different platforms (for example from Solaris to
Windows)
Reorganization of data and elimination of database fragmentation (export, drop, and re-import tables)
Upgrade databases from extremely old versions of Oracle (when in-place upgrades are not
supported by the Database Upgrade Assistant any more)
Detect database corruption (ensure that all the data can be read)
Transporting tablespaces between databases
Etc.
From Oracle 10g, users can choose between using the old imp/exp utilities, or the newly
introduced Data Pump utilities, called expdp and impdp. These new utilities introduce much-needed performance
improvements, network-based exports and imports, etc.
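As a rough sketch (the directory object dump_dir, the scott/tiger credentials, and the file names are assumptions for this example), a Data Pump schema export and the matching import might look like this:
expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=scott.dmp LOGFILE=expdp_scott.log SCHEMAS=scott
impdp scott/tiger DIRECTORY=dump_dir DUMPFILE=scott.dmp LOGFILE=impdp_scott.log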
NOTE: It is generally advised not to use exports as the only means of backing up a database. Physical
backup methods (for example, with RMAN) are normally much quicker and support point-in-time
recovery (apply archivelogs after recovering a database). Also, exp/imp is not practical for
large database environments.
The following example demonstrates a typical export parameter file that can be passed to the exp utility:
BUFFER=10000000
FILE=account.dmp
FULL=n
OWNER=scott
GRANTS=y
COMPRESS=y
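If these lines are saved in a file, say export.par (the file name and the scott/tiger credentials are assumptions for this example), the export can be started, and the resulting dump file re-imported, like this:
exp scott/tiger PARFILE=export.par
imp scott/tiger FILE=account.dmp FROMUSER=scott TOUSER=scott LOG=imp_account.log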
NOTE: If you do not like command line utilities, you can import and export data with the "Schema
Manager" GUI that ships with Oracle Enterprise Manager (OEM).
To monitor how fast rows are being imported by a running import job, use one of the following methods:
Method 1:
select substr(sql_text, instr(sql_text, 'INTO "'), 30) table_name,
       rows_processed,
       round((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60, 1) minutes,
       trunc(rows_processed / ((sysdate - to_date(first_load_time, 'yyyy-mm-dd hh24:mi:ss')) * 24 * 60)) rows_per_min
  from sys.v_$sqlarea
 where sql_text like 'INSERT %INTO "%'
   and command_type = 2
   and open_versions > 0;
For this to work one needs to be on Oracle 7.3 or higher (7.2 might also be OK). If the import has more
than one table, this statement will only show information about the current table being imported.
Method 2:
Use the FEEDBACK=N import parameter. This parameter will tell IMP to display a dot for every N rows
imported. For example, FEEDBACK=1000 will show a dot after every 1000 rows.
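A minimal sketch of an import using this parameter (the file name and credentials are assumptions for this example):
imp scott/tiger FILE=account.dmp FULL=y FEEDBACK=1000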
Note: It is also advisable to drop indexes before importing to speed up the import process. Indexes can
easily be recreated after the data has been successfully imported.
Oracle also ships catexpX.sql scripts for previous releases that can be executed as user SYS to enable older
imp/exp versions to work against a newer database (for backwards compatibility). For example, one can run
$ORACLE_HOME/rdbms/admin/catexp7.sql on an Oracle 8 database to allow the Oracle 7.3 exp/imp
utilities to run against that database.
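A sketch of running such a script from SQL*Plus (the OS-authenticated connection is an assumption for this example; "?" expands to $ORACLE_HOME inside SQL*Plus):
sqlplus / as sysdba
@?/rdbms/admin/catexp7.sql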
Can one export to multiple files? / Can one beat the Unix 2 Gig limit?
From Oracle8i, the export utility supports multiple output files. This feature enables large exports to be
divided into files whose sizes will not exceed any operating system limits (FILESIZE= parameter). When
importing from a multi-file export, you must provide the same filenames in the same sequence in the FILE=
parameter. Look at this example:
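(A sketch only; the file names and the 512 MB file size are assumptions for this example.)
exp scott/tiger OWNER=scott FILE=exp1.dmp,exp2.dmp,exp3.dmp FILESIZE=512m LOG=scott_exp.log
imp scott/tiger FULL=y FILE=exp1.dmp,exp2.dmp,exp3.dmp LOG=scott_imp.log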
Use the following technique if you use an Oracle version prior to 8i:
Create a compressed export on the fly. Depending on the type of data, you probably can export up to 10
gigabytes to a single file. This example uses gzip. It offers the best compression I know of, but you can
also substitute it with zip, compress or whatever.
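A minimal sketch of the technique using a Unix named pipe (the pipe and file names are assumptions for this example):
# create a named pipe
mknod exp.pipe p
# read the pipe and compress its contents into a file, in the background
gzip < exp.pipe > scott_exp.dmp.gz &
# feed the pipe with the export
exp scott/tiger OWNER=scott FILE=exp.pipe LOG=scott_exp.log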
On a low-performance system, it is better to add the RECORDLENGTH parameter with a small value to the import, to
ensure that gzip has enough time to decompress the data before imp reads it from the pipe:
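A corresponding import sketch (again, the names and the RECORDLENGTH value are assumptions for this example):
# create a named pipe
mknod imp.pipe p
# decompress the dump into the pipe, in the background
gunzip < scott_exp.dmp.gz > imp.pipe &
# import from the pipe, using a small record length
imp scott/tiger FULL=y FILE=imp.pipe RECORDLENGTH=4096 LOG=scott_imp.log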
The following tips can improve import/export performance.
EXPORT:
Set the BUFFER parameter to a high value (e.g. 2 MB -- entered as an integer "2000000")
Set the RECORDLENGTH parameter to a high value (e.g. 64 KB -- entered as an integer "64000")
Use DIRECT=yes (direct mode export); see the example command after this list
Stop unnecessary applications to free up resources for your job.
If you run multiple export sessions, ensure they write to different physical disks.
DO NOT export to an NFS mounted filesystem. It will take forever.
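A sketch of an export command combining these settings (the file name and credentials are assumptions for this example; note that BUFFER only applies to conventional-path exports and is ignored when DIRECT=yes):
exp scott/tiger OWNER=scott FILE=scott.dmp DIRECT=yes RECORDLENGTH=64000 LOG=scott_exp.log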
IMPORT:
Create an indexfile so that you can create indexes AFTER you have imported data. Do this by
setting INDEXFILE to a filename and then import (see the sketch after this list). No data will be imported, but a file containing index
definitions will be created. You must edit this file afterwards and supply the passwords for the schemas
on all CONNECT statements.
Place the file to be imported on a separate physical disk from the Oracle data files
Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init$SID.ora
file
Set the LOG_BUFFER to a big value and restart Oracle.
Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback segments
offline (except the SYSTEM rollback segment of course). The rollback segment must be as big as
your biggest table (I think?)
Use COMMIT=N in the import parameter file if you can afford it
Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import of optimizer
statistics
Remember to run the indexfile created earlier to recreate the indexes
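A sketch of how these import tips might be combined (the file names and credentials are assumptions for this example):
# first pass: write index definitions to a file; no data is imported
imp scott/tiger FULL=y FILE=scott.dmp INDEXFILE=scott_indexes.sql
# second pass: import the data without indexes or statistics
imp scott/tiger FULL=y FILE=scott.dmp INDEXES=n COMMIT=n STATISTICS=NONE BUFFER=2000000 FEEDBACK=10000 LOG=scott_imp.log
# finally, edit scott_indexes.sql (supply passwords on the CONNECT statements) and run it from SQL*Plus to recreate the indexes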