Schema-Level Data Migration Using Data Pump (expdp/impdp)
Real-Time Scenario
In my logistics project, I had to migrate schema data (the CONTAINER, TERMINAL, and DOCUMENT tables) from a staging schema
to a production schema. We used Oracle Data Pump (expdp/impdp) because of the high data volume.
Why Data Pump?
- High performance
- Supports metadata + data
- Schema remapping
- FLASHBACK_TIME for consistency
- Automation-friendly
Interview Summary
I migrated the schema using expdp/impdp with REMAP_SCHEMA, ensured data consistency with FLASHBACK_TIME,
and validated the migration afterwards with queries for row counts, index status, and object status.
Step-by-Step Approach
1. Pre-Requisites:
- Create directory object
- Check tablespaces/privileges
- Capture row counts
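The pre-requisites above boil down to a directory object plus Data Pump privileges. A minimal dry-run sketch (the directory name dpump_dir, the path /u01/dpump, and the grantee list are assumptions) that prints the SQL a DBA would run:

```shell
#!/bin/sh
# Dry-run sketch: prints the pre-requisite SQL instead of executing it.
# dpump_dir, /u01/dpump, and the grantees are illustrative assumptions.
SETUP_SQL=$(cat <<'SQL'
CREATE OR REPLACE DIRECTORY dpump_dir AS '/u01/dpump';
GRANT READ, WRITE ON DIRECTORY dpump_dir TO stg_user, prod_user;
-- Data Pump roles for schema-level export/import
GRANT DATAPUMP_EXP_FULL_DATABASE TO stg_user;
GRANT DATAPUMP_IMP_FULL_DATABASE TO prod_user;
SQL
)
printf '%s\n' "$SETUP_SQL"
```

In practice the heredoc would be piped into sqlplus as a DBA; printing it keeps the sketch side-effect free.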
2. Export (expdp):
expdp stg_user/password SCHEMAS=stg_user ... FLASHBACK_TIME=SYSTIMESTAMP
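Fleshed out, the export command might look like the sketch below. The dump-file and log-file names and PARALLEL=4 are illustrative; the %U substitution variable lets Data Pump write one file per parallel worker. Echoing the command keeps this a dry run:

```shell
#!/bin/sh
# Hypothetical full expdp command; drop the echo to actually run it.
EXP_CMD="expdp stg_user/password SCHEMAS=stg_user DIRECTORY=dpump_dir \
DUMPFILE=stg_user_%U.dmp LOGFILE=exp_stg_user.log \
FLASHBACK_TIME=SYSTIMESTAMP PARALLEL=4"
echo "$EXP_CMD"
```

FLASHBACK_TIME=SYSTIMESTAMP pins every table to the same point in time, so the export is transactionally consistent even while the staging schema is being written to.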
3. Import (impdp):
impdp prod_user/password REMAP_SCHEMA=stg_user:prod_user ... TABLE_EXISTS_ACTION=REPLACE
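The matching import, again as a dry-run sketch with illustrative file names:

```shell
#!/bin/sh
# Hypothetical full impdp command; drop the echo to actually run it.
# TABLE_EXISTS_ACTION=REPLACE drops and recreates any table that already
# exists in prod, so it should only be used with a rollback backup in place.
IMP_CMD="impdp prod_user/password DIRECTORY=dpump_dir \
DUMPFILE=stg_user_%U.dmp LOGFILE=imp_prod_user.log \
REMAP_SCHEMA=stg_user:prod_user TABLE_EXISTS_ACTION=REPLACE PARALLEL=4"
echo "$IMP_CMD"
```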
4. Post-Migration Validation:
- Validate object counts
- Row count check
- Recompile invalid objects
- Check indexes & constraints
5. Rollback Strategy:
Take an expdp backup of the production schema before the import; if rollback is needed, restore it with impdp.
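The rollback pair can be sketched as follows (dump-file names are assumptions; echoing keeps it a dry run). The backup runs before the migration import; the restore runs only if validation fails:

```shell
#!/bin/sh
# Rollback sketch: back up PROD before the import, restore on failure.
BACKUP_CMD="expdp prod_user/password SCHEMAS=prod_user DIRECTORY=dpump_dir \
DUMPFILE=prod_backup_%U.dmp LOGFILE=exp_prod_backup.log"
ROLLBACK_CMD="impdp prod_user/password DIRECTORY=dpump_dir \
DUMPFILE=prod_backup_%U.dmp LOGFILE=imp_rollback.log TABLE_EXISTS_ACTION=REPLACE"
echo "$BACKUP_CMD"
echo "$ROLLBACK_CMD"
```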
6. Automation:
A shell script runs the export, backup, and import end to end, stopping on the first failure.
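A skeleton of that automation script, assuming the commands from the earlier steps. The echo placeholders stand in for the real expdp/impdp invocations, and run_step is a local helper, not an Oracle utility:

```shell
#!/bin/sh
# Automation sketch: run each phase in order, abort on the first failure.
run_step() {
  desc=$1; shift
  echo "== $desc =="
  if ! "$@"; then
    echo "FAILED: $desc" >&2
    exit 1
  fi
}
# Replace the echo placeholders with the real commands from the steps above.
run_step "Export staging schema" echo "expdp ... (export command here)"
run_step "Backup prod schema"    echo "expdp ... (backup command here)"
run_step "Import into prod"      echo "impdp ... (import command here)"
STATUS=completed
echo "Migration pipeline $STATUS"
```

Because each phase goes through run_step, a failed export never reaches the import, and the non-zero exit code makes the script safe to schedule from cron or a CI job.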
7. Logistics-Specific Checks:
- Validate GATE_IN/GATE_OUT in container
- Check terminal appointment statuses
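The logistics checks can be expressed as sanity queries that should return no rows. The column names (GATE_IN_TIME, GATE_OUT_TIME) and the status values are assumptions based on the tables named above; printed here as a dry run:

```shell
#!/bin/sh
# Dry-run sketch of logistics sanity checks; table and column names
# (gate_in_time, gate_out_time, the STATUS value set) are assumptions.
CHECK_SQL=$(cat <<'SQL'
-- Containers whose gate-out precedes gate-in (should return no rows)
SELECT container_no FROM prod_user.container
 WHERE gate_out_time < gate_in_time;
-- Terminal appointments with an unexpected status (should return no rows)
SELECT appointment_id, status FROM prod_user.terminal
 WHERE status NOT IN ('SCHEDULED', 'COMPLETED', 'CANCELLED');
SQL
)
printf '%s\n' "$CHECK_SQL"
```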
Validation Queries
SELECT COUNT(*) FROM dba_objects WHERE owner='PROD_USER';
SELECT * FROM user_indexes WHERE status='UNUSABLE';
SELECT constraint_name, status FROM dba_constraints WHERE owner='PROD_USER';
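The row-count check pairs with the queries above: spool per-table counts before the export and after the import, then diff the two files. A runnable sketch with synthetic sample data (real files would come from spooled SELECT COUNT(*) output):

```shell
#!/bin/sh
# Row-count comparison sketch; the counts below are synthetic examples.
cat > pre_counts.txt <<'EOF'
CONTAINER 120345
TERMINAL 8876
DOCUMENT 45210
EOF
cat > post_counts.txt <<'EOF'
CONTAINER 120345
TERMINAL 8876
DOCUMENT 45210
EOF
if diff pre_counts.txt post_counts.txt >/dev/null; then
  RESULT="ROW COUNTS MATCH"
else
  RESULT="ROW COUNT MISMATCH"
fi
echo "$RESULT"
```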
Interview Wrap-Up Answer
In my logistics project, I migrated a schema from staging to production using expdp/impdp with REMAP_SCHEMA. I
ensured data consistency using FLASHBACK_TIME and performed post-migration validations for object counts, index
status, and data accuracy. The entire process was automated with shell scripts, with a rollback backup in place.