Oracle Memory Management
Banking Scenario: Online Transaction Processing System
Zaheer Abbas Mitaigiri
Lead Oracle Database Consultant
OCP, OCE, OCS
Let me explain Oracle memory management concepts using a banking domain example to make it more concrete.
Imagine a large bank with a high-volume online banking system that handles:
• Customer account inquiries (shared pool/SGA)
• Transaction processing (buffer cache/SGA)
• Complex financial reporting (PGA)
• Batch processing for end-of-day reconciliation (PGA)
Memory Configuration Approaches
1. Automatic Memory Management (AMM) - Not Recommended for Banking
Problem: A regional bank tried using AMM (MEMORY_TARGET=24G) for their core banking system. During peak hours
(10-11 AM), transaction processing slowed dramatically because:
• The system kept reallocating memory between the SGA (for transactions) and the PGA (for reports)
• This caused constant memory thrashing
• ATM transactions took 5-8 seconds instead of the usual 1-2 seconds
Solution: They switched to ASMM for better control.
2. Automatic Shared Memory Management (ASMM) - Recommended Approach
Configuration:
-- Core memory parameters for a banking DB (16-core server, 64GB RAM)
ALTER SYSTEM SET SGA_TARGET=40G SCOPE=SPFILE;
ALTER SYSTEM SET SGA_MAX_SIZE=48G SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=12G SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=16G SCOPE=SPFILE;
Why this works for banking (typical auto-tuned component sizes under ASMM):
• Shared Pool (~8G): caches frequently executed SQL (account balance checks, transfer transactions)
• Buffer Cache (~28G): keeps frequently accessed account data in memory
• PGA (12G): dedicated memory for complex interest calculations and month-end reports
• Log Buffer (256M): ensures quick commit times for financial transactions
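Before applying parameters like these, it is worth sanity-checking that the combined memory budget leaves headroom for the OS, listener, and client processes. A minimal shell sketch, assuming the 64GB server from the scenario above and a rough 10% OS reserve (both figures are assumptions, not Oracle recommendations):

```shell
# Steady-state Oracle memory budget (GB), from the ASMM settings above
SGA_TARGET=40
PGA_TARGET=12
RAM=64                       # physical RAM on the server

TOTAL=$((SGA_TARGET + PGA_TARGET))
HEADROOM=$((RAM - TOTAL))
echo "Oracle budget: ${TOTAL}G, headroom for OS and client processes: ${HEADROOM}G"

# Flag a budget that could push the box toward swapping
if [ "$HEADROOM" -lt $((RAM / 10)) ]; then
    echo "WARNING: less than 10% of RAM left for the OS"
fi
```

Here the budget is 52G against 64G of RAM, leaving 12G of headroom. Note that SGA_MAX_SIZE (48G) plus PGA_AGGREGATE_LIMIT (16G) equals all of RAM, so the ceilings should only be reached transiently.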
3. Manual Memory Management - For Specialized Banking Workloads
A bank's fraud detection system uses specialized machine learning algorithms that require precise memory control. With SGA_TARGET set to 0 (disabling ASMM), each SGA component is sized explicitly:
ALTER SYSTEM SET DB_CACHE_SIZE=20G SCOPE=SPFILE;
ALTER SYSTEM SET SHARED_POOL_SIZE=6G SCOPE=SPFILE;
ALTER SYSTEM SET LARGE_POOL_SIZE=2G SCOPE=SPFILE;
ALTER SYSTEM SET JAVA_POOL_SIZE=1G SCOPE=SPFILE;
LARGE_POOL_SIZE - For parallel payment processing
JAVA_POOL_SIZE - For Java-based banking apps
Huge Pages in Banking - Critical for Performance
Example: A national bank processing 5,000 transactions/second:
1. Without Huge Pages:
• 40GB SGA = 10,485,760 × 4KB pages
• High TLB misses slowed transactions by 15%
2. With Huge Pages (2MB):
• 40GB SGA = 20,480 huge pages
• Reduced TLB misses by 90%
• Transaction processing time improved by 12%
Configuration:
# For a 48GB SGA on Linux (/etc/sysctl.conf)
vm.nr_hugepages = 24576    # 48GB / 2MB
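The page count above can be derived mechanically. A small shell sketch (the 48GB size matches SGA_MAX_SIZE above; real deployments typically add a small cushion of extra pages for per-segment overhead):

```shell
# Compute vm.nr_hugepages for a given SGA size and 2MB huge pages
SGA_GB=48
HUGEPAGE_KB=2048                      # 2MB huge pages
SGA_KB=$((SGA_GB * 1024 * 1024))

NR_HUGEPAGES=$((SGA_KB / HUGEPAGE_KB))
echo "vm.nr_hugepages = ${NR_HUGEPAGES}"

# Compare with standard 4KB pages for the same SGA
SMALL_PAGES=$((SGA_KB / 4))
echo "Equivalent 4KB pages: ${SMALL_PAGES}"
```

On a real system you would also raise the memlock limit for the oracle user and confirm the pages were actually reserved with `grep Huge /proc/meminfo` after the instance starts.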
Monitoring Memory in Banking Systems
Critical queries for banking DBAs:
-- SGA usage
SELECT * FROM v$sga_target_advice;
-- PGA usage for teller transactions vs. batch processing
SELECT program, pga_used_mem/1024/1024 "PGA Used (MB)"
FROM v$process ORDER BY 2 DESC;
-- Buffer cache hit ratio (should be >95% for banking)
SELECT 1-(phy.value/(cur.value + con.value)) "Buffer Cache Hit Ratio"
FROM v$sysstat cur, v$sysstat con, v$sysstat phy WHERE cur.name = 'db block gets'
AND con.name = 'consistent gets'
AND phy.name = 'physical reads';
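As a quick sanity check on the hit-ratio formula in the query above, here is the same arithmetic run on sample statistic values (the numbers are made up for illustration):

```shell
# Sample v$sysstat values (illustrative only)
DB_BLOCK_GETS=500000
CONSISTENT_GETS=9500000
PHYSICAL_READS=300000

# Hit ratio = 1 - physical reads / (db block gets + consistent gets)
LOGICAL=$((DB_BLOCK_GETS + CONSISTENT_GETS))
# Integer math: express the ratio as a percentage
HIT_PCT=$((100 - 100 * PHYSICAL_READS / LOGICAL))
echo "Buffer cache hit ratio: ${HIT_PCT}%"
```

A result below the 95% guideline would point at buffer-cache pressure, though hit ratios alone can be misleading and are best read alongside wait-event data.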
Banking-Specific Memory Considerations
1. ATM Transactions: Need low-latency access → prioritize buffer cache
2. Month-End Processing: Requires large PGA for sorting → increase PGA_AGGREGATE_TARGET temporarily
3. Regulatory Reports: Memory-intensive → schedule during off-peak hours
4. Fraud Detection: Complex analytics → may need dedicated manual memory settings
By understanding these memory management concepts, banking DBAs can ensure their Oracle databases support critical
financial operations with optimal performance and stability.
The Translation Lookaside Buffer (TLB) is a small, high-speed cache inside the CPU that stores recent virtual-to-physical
memory address translations. It helps speed up memory access by avoiding the slower process of looking up memory
mappings in the main page table (which is stored in RAM).
In high-performance banking systems (like Oracle databases processing thousands of transactions per second), TLB misses
can severely degrade performance because:
• Each TLB miss forces the CPU to fetch the correct memory mapping from the page table in RAM.
• This adds latency, slowing down transaction processing.
• With standard 4KB memory pages, a large Oracle SGA (e.g., 40GB) requires millions of page table entries, increasing TLB misses.
TLB in the Context of Oracle & Huge Pages
Problem with Standard 4KB Pages
• If a bank's Oracle database has a 40GB SGA, it needs:
  40GB / 4KB = 10,485,760 page table entries.
• The CPU's TLB can only cache a small fraction of these (e.g., 1,000 entries).
• Frequent TLB misses occur, slowing down ATM transactions, balance checks, and fraud detection queries.
Solution: Huge Pages (2MB or 1GB)
• Reduces the number of page table entries:
  40GB / 2MB = 20,480 entries (vs. ~10.5 million with 4KB pages).
• Fewer TLB misses → faster memory access → better transaction throughput.
• Memory stays pinned in RAM (no swapping), critical for real-time banking systems.
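The benefit can also be framed as TLB reach: the amount of memory the TLB can map without a miss. A rough shell sketch (the 1,024-entry TLB size is an assumption for illustration; real CPUs have separate, differently sized TLB levels):

```shell
# TLB reach = number of TLB entries x page size
TLB_ENTRIES=1024

REACH_4K_MB=$((TLB_ENTRIES * 4 / 1024))   # 4KB pages
REACH_2M_MB=$((TLB_ENTRIES * 2))          # 2MB huge pages

echo "TLB reach with 4KB pages: ${REACH_4K_MB} MB"
echo "TLB reach with 2MB pages: ${REACH_2M_MB} MB"
```

With 4KB pages such a TLB covers only about 4MB of a 40GB SGA, so misses are near-constant; with 2MB pages it covers about 2GB, which is why the miss rate drops so sharply.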
Real-World Banking Example
A national bank upgraded its Oracle database to use 2MB Huge Pages:
• Before: 5,000 TLB misses per second → ATM response time = 8ms.
• After: only 500 TLB misses per second → ATM response time = 2ms (4x faster).
How to Check TLB Misses in Linux
perf stat -e dTLB-load-misses,dTLB-store-misses -p <Oracle_PID>
For banking databases, TLB optimization via Huge Pages is critical to:
• Reduce memory access latency
• Improve transaction speed
• Stabilize high-volume financial workloads
Recommendation: Always use Huge Pages (2MB/1GB) for Oracle SGA in banking systems instead of relying on
standard 4KB pages.