Concepts: Chkconfig - List Oracleasm /etc/init.d/oracleasm
Concepts
ASM manages only Oracle database files (OMF), including datafiles, tempfiles,
control files, online redo logs, archived redo logs, RMAN backup sets, server
parameter files, change tracking files, and flashback logs.
An ASM instance is required on each node.
An ASM instance has no datafiles, control file, redo logs, or data dictionary.
The background processes dedicated to ASM are RBAL and ARBn.
Stripe size is 128K (fine striping) for control files and logs, 1M (coarse striping)
for all others.
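Striping and redundancy per file type can be checked in the template view (a minimal
query; group number 1 is an assumption):
#from the ASM instance; STRIPE is FINE (128K) or COARSE (1M)
select name, stripe, redundancy from v$asm_template where group_number = 1;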
ASM startup
ASM startup at boot is configured through the oracleasm service:
chkconfig --list oracleasm
So there is a corresponding init script used to start it up:
/etc/init.d/oracleasm
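Illustrative output of the chkconfig command (run levels vary by configuration):
#example output, not authoritative
oracleasm       0:off   1:off   2:on    3:on    4:on    5:on    6:off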
Migrate a database to ASM
#RMAN must be used to migrate data files to ASM storage
alter system set db_create_file_dest='+DATA' scope=spfile;
alter system set control_files='' scope=spfile; #so the control file is recreated as OMF in +DATA
shutdown immediate;
startup nomount;
restore controlfile from '/u01/ORCL/control1.ctl';
alter database mount;
backup as copy database format '+DATA';
switch database to copy;
recover database;
alter database open;
alter tablespace temp add tempfile;
alter database tempfile '/u01/ORCL/temp1.dbf' drop;
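A quick check that all files now live in ASM:
#all names should now start with +DATA
select name from v$controlfile;
select name from v$datafile;
select name from v$tempfile;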
Install
#OUI and DBCA may be used for installation from any node in RAC
#RAC required
*.cluster_database=true #for RAC
*.instance_type='asm'
*.cluster_database_instances=4
*.large_pool_size=41943040 #12M-16M is good
*.processes=60
*.remote_login_passwordfile='exclusive'
*.sga_max_size=157286400
*.shared_pool_size=67108864
*.user_dump_dest='/ORA/dbs00/oracle/admin/+ASM/udump'
*.background_dump_dest='/ORA/dbs00/oracle/admin/+ASM/bdump'
*.core_dump_dest='/ORA/dbs00/oracle/admin/+ASM/cdump'
+ASM1.instance_number=1
+ASM2.instance_number=2
NOTE: it may be worth trying to tune *.db_cache_size=50000000 (the default is 24M)
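With the pfile above the ASM instance can be started (a minimal sketch; the SID and
the pfile path are assumptions):
#start the ASM instance (SID and pfile path assumed)
export ORACLE_SID=+ASM1
sqlplus / as sysdba
SQL> startup pfile='/ORA/dbs00/oracle/admin/+ASM/init+ASM1.ora';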
diskgroup creation
#from ASM instance
#Normal redundancy means defining two failure groups, i.e. using two-way mirroring
CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
FAILGROUP ctlr1 DISK '/dev/raw/raw1','/dev/raw/raw2'
FAILGROUP ctlr2 DISK '/dev/raw/raw3','/dev/raw/raw4';
CREATE DISKGROUP diskgroup1 EXTERNAL REDUNDANCY DISK 'ORCL:VOL1', 'ORCL:VOL2',
'ORCL:VOL3';
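Available disks can be checked first from the ASM instance (illustrative query):
#CANDIDATE disks can be used in a new disk group
select path, header_status, name from v$asm_disk;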
Various commands
ALTER DISKGROUP ALL MOUNT;
#view disk info read from the disk headers
ASMCMD> lsdsk -kpIt
#header_status=FORMER
#the disk was part of a dropped disk group but its contents have not been removed
#use dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=4 to make it a CANDIDATE
#10g: to drop a disk group you must also manually delete the disk headers
drop diskgroup dgroup1;
dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=4
#11g, drop a disk group
drop diskgroup dgroup1 force including contents;
/etc/init.d/oracleasm start
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm status
#add a disk to ASM; the disk name must be uppercase
/etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
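Two related oracleasm commands to rescan and list the configured disks:
/etc/init.d/oracleasm scandisks #rescan for ASM disks (e.g. on the other nodes)
/etc/init.d/oracleasm listdisks #list the disks already labeled for ASM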
[/etc/sysconfig/oracleasm]
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="multipath sd"
#ORACLEASM_SCANORDER=""
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sda sdb sdc sde"
#ORACLEASM_SCANEXCLUDE=""
ASM tools
srvctl start/stop asm -n node_name : start/stop ASM instance
srvctl remove asm -n node_name : delete ASM instance
srvctl config asm -n node_name : Verify ASM instance
tablespace creation
#from client database
CREATE TABLESPACE tbs_asm1 DATAFILE '+DGROUP1' SIZE 32M;
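The resulting OMF name can be verified from the database (illustrative query):
#ASM generates a name like +DGROUP1/dbname/datafile/tbs_asm1.256.1
select file_name from dba_data_files where tablespace_name = 'TBS_ASM1';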
Fixed views
X$KFALS: ASM aliases
X$KFDSK: ASM disks
X$KFFIL: ASM files
X$KFGRP: ASM disk groups
X$KFGMG: ASM operations
X$KFKID: ASM disk performance
X$KFNCL: ASM clients
X$KFMTA: ASM templates
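In practice the supported V$ views built on these are queried instead, e.g.:
#disk group overview via the supported view
select group_number, name, type, state, total_mb, free_mb from v$asm_diskgroup;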
Background processes
RBAL: coordinates the rebalance slaves
ARBn: rebalance data extents
GMON: disk group monitor
PSP0: starts and stops the ARBn processes
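They can be spotted at OS level (illustrative grep):
#list the ASM background processes
ps -ef | egrep 'asm_(rbal|arb|gmon|psp)'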
File Names
Numeric File
+DiskGroup/File.Incarnation
+diskgroup1/256.1
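Numeric names and aliases can be listed from the ASM instance (a minimal query):
#list file aliases with their numeric file number and incarnation
select name, file_number, file_incarnation from v$asm_alias;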
Templates
PARAMETERFILE Parameter file
DUMPSET Dump set
CONTROLFILE Control file
ARCHIVELOG Archived redo log
ONLINELOG Online redo log
DATAFILE Datafile
TEMPFILE Tempfile
BACKUPSET Backupset
AUTOBACKUP Autobackup control file
XTRANSPORT Transportable tablespace
CHANGETRACKING Change tracking file
FLASHBACK Flashback log
DATAGUARDCONFIG Data Guard configuration
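User templates can be added on top of the system ones (a sketch; the template and
tablespace names are assumptions):
ALTER DISKGROUP dgroup1 ADD TEMPLATE mytemp ATTRIBUTES (MIRROR FINE);
#create a file using the template (names illustrative)
CREATE TABLESPACE tbs_t DATAFILE '+DGROUP1(mytemp)' SIZE 32M;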
ASMCMD commands
cd
du
find
help
ls
lsct
lsdg
mkalias
mkdir
pwd
rm
rmalias
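A short illustrative session (the path is an assumption):
ASMCMD> lsdg                      #list mounted disk groups
ASMCMD> cd +DGROUP1/ORCL/DATAFILE #path assumed
ASMCMD> ls -l
ASMCMD> du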
* Mirroring *
Two-way, three-way, or external redundancy.
Normal redundancy: 2 failure groups, two-way mirroring; all local disks belong to the
same failure group; only 1 preferred failure group per disk group.
High redundancy: 3 failure groups, three-way mirroring; a maximum of 2 failure groups
per site with local disks; up to 2 preferred failure groups per disk group.
External redundancy: no failure groups.
* ASM Preferred Mirror Read *
Requires compatible.rdbms=11.1
Once you configure preferred mirror read (see asm_preferred_read_failure_groups),
each node reads from its local disks only.
create diskgroup dg6 external redundancy disk '/dev/raw/raw1'
attribute 'au_size'='8M', 'compatible.asm'='11.1';
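The preferred failure groups are then set per instance (a sketch; the disk group,
failure group and SID names are assumptions):
#format is 'diskgroup.failgroup', set per ASM instance (names assumed)
alter system set asm_preferred_read_failure_groups='DG1.FG1' sid='+ASM1';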
* OS User *
SYSASM instead of SYSDBA, member of OSASM group
grant sysasm to aldo;
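Connecting with the new role (OS authentication through the OSASM group):
sqlplus / as sysasm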
Compatibility Params
Compatibility can only be advanced, never lowered.
11g ASM supports both 11g and 10g databases; compatible.asm and compatible.rdbms
must be advanced manually since their default values are 10.1.
* Attributes *
compatible.rdbms #Default 10.1. The minimum database version allowed to mount the
disk group; once increased it cannot be lowered. Must be advanced after advancing
compatible.asm.
#11.1 enables ASM Preferred Mirror Read, Fast Mirror Resync,
Variable Size Extents, and different Allocation Unit (AU) sizes
compatible.asm #Default 10.1. Controls the ASM data structures; cannot be lower than
compatible.rdbms.
#Must be advanced before advancing compatible.rdbms
template.tname.redundancy: unprotected, mirror, high
template.tname.striping: coarse, fine
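Attributes are changed with ALTER DISKGROUP SET ATTRIBUTE, e.g. to advance
compatibility (the disk group name is an assumption), compatible.asm first:
alter diskgroup dg1 set attribute 'compatible.asm'='11.1';
alter diskgroup dg1 set attribute 'compatible.rdbms'='11.1';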
* Check command *
Verifies ASM disk group metadata: cross-checks file extent maps against allocation
tables, checks the links between the metadata directory and the file directory,
checks the links of the alias directory, and checks for unreachable blocks in the
metadata directories; repair (default) / norepair; disk consistency is verified.
10g: check all, file, disk, disks in failgroup -> 11g: a single check clause.
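Example, 11g syntax (the disk group name is an assumption):
alter diskgroup dg1 check norepair; #verify metadata without attempting repairs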
Mount
alter diskgroup t dismount;
alter diskgroup t mount RESTRICT; #or startup restrict;
#clients will not be able to access the disk group; if you add a disk, a rebalance is
performed
alter diskgroup t dismount;
alter diskgroup t mount [NOFORCE(def.) | FORCE];
#NOFORCE will not mount an incomplete disk group.
#With FORCE you must restore the missing disks before disk_repair_time expires; FORCE
requires at least one disk offline and fails if all disks are online
drop diskgroup g1 force including contents;
#the plain command fails if a disk is in use; FORCE INCLUDING CONTENTS must then be
specified
ASMCMD
cp +DATA/.../TBSFV.223.333 +DATA/.../pippo.bak #copy a file locally
cp +DATA/.../TBSFV.223.333 /home/.../pippo.bak #copy a file to the OS and vice versa
cp +DATA/.../TBSFV.223.333 sys@mydb.+ASM2:+D2/jj/.../pippo.dbf #copy to a remote ASM
instance
lsdsk [-ksptI] [-d diskgroup] [pattern];
#list visible disks. In connected mode (the default) it reads the V$... and GV$...
views; in non-connected mode it scans the disk headers after a warning message.
<-I> force non-connected mode (read from the headers)
<-k> detailed info
<-s> show I/O stats
<-p> status
<-t> repair-related info
<-d> limit to a disk group
remap dg5 d1 5000-7500;
#remap a range of unreadable bad disk sectors, restoring the correct content; repairs
blocks that have I/O errors. EM may also be used
md_backup [-b backup_file(def. ambr_backup_intermediate_file)] [-g
'diskgroup_name,diskgroup_name,...'];
#back up disk group metadata info into a text file
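Example run (the file name and disk group names are assumptions):
md_backup -b /tmp/dg_metadata.bkp -g 'dg1,dg2'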
mkdir +DGROUP1/abc
mkalias TBSF.23.1222 +DGROUP1/abc/users.dbf
MD_RESTORE command
Recreates disk groups and restores their metadata from the previously backed-up file.
Cannot recover corrupted data (metadata only).
md_restore [-b backup_file (def. ambr_backup_intermediate_file)]
<-t [FULL (create the disk groups and restore their metadata), NODG (restore metadata
for an existing disk group), NEWDG (create a new disk group and restore metadata)]>;
<-f> write the commands to a file
<-g> select disk groups, all if undefined
<-o> rename a disk group
<-i> ignore errors
md_restore -t newdg -o 'DGNAME=dg3:dg4' -b your_file
#restore dg3 under the new name dg4