Oracle RAC and ActiveCluster v01


Version Changes

© 2017 Pure Storage, Inc. All rights reserved.


Pure Storage, FlashStack, and the Pure Storage Logo are trademarks or registered trademarks of Pure Storage, Inc. in the U.S. and
other countries. Oracle is a trademark of Oracle Corporation. Other company, product, or service names may be trademarks or
service marks of others.
The Pure Storage products described in this documentation are distributed under a license agreement restricting the use, copying,
distribution, and decompilation/reverse engineering of the products. The Pure Storage products described in this documentation
may only be used in accordance with the terms of the license agreement. No part of this documentation may be reproduced in any
form by any means without prior written authorization from Pure Storage, Inc. and its licensors, if any. Pure Storage may make
improvements and/or changes in the Pure Storage products and/or the programs described in this documentation at any time
without notice.
THIS DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND
WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE,
OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE
LEGALLY INVALID. PURE STORAGE SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN
CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION
CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.

Pure Storage, Inc.


CTO Office
650 Castro Street, Suite 400
Mountain View, CA 94041

http://www.purestorage.com

This report is the proprietary information of Pure Storage, Inc.




But perhaps the most important advantage of this architecture is the degree to which it simplifies
disaster recovery, both for testing and for recovering from actual disasters. If application servers
are configured to distribute the workload over the entire cluster, recovery is zero-touch (requiring
no human intervention), zero-RPO, and zero-RTO. Moreover, systems using this architecture
self-heal when failed sites come back online.






Although Purity ActiveCluster is quite simple to configure, this report includes configuration of
the entire solution software stack, in part to provide the rationale for certain decisions. The
example section describes storage and RAC node configuration, installation of Oracle Grid
Infrastructure, ASM configuration, Oracle RAC database software installation, and
Swingbench installation, setup, and database creation. For readers familiar with all of these
except ActiveCluster configuration, the latter consists of four steps:

purearray connect --management-address 10.219.224.132 --type sync-replication

purepod create RAC-ACTIVECLUSTER
purepod add --array FLASHARRAY-B RAC-ACTIVECLUSTER

purevol create --size 2T RAC-ACTIVECLUSTER::ACTIVEDB-DATA-01

purehost create --preferred-array FLASHARRAY-A acrac1
purehost create --preferred-array FLASHARRAY-B acrac2
purehost connect --vol RAC-ACTIVECLUSTER::ACTIVEDB-DATA-01 acrac1
purehost connect --vol RAC-ACTIVECLUSTER::ACTIVEDB-DATA-01 acrac2
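The four steps above can also be captured as a small parameterized script so the same setup is repeatable. This is a dry-run sketch: by default it only prints the commands (set RUN="" to execute them on a live array); the array, pod, volume, and host names are the ones used in this example.

```shell
#!/bin/sh
# Dry-run sketch of the four ActiveCluster configuration steps.
# RUN is unset by default, so commands are printed via echo rather than run.
activecluster_setup() {
  run=${RUN-echo}
  pod=RAC-ACTIVECLUSTER
  vol=ACTIVEDB-DATA-01

  # 1. Connect the arrays for synchronous replication
  $run purearray connect --management-address 10.219.224.132 --type sync-replication
  # 2. Create a pod and stretch it to the second array
  $run purepod create "$pod"
  $run purepod add --array FLASHARRAY-B "$pod"
  # 3. Create the stretched volume inside the pod
  $run purevol create --size 2T "$pod::$vol"
  # 4. Create the hosts with preferred arrays and attach the volume to both
  $run purehost create --preferred-array FLASHARRAY-A acrac1
  $run purehost create --preferred-array FLASHARRAY-B acrac2
  $run purehost connect --vol "$pod::$vol" acrac1
  $run purehost connect --vol "$pod::$vol" acrac2
}
activecluster_setup
```

Running it without RUN set prints the eight commands for review before they are executed for real.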
This example uses one replication port per controller because the test configuration includes only
a single switch. A fully resilient production configuration would include two switches with one
replication port on each controller connected to each switch for maximum network resilience.
Note: do not enter the Replication Address; the arrays detect it automatically.
yum install iscsi-initiator-utils -y
yum install lsscsi

yum install device-mapper-multipath

chkconfig iscsid on
service iscsid start
chkconfig iscsi on
service iscsi start
Each node's iSCSI initiator name is recorded in /etc/iscsi/initiatorname.iscsi:
[root@acrac1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:a5338d66785

[root@acrac2 ~]# cat /etc/iscsi/initiatorname.iscsi


InitiatorName=iqn.1988-12.com.oracle:d12ce777e42a
FlashArrays also support FlashRecover asynchronous replication, which prioritizes local write
performance over RPO. FlashRecover is suitable for less critical write-intensive systems such as
extract-transform-load (ETL)-based reporting and decision support.
With both synchronous and asynchronous replication available, users can make the right
performance-RPO tradeoff for each application’s service level requirement.
Unlike rotating disk storage, the number of IOPS a FlashArray can deliver is independent of the
number of LUNs in an ASM diskgroup. This makes it possible to significantly reduce the number
of objects managed by ASM. Ten 1-terabyte LUNs provide the same performance as a single 10-
terabyte LUN. To minimize management overhead, therefore, the example uses a single LUN per
ASM diskgroup. Moreover, because (a) FlashArray volumes (LUNs) can be resized instantly, and
(b) ASM 12c supports device resizing, there is no need to add LUNs to an ASM diskgroup to
increase its capacity or performance.
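Growing a single-LUN diskgroup is therefore a short online procedure: resize the volume on the array, rescan on the hosts, and resize in ASM. The sketch below prints the commands rather than running them (set RUN="" to execute); `purevol extend`, the multipathd resize syntax, and the diskgroup name ACTIVEDB_DATA are assumptions to verify against your Purity version and ASM configuration.

```shell
#!/bin/sh
# Dry-run sketch: grow the single data LUN and let ASM adopt the new size.
resize_data_lun() {
  run=${RUN-echo}
  # 1. On either array: grow the stretched volume (instantaneous, no data movement)
  $run purevol extend --size 4T RAC-ACTIVECLUSTER::ACTIVEDB-DATA-01
  # 2. On each RAC node: rescan iSCSI sessions, then resize the multipath map
  $run iscsiadm -m session --rescan
  $run multipathd resize map dg_acrac_activedb_data
  # 3. As the grid user on one node, have ASM pick up the new device size:
  #      SQL> ALTER DISKGROUP ACTIVEDB_DATA RESIZE ALL;
}
resize_data_lun
```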


The following steps are specific to iSCSI arrays. Different steps would be required for arrays that
use Fibre Channel to connect to hosts.

The iSCSI target ports in this example are the ETH4 interfaces on each controller:

pureuser@FLASHARRAY-A> pureport list


Name WWN Portal IQN Failover
CT0.ETH4 - 192.168.150.120:3260 iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 -
CT1.ETH4 - 192.168.150.121:3260 iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 -

pureuser@FLASHARRAY-B> pureport list


Name WWN Portal IQN Failover
CT0.ETH4 - 192.168.150.130:3260 iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 -
CT1.ETH4 - 192.168.150.131:3260 iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 -

Configure a host network interface on the 192.168.150.* iSCSI subnet:
[root@acrac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-iscsi1

HWADDR=00:25:B5:A0:00:0F
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.150.25
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=iscsi1
UUID=2f75d5a4-b316-4234-aa35-2cec390d1abb
DEVICE=iscsi1
ONBOOT=yes

Bring the interface up with ifup:

[root@acrac1 ~]# ifup ifcfg-iscsi1


Discover the targets on each array with iscsiadm:
[root@acrac1 ~]# iscsiadm -m discovery -t st -p 192.168.150.120
192.168.150.120:3260,1 iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1
192.168.150.121:3260,1 iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1
[root@acrac1 ~]#
[root@acrac1 ~]# iscsiadm -m discovery -t st -p 192.168.150.130
192.168.150.130:3260,1 iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561
192.168.150.131:3260,1 iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561

iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 -p 192.168.150.120 -l


iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 -p 192.168.150.121 -l

iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 -p 192.168.150.130 -l


iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 -p 192.168.150.131 -l
Verify that the volumes are visible with lsscsi:

[root@acrac1 ~]# lsscsi | grep PURE


[1:0:0:1] disk PURE FlashArray 8888 /dev/sdc
[1:0:0:2] disk PURE FlashArray 8888 /dev/sde
[1:0:0:3] disk PURE FlashArray 8888 /dev/sdb
[1:0:0:4] disk PURE FlashArray 8888 /dev/sdd
[1:0:0:5] disk PURE FlashArray 8888 /dev/sdr
[2:0:0:1] disk PURE FlashArray 8888 /dev/sdg
[2:0:0:2] disk PURE FlashArray 8888 /dev/sdi
[2:0:0:3] disk PURE FlashArray 8888 /dev/sdf
[2:0:0:4] disk PURE FlashArray 8888 /dev/sdh
[2:0:0:5] disk PURE FlashArray 8888 /dev/sds
[3:0:0:1] disk PURE FlashArray 8888 /dev/sdl
[3:0:0:2] disk PURE FlashArray 8888 /dev/sdp
[3:0:0:3] disk PURE FlashArray 8888 /dev/sdj
[3:0:0:4] disk PURE FlashArray 8888 /dev/sdn
[3:0:0:5] disk PURE FlashArray 8888 /dev/sdt
[4:0:0:1] disk PURE FlashArray 8888 /dev/sdm
[4:0:0:2] disk PURE FlashArray 8888 /dev/sdq
[4:0:0:3] disk PURE FlashArray 8888 /dev/sdk
[4:0:0:4] disk PURE FlashArray 8888 /dev/sdo
[4:0:0:5] disk PURE FlashArray 8888 /dev/sdu
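The lsscsi output above can be summarized to confirm that every LUN is reachable over all four iSCSI sessions. This small awk filter (an illustrative helper, not part of the original procedure) counts devices per LUN number:

```shell
#!/bin/sh
# Count how many SCSI devices (paths) each LUN presents. With four iSCSI
# sessions, each of the five LUNs should appear four times.
count_paths() {
  # Field 2 (between [ and ]) is host:channel:target:lun; key on the LUN.
  awk -F'[][]' '{ split($2, a, ":"); lun[a[4]]++ }
                END { for (l in lun) printf "LUN %s: %d paths\n", l, lun[l] }'
}
# Usage on a live system: lsscsi | grep PURE | count_paths
```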
On acrac2, configure the same targets through a dedicated iSCSI interface:

iscsiadm -m iface -I pureiscsci -o new

iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 -p 192.168.150.120


iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 -p 192.168.150.121

iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 -p 192.168.150.130


iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 -p 192.168.150.131

iscsiadm -m iface -I pureiscsci -o update -n iface.initiatorname -v pureiscsiinit

iscsiadm -m node --targetname iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 -I pureiscsci -p 192.168.150.120 --login
iscsiadm -m node --targetname iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 -I pureiscsci -p 192.168.150.121 --login
iscsiadm -m node --targetname iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 -I pureiscsci -p 192.168.150.130 --login
iscsiadm -m node --targetname iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 -I pureiscsci -p 192.168.150.131 --login

iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 -p 192.168.150.120 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 -p 192.168.150.121 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 -p 192.168.150.130 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 -p 192.168.150.131 --op update -n node.startup -v automatic

iscsiadm -m node -L automatic
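Since the login and automatic-startup commands differ only in target and portal, they can be driven from a single table instead of being repeated per path. A dry-run sketch (commands are printed by default; set RUN="" to execute; the IQNs and portals are the ones discovered above):

```shell
#!/bin/sh
# Log in to every portal and mark each node record for automatic startup,
# driven from one target/portal table.
setup_iscsi_paths() {
  run=${RUN-echo}
  while read -r iqn portal; do
    $run iscsiadm -m node -T "$iqn" -p "$portal" --login
    $run iscsiadm -m node -T "$iqn" -p "$portal" --op update -n node.startup -v automatic
  done <<'EOF'
iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 192.168.150.120
iqn.2010-06.com.purestorage:flasharray.654655f3e760c6d1 192.168.150.121
iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 192.168.150.130
iqn.2010-06.com.purestorage:flasharray.42050ed4de8bf561 192.168.150.131
EOF
}
setup_iscsi_paths
```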

[root@acrac1 ~]# lsscsi | grep PURE


--- response not shown ---

[root@acrac1 ~]# vi /etc/udev/rules.d/99-pure-storage.rules

# Recommended settings for Pure Storage FlashArray.

# Use noop scheduler for high-performance solid-state storage


ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection


ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU


ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds


ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{model}=="FlashArray      ", RUN+="/bin/sh -c 'echo 60 > /sys/$DEVPATH/device/timeout'"
service multipathd start

[root@acrac1 rules.d]# multipath -ll


3624a93702e81aa5e8c3c4aec00011010 dm-7 PURE ,FlashArray
size=2.0T features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=0 status=active
|- 1:0:0:1 sdc 8:32 active undef running
|- 2:0:0:1 sdg 8:96 active undef running
|- 3:0:0:1 sdl 8:176 active undef running
`- 4:0:0:1 sdm 8:192 active undef running
3624a93702e81aa5e8c3c4aec00011014 dm-10 PURE ,FlashArray
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=0 status=active
|- 1:0:0:4 sdd 8:48 active undef running
|- 2:0:0:4 sdh 8:112 active undef running
|- 3:0:0:4 sdn 8:208 active undef running
`- 4:0:0:4 sdo 8:224 active undef running
3624a93702e81aa5e8c3c4aec00011013 dm-9 PURE ,FlashArray
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=0 status=active
|- 1:0:0:3 sdb 8:16 active undef running
|- 2:0:0:3 sdf 8:80 active undef running
|- 3:0:0:3 sdj 8:144 active undef running
`- 4:0:0:3 sdk 8:160 active undef running
3624a93702e81aa5e8c3c4aec00011012 dm-8 PURE ,FlashArray
size=3.0T features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=0 status=active
|- 1:0:0:2 sde 8:64 active undef running
|- 2:0:0:2 sdi 8:128 active undef running
|- 3:0:0:2 sdp 8:240 active undef running
`- 4:0:0:2 sdq 65:0 active undef running

Note that all paths sit in a single priority group. To give the local array's paths (sdc, sdg) priority via ALUA and to assign stable aliases, configure /etc/multipath.conf:

[root@acrac1 etc]# vi multipath.conf

defaults {
find_multipaths yes
polling_interval 10
}

devices {
device {
vendor "PURE"
path_selector "queue-length 0"
path_grouping_policy group_by_prio
path_checker tur
fast_io_fail_tmo 10
dev_loss_tmo 60
no_path_retry 0
hardware_handler "1 alua"
prio alua
failback immediate
}
}

multipaths {
multipath {
wwid 3624a93702e81aa5e8c3c4aec00011014
alias dg_acrac_crs
}
multipath {
wwid 3624a93702e81aa5e8c3c4aec00011cbd
alias dg_acrac_mgmt
}
multipath {
wwid 3624a93702e81aa5e8c3c4aec00011013
alias dg_acrac_activedb_control_redo
}
multipath {
wwid 3624a93702e81aa5e8c3c4aec00011010
alias dg_acrac_activedb_data
}
multipath {
wwid 3624a93702e81aa5e8c3c4aec00011012
alias dg_acrac_activedb_fra
}
}

blacklist {
}

After copying multipath.conf to the second node, restart multipathd on both nodes and verify the aliases and ALUA priority groups:

service multipathd restart

[root@acrac1 ~]# multipath -ll


dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE ,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 1:0:0:1 sdc 8:32 active ready running
| `- 2:0:0:1 sdg 8:96 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 3:0:0:1 sdl 8:176 active ready running
`- 4:0:0:1 sdm 8:192 active ready running
dg_acrac_mgmt (3624a93702e81aa5e8c3c4aec00011cbd) dm-11 PURE ,FlashArray
size=50G features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 1:0:0:5 sdr 65:16 active ready running
| `- 2:0:0:5 sds 65:32 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 3:0:0:5 sdt 65:48 active ready running
`- 4:0:0:5 sdu 65:64 active ready running
dg_acrac_activedb_fra (3624a93702e81aa5e8c3c4aec00011012) dm-8 PURE ,FlashArray
size=3.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 1:0:0:2 sde 8:64 active ready running
| `- 2:0:0:2 sdi 8:128 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 3:0:0:2 sdp 8:240 active ready running
`- 4:0:0:2 sdq 65:0 active ready running
dg_acrac_activedb_control_redo (3624a93702e81aa5e8c3c4aec00011013) dm-9 PURE ,FlashArray
size=500G features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 1:0:0:3 sdb 8:16 active ready running
| `- 2:0:0:3 sdf 8:80 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 3:0:0:3 sdj 8:144 active ready running
`- 4:0:0:3 sdk 8:160 active ready running
dg_acrac_crs (3624a93702e81aa5e8c3c4aec00011014) dm-10 PURE ,FlashArray
size=10G features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 1:0:0:4 sdd 8:48 active ready running
| `- 2:0:0:4 sdh 8:112 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 3:0:0:4 sdn 8:208 active ready running
`- 4:0:0:4 sdo 8:224 active ready running
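With ALUA in effect, each map should show the local array's path group (prio=50) as active. This illustrative awk filter (not part of the original procedure) extracts the active group's priority per map so the check can be scripted:

```shell
#!/bin/sh
# Report each multipath map's active path-group priority. With ActiveCluster
# and ALUA configured, the active group should be prio=50 on every map.
active_prio() {
  awk '/dm-[0-9]+ PURE/ { map = $1 }
       /status=active/  { if (match($0, /prio=[0-9-]+/))
                            print map, substr($0, RSTART, RLENGTH) }'
}
# Usage on a live system: multipath -ll | active_prio
```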

vi /etc/udev/rules.d/99-oracle-asmdevices.rules

#All volumes whose names start with dg_acrac_* #


ENV{DM_NAME}=="dg_acrac*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"

#All volumes which starts with dg_acrac_activedb* #


ENV{DM_NAME}=="dg_acrac_activedb*", OWNER:="oracle", GROUP:="oinstall", MODE:="660"

service multipathd restart


[root@acrac2 rules.d]# ls -lL /dev/mapper/dg*
brw-rw----. 1 grid asmadmin 252, 9 Aug 22 18:40 /dev/mapper/dg_acrac_activedb_control_redo
brw-rw----. 1 grid asmadmin 252, 7 Aug 22 18:40 /dev/mapper/dg_acrac_activedb_data
brw-rw----. 1 grid asmadmin 252, 8 Aug 22 18:40 /dev/mapper/dg_acrac_activedb_fra
brw-rw----. 1 grid asmadmin 252, 10 Aug 22 18:40 /dev/mapper/dg_acrac_crs
brw-rw----. 1 grid asmadmin 252, 11 Sep 5 20:33 /dev/mapper/dg_acrac_mgmt
Partition each device with fdisk, starting with dg_acrac_crs:
[root@acrac1 ~]# fdisk /dev/mapper/dg_acrac_crs
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0x054dbe73.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (8192-20971519, default 8192):
Using default value 8192
Last sector, +sectors or +size{K,M,G} (8192-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
Repeat the same fdisk steps for the remaining devices:

fdisk /dev/mapper/dg_acrac_mgmt
fdisk /dev/mapper/dg_acrac_activedb_control_redo
fdisk /dev/mapper/dg_acrac_activedb_data
fdisk /dev/mapper/dg_acrac_activedb_fra
Create a service that runs on both instances (from either acrac1 or acrac2):
srvctl add service -db ACTIVEDB -service SWINGBENCH_AB -preferred "ACTIVEDB1,ACTIVEDB2" \
  -failovertype TRANSACTION -replay_init_time 300 -failoverretry 30 -failoverdelay 3 \
  -commit_outcome TRUE -failover_restore LEVEL1 -drain_timeout 60 -stopoption IMMEDIATE

srvctl start service -db ACTIVEDB -service SWINGBENCH_AB.puresg.com

Create a service preferring instance ACTIVEDB1 (on acrac1), with ACTIVEDB2 available:
srvctl add service -db ACTIVEDB -service SWINGBENCH_A.puresg.com -preferred ACTIVEDB1 \
  -available ACTIVEDB2 -failovertype TRANSACTION -replay_init_time 300 -failoverretry 30 \
  -failoverdelay 3 -commit_outcome TRUE -failover_restore LEVEL1 -drain_timeout 60 \
  -stopoption IMMEDIATE

srvctl start service -db ACTIVEDB -service SWINGBENCH_A.puresg.com


Create a service preferring instance ACTIVEDB2 (on acrac2), with ACTIVEDB1 available:
srvctl add service -db ACTIVEDB -service SWINGBENCH_B.puresg.com -preferred ACTIVEDB2 \
  -available ACTIVEDB1 -failovertype TRANSACTION -replay_init_time 300 -failoverretry 30 \
  -failoverdelay 3 -commit_outcome TRUE -failover_restore LEVEL1 -drain_timeout 60 \
  -stopoption IMMEDIATE

srvctl start service -db ACTIVEDB -service SWINGBENCH_B.puresg.com
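The three service definitions differ only in name, preferred, and available instances, so they can be generated from one table to avoid drift in the shared TAF parameters. A dry-run sketch (commands are printed by default; the service and instance names are this example's):

```shell
#!/bin/sh
# Create and start the three application services from one table.
add_services() {
  run=${RUN-echo}
  while IFS='|' read -r svc preferred available; do
    set -- -db ACTIVEDB -service "$svc" -preferred "$preferred"
    [ -n "$available" ] && set -- "$@" -available "$available"
    $run srvctl add service "$@" \
         -failovertype TRANSACTION -replay_init_time 300 -failoverretry 30 \
         -failoverdelay 3 -commit_outcome TRUE -failover_restore LEVEL1 \
         -drain_timeout 60 -stopoption IMMEDIATE
    $run srvctl start service -db ACTIVEDB -service "$svc"
  done <<'EOF'
SWINGBENCH_AB.puresg.com|ACTIVEDB1,ACTIVEDB2|
SWINGBENCH_A.puresg.com|ACTIVEDB1|ACTIVEDB2
SWINGBENCH_B.puresg.com|ACTIVEDB2|ACTIVEDB1
EOF
}
add_services
```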

SQL> GRANT EXECUTE ON DBMS_APP_CONT TO SOE;

Point Swingbench ($SWINGHOME/bin/swingbench) at the cluster by editing the connect string in $SWINGHOME/bin/swingbench.xml:

<ConnectString>(DESCRIPTION= (TRANSPORT_CONNECT_TIMEOUT=5) (RETRY_COUNT=6) (FAILOVER=ON)


(ADDRESS = (PROTOCOL = TCP) (HOST = acrac-scan)
(PORT = 1521) ) (CONNECT_DATA= (SERVER = DEDICATED) (SERVICE_NAME =
SWINGBENCH_AB.puresg.com))) </ConnectString>



select inst_id, count(*) from gv$session where username = 'SOE' group by inst_id;
INST_ID COUNT(*)
---------- ----------
1 250
2 250





sudo multipath -ll dg_acrac_activedb_data

dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE


,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac1 |-+- policy='queue-length 0' prio=50 status=active
| |- 1:0:0:1 sdc 8:32 active ready running
| `- 2:0:0:1 sdg 8:96 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 4:0:0:1 sdm 8:192 active ready running
`- 3:0:0:1 sdl 8:176 active ready running
dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE
,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac2 |-+- policy='queue-length 0' prio=50 status=active
| |- 3:0:0:1 sdj 8:144 active ready running
| `- 4:0:0:1 sdo 8:224 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 1:0:0:1 sdc 8:32 active ready running
`- 2:0:0:1 sdg 8:96 active ready running
select inst_id, count(*) from gv$session where username = 'SOE' group by inst_id;
INST_ID COUNT(*)
---------- ----------
1 250
2 250

sudo multipath -ll dg_acrac_activedb_data

dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE


,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac1 |-+- policy='queue-length 0' prio=-1 status=enabled
| |- 3:0:0:1 sdl 8:176 active ready running
| `- 4:0:0:1 sdm 8:192 active ready running
`-+- policy='queue-length 0' prio=50 status=active
|- 1:0:0:1 sdc 8:32 active ready running
`- 2:0:0:1 sdg 8:96 active ready running
dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE
,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac2 |-+- policy='queue-length 0' prio=0 status=enabled
| |- 3:0:0:1 sdj 8:144 failed faulty running
| `- 4:0:0:1 sdo 8:224 failed faulty running
`-+- policy='queue-length 0' prio=10 status=active
|- 1:0:0:1 sdc 8:32 active ready running
`- 2:0:0:1 sdg 8:96 active ready running
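The failed/faulty state above is easy to watch for in a monitoring script. This illustrative filter (not part of the original procedure) prints the map and device name for every failed path in `multipath -ll` output:

```shell
#!/bin/sh
# Flag failed paths, as seen on acrac2 above while its local array was
# unreachable. Prints "map device" for every failed/faulty path.
failed_paths() {
  awk '/dm-[0-9]+ PURE/ { map = $1 }
       /failed faulty/  { print map, $(NF-4) }'   # $(NF-4) is the device name
}
# Usage on a live system: multipath -ll | failed_paths
```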



But perhaps the most important result demonstrated by this test is that no administrative action
whatsoever was required during RAC node acrac2's switch to using FLASHARRAY-A.
select inst_id, count(*) from gv$session where username = 'SOE' group by inst_id;
INST_ID COUNT(*)
---------- ----------
1 250
2 250
sudo multipath -ll dg_acrac_activedb_data

dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE


,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac1 |-+- policy='queue-length 0' prio=-1 status=enabled
| |- 3:0:0:1 sdl 8:176 active ready running
| `- 4:0:0:1 sdm 8:192 active ready running
`-+- policy='queue-length 0' prio=50 status=active
|- 1:0:0:1 sdc 8:32 active ready running
`- 2:0:0:1 sdg 8:96 active ready running
dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE
,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac2 |-+- policy='queue-length 0' prio=-1 status=enabled
| |- 4:0:0:1 sdo 8:224 active ready running
| `- 3:0:0:1 sdj 8:144 active ready running
`-+- policy='queue-length 0' prio=10 status=active
|- 1:0:0:1 sdc 8:32 active ready running
`- 2:0:0:1 sdg 8:96 active ready running
sudo multipath -ll dg_acrac_activedb_data

dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE


,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac1 |-+- policy='queue-length 0' prio=50 status=active
| |- 1:0:0:1 sdc 8:32 active ready running
| `- 2:0:0:1 sdg 8:96 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 4:0:0:1 sdm 8:192 active ready running
`- 3:0:0:1 sdl 8:176 active ready running
dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE
,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac2 |-+- policy='queue-length 0' prio=50 status=active
| |- 3:0:0:1 sdj 8:144 active ready running
| `- 4:0:0:1 sdo 8:224 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 1:0:0:1 sdc 8:32 active ready running
`- 2:0:0:1 sdg 8:96 active ready running





puresgn5k# show interface eth1/19-22 brief

--------------------------------------------------------------------------------
Ethernet VLAN Type Mode Status Reason Speed Port
Interface Ch #
--------------------------------------------------------------------------------
Eth1/19 40 eth trunk up none 10G(D) --
Eth1/20 40 eth trunk up none 10G(D) --
Eth1/21 40 eth trunk up none 10G(D) --
Eth1/22 40 eth trunk up none 10G(D) --

select inst_id, count(*) from gv$session where username = 'SOE' group by inst_id;
INST_ID COUNT(*)
---------- ----------
1 250
2 250

puresgn5k# config t
Enter configuration commands, one per line. End with CNTL/Z.
puresgn5k(config)# interface eth1/19-22
puresgn5k(config-if-range)# shutdown
puresgn5k(config-if-range)# show interface eth1/19-22 brief

--------------------------------------------------------------------------------
Ethernet VLAN Type Mode Status Reason Speed Port
Interface Ch #
--------------------------------------------------------------------------------
Eth1/19 40 eth trunk down Administratively down 10G(D) --
Eth1/20 40 eth trunk down Administratively down 10G(D) --
Eth1/21 40 eth trunk down Administratively down 10G(D) --
Eth1/22 40 eth trunk down Administratively down 10G(D) --
select inst_id, count(*) from gv$session where username = 'SOE' group by inst_id;
INST_ID COUNT(*)
---------- ----------
1 250
2 250
sudo multipath -ll dg_acrac_activedb_data

dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE


,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=-1 status=enabled
acrac1 | |- 1:0:0:1 sdc 8:32 active ready running
| `- 2:0:0:1 sdg 8:96 active ready running
`-+- policy='queue-length 0' prio=10 status=active
|- 3:0:0:1 sdl 8:176 active ready running
`- 4:0:0:1 sdm 8:192 active ready running
dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE
,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
acrac2 | |- 4:0:0:1 sdo 8:224 active ready running
| `- 3:0:0:1 sdj 8:144 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 1:0:0:1 sdc 8:32 active ready running
`- 2:0:0:1 sdg 8:96 active ready running






• Re-enable the switch ports:
puresgn5k# config t
Enter configuration commands, one per line. End with CNTL/Z.
puresgn5k(config)# interface eth1/19-22
puresgn5k(config-if-range)# no shutdown
puresgn5k(config-if-range)# show interface eth1/19-22 brief

--------------------------------------------------------------------------------
Ethernet VLAN Type Mode Status Reason Speed Port
Interface Ch #
--------------------------------------------------------------------------------
Eth1/19 40 eth trunk up none 10G(D) --
Eth1/20 40 eth trunk up none 10G(D) --
Eth1/21 40 eth trunk up none 10G(D) --
Eth1/22 40 eth trunk up none 10G(D) --

select inst_id, count(*) from gv$session where username = 'SOE' group by inst_id;
INST_ID COUNT(*)
---------- ----------
1 250
2 250
sudo multipath -ll dg_acrac_activedb_data

dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE


,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac1 |-+- policy='queue-length 0' prio=50 status=active
| |- 1:0:0:1 sdc 8:32 active ready running
| `- 2:0:0:1 sdg 8:96 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 3:0:0:1 sdl 8:176 active ready running
`- 4:0:0:1 sdm 8:192 active ready running
dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE
,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac2 |-+- policy='queue-length 0' prio=50 status=active
| |- 4:0:0:1 sdo 8:224 active ready running
| `- 3:0:0:1 sdj 8:144 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 1:0:0:1 sdc 8:32 active ready running
`- 2:0:0:1 sdg 8:96 active ready running



select inst_id, count(*) from gv$session where username = 'SOE' group by inst_id;
INST_ID COUNT(*)
---------- ----------
1 249
2 251
On acrac2, identify the instance's PMON process (ora_pmon_ACTIVEDB2) and kill it to simulate an instance failure:

oracle@acrac2:/home/oracle [ACTIVEDB2]
> ps -ef | grep pmon
oracle 2561 1 0 Sep06 ? 00:00:10 ora_pmon_ACTIVEDB2
grid 18643 1 0 Aug26 ? 00:00:57 asm_pmon_+ASM2
grid 20074 1 0 Aug29 ? 00:00:37 mdb_pmon_-MGMTDB
oracle@acrac2:/home/oracle [ACTIVEDB2]
> date ; kill -9 2561
Thu Sep 7 17:42:02 SGT 2017

All 500 sessions failed over from instance 2 (acrac2) to instance 1 (acrac1):

select inst_id, count(*) from gv$session where username = 'SOE' group by inst_id;
INST_ID COUNT(*)
---------- ----------
1 500

sudo multipath -ll dg_acrac_activedb_data

dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE


,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac1 |-+- policy='queue-length 0' prio=50 status=active
| |- 1:0:0:1 sdc 8:32 active ready running
| `- 2:0:0:1 sdg 8:96 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 3:0:0:1 sdl 8:176 active ready running
`- 4:0:0:1 sdm 8:192 active ready running
dg_acrac_activedb_data (3624a93702e81aa5e8c3c4aec00011010) dm-7 PURE
,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
acrac2 |-+- policy='queue-length 0' prio=50 status=active
| |- 4:0:0:1 sdo 8:224 active ready running
| `- 3:0:0:1 sdj 8:144 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 1:0:0:1 sdc 8:32 active ready running
`- 2:0:0:1 sdg 8:96 active ready running

select inst_id, count(*) from gv$session where username = 'SOE' group by inst_id;
INST_ID COUNT(*)
---------- ----------
1 499
2 1
#!/bin/bash
# Set up an OEL 7.x host with prerequisites for Oracle installation.
# Storage devices may need to be adjusted as appropriate.
# To be run as root.

##
### Create Oracle user accounts and groups
##
# Groups
echo "dba::501:oracle,oinstall,oemop" >> /etc/group
echo "oinstall::502:oracle" >> /etc/group

groupadd -g 503 oper


groupadd -g 504 asmadmin
groupadd -g 505 asmdba
groupadd -g 506 asmoper
groupadd -g 507 backupdba
groupadd -g 508 dgdba

# Users
useradd -m -u 555 -g oinstall -G dba,asmadmin,asmdba,oper -d /home/oracle -s /bin/bash -c "Oracle Superuser" oracle
useradd -m -u 556 -g oinstall -G dba,asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Oracle Grid" grid

passwd -x -1 oracle
passwd -x -1 grid

passwd oracle << EOF


change_th15_p@ssw0rd
change_th15_p@ssw0rd
EOF

passwd grid << EOF


change_th15_p@ssw0rd
change_th15_p@ssw0rd
EOF

# Filesystem creation for Oracle


# In this example we assume the OS is already installed and
# we have at least 100GB free on the standard 'ol' volume group

# Create Volume & Filesystem for Grid Infrastructure Binaries


lvcreate -L40g -n oragridvol ol
mkfs -t ext4 /dev/ol/oragridvol
mkdir -p /u01/app/grid
chmod 775 /u01/app/grid
echo "/dev/ol/oragridvol /u01/app/grid ext4 defaults,discard 1 2 " >> /etc/fstab
chown -R grid:oinstall /u01/app/grid
mount /u01/app/grid
chown -R grid:oinstall /u01/app/grid

# Optional Volume & Filesystem for OEM Agent (skip if OEM is not used in the
# environment)
lvcreate -L5g -n oraagentvol ol
mkfs -t ext4 /dev/ol/oraagentvol
mkdir -p /u01/app/oracle/product/agent
chmod 775 /u01/app/oracle/product/agent
echo "/dev/ol/oraagentvol /u01/app/oracle/product/agent ext4 defaults,discard 1 2 " >> /etc/fstab
chown -R oracle:oinstall /u01/app/oracle/product/agent
mount /u01/app/oracle/product/agent
chown -R oracle:oinstall /u01/app/oracle/product/agent

# DB Binaries
lvcreate -L25g -n oradbvol ol
mkfs -t ext4 /dev/ol/oradbvol
mkdir -p /u01/app/oracle/product/db
chmod 775 /u01/app/oracle/product/db
echo "/dev/ol/oradbvol /u01/app/oracle/product/db ext4 defaults,discard 1 2 " >> /etc/fstab
chown -R oracle:oinstall /u01/app/oracle/product/db
mount /u01/app/oracle/product/db
chown -R oracle:oinstall /u01/app/oracle/product/db

# Create local Volume & Filesystem for diagnostics and log files area
lvcreate -L25g -n oralocalvol ol
mkfs -t ext4 /dev/ol/oralocalvol
mkdir -p /u01/app/oracle/local
chmod 775 /u01/app/oracle/local
echo "/dev/ol/oralocalvol /u01/app/oracle/local ext4 defaults,discard 1 2 " >> /etc/fstab
chown -R oracle:oinstall /u01/app/oracle/local
mount /u01/app/oracle/local
chown -R oracle:oinstall /u01/app/oracle/local

##
### Install RPM packages required by Oracle
##
umask 022
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
ORACLE_LIST="binutils compat-libstdc++ elfutils-libelf elfutils-libelf-devel expat gcc
gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel libgcc
libstdc++ libstdc++-devel make pdksh sysstat unixODBC unixODBC-devel oracleasm
oracleasmlib oracleasm-support cvuqdisk openssh-clients"

for rpm in $ORACLE_LIST
do
  yum -y install $rpm
done

##
### Oracle profile
##
cat >> /home/oracle/.bash_profile <<EOF

export ORACLE_SID=
export ORACLE_HOSTNAME=
export ORACLE_BASE=/u01/app/oracle/global
export GRID_HOME=/u01/app/grid/product/12cR2
export DB_HOME=/u01/app/oracle/product/db/12cR2
export ORA_INV=/u01/app/grid/oraInventory
export ADR_HOME=\$ORACLE_BASE/diag
export ORA_ADMIN=\$ORACLE_BASE/admin
export AGENT_HOME=/u01/app/oracle/product/agent/agent13c/agent_13.1.0.0.0
export ORACLE_HOME=\$DB_HOME
export TNS_ADMIN=\$GRID_HOME/network/admin
export LDAP_ADMIN=\$TNS_ADMIN
export SQLPATH=\$ORACLE_HOME/sqlscripts
export NLS_LANG='english_united kingdom.we8iso8859p1'
export NLS_DATE_FORMAT='DD Mon YYYY HH24:MI:SS'
export LD_LIBRARY_PATH=\$ORACLE_HOME/lib
export PATH=\$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/etc:/usr/local/bin:/sbin:\$PATH
export EDITOR=vi
stty erase ^H

umask 022
ulimit -n 65536
ulimit -u 16384

alias agenthome='export ORACLE_HOME=\$AGENT_HOME;export LD_LIBRARY_PATH=\$ORACLE_HOME/lib;export PATH=\$ORACLE_HOME/bin:\$PATH'
alias gridhome='export ORACLE_HOME=\$GRID_HOME;export LD_LIBRARY_PATH=\$ORACLE_HOME/lib;export PATH=\$ORACLE_HOME/bin:\$PATH'
alias dbshome='export ORACLE_HOME=\$DB_HOME;export LD_LIBRARY_PATH=\$ORACLE_HOME/lib;export PATH=\$ORACLE_HOME/bin:\$PATH'

export PS1='\`/usr/bin/whoami\`@\${HOSTNAME}:\${PWD} [\${ORACLE_SID}]
> '

EOF

##
### Grid profile
##
cat >> /home/grid/.bash_profile <<EOF
#
# These values need setting by the DBA
#
export ORACLE_SID=+ASM1
export ORACLE_HOSTNAME=
#
#
export ORACLE_BASE=/u01/app/grid/local
export GRID_HOME=/u01/app/grid/product/12cR2
export DB_HOME=/u01/app/oracle/product/db/12cR2
export ORA_INV=/u01/app/grid/oraInventory
export ADR_HOME=\$ORACLE_BASE/diag
export ORA_ADMIN=\$ORACLE_BASE/admin
export AGENT_HOME=/u01/app/oracle/product/agent/agent13c/agent_13.1.0.0.0
export ORACLE_HOME=\$GRID_HOME
export TNS_ADMIN=\$GRID_HOME/network/admin
export LDAP_ADMIN=\$TNS_ADMIN
export NLS_LANG='english_united kingdom.we8iso8859p1'
export NLS_DATE_FORMAT='DD Mon YYYY HH24:MI:SS'
export LD_LIBRARY_PATH=\$ORACLE_HOME/lib
export PATH=\$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/etc:/usr/local/bin:/sbin:\$PATH
export EDITOR=vi
stty erase ^H

umask 022
ulimit -n 65536
ulimit -u 16384

alias agenthome='export ORACLE_HOME=\$AGENT_HOME;export LD_LIBRARY_PATH=\$ORACLE_HOME/lib;export PATH=\$ORACLE_HOME/bin:\$PATH'
alias gridhome='export ORACLE_HOME=\$GRID_HOME;export LD_LIBRARY_PATH=\$ORACLE_HOME/lib;export PATH=\$ORACLE_HOME/bin:\$PATH'
alias dbshome='export ORACLE_HOME=\$DB_HOME;export LD_LIBRARY_PATH=\$ORACLE_HOME/lib;export PATH=\$ORACLE_HOME/bin:\$PATH'

export PS1='\`/usr/bin/whoami\`@\${HOSTNAME}:\${PWD} [\${ORACLE_SID}]
> '

EOF

##
### Directories/Permissions
##
chown oracle:oinstall /home/oracle/.bash_profile
chown grid:oinstall /home/grid/.bash_profile
chmod 750 /home/oracle /home/grid
chmod 640 /home/oracle/.bash_profile /home/grid/.bash_profile

mkdir -p /u01/app/oracle/product/db
chown -R oracle:oinstall /u01/app/oracle

mkdir -p /u01/app/grid/product/
chown -R grid:oinstall /u01/app/grid

mkdir -p /u01/app/grid/oraInventory
chown grid:oinstall /u01/app/grid/oraInventory

mkdir -p /var/opt/oracle
chown -R oracle:oinstall /var/opt/oracle
ln -s /etc/oratab /var/opt/oracle/oratab

##
### Set oracle/grid ulimits in /etc/profile
##
echo 'if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -u 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
' >> /etc/profile
##
### /etc/security/limits.conf
##
echo >> /etc/security/limits.conf
echo "# Oracle Limits" >> /etc/security/limits.conf
echo "oracle soft nproc 2047" >> /etc/security/limits.conf
echo "oracle hard nproc 16384" >> /etc/security/limits.conf
echo "oracle soft nofile 4096" >> /etc/security/limits.conf
echo "oracle hard nofile 65536" >> /etc/security/limits.conf
echo "oracle soft memlock 25165824" >> /etc/security/limits.conf
echo "oracle hard memlock 25165824" >> /etc/security/limits.conf
echo "grid soft nproc 2047" >> /etc/security/limits.conf
echo "grid hard nproc 16384" >> /etc/security/limits.conf
echo "grid soft nofile 4096" >> /etc/security/limits.conf
echo "grid hard nofile 65536" >> /etc/security/limits.conf
echo "grid soft memlock 25165824" >> /etc/security/limits.conf
echo "grid hard memlock 25165824" >> /etc/security/limits.conf
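
The memlock values above are expressed in kB. As a sanity check (an illustrative sketch, not part of the build script), 25165824 kB works out to 24 GB, and the limit should be at least as large as the intended SGA:

```shell
# memlock is set in kB in limits.conf; confirm 24 GB expressed in kB
# matches the 25165824 value used above.
MEMLOCK_KB=$((24 * 1024 * 1024))
echo "$MEMLOCK_KB"
```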

##
### /etc/sysctl.conf
##
echo >> /etc/sysctl.conf
echo "# Oracle Settings" >> /etc/sysctl.conf

# pagecache
echo "vm.pagecache=30" >> /etc/sysctl.conf

# Shared Memory
echo "kernel.shmmni=4096" >> /etc/sysctl.conf
# The default build sets these high enough
# echo "kernel.shmall=17039360" >> /etc/sysctl.conf
# echo "kernel.shmmax=69793218560" >> /etc/sysctl.conf

# kernel.sem
echo "kernel.sem=268 32000 100 256" >> /etc/sysctl.conf

# Disable ASLR
echo "kernel.randomize_va_space=0" >> /etc/sysctl.conf
echo "kernel.exec-shield=0" >> /etc/sysctl.conf

# fs
echo "fs.file-max=6815744" >> /etc/sysctl.conf
echo "fs.aio-max-nr=1048576" >> /etc/sysctl.conf

# Huge Pages (2MB page size)
# Calculate total RAM (kB) and allocate 50% of it as 2MB Huge Pages
TMEM=`grep MemTotal /proc/meminfo | awk '{ print $2 }'`
HMEM=$(($TMEM/4096))
echo "vm.nr_hugepages=$HMEM" >> /etc/sysctl.conf
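
The divisor of 4096 can look surprising; a quick worked example (with an assumed MemTotal value, purely illustrative) shows why it yields 50% of RAM:

```shell
# MemTotal is reported in kB and a 2MB huge page is 2048 kB, so
# kB/2048 pages would cover all of RAM and kB/4096 covers half.
TMEM=16384000            # assumed example: ~16 GB reported in kB
HMEM=$((TMEM / 4096))    # number of 2MB pages covering 50% of RAM
echo "$HMEM"
```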

# Aggressive swapping
echo "vm.swappiness=100" >> /etc/sysctl.conf

# net
echo "net.core.rmem_default=262144" >> /etc/sysctl.conf
echo "net.core.wmem_default=262144" >> /etc/sysctl.conf
echo "net.core.rmem_max=4194304" >> /etc/sysctl.conf
echo "net.core.wmem_max=1048576" >> /etc/sysctl.conf
echo "net.ipv4.ip_local_port_range=9000 65500" >> /etc/sysctl.conf

echo "# END OF Oracle Settings" >> /etc/sysctl.conf


echo >> /etc/sysctl.conf

# Load the settings
sysctl -p

##
### Pam
##
echo "session required pam_limits.so" >> /etc/pam.d/login
echo "session required pam_limits.so" >> /etc/pam.d/sshd

##
### SUDO
##
echo "oracle ALL=(root) NOPASSWD: ALL" >> /etc/sudoers
echo "grid ALL=(root) NOPASSWD: ALL" >> /etc/sudoers
echo 'Defaults:oracle !requiretty' >> /etc/sudoers.d/sudoers_local

chmod o+x /usr/bin/sudo
ln -s /bin/sudo /usr/bin/sudo
ln -s /usr/bin/sudo /usr/local/bin/sudo

##
### Disable tty requirement for ssh sudo sessions for Oracle EM provisioning
##
chmod u+w /etc/sudoers
ex /etc/sudoers << EOF > /dev/null 2>&1
g/Defaults requiretty/s//#Defaults requiretty/g
wq
EOF
chmod u-w /etc/sudoers

chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle

cd $GRID_HOME
unzip linuxx64_12201_grid_home.zip

asmcmd afd_label CRS /dev/mapper/dg_acrac_crs --init
asmcmd afd_label MGMT /dev/mapper/dg_acrac_mgmt --init
asmcmd afd_label CONTROL_REDO /dev/mapper/dg_acrac_activedb_control_redo --init
asmcmd afd_label DATA /dev/mapper/dg_acrac_activedb_data --init
asmcmd afd_label FRA /dev/mapper/dg_acrac_activedb_fra --init

asmcmd afd_lslbl /dev/mapper/dg_acrac_crs
asmcmd afd_lslbl /dev/mapper/dg_acrac_mgmt
asmcmd afd_lslbl /dev/mapper/dg_acrac_activedb_control_redo
asmcmd afd_lslbl /dev/mapper/dg_acrac_activedb_data
asmcmd afd_lslbl /dev/mapper/dg_acrac_activedb_fra
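
The label and verify steps repeat the same label-to-device pairs. A dry-run sketch (using the same device names as above, but only printing the commands for review rather than running `asmcmd`) keeps the mapping in one place:

```shell
# One label:device map driving both afd_label and afd_lslbl steps;
# this only generates the command lines so they can be inspected first.
LABEL_MAP="CRS:/dev/mapper/dg_acrac_crs
MGMT:/dev/mapper/dg_acrac_mgmt
CONTROL_REDO:/dev/mapper/dg_acrac_activedb_control_redo
DATA:/dev/mapper/dg_acrac_activedb_data
FRA:/dev/mapper/dg_acrac_activedb_fra"

CMDS=$(echo "$LABEL_MAP" | while IFS=: read -r lbl dev; do
  echo "asmcmd afd_label $lbl $dev --init"
done)
echo "$CMDS"
```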

cd $GRID_HOME
./gridSetup.sh

Note: In dialog B.17 there is a known bug in the NTP check (PRVG-13602), confirmed by Oracle Support under bug number 24314803, so any NTP errors raised by these checks can be ignored.

asmca
runInstaller
###########################
# SWINGBENCH CONFIG
export JAVAHOME=/u01/app/oracle/local/benchmark_tools/jdk1.8.0_141
export SWINGHOME=/u01/app/oracle/local/benchmark_tools/swingbench
export ANTHOME=$SWINGHOME/lib
export LD_LIBRARY_PATH=/u01/app/oracle/local/benchmark_tools/instantclient_12_2:${LD_LIBRARY_PATH}
export CLASSPATH=$JAVAHOME/lib/rt.jar:$JAVAHOME/lib/tools.jar:$ORACLE_HOME/jdbc/lib/ojdbc14.jar:$SWINGHOME/lib/mytransactions.jar:${SWINGHOME}/lib/swingbench.jar:$ANTHOME/ant.jar
# END OF SWINGBENCH CONFIG
##########################
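
A missing jar in that long CLASSPATH is a common cause of Swingbench launch failures. A small pre-flight sketch can flag entries that do not exist on disk; the sample CLASSPATH below is illustrative (in practice, run the loop against the real `$CLASSPATH` after sourcing the config above):

```shell
# Flag CLASSPATH entries that are not present on disk.
# The sample value mixes one path that exists with one that does not.
SAMPLE_CLASSPATH="/tmp:/no/such/dir/ojdbc14.jar"
MISSING=""
OLDIFS=$IFS; IFS=:
for entry in $SAMPLE_CLASSPATH; do
  [ -e "$entry" ] || MISSING="$MISSING $entry"
done
IFS=$OLDIFS
echo "missing:$MISSING"
```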

$SWINGHOME/bin/oewizard

http://www.purestorage.com/blog/author/dannyhiggins
