Oracle MiniCluster S7-2 Administration Guide
Part No: E69473-16
Copyright © 2019, 2021, Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation,
delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the
hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in an applicable agreement between you and Oracle.
Access to Oracle Support
Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Using This Documentation
This document is updated with functions and features for MiniCluster version 1.3.0 software. Depending on the version of software running on your MiniCluster, some features might be slightly different or not present.
Feedback
Provide feedback about this documentation at http://www.oracle.com/goto/docfeedback.
MCMU Overview
The MiniCluster Management Utility (MCMU) enables you to perform a variety of installation, configuration, and management activities with a secure browser user interface (BUI). You select management tasks and provide configuration information, and the utility performs the complex operations in the background.
Note - The MCMU also provides a CLI. See “Using the MCMU CLI” on page 203.
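For example, once you are logged in to node 1 as an MCMU user, CLI operations shown later in this guide take this general form (the group name dbgrp1 is only an example; exact subcommands and options depend on your MCMU version):
% mcmu status -Z -a
% mcmu stop -G -n dbgrp1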
This list summarizes the types of activities you can perform using this utility:
■ Initially configure MiniCluster – The utility verifies the network and storage topology for MiniCluster, sets up the two SPARC S7-2 compute nodes for internet and management access, and configures an NFS shared file system for application VM group usage as required. The utility configures the network according to site preferences, including the hostnames and IP addresses for client access and for the management console.
■ Create and manage database virtual machines – The utility installs the Oracle Grid
Infrastructure on the database VM group and supports provisioning of Oracle single
instance databases, RAC databases, and RAC One Node databases.
ORAchk Overview
ORAchk is a configuration audit tool that validates the Oracle environment. It enables you to
complete a variety of system checks that would otherwise have to be done manually. ORAchk
provides these features:
■ Checks the database VM for problems across the various layers of the stack.
■ Provides reports that show system health risks, with the ability to drill down into specific problems and understand their resolutions.
■ Can be configured to send email notifications when it detects problems.
■ Can be configured to run automatically at scheduled times.
To download ORAchk and to find out more about ORAchk, refer to these resources:
■ My Oracle Support article, Doc ID 1268927.02 – Download is available from this article.
■ The ORAchk Quick Start Guide – Available from http://docs.oracle.com/cd/
E75572_01/.
For an example of running ORAchk on MiniCluster, see “Run orachk Health Checks
(CLI)” on page 230.
Administration Resources
Use this table to identify the task you want to perform and to locate information about the task.
MiniCluster uses Oracle Solaris zones as the underlying support structure for the system. The
creation of zones is automatically handled by the MiniCluster initialization process based on
configuration information that you provide. You do not need to administer the technical details
of zones, but the MiniCluster tools and documentation use zone technology and terminology, so
this section explains key concepts and terms.
Caution - Never manually manage VMs using Oracle Solaris zone commands. Always manage
the VMs through MCMU BUI or MCMU CLI.
Zones are used to virtually divide the resources of a physical machine to simulate multiple
machines and OSs.
The Oracle Solaris Zones partitioning technology used in MiniCluster enables you to consolidate multiple hosts and services on a system, affording benefits such as secure isolation and flexible resource allocation.
This illustration shows the zones that are automatically created on every MiniCluster. The
illustration represents the system's zone configuration before the creation of VMs.
■ Global zones – One on each node, they include the initial installation of the Oracle Solaris
OS from which all the other zones and VMs are created. The global zone on node 1 also
contains the MCMU software. Each global zone is assigned 2 CPU cores. Each global zone
is automatically configured with network parameters that enable you to access it from your
network (see “Log in to the Global or Kernel Zone” on page 35). However, there is
minimal administration required in the global zones.
■ Kernel zones – One on each node, they include an installation of the Oracle Solaris OS,
NFS shared with the VMs, and grid infrastructure (GI) components. The OS and GI provide
the necessary drivers for the VMs to access file systems on the storage arrays. Each kernel
zone is assigned 2 CPU cores. Each kernel zone is automatically configured with network
parameters that enable you to access it from your network (see “Log in to the Global or
Kernel Zone” on page 35). However, there is minimal administration required in the
kernel zones because no site-specific software is added to them.
Note - The zones are automatically configured when the system is installed. For details about
the installation process, refer to the Oracle MiniCluster S7-2 Installation Guide.
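As an illustration, listing zones from the global zone on node 1 shows the global zone, the kernel zone, and any VMs that have been created (the VM hostnames here are examples; see “Accessing Underlying VM Support Structures” on page 35):
# zoneadm list
global
acfskz
dbvmg1-zone-1-mc4-n1
appvmg1-zone-1-mc4-n1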
VMs are used to virtually divide the resources of the system to simulate multiple machines
and OSs. Each VM is dedicated to the programs running inside. VMs are isolated, providing a
secure environment for running applications and databases.
You might configure separate VMs for individual departments in your organization, with each
VM hosting a unique set of applications and databases. Or you can use VMs to control licensing costs by limiting some software to a set number of cores now, with the ability to easily add more cores later. You can use some VMs for development and others for production, or any other
combination of deployments.
MiniCluster VMs are created using Solaris non-global zones, and have very similar attributes
to MiniCluster zones (described in “MiniCluster Zones Overview” on page 21), including
secure isolation, flexibility in resource allocation, and so on. The distinction between
MiniCluster zones and VMs is that the zones provide underlying support structures for the
system (uniform from one MiniCluster to another), whereas VMs are the machines that you customize to
suit your enterprise compute needs. You determine the number, type, and configuration of VMs
on MiniCluster.
You can configure the system with only one type of VM, or a combination of DB and App
VMs.
VMs are easily provisioned using the MCMU BUI or CLI. MCMU prompts you for the VM
parameters and then creates, deploys, and configures the VMs.
Note - When the system is installed, the initialization process automatically invokes the MCMU
BUI and prompts the installer to configure VMs. The installer can create VMs at that time, or
skip that process so that the VMs can be created later. To determine if VMs are present, see
“View the DB VM Group and VMs (BUI)” on page 99 and “View App VM Groups and
VMs (BUI)” on page 133.
Each VM has its own set of network parameters that enable you to access it from your network
(see “Accessing VMs” on page 32).
This illustration shows an example of how the VMs are logically arranged, and lists the main
components that make up each type of VM.
■ App VM group – A logical grouping of application VMs. You can have a single or a pair
of application VMs in a group. Unlike the DB VM group, you can have as many App VM
groups as there are resources available to support them. You can create clusters and install a
grid infrastructure.
■ App VM – An application virtual machine is a VM that contains the Oracle Solaris OS and
any applications you install. You choose to assign a set number of cores to an App VM, or
to have the App VM share cores with other VMs.
■ Future DB and App VMs – As long as storage and CPU resources are available, you can
create additional VMs at any time, up to a maximum of 12 VMs.
MCMU automatically assigns each VM the appropriate amount of storage based on the
configuration of the VM. This section describes how the MCMU configures the storage.
MiniCluster includes six HDDs in each compute node, and one or two storage arrays.
Each compute node contains these drives:
■ 2 HDDs – Used by the global and kernel zones. The drives use RAID 10 for high availability.
■ 4 HDDs – Store the VM root file systems. The drives use RAID 10 for high availability.
Each storage array contains these drives:
■ 14 SSDs – Reserved for DB VMs. The DB disk groups are configured for either normal redundancy (protection against a single disk failure) or high redundancy (protection against two disk failures).
■ 4 SSDs – Reserved for DB REDO logs (always set at high redundancy).
■ 6 HDDs – Provide the NFS storage that can be exported to DB and App VMs (referred to as internal NFS in this guide). This internal storage is enabled or disabled when you define a group profile, and can be changed on the fly in the MCMU BUI or CLI. For highly secure environments, refer to the recommendations in “Restrict Access to Shared Storage” in the Oracle MiniCluster S7-2 Security Guide.
This figure represents how the available storage is arranged. Note that this figure does not
include the internal storage that is reserved for the MiniCluster global zones and root file
systems.
If you add another storage array to the system (see “Configure an Added Storage Array
(CLI)” on page 318), the utility automatically doubles the amount of storage for each of the
categories shown in the figure.
In addition to the storage that comes with MiniCluster, you can provide access to other
NFS storage in your compute environment. See “Add an External NFS to a VM Group
(BUI)” on page 152.
These topics describe how to access different aspects of the system based on the kind of tasks
you need to perform.
Note - These topics assume that the system is already installed and initialized. For details about
accessing the system for installation, refer to the Oracle MiniCluster S7-2 Installation Guide.
■ Access the MCMU BUI or CLI to create, edit, and delete DB and application VMs. Also use the MCMU to perform administrative tasks such as managing security benchmarks, updating firmware and software, and to perform any other MCMU functions. See “Accessing the MCMU (BUI and CLI)” on page 27.
■ Access individual VMs to administer software within the VM. See “Accessing VMs” on page 32.
■ Access the underlying VM support structures such as the global zone and kernel zones. Accessing these components is only performed in unique situations, such as to alter certain default system configurations. See “Accessing Underlying VM Support Structures” on page 35.
■ Access Oracle ILOM. See “Accessing Oracle ILOM” on page 37.
■ Review information about the MiniCluster REST API. See “MiniCluster REST API (Removed)” on page 38.
Note - Each user must use their own browser and not share browser sessions.
Tip - Ensure that you specify https, because the utility requires a secure connection. If
your browser displays a warning about an insecure connection, add an exception to enable
connectivity to the system.
Note - If you are logging into MCMU for the first time, the utility requires you to create a new
password. See “Unlock a User Account and Reset a Password (BUI)” on page 48.
The System Status page is displayed. For further details, see “MCMU BUI
Overview” on page 29. For more information about user accounts, see “Managing MCMU
User Accounts (BUI)” on page 39.
The MCMU BUI automatically logs out users after a predetermined amount of inactivity. See
“Configure the BUI Session Timeout” on page 165.
When you log into the MCMU BUI, the System Status page is displayed. In the upper right
corner, you can select your language and other choices from the user-name drop-down menu.
■ Home – Displays the system status page, which provides an overall status of the system,
and access to these items:
■ Compliance Information – Shows information about security compliance reports. See
“Securing the System (BUI)” on page 159.
■ Virtual Tuning Assistant Status (not shown in the example) – Farther down the page is
an area that shows information from the built-in tuner feature. See “Checking the Virtual
Tuning Status (BUI)” on page 173.
In the upper right corner, click your login name, and select Log Out.
1. From a system that has network access to MiniCluster, use the ssh command to
log into MiniCluster.
Syntax:
% ssh mcmu_user_name@minicluster_node_name_or_IPaddress
where:
■ mcmu_user_name is the name of an MCMU user. The mcinstall user is the default primary
admin user. The password was set when the system was installed.
■ minicluster_node_name_or_IPaddress is the name of the first node on MiniCluster, or the
IP address of the first node.
For example:
% ssh mcinstall@mc4-n1
Note - After 15 minutes of CLI inactivity, the session is automatically logged out.
# exit
Accessing VMs
These topics describe how to access individual VMs (not through the MCMU). Use these
procedures to administer software installed in individual VMs.
Caution - Never manually manage VMs using Oracle Solaris zone commands. Always manage
the VMs through the MCMU BUI or MCMU CLI. See “Accessing the MCMU (BUI and
CLI)” on page 27.
Log in to a DB VM
Use this procedure to log into a VM.
You must have the Tenant Admin (tadmin) role to log into a VM. For more information about
roles, see “User Roles” on page 39.
If you log directly into a DB VM, you are not accessing the system through the MCMU and you
cannot run mcmu commands.
Caution - Never manually manage VMs using Oracle Solaris zone commands. Always manage
the VMs through MCMU BUI or MCMU CLI.
When you log into MiniCluster, the default prompt is username@hostname/directory (ending in %, $, or #). However, for brevity in examples, the prompt is shortened to % for users and # for superuser.
This procedure describes how to access VMs using the ssh command. Depending on the
software and services installed in the VM, the VM might also be accessible through those
services.
1. From a terminal window with network access to the system, use the ssh
command to log into a DB VM.
Syntax:
% ssh user_name@VM-hostname_or_IPaddress
where:
■ user_name is a valid user name with the Tenant Admin (tadmin) role.
The default user that is initially configured in DB VMs is oracle. For more information about the oracle user, see “User Accounts” on page 40.
■ VM-hostname_or_IPaddress is either the hostname or IP address of the VM. You can obtain VM names from Database → Virtual Machines (see “View the DB VM Group and VMs (BUI)” on page 99).
For example:
% ssh oracle@dbvmg1-zone-1-mc4-n1
% su root
Password: **************
#
Log in to an App VM
You must have the Tenant Admin (tadmin) role to log into a VM. For more information about
roles, see “User Roles” on page 39.
If you log directly into an App VM, you are not accessing the system through the MCMU and you cannot run mcmu commands.
Caution - Never manually manage VMs using Oracle Solaris zone commands. Always
manage the VMs through MCMU BUI or MCMU CLI. See “Accessing the MCMU (BUI and
CLI)” on page 27.
When you log into MiniCluster, the default prompt is username@hostname/directory (ending in %, $, or #). However, for brevity in examples, the prompt is shortened to % for users and # for superuser.
This procedure describes how to access VMs using the ssh command. Depending on the
software and services installed in the VM, the VM might also be accessible through those
services.
1. From a terminal window with network access to the system, use the ssh
command to log into a VM.
Syntax:
% ssh user_name@VM-hostname_or_IPaddress
where:
■ user_name is a valid user name of a user with the Tenant Admin (tadmin) role.
■ VM-hostname_or_IPaddress is either the hostname or IP address of the VM. You can obtain VM names from Application → Virtual Machines (see “View App VM Groups and VMs (BUI)” on page 133).
For example:
% ssh mcinstall@appg500-zone-1-mc4-n2
At this point, you can perform administrative tasks in the App VM.
Log Out of a VM
Use this procedure to log out of a DB VM or App VM.
To completely log out, you need to exit from each login and su that you've performed. For
example, if you logged into a VM then used the su command to assume the root role, type
exit twice.
# exit
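For example, if you logged into a VM as the oracle user and then used su to assume the root role, the full logout sequence looks like this (the second exit ends the ssh session to the VM):
# exit
% exit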
Caution - Accessing the global zone and kernel zones should only be performed by trusted and
experienced Oracle Solaris administrators. Performing this procedure involves assuming the
root role, which has all administrative privileges. If administrative commands are not performed
properly, there is a potential for damaging or deleting critical system data.
Caution - Never manually create, edit, or delete VMs using Oracle Solaris zone commands.
Always create, edit, and delete the VMs through MCMU BUI or MCMU CLI. See “Accessing
the MCMU (BUI and CLI)” on page 27.
1. From a terminal window with network access to the system, use the ssh
command to log into the global zone.
Use the mcinstall user account. For more details about this account, see “User
Accounts” on page 40.
% ssh mcinstall@Node-hostname_or_IPaddress
% ssh mcinstall@mc2.us.example.com
At this point, you can perform administrative tasks in the global zone or access the kernel
zones.
Note - Alternatively, you can log directly into a kernel zone using ssh
mcinstall@kz_public_hostname, where kz_public_hostname is the system prefix (shown in
System Settings → User Input Summary) appended with ss01 (kernel zone on node 1) or ss02
(kernel zone on node 2). For example: ssh mcinstall@mc4ss01.
# zoneadm list
global
acfskz
appvmg1-zone-1-mc4-n1
dbvmg1-zone-3-mc4-n1
dbvmg1-zone-1-mc4-n1
dbvmg1-zone-4-mc4-n1
dbvmg1-zone-2-mc4-n1
In the output, the global zone is identified as global. The kernel zone is identified as acfskz.
# zlogin acfskz
# exit
For more information about Oracle ILOM, refer to the Oracle ILOM documentation library at
http://docs.oracle.com/cd/E37444_01.
The default user account in Oracle ILOM is root. Specify the password that was configured for
your system.
To access Oracle ILOM, you need to know the Oracle ILOM hostname or IP address. To
identify these items on your system, see “View System Information (BUI)” on page 64
for hostnames, and “View and Update Network Parameters in v1.2.2 and Earlier
(BUI)” on page 70 for IP addresses (ILOM IP addresses are listed as management IP
addresses).
Depending on how you want to access Oracle ILOM, perform one of these
actions:
■ Oracle ILOM web interface – In a browser, enter this address, and press
Return.
http://ILOM_ipaddress
The Oracle ILOM Login screen is displayed. Log in using an Oracle ILOM account such as root and its password.
■ Oracle ILOM CLI – In a terminal window, use the ssh command to log in. For example:
% ssh root@ILOM_hostname_or_ipaddress
root password: ********
->
Depending on how you accessed Oracle ILOM, perform one of these actions:
■ Oracle ILOM web interface – In the upper right corner, click Logout.
■ Oracle ILOM CLI – At the Oracle ILOM prompt, type the exit command:
-> exit
MiniCluster 1.3.0 removes the REST API that was previously available. Any software that was
developed to use REST APIs to administer MiniCluster will no longer function.
These topics describe how to manage MCMU user accounts through the BUI. To manage user
accounts through the CLI, see “Managing MCMU User Accounts (CLI)” on page 283.
User Roles
When you create an MCMU user, you assign the user one of these roles:
■ Primary Admin (root role) – The root role defines the rights and privileges of primary
administrators of the MiniCluster system including all its compute nodes, networks,
database, and storage. Users with the root role can perform all installation and all critical
administrative operations without any constraints. As primary administrators, they can
delegate operations and approve adding and deleting users including new primary and
secondary administrators. The user must login with his/her own credentials. The mcinstall
user has the root role. All actions and operations carried out are logged and audited based on
the user identifier, not the role identifier.
■ Secondary Admin (mcadmin role) – Users assigned this role have read-only access to the global zones. They cannot run the MCMU BUI or CLI. All actions and
operations carried out are logged and audited based on the user identifier, not the role
identifier.
■ Tenant Admin (tadmin role) – This role defines the rights and privileges of the administrator of a MiniCluster VM, who is involved with day-to-day administrative operations supporting application installations and deployment. Tenant admins cannot run MCMU, or access the global or kernel zones. All actions are audited based on the user identifier, not the role identifier.
A Tenant Admin user can use two-factor authentication to securely log in by entering a
password from a mobile device. For more instructions, see “Enable One-Time Password
(OTP) Authentication (BUI)” on page 50.
■ Auditor (auditor role) – Users with this role only have access to the MCMU BUI audit
review page where they can view the audit pool status and generate reports for user activity.
Only users with this role can access the audit review page. Auditors cannot access the
MCMU (except for the audit page), nor can they log into kernel zones or VMs.
User Accounts
mcinstall – MCMU role: root. The password is initially configured during the installation. The installation process requires you to create mcinstall as the MCMU primary administrator and create a password. This account is intended to be the primary administrator for the MCMU.
All actions performed by all MCMU users are logged based on the user's identifier.
Note - MCMU user accounts are not used for the routine use of the system, such as using
the applications and databases. Those user accounts are managed through Oracle Solaris, the
application, the database on the VMs, and through your site system administrators.
When an MCMU user logs into MCMU for the first time, the utility requires the user to create a
password that meets these requirements:
■ Must contain a minimum of 14 characters (or 15 for DISA STIG Profile configurations)
■ Must have a minimum of one numeric character
■ Must have a minimum of one uppercase alpha character
■ (DISA STIG Profile configurations) Must include one non-alpha-numeric character
■ Must differ from a previous password by at least three characters
■ Must not match the previous ten passwords
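As an illustration only (not a password to actually use), a string such as Mc7minicluster#2021 satisfies these requirements: it is 19 characters long and contains numeric, uppercase, and non-alphanumeric characters.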
MCMU passwords expire after a certain number of days, at which time the user account is locked and a warning is displayed on the home page.
A locked account can be unlocked by following the procedure in “Unlock a User Account
and Reset a Password (BUI)” on page 48. To avoid locked accounts, periodically
check the expiration date listed in the User Accounts page (see “Display MCMU Users
(BUI)” on page 43) and change your password before it expires (see “Change an MCMU
User Password (BUI)” on page 48).
All MCMU user accounts require approval by the MCMU supervisor and primary admin
(mcinstall). The process works as follows:
1. The prospective user (or an MCMU user on their behalf) accesses the MCMU registration
page and provides these mandatory details:
■ MCMU user name
■ Email address
■ Full name
■ Phone number
■ MCMU role
2. MCMU sends the MCMU supervisor and primary admin an email requesting approval or
denial.
If the user was registered through the MCMU BUI, the email includes a URL to the MCMU
approval/denial feature and includes a unique key identifier.
If the user was created through the MCMU CLI, the email includes an mcmu command and
the unique key identifier.
3. When both the supervisor and primary admin approve the account, the user account is enabled, and MCMU sends the new user an email confirming the account activation.
2. Click Register.
3. Complete these fields:
■ User name – Enter a unique user name for the new user.
■ Email – Enter the email address for the new user.
■ Title – (Optional) Enter the user's title.
■ Full Name – Enter the first and last name for the new user.
■ Organization – (Optional) Enter the user's organization.
■ Department – (Optional) Enter the new user's department.
■ Phone Number – Enter the new user's phone number. Do not include any special characters
or spaces.
■ Address – (Optional) Enter the new user's address.
■ Type of User – See “User Roles” on page 39 and select one of the following:
■ Primary admin
■ Secondary admin
■ Tenant admin
■ Auditor admin
If you are creating a new user who will use OTP-based authentication, select Tenant Admin
for the Type of User. OTP is available only to the Tenant Administrator role for App and
DB VMs. If an existing user with the Tenant Administrator role will use OTP, you must
delete the user account and create a new one. For more details, see “Enable One-Time
Password (OTP) Authentication (BUI)” on page 50.
4. Click Register.
The account is created, but is not activated until the new user is approved by the primary admin and supervisor (accounts that were created during the initial installation). The MCMU sends the primary admin and supervisor an email that includes a secure key that is used to approve the user. See
“Approve or Reject a New User (BUI)” on page 47.
After the primary admin and supervisor approve the account, the new user receives email with
a link to the MCMU BUI. Upon the first login, the new user is forced to create a password
according to the password policies. See “MCMU Password Policies” on page 41.
Before a new account is enabled, the MCMU primary admin and supervisor must both approve
the new user. See “User Accounts” on page 40.
1. As the MCMU primary admin or supervisor, obtain the MCMU approval email.
The email is sent from mcinstall@company-name.
Note - If you experience a delay in receiving the email requesting approval of a new user, in the upper right corner click your user name and choose Approval Board. Verify that the request appears in the Account Creation Request area of the Account Approval Dashboard. Select the user's name and click Next. Select Approve and click Submit to expedite the approval.
2. In the email, click the approval link (or copy it into a browser).
3. Select Approve and select the Enable OTP check box if this user has a Tenant
Admin role and requires two-factor authentication.
4. Click Submit.
MCMU sends email to the user confirming or denying account activation. If you enabled OTP,
that user can now log in with OTP authentication. For more information, see “Enable One-Time
Password (OTP) Authentication (BUI)” on page 50.
Note - The first time a user logs into MCMU, the utility requires the user to enter a new
password.
4. In the upper right corner, click your user name and choose Change Password.
Note - The first time a user logs into MCMU, the utility requires the user to enter a new
password.
6. Log into MCMU with your user name and the temporary password assigned for
the reset.
OTP authenticates a user for a single login or session. OTP supports strong two-factor authentication based on IETF standards, and supports both time-based and counter-based passwords.
OTP requires access to something a person has (such as a specific mobile device) as well as
something a person knows (such as a PIN). OTP is not vulnerable to replay attacks, so it is more
secure than a traditional static password.
OTP-based authentication is available for App and DB VMs. If you choose to enable OTP for a user, it is enforced only for users registered with the Tenant Administrator role. Users created with the primary, secondary, and auditor roles do not support the use of OTP.
You can use SSH to access App and DB VMs with OTP. During the SSH access, the Solaris
environment prompts you for your Solaris password, then for the OTP from your mobile
authenticator application. You can use the Oracle Mobile Authenticator App or the Google
Authenticator App, and you can freely download them from the Apple iOS and Google
Android App stores. Oracle MiniCluster's OTP conforms to the HMAC-based and time-based
specifications for a OTP, and will work with any authenticator application that conforms to
these specifications.
You can use the Oracle Mobile Authenticator App or the Google Authenticator App, and you
can freely download them from the Apple iOS and Google Android App stores.
2. Access the MCMU BUI as a new user with the Tenant Administrator role.
See “Log in to the MCMU BUI” on page 28.
If an existing user with the Tenant Administrator role will use OTP, you must delete the
user account and create a new one. For instructions, see “Create a New MCMU User
(BUI)” on page 44.
4. Create a new password for your account and click Change Password.
Type a new password. See “MCMU Password Policies” on page 41.
5. In the upper right corner, click your user name and choose Get OTP Secret.
Tip - If you do not see Get OTP Secret in the drop-down menu, verify that you are logged in
with a user account with Tenant Administrator privileges.
6. On your mobile device, open the Oracle Mobile Authenticator app and click Enter
Provided Key.
7. On your mobile device, type the zone name and OTP secret key from Step 5.
Tip - If you do not see the Add Account button on your mobile device, swipe up to remove the
keyboard.
After you enter this information, the Oracle Mobile Authenticator starts to generate OTP codes
every minute to access the VM.
9. Log into the MCMU BUI with your user name and the OTP password from your
mobile device.
10. Use SSH to verify that access to the VM was granted with the OTP.
For example, type your Oracle Solaris password and the OTP that was provided.
# ssh Dena_tadmin@192.0.2.0
MiniCluster Setup successfully configured
Password:
OTP code:
Use this procedure to delete a user account. You must know the user name and password to delete an account through the BUI. The primary admin and supervisor must approve the deletion through email sent from MCMU.
Note - Alternatively, you can delete a user account using the MCMU CLI. See “Delete an MCMU User (CLI)” on page 287.
1. Log into the MCMU BUI as the user you plan to delete.
See “Log in to the MCMU BUI” on page 28.
2. In the upper right corner, click the user name and choose Delete Account.
Once the deletion request is approved by the primary admin and supervisor, the account is
deleted.
2. In the upper right corner, click the user name and choose Edit Profile.
The user registration page is displayed.
4. Click Save.
These topics describe how to start and stop App and DB components, and how to power on and
off the system.
This procedure assumes that power is applied to the system, but the compute nodes are shut
down (the system is in standby mode). For instructions on how to connect the system to power,
refer to the Oracle MiniCluster S7-2 Installation Guide.
For additional information about Oracle ILOM, refer to the Oracle ILOM documentation at
http://docs.oracle.com/cd/E37444_01.
1. On a system with network access to MiniCluster, log into Oracle ILOM as root.
■ Oracle ILOM web interface – In a browser, enter this address, and press
Return
http://ILOM_hostname_or_ipaddress
The Oracle ILOM Login screen is displayed. Log in using your Oracle ILOM root account and password.
■ Oracle ILOM CLI – In a terminal window, use the ssh command to log in. For example:
% ssh root@ILOM_hostname_or_ipaddress
root password: ********
->
■ Oracle ILOM web interface – Click Host Management → Power Control and
select Power On from the Select Action list.
4. (Optional) If you are using the Oracle ILOM CLI and you want to connect to the
host from Oracle ILOM, start the host console.
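If you are using the Oracle ILOM CLI, the power-on and console steps take this general form on Oracle SPARC servers (a sketch based on standard Oracle ILOM CLI syntax; confirm the target names on your firmware version):
-> start /System
Are you sure you want to start /System (y/n)? y
-> start /HOST/console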
When booting is complete, all the configured VMs are available for use. If for some reason
any of the VMs are not running, you can manually start them. See “Starting VM Components
(CLI)” on page 233.
Caution - If the system is not properly shut down, data corruption can occur.
where VMgroupname is the name of the DB VM group. To determine the name, see “List a
Summary of All DB VM Groups (CLI)” on page 208.
For example:
% mcmu stop -G -n dbgrp1
b. Verify that all the VM components are stopped on the compute nodes.
Example:
% mcmu status -Z -a
# svcs eshm/omc
STATE STIME FMRI
disabled 10:01:29 svc:/application/management/eshm/omc:default
e. Exit superuser.
Type CTRL-D.
where x is 1 or 2.
For example:
% mcmu stop -S
9. (For a full power down, perform the remaining steps) From a system with
network access to MiniCluster, log into Oracle ILOM on a MiniCluster compute
node as root using one of these methods:
■ Oracle ILOM web interface – In a browser, enter this address, and press
Return:
http://ILOM_hostname_or_ipaddress
The Oracle ILOM Login screen is displayed. Log in using your Oracle ILOM root account and password.
■ Oracle ILOM CLI – In a terminal window, use the ssh command to log in. For example:
% ssh root@ILOM_hostname_or_ipaddress
root password: ********
->
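Once logged in, the host is typically powered off from the Oracle ILOM prompt with a command of this general form (a sketch based on standard Oracle ILOM CLI syntax; in the web interface, use Host Management → Power Control instead):
-> stop /System
Are you sure you want to stop /System (y/n)? y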
10. Repeat the previous step to stop the other compute node.
These topics describe how to get system information using the MCMU BUI.
■ “Display the MCMU Version (BUI)” on page 63
■ “View System Information (BUI)” on page 64
■ “View and Update Network Parameters in v1.2.4 and Later (BUI)” on page 66
■ “View and Update Network Parameters in v1.2.2 and Earlier (BUI)” on page 70
■ “Review or Run Initialization Steps (BUI)” on page 74
■ “View the Status of Running Tasks (BUI)” on page 76
2. In the upper right corner, click your user name and choose About.
For additional information about software versions, select System Settings → System
Information as described in “View System Information (BUI)” on page 64.
Use this procedure to view specific information about the system, its components, and their
current state.
■ Silicon Secured Memory – Indicates whether the feature is functioning on each compute
node.
■ Database Accelerator Engine – Indicates whether the feature is functioning on each
compute node.
■ SPARC Hardware Assisted Cryptography – Indicates whether the feature is functioning
on each compute node.
Note - For descriptions of MiniCluster features, refer to the product page at https://www.
oracle.com/engineered-systems/supercluster/minicluster-s7-2/features.html.
■ Software and OS Information – Shows the MCMU and Oracle Solaris OS versions.
■ System Information – Shows the compute node hostnames, Oracle ILOM hostnames,
number of cores, memory, and state.
■ Storage Information – Shows statistics about the storage array. Click the triangle to
expand.
Use this procedure to view and update network parameters for MiniCluster systems running v1.2.4 and later.
Note - For MiniCluster systems running v1.2.2 or earlier, see “View and Update Network
Parameters in v1.2.2 and Earlier (BUI)” on page 70.
When the system was installed, groups of IP addresses were added to the default IP pool for the
future creation of VMs. If those addresses have been consumed, or you want to add additional
IP pools on the same or different subnet, perform these actions:
This example shows the default IP pool which was configured during the initialization of
MiniCluster based on what was entered in the offline tool.
■ Edit an IP pool:
■ Add an IP pool:
4. Click OK.
5. Click Save, then OK.
■ Delete an IP pool:
If the Delete button is disabled, IP addresses in that pool are in use and the tool prevents you
from deleting the IP pool.
■ Assign a VLAN ID to an IP pool:
Use this procedure to view and update network parameters for MiniCluster systems running v1.2.2 and earlier.
Note - For MiniCluster systems running v1.2.4 or later, see “View and Update Network
Parameters in v1.2.4 and Later (BUI)” on page 66.
When the system was installed, groups of IP addresses were added to the system for the future
creation of VMs. If those addresses have been consumed, and you need more addresses,
perform these steps.
a. Under Add IP Range, type the starting IP address and IP pool size.
b. Click Add.
i. Under Client Network Settings, click Add under the DNS Server entry.
When the system was installed, IP addresses of available DNS servers were added to the
system. If you need to change or remove those IP addresses, perform these steps.
i. Stop any queries to databases that are dependent on DNS. Consult with
your Database Administrator on the best way to do this.
ii. Under Client Network Settings on the User Input Summary, click Delete
next to the IP address.
i. Under Client Network Settings, click Add under the NTP Server entry.
i. Stop any queries to databases that are dependent on NTP. Consult with
your Database Administrator on the best way to do this.
ii. Under Client Network Settings on the User Input Summary, click Delete
next to the IP address.
Use this procedure to review the status of the initialization steps that were run when the system
was initially installed.
You can also use this procedure to rerun the initialization steps.
For more information about the initialization process, refer to the Oracle MiniCluster S7-2
Installation Guide.
The initialization steps are listed with a status of finished or not finished.
Use this procedure to view the status of the tasks that the utility is performing.
■ Plan the overall configuration – “Configuration Planning Overview” on page 77
■ Plan DB VMs – “DB VM Planning Worksheets (Optional)” on page 78
You can create, edit, and delete DB and App VMs at any time. However, if you want to plan for
the overall configuration of the system, make these decisions:
■ Number of App VMs – The maximum number of App VMs per node is 12 minus the number of DB VMs you plan to have.
For example, if you create 4 DB VMs on each node, you can create a total of 8 App VMs per node. As another example, if you create 1 DB VM on each node, you can create a total of 11 App VMs per node.
As you create VMs, MCMU keeps track of used resources and only enables you to create
VMs and assign cores that are available. You do not need to plan to use all the resources at
one time. If resources are available, you can add more VMs later.
Note - If you are not sure exactly how many VMs to create, you can skip the planning,
create VMs to see how it works, then edit, delete and recreate VMs until you have the
configuration that meets your needs.
You can use these planning worksheets to plan the creation of DB VMs, and to anticipate the
configuration information that you are asked to provide.
■ “Shared Storage” – No or Yes
■ “Number of VMs on Each Node” on page 83 – Number of VMs on node 1 and node 2
■ “Role Separation” – No or Yes. If No, define the single oracle administrative user; if user IDs are assigned, record them for the oracle and mcinstall users.
CLUSTER PARAMETERS
■ “SCAN Name” on page 87 – SCAN name for the cluster
■ GI patch level
■ “System Disk Group” – Normal or High
■ “DATA/RECO Disk Group Split” on page 87
HOME PARAMETERS
■ “Oracle Database Version” on page 88 – Version for the first home. Note - Create one home for each DB version you need. (Optional) Versions for additional homes.
INSTANCE PARAMETERS
■ “New Instance or Import Existing Instance” on page 90 – Y/N
■ “PGA Memory Space” on page 93
■ Instance name – 1 - 8 lowercase alpha/numeric characters
DB VM Group Parameters
This section describes the parameters you define when you create a DB VM group profile. Use
this information in conjunction with these activities:
■ When planning DB VMs, described in “DB VM Planning Worksheets
(Optional)” on page 78.
■ While creating the DB VM group profile with the MCMU BUI, as described in “Create
a DB VM Group Profile (BUI)” on page 103 or “Create a DB VM Group Profile
(CLI)” on page 247.
VM Group Name
The VMs are logically grouped (see “MiniCluster VM Groups and VMs Overview” on page 23).
During the configuration process, you specify a group profile name of your choice. The name
can be up to 12 characters, and can contain lowercase letters, numbers, and the - (hyphen)
symbol. Later, the VM group name is automatically used as a prefix in the VM hostnames, so
specifying a short name can lead to shorter VM names.
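For example, a group profile named dbvmg1 on a system whose node 1 hostname is mc4-n1 yields VM hostnames such as dbvmg1-zone-1-mc4-n1, as shown in the login examples in this guide (the names here are illustrative).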
Shared Storage
All DB VMs are allocated with storage space (the amount of storage depends on the type of
instances configured in the VM). The shared storage provides additional storage, if enabled.
6 HDDs on each storage array are set aside for additional storage space (see “MiniCluster
Storage Overview” on page 25).
■ If enabled – All the VMs in the group have access to the shared storage.
■ If disabled – The VMs will not have access to the shared storage space in the 6 HDDs.
Note - After the creation of VMs, you can enable or disable access to the shared storage at any
time. See “Enable or Disable NFS (BUI)” on page 149.
Security Profile
You define a security profile that is applied to the VMs in the group. The security profile automatically configures the system with over 225 security controls. Choose one of the security profiles offered by the utility.
Note - If the system is configured with the DISA STIG profile (performed during the
installation), all VMs that are subsequently created should also be configured with the DISA
STIG profile.
IP Pool
An IP pool is a range of IP addresses. Each IP pool is a separate subnet. As of v1.2.4, you can
create multiple IP pools, then assign different VM groups to different IP pools. You can also
assign a VLAN ID to an IP pool.
Create the IP pools before creating the DB VM group. See “View and Update Network
Parameters in v1.2.4 and Later (BUI)” on page 66.
You choose between one and four VMs on each node, for a maximum of eight DB VMs. For Oracle RAC configurations, ensure that you specify VMs on each node.
You can always change the number of VMs later. See “Add a DB VM to a Group
(BUI)” on page 123.
Role Separation
This feature enables you to create a single administrative user, or to create two separate
DB administrative users with separate roles (separating ASM administration from RDBMS
administration). Separate roles might be required by certain third-party applications.
If you choose to create one administrative user, that user is the Oracle DB Installation user for
all Oracle DB software and is a member of the groups needed to perform administration of the
grid infrastructure and to administer the DB.
If you choose role separation, two users are created, each a member of different groups so that
each user is only able to administer either the ASM grid infrastructure, or the DB.
Based on your selection, the utility automatically provides industry standard values for user and
group names, IDs, and file system base.
■ No – Configures one DB administrative user (oracle) with privileges to administer the
ASM and RDBMS. These pre-assigned fields are displayed:
Note - Even when no role separation is selected, the user can choose to provide a new user ID for the oracle user, for example, when the Use default Oracle User ID option is not selected.
DBA Group
■ Name – dba
■ ID – 1002
OINSTALL Group
■ Name – oinstall
■ ID – 1001
■ Yes – Enables role separation, and configures these pre-assigned DB administrator users and
roles.
Grid ASM Home OS User and Base
■ Name – oracle
■ ID – 1001
■ Base – /u01/app/oracle
Group Description
You can leave the field blank, or add a description that briefly describes the DB VM group.
DB VM Parameters
This section describes the DB VM parameters you define while creating the DB VM group
profile. Use this information in conjunction with these activities:
Public Hostname
For each VM, specify a unique hostname. This is the name that you add to your DNS. It is the
hostname that is used for client access to the VM.
The hostname can be up to 25 alphanumeric characters and can include the - (hyphen) symbol.
Number of Cores
For each VM, specify the number of cores (0 - 12). Before the creation of VMs, there are 24
cores available (12 on each node that are available for VMs). MCMU keeps track of how many
cores are assigned to VMs and only enables you to select a number from what is available.
Cores that are not assigned to VMs are pooled together and are available as shared cores.
If you select 0 (zero) cores, the VM uses shared cores. After the DB VM group is deployed,
you can change the number of cores on the VMs. See “Edit a DB VM Group Profile
(BUI)” on page 121.
Password
For each VM, set a password for the oracle user and mcinstall user.
If you select , MCMU sets the password to a default value (see “User
Accounts” on page 40).
For details about MCMU users, see “User Accounts” on page 40. Password policies vary based
on the security profile that was selected. See “MCMU Password Policies” on page 41 and
“Security Profile” on page 82.
SCAN Name
When you create database clusters, the VMs from both compute nodes are clustered together.
Provide a SCAN name for the database cluster that you are setting up.
SCAN is a feature used in Oracle RAC configurations. The SCAN provides a single name for
clients to access any database running in a cluster. MCMU provides a default SCAN, or you can
specify your own name. The SCAN must be a name that is up to 15 characters long. You can
use lowercase letters, numbers and the - (hyphen) symbol.
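For example, a hypothetical SCAN name such as dbvmg1-scan satisfies these rules: it is 11 characters long and uses only lowercase letters, numbers, and the hyphen.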
GI Patch Level
The MCMU BUI provides a list of patch levels that you can choose.
Select the level of redundancy that you want for the Oracle Cluster Registry (OCR) voting disk
group, or SYSTEM disk group. Choose one of these levels:
■ Normal – Provides three voting disks.
■ High – Provides five voting disks.
In the Define Cluster page, the data disk group redundancy level is displayed. The value is
based on what was selected for “System Disk Group” on page 87 in a previous page.
You can configure the percentage of storage that the DATA disk group and RECO disk group
use. The default is 80% DATA, 20% RECO.
Note - The percentage number shown is the amount for DATA, with the remaining percentage
applied to RECO.
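As a worked example, with the default split, a group with a hypothetical 10 TB of usable database storage allocates 8 TB to the DATA disk group and 2 TB to the RECO disk group; setting the value to 70 instead allocates 7 TB to DATA and 3 TB to RECO.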
In the Define Cluster page, the REDO disk group redundancy level is displayed. This disk
group is always configured for high redundancy (provides protection against two disk failures).
In the Define Cluster page, the RECO disk group redundancy level is displayed. The value is
based on what was selected for “System Disk Group” on page 87 in a previous page.
DB Home Parameters
This section describes the parameters you define while creating the DB VM homes. Use this
information in conjunction with these activities:
■ Planning DB VMs, as described in “DB VM Planning Worksheets
(Optional)” on page 78.
■ Creating a DB VM home with the MCMU BUI, as described in “Create DB Homes
(BUI)” on page 115, or CLI described in “Create DB Homes (CLI)” on page 252.
When you configure a database home, you are provided with a choice of selecting from a
variety of Oracle Database versions such as the following:
■ 11g
■ 12c (also available in Standard Edition)
■ 12.2 Standard Edition 2
■ 18c
■ 18.3 Standard Edition 2
■ 19c
For information about specific patch levels for the different versions, refer to MOS ID
2153282.1 on My Oracle Support.
The availability of a particular version depends on when the MiniCluster Component Bundle
was downloaded at installation time, or when bundles are downloaded for patching and
updating (see “Updating and Patching MiniCluster Software (BUI)” on page 177).
If a particular version is not available at the time that you configure the DB homes, you can
eventually upgrade to later versions using the MiniCluster Updating feature.
Each home provides one database version, but you can install multiple homes in a DB VM
group. The DB homes you create determine the specific versions of the Oracle Database that are
available to each DB instance.
Once the DB home is created, the utility allocates these resources for each DB VM:
■ ZFS root file system – 40 GB.
■ Database directory – 100 GB ZFS file system mounted on /u01.
■ DB REDO Logs – Configured for high redundancy on the storage array.
■ Client network – 1 virtual network.
The Oracle home is the directory path for the Oracle Database software. The default is /u01/app/oracle/product/release_number/dbhome_number. Accept the default or change the name used for the dbhome_number.
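For example, assuming an Oracle Database 19c home (the exact release_number string depends on the version and patch level you select), the default path might look like /u01/app/oracle/product/19.0.0.0/dbhome_1, where typically only the dbhome_1 portion is changed.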
Patch Level
DB Instance Parameters
This section describes the parameters you define while creating the DB VM instances. Use this
information in conjunction with these activities:
■ Planning DB VMs, as described in “DB VM Planning Worksheets
(Optional)” on page 78.
■ Creating instances with the MCMU BUI, as described in “Create DB Instances
(BUI)” on page 117, or CLI described in “Create DB Instances (CLI)” on page 255.
If you choose to create a new instance, MCMU prompts you to enter various database parameters such as the instance name, DB type, RAC or single instance, and other parameters.
If you choose to import an existing instance, you specify another instance on the system that
will be used to create this instance. The instance must be an instance that was not created using
MCMU. You are prompted to enter the instance name, and all the DB parameters are defined by
the imported instance.
Template Type
■ DW – Creates a data warehouse type database, commonly used for analytic workloads.
■ OLTP – Creates an online transaction processing type database, commonly used for
business transaction workloads.
■ Custom – If selected, you are prompted to browse to a DB template that you provide.
Instance Type
If multiple homes were created, you select the version of the Oracle Database for this instance.
If only one home was created, MCMU automatically uses the database version that is available.
Container DB
This feature enables a single container database to host multiple separate pluggable databases (only selectable for DB versions that support this feature).
You have the option to specify the size of the PGA (memory for the server processes for the
instance), or accept the default value.
You have the option to specify the size of the SGA (memory shared by the processes in the
instance), or you can accept the default value.
Character Sets
You have the option to assign the database and national character sets for the instance. If you
choose the Recommended option, MCMU assigns the character set.
Instance Name
Each instance must be named. Specify a unique name that is up to 8 characters long. You can
use alpha and numeric characters (no special characters).
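For example, hypothetical names such as dw01 or oltp1 satisfy these rules: each is 8 characters or fewer and uses only letters and numbers.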
You can use these planning worksheets to plan the creation of App VMs, and to anticipate the
configuration information that you are asked to provide.
■ “Number of VMs” – pair or single
■ “Shared Storage” on page 95 – Y/N
■ “VM Type” on page 96
■ “Cores” – If assigned: 1 - 12 (max. cores available per node, for both DB and App VMs)
This section describes the parameters you define when you create an App VM group profile.
Use this information in conjunction with these activities:
■ Planning App VMs, as described in “App VM Planning Worksheets
(Optional)” on page 93.
■ Creating App VM group profile with the MCMU BUI, as described in “Create an App VM
Group Profile (BUI)” on page 135, or CLI described in “Configuring Application VMs
(CLI)” on page 275.
During the configuration process, you specify a group profile name of your choice. The name
can be up to 12 characters, and can contain lowercase letters, numbers, and the - (hyphen)
symbol. Later, the VM group name is automatically used as a prefix in the VM hostnames, so
specifying a short name can lead to shorter VM names.
Description
You can specify an optional description of the VM group.
Number of VMs
■ Pair – The utility configures two application VMs (one on each node) in the group.
■ Single – The utility configures one VM in the group.
Shared Storage
All App VMs are allocated with storage space. The shared storage provides additional storage,
if enabled.
6 HDDs on each storage array are set aside for additional storage space (see “MiniCluster
Storage Overview” on page 25).
■ If enabled – All the VMs in the group have access to the shared storage.
■ If disabled – The VMs will not have access to the shared storage space in the 6 HDDs.
Note - After the creation of VMs, you can enable or disable access to the shared storage at any
time. See “Enable or Disable NFS (BUI)” on page 149.
For systems in highly secure environments, do not enable shared storage. For additional
security information, refer to the Oracle MiniCluster S7-2 Security Guide.
Security Profile
For current versions of MCMU, the security profile is automatically configured for each Oracle
Solaris 11 VM based on what was selected for the system during the initial configuration. The
following list describes the security profiles that can be selected at install time:
VM Type
■ Solaris 11 Native Zone – Configures Oracle Solaris 11 OS for the App VM. This is a native
OS installation because the version is the same as what is installed in the global zones.
Choose this VM type if you plan to use the App VM clustering feature.
■ Solaris 10 Branded Zone – (Introduced in software v1.1.25) Configures Oracle Solaris
10 OS for the App VM. This is a branded OS installation because the version is different
than what is installed in the global zones. Branded zones are usually used when applications
require a specific OS version.
Oracle provides quarterly Critical Patch Updates (CPUs) for Oracle Solaris 10, including
Solaris 10 Containers (Branded Zones). Review the knowledge articles titled How to find
the Oracle Solaris Critical Patch Update (CPU) Patchsets, Recommended OS Patchsets
for Oracle Solaris and Oracle Solaris Update Patch Bundles (Doc ID 1272947.1) and How
Patches and Updates Entitlement Works (Doc ID 1369860.1). Both articles are available
at My Oracle Support. Take any actions necessary to patch applicable Oracle Solaris 10
Branded Zone virtual machines.
Note - For two VM configurations, MCMU automatically configures both VMs with the same
VM type.
Enable Security
(Only for Oracle Solaris 10 branded zones) If selected, an Oracle Solaris 10 security service
called Java Authentication and Authorization Service (JASS) is assigned to the VMs.
JASS hardens and minimizes the OS attack surface. The configuration is based on the Solaris
Security Toolkit, which enforces security controls such as RBAC, allow-listed ports, protocols
and services, and ensures that unnecessary services are disabled.
For more information about JASS, refer to the JASS Reference Guide at https://docs.
oracle.com/javase/8/docs/technotes/guides/security/jaas/JAASRefGuide.html.
Note - For two VM configurations, MCMU automatically configures both VMs with or without
the security service based on your selection.
IP Pool
An IP pool is a range of IP addresses. Each IP pool is a separate subnet. As of v1.2.4, you can
create multiple IP pools, then assign different VM groups to different IP pools. You can also
assign a VLAN ID to an IP pool.
Create the IP pools before creating the App VM group. See “View and Update Network
Parameters in v1.2.4 and Later (BUI)” on page 66.
Public Hostname
For each VM, specify a unique hostname. This is the name that you add to your DNS. It is the
hostname that is used for client access to the VM.
The hostname can be up to 32 lowercase alpha-numeric characters and include the - (hyphen)
symbol.
Cores
For each VM, specify the number of cores. Before the creation of VMs, there are 24 cores
available (12 on each node that are available for VMs). MCMU keeps track of how many cores
are assigned to VMs and only enables you to select a number from what is available. If you
select 0 (zero) cores, the VM shares available cores. You can assign a different number of cores
to each VM within a group.
After the App VM group is deployed, you can change the number of cores on the VMs. See
“Edit an App VM Group (BUI)” on page 144.
Password
For each VM, set a password for the oracle and mcinstall users.
If you select the default option, MCMU sets the password to a default value (see “User
Accounts” on page 40).
For details about MCMU users, see “User Accounts” on page 40. Password policies vary based
on the security profile that was selected. See “MCMU Password Policies” on page 41 and
“Security Profile” on page 82.
Define Cluster
(Introduced in software v1.1.25) If you selected the Oracle Solaris 11 VM type, MCMU
BUI displays the Define Cluster section (see “Create an App VM Group Profile
(BUI)” on page 135). If you enable Clusterware, MCMU configures the two App VMs
into a cluster, providing a highly available configuration. If one VM goes down, the system
automatically fails over. You can only cluster two App VMs.
Note - If you want to cluster Oracle Solaris 10 branded zones, you must do so manually.
To enable this feature, slide the selector to Yes, and enter a name in the SCAN name field.
Single Client Access Name (SCAN) is a feature used in cluster configurations. The SCAN
provides a single name for clients to access all VMs running in the cluster. The SCAN must
be a name that is up to 15 characters long. You can use lowercase letters, numbers and the -
(hyphen) symbol.
MCMU handles the configuration of the cluster, but if you want additional details, refer to the
Database Clusterware Administration and Deployment Guide at: http://docs.oracle.com/
database/121/nav/portal_booklist.htm.
Description Link
View the DB VM group and DB VMs. “View the DB VM Group and VMs (BUI)” on page 99
Create database VMs. “DB VM Creation Task Overview” on page 101
Caution - Never manually manage VMs using Oracle Solaris zone commands. Always manage
the VMs through MCMU BUI or MCMU CLI.
In this example, the page reports No data to display because a DB group profile has not yet
been created.
Tip - If the VMs are not listed, click the triangle that is next to the VM group to expand the
display. You might need to select another navigation item, then come back to this page.
In this example, there is one VM on each node, and each VM has one online DB instance.
1. If needed, create additional networks that will be assigned to the VMs during the creation process.
Details you provide: Accept the default network parameters that were configured during the installation, or edit or add additional networks.
BUI instructions: “View and Update Network Parameters in v1.2.4 and Later (BUI)” on page 66
CLI instructions: “Managing Networks (CLI)” on page 291
4. Deploy the DB VM Group.
Details you provide: None
BUI instructions: “Deploy the DB VM Group (BUI)” on page 112
CLI instructions: “Deploy the DB VM Group (CLI)” on page 251
5. Create DB Homes in the VMs.
Details you provide: “DB Home Parameters” on page 88
BUI instructions: “Create DB Homes (BUI)” on page 115
CLI instructions: “Create DB Homes (CLI)” on page 252
6. Create DB Instances in Homes.
Details you provide: “DB Instance Parameters” on page 90
BUI instructions: “Create DB Instances (BUI)” on page 117
CLI instructions: “Create DB Instances (CLI)” on page 255
The DB VM group provides the foundation for the DB VMs and DB instances. Before you
can create DB VMs, you must create a DB VM group. One DB VM group is supported on the
system. If a DB VM group profile already exists, you cannot create another one.
Note - It is possible that the DB VM group profile was created when the system was initially set
up. To determine if a group profile has already been created, see “View the DB VM Group and
VMs (BUI)” on page 99.
Your system must be installed and initialized as described in the Oracle MiniCluster S7-2
Installation Guide. This ensures that the required packages that contain several necessary files,
such as Oracle Solaris OS, Oracle Grid Infrastructure, and so on, are on the system.
2. Ensure that the system has a pool of IP addresses to apply to the DB VMs.
For each DB VM, you need 2 IP addresses, and the SCAN requires 3 IP addresses. For example,
a group of four DB VMs requires (4 x 2) + 3 = 11 IP addresses.
When the system was installed, a pool of IP addresses was allocated to the system. To view,
add, or change IP parameters, see:
“View and Update Network Parameters in v1.2.4 and Later (BUI)” on page 66
“View and Update Network Parameters in v1.2.2 and Earlier (BUI)” on page 70
The Database Virtual Machine Group Profiles Summary page is displayed. This example
indicates that a DB group has not yet been created.
5. In the Define DB VMs page, enter the required information, then click Next.
For details about the required information, use the optional worksheet (“DB VM Planning
Worksheets (Optional)” on page 78), or see “DB VM Group Parameters” on page 81.
Note - You do not have to have the same number of VMs on each compute node. However, if
you plan to configure all the DB VMs in RAC pairs, assign the same number of VMs to the
second compute node.
This example shows the page when Role Separated is set to No.
If Role Separated is set to Yes, the lower part of the page shows the users and roles that will be
configured.
For details about the required information, use the optional worksheet (“DB VM Planning
Worksheets (Optional)” on page 78), or see “DB VM Parameters” on page 85.
For details about the required information, use the optional worksheet (“DB VM Planning
Worksheets (Optional)” on page 78), or see “DB VM Parameters” on page 85.
The Review page lists all the information that you filled in from the previous pages for this DB
VM group. The information in this page is not editable.
■ If you find any issues with any of the information on the Review page, click
either Back to return to a previous screen, or click Cancel to return to the
Home page.
■ If you are satisfied with the information displayed on the Review page, click
Create (or Generate). A progress window is displayed. Once complete,
dismiss the window.
The utility begins assigning IP addresses to the VMs based on the IP address information that
was entered during the initial installation of the system. This process can take 10 to 30 minutes
to complete, depending on the number of DB VMs specified. When the process is finished, a
screen is displayed that shows the IP mapping assignments.
9. Verify that the VM group profile is correct, and note the hostnames and IP
addresses for DNS.
Caution - Do not click Continue until you have recorded all the information shown in this
Mapping IP review page.
■ If you find any issues with any of the information, close the window and
repeat this task.
■ If you are satisfied with the information displayed on the Mapping IP review
page, record all the information shown in this screen so that you can enter
the IP addresses and hostnames into DNS.
Once you have recorded all the information in the Mapping IP review page, click Confirm. The
utility reserves the names and IP addresses for the DB VM group.
11. When you have entered all the IP addresses and hostnames into DNS, click
Confirm.
The utility performs a set of configuration verifications. This takes approximately 15 minutes to
complete.
12. When the group profile process is complete, perform the next task.
See “Deploy the DB VM Group (BUI)” on page 112.
Use this procedure to deploy the VM group. When you deploy a group, MCMU installs the
VMs that were defined in the VM group profile.
If you need to change any of these DB VM parameters, do so before you deploy the group:
■ IP addresses
■ Hostnames
Once the VM group is deployed, you can change the number of cores assigned to each VM, and
add or delete VMs.
1. Ensure that you complete these tasks before deploying the VM group:
3. Click Deploy, and review the configuration in the Deployment Review Page.
4. Click Deploy.
The Create Virtual Machine Group window is displayed. As the utility deploys the VM group,
status of each deployment step is updated in this window.
The deployment takes 40 to 80 minutes to complete.
5. (Optional) If you want to see all the steps involved, click Show Detail.
6. When the deployment is complete, click Complete and go to the next task.
See “Create DB Homes (BUI)” on page 115.
Each DB home provides a particular Oracle Database version that is used to create DB VM
database instances. You must create at least one DB home in the group, and optionally, you can
create multiple DB homes so that the group is configured with multiple versions of the Oracle
Database.
5. Click Create.
The utility creates DB home information for every VM within the DB VM group. After
approximately 15 to 30 minutes, the status reports that the process is complete.
Before you can perform this task, you must complete these tasks:
Tip - If the VMs are not listed, click the triangle that is next to the VM group to expand the
display. You might need to select another navigation item, then come back to this page.
In this example, the VMs do not yet have any DB instances, which is evident because no
instance names are displayed.
4. Click Create.
A progress pop-up window is displayed. This process can take from 15 to 90 minutes to
complete, depending on the configuration selected.
Tip - While the DB instance is being created, you can dismiss the pop-up window and then
perform other actions in the main BUI (such as create additional DB instances). To return to the
progress pop-up window, in the Virtual Machines page, click the Creating link.
7. Repeat these steps for each DB instance that you want to create.
You can create multiple DB instances, until the point where the utility determines that you have
reached the limit. At that point, a message stating that there is not enough memory available to
create additional DB instances is displayed.
You can edit VMs even when they are online and in production. The utility only enables
changes to VM parameters that are safe, based on the state of the VM.
For deployed DB groups, you can change the number of cores assigned to the VMs (increase
or decrease) and add VMs to the group (to add a VM, see “Add a DB VM to a Group
(BUI)” on page 123).
For non-deployed DB groups, you can make the same changes as deployed DB groups, plus
change the VM names and IP addresses.
2. In the navigation panel, select the Database → Virtual Machine Group Profiles
page.
For example:
3. Click Edit.
4. Edit any of the parameters that are enabled for changes, such as the number of
cores.
If a VM is not deployed, you can change the IP addresses and hostnames.
For a description of DB VM parameters, see “DB VM Parameters” on page 85.
■ Save – Click Save to save the changes and provide a summary page. The change does not
become active until you click Apply.
■ Cancel – Click Cancel to discard the changes and close the window.
7. If you changed the name or IP address of a VM, make the equivalent change in
DNS.
2. In the navigation panel, select the Database → Virtual Machine Group Profiles
page.
3. Click Edit.
5. Specify passwords for the oracle and mcinstall accounts on each new VM.
6. As needed, check and change the details for the new VMs.
For example, check the hostnames, IP addresses, and number of cores and change them to meet
your requirements.
For a description of DB VM parameters, see “DB VM Parameters” on page 85.
■ Save – Click Save to save the changes. After a few minutes, a summary page is displayed.
■ Cancel – Click Cancel to discard the changes and close the window.
9. On the Virtual Machine Group Profiles page, click Edit to view or change the IP
addresses that were automatically assigned.
This task describes how to display the string that can be used by applications to connect to the
DB VM instance.
Use these procedures to delete DB instances, DB home, VMs, and group profiles.
4. Carefully locate and click the trash can icon under the Delete column. (Note - Do
not click the trash can icon that is under the Actions column).
7. Repeat these steps for each DB instance that you want to delete.
Delete a DB VM (BUI)
To delete a RAC or RAC One Node instance for Oracle Database 12.2 and 18.3, you must
provide the SYS user password.
4. In the navigation panel, select the Database → Virtual Machine Group Profiles
page.
5. Click the Edit button for the DB VM group that contains the DB VM you plan to
delete.
The Edit Database Virtual Machine Group Profiles page is displayed.
6. Identify the VM that you plan to delete and scroll to the bottom of the column of
VM parameters.
You can only delete a DB home if all the instances in the home have been deleted.
4. Carefully locate and click the trashcan icon that is under the Actions column (or
Edit column). Note - Do not click the icon under the Delete column.
Use this procedure to delete a DB VM group. All the VMs in the group will be deleted. The
DB group profile is not deleted, and can be redeployed. If the DB group contains DB VMs, the
primary admin is notified through email as each VM is deleted.
Caution - Deleting a DB VM group deletes all the VMs, applications, and data associated with
the VM group. The deletion cannot be undone. Proceed with caution.
■ Click Confirm.
■ In previous versions click the confirmation checkbox, then click Confirm.
The deletion can take 15 to 60 minutes, depending on the number of VMs in the group.
Use this procedure to delete a DB VM group profile. You can only perform this procedure if the
DB group does not exist, has been deleted, or is not deployed.
3. Click Delete.
Perform these tasks to view, create, edit, and delete App VMs.
Description Link
View App VMs. “View App VM Groups and VMs (BUI)” on page 133
Create App VMs. “App VM Creation Task Overview” on page 135
Caution - Never manually manage VMs using Oracle Solaris zone commands. Always manage
the VMs through MCMU BUI or MCMU CLI.
This is an example of a system with one App VM group. If this page reports no data to display,
App groups have not been configured yet.
Tip - If the VMs are not listed, click the triangle that is next to the VM group to expand the
display. You might need to select another navigation item, then come back to this page.
1. If needed, create additional networks that will be assigned to the VMs during the creation process.
Details you provide: Accept the default network parameters that were configured during the installation, or edit or add additional networks.
BUI instructions: “View and Update Network Parameters in v1.2.4 and Later (BUI)” on page 66
CLI instructions: “Managing Networks (CLI)” on page 291
The profile is used to define an App VM group, which supports one or two VMs (one on each
compute node).
The total number of App VM groups you can create is only limited by the amount of system
resources that are available.
For each App VM, you need 1 IP address. When the system was installed, a pool of IP
addresses was defined in the system. To see the number of IP addresses in the pool, in the
MCMU BUI, go to System Settings → User Input Summary, and view the IP Address Pool
Size.
Note - It is possible that App VM group profiles were created when the system was initially set
up. To determine if a group profile has already been created, see “View App VM Groups and
VMs (BUI)” on page 133.
For details about the required information, use the optional worksheet (“App VM Planning
Worksheets (Optional)” on page 93), or see “App VM Group Parameters” on page 94.
5. Enter information in the page section including passwords for all accounts.
This example shows the page that is displayed when a pair of VMs are selected in Step 4. If
Single is selected, only one VM is displayed.
If you plan to cluster the App VMs for high availability, complete the Define Cluster section
and click Next (for details, see “Define Cluster” on page 98). Otherwise, click Next.
Note that this section of the page is only enabled when you are configuring Oracle Solaris 11
type VMs.
If you find any issues with any of the information on the Review page, either click Back to
return to a previous screen, or click Cancel to return to the Home page.
8. When the creation is finished, make note of the host names and IP addresses
that are displayed.
Perform this deployment task for each App VM group that you create.
Once complete, the utility allocates these resources to each App VM:
3. For the App VM group that you want to deploy, click Deploy.
Note - If the parameters are not correct, instead select Application → Virtual Machine Group
Profiles.
Use this procedure to edit an App VM. You can edit a deployed VM.
3. For the App VM group that you want to edit, click Edit.
Use this procedure to delete an App VM group that has not been deployed.
3. For the App VM group that you want to delete, click Delete.
Use this procedure to delete an App VM group that has VMs and has been deployed.
When you delete a deployed App VM group, the VMs in the group are deleted and storage and
network resources are returned to the system for future allocation. The utility sends the primary
admin email reporting the deletion of each VM.
Caution - Deleting App VM groups deletes all the VMs, applications, and data associated with
the VM group. The deletion cannot be undone. Proceed with caution.
3. For the App VM group that you want to delete, click Delete VM Group.
5. When the confirmation window indicates that the deletion is done, click OK (or
Quit).
These topics describe how to configure the NFS shared storage and how to add or remove a
network file system.
Note - Additional storage management procedures such as preparing a drive for replacement
and adding another storage array must be performed using the mcmu CLI. See “Managing
Storage (CLI)” on page 309.
■ Internal NFS – Refers to storage on the MiniCluster storage array that can be enabled or
disabled.
■ External NFS – Refers to other NFS storage that is provided by servers in your
environment.
Use this procedure to enable or disable access to internal and external NFS storage for DB VM
and App VM groups. You can also use this procedure to identify whether NFS is enabled or disabled.
The internal NFS storage provides storage space for any storage purpose, and is available to all
VMs within a group if it is enabled.
Caution - Systems deployed in highly secured environments should disable NFS to both
internal and external storage. For more information, refer to the Oracle MiniCluster S7-2
Security Guide.
This table describes the configuration results of enabling or disabling NFS in the Group
Profiles page.
The change takes effect immediately and applies to all the VMs in the group. For
more information about shared storage on the storage array, see “MiniCluster Storage
Overview” on page 25.
Caution - If any software is dependent on data in the shared storage, and you plan to disable
shared storage, take appropriate actions to remove the dependencies before you perform this
procedure.
■ For a DB VM group, select the Database → Virtual Machine Group Profiles page
■ For an App VM group, select the Application → Virtual Machine Group Profiles page
5. To access the shared file system, log into the VM and perform Oracle Solaris
commands.
To access the file system:
% cd /sharedstore
Note - The /sharedstore directory is empty until you put software in the directory.
% ls /sharedstore
Downloads Music Pictures Presentations Templates Texts Videos
Related Information
■ Securing Files and Verifying File Integrity in Oracle Solaris 11.4 (https://docs.oracle.
com/cd/E37838_01/html/E61022/index.html)
■ Managing File Systems in Oracle Solaris 11.4 (https://docs.oracle.com/cd/E37838_01/
html/E61016/index.html)
■ Securing Files and Verifying File Integrity in Oracle Solaris 11.3 (https://docs.oracle.
com/cd/E53394_01/html/E54827/index.html)
■ Managing File Systems in Oracle Solaris 11.3 (http://docs.oracle.com/cd/E53394_01/
html/E54785/index.html)
■ Oracle Solaris 11.3 Information Library (https://docs.oracle.com/cd/E53394_01/)
Use this procedure to add a network file system (NFS) to a DB VM group or an App VM
group.
The NFS service must be at minimum NFSv4. The NFS that you add can be any whole or
partial directory tree or a file hierarchy, including a single file that is shared by an NFS server.
When you add external NFS to a group, the remote file system is immediately accessible to all
the VMs in the group. External NFS is only made available to VMs in a group if shared storage
is enabled. See “Enable or Disable NFS (BUI)” on page 149.
% /usr/sbin/showmount -e NFSserver_name_or_IPaddress
c. To check the version of the NFS service provided by the NFS server, type:
The second column displays the version number. You might see several lines of output.
One of them must report version 4.
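The command itself is not reproduced above. As a hedged sketch, one common way to list the NFS versions registered by a server is to query its RPC services (assuming rpcinfo is available on the VM); the second column of the nfs lines lists the supported versions:

% /usr/sbin/rpcinfo -s NFSserver_name_or_IPaddress | egrep nfs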
4. Click Edit.
The Edit Virtual Machine Group Profile page is displayed. Locate this section:
9. At the bottom of the screen, click Apply and confirm the change.
% su root
password: **************
# ls -ld /my_mountpoint
drwx------ 2 root root 6 Oct 25 17:20 my_mountpoint
12. To access the network file system, log into the VM and perform Oracle Solaris
commands.
To access the file system:
% cd /my_mountpoint
% ls /my_mountpoint
Downloads Music Pictures Presentations Templates Texts Videos
Related Information
■ Securing Files and Verifying File Integrity in Oracle Solaris 11.4 (https://docs.oracle.
com/cd/E37838_01/html/E61022/index.html)
■ Managing File Systems in Oracle Solaris 11.4 (https://docs.oracle.com/cd/E37838_01/
html/E61016/index.html)
Use this procedure to delete a network file system (NFS) from a DB VM group or an App VM
group.
When you delete an NFS from a group, the remote file system is immediately unavailable to all
the VMs in the group. The mount point is deleted from the system.
3. Click Edit.
The Edit Virtual Machine Group Profile page is displayed. Locate this section:
8. At the bottom of the screen, click Apply, and confirm the change.
You can configure the SMF service called mcbackup to create a snapshot of the global zone boot
environment in the /sharedstore/be/hostname directory. The service is disabled by default.
This procedure describes how to enable and disable the mcbackup service.
Once the mcbackup service is enabled, there is a 15-minute delay, after which a snapshot of the
global zone boot environment is created and backed up every hour.
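As a sketch of the underlying SMF operations only (the exact service FMRI is an assumption here; confirm it first with svcs before enabling or disabling anything), toggling the service from a root shell might look like this:

# svcs -a | grep mcbackup
# svcadm enable mcbackup
# svcadm disable mcbackup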
1. Log into the kernel zone on node 1 as a primary admin such as mcinstall and
assume the root role.
See “Log in to the Global or Kernel Zone” on page 35.
2. Ensure that the permissions on the /sharedstore/be directory are limited to only
authorized users.
For example, list the directory permissions and then set them so that only a user with the root
role can access the directory.
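For example, a minimal sketch (adjust the mode and ownership to your site policy):

# ls -ld /sharedstore/be
# chmod 700 /sharedstore/be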
4. Log into the kernel zone on node 2 as a primary admin such as mcinstall and
assume the root role.
These topics describe how to view security benchmarks and encryption key information in the
MCMU BUI. You can also use the BUI to configure a firewall to protect network traffic.
Note - For detailed information about running security benchmarks and changing SSH keys,
refer to the Oracle MiniCluster S7-2 Security Guide.
Firewall Protection
The firewall technology provided by MiniCluster differs based on the version of the Oracle
Solaris OS that is running on MiniCluster components.
■ MiniCluster 1.3.0 and later
MiniCluster now uses the packet filter functionality delivered by Oracle Solaris 11.4 to
enable network traffic protection. This enables MiniCluster to protect networks and virtual
hosts from network-based intrusions. Packet Filtering is enabled and disabled through the
use of the SMF service svc:/network/firewall for Global and Kernel Zones, and all VMs
running Oracle Solaris 11.4 (a command-line sketch appears after this list).
The Firewall Manager feature is available through the MiniCluster BUI (System Settings →
Firewall Manager).
■ MiniCluster 1.2.5.22 and earlier
MiniCluster provides network traffic protection using Oracle Solaris 11.3 IP Filter-based
firewalls for virtual machines, including global, non-global, and kernel zones.
For instruction on updating firewall rules, refer to “Manage Firewall Rules” in Oracle
MiniCluster S7-2 Security Guide.
To learn about the Oracle Solaris firewall technologies, refer to the Oracle Solaris
documentation for the release running on your MiniCluster components.
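The following is a minimal sketch showing how the packet filter service named above can be checked or toggled from a root shell on an Oracle Solaris 11.4 zone or VM. MCMU and the Firewall Manager normally manage this service for you, so treat these commands as illustrative rather than as the supported procedure:

# svcs svc:/network/firewall
# svcadm enable svc:/network/firewall
# svcadm disable svc:/network/firewall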
When the system is installed, a security profile (CIS Equivalent, PCI-DSS, or DISA-STIG) is
selected, and the system is automatically configured to meet that security profile. To ensure that
the system continues to operate in accordance with security profiles, the MCMU provides the
means to run security benchmarks and access to the benchmark reports.
■ Enables you to evaluate and assess the current security state of the database and application
VMs.
■ The security compliance tests support the security profile standards based on the security
level configured during the installation.
■ The security compliance tests run automatically when the system is booted, and can be run
on-demand or at scheduled intervals.
■ Only available to MCMU primary admins, compliance scores and reports are easily
accessed from the MCMU BUI.
■ The compliance reports provide remediation recommendations.
Use this procedure to view security related information such as compliance reports and
encryption key details.
For information about configuring security compliance benchmarks, see “Securing the System
(BUI)” on page 159.
■ Node – Lists the compute nodes. You can expand and collapse the individual nodes by
clicking on the arrow.
■ Virtual Machine Name – Lists the VM names (hostnames).
■ Benchmark Type – Specifies the type of benchmark used (CIS Equivalent, PCI-DSS, or
DISA-STIG).
■ Compliance Score – Lists the overall score of the compliance run.
■ Date & Time – Displays the most recent time that the benchmark was performed.
■ Remarks – Provides information about benchmark results.
■ View Report – Provides a button that enables you to view a compliance report.
■ Schedule Compliance – Provides a button that enables you to schedule a benchmark.
To manage encryption keys, refer to the Oracle MiniCluster S7-2 Security Guide.
Note - You can only view benchmark reports if a benchmark was scheduled and run.
Note - You can display all the details of all tests by clicking Show all Result Details at the
bottom of the report.
Caution - Carefully consider the session timeout period. Maintaining a short BUI timeout
session is a key security configuration. Ensure that the value you use is in compliance with your
corporate security policies.
Note - Each user must use their own browser and not share browser sessions.
By default, the utility is configured at installation. However, if you change Oracle ILOM root
passwords after the installation, you must update the Oracle Engineered Systems Hardware
Manager configuration with the new passwords. This action is required so that service
personnel can use the tool to ensure optimum problem resolution and health of the system.
■ System-level problem history, and the ability to manually clear hardware faults and
warnings
■ Automatic and manual collection of support file bundles
■ Manual delivery of support file bundles to My Oracle Support (MOS)
If you change Oracle ILOM root passwords after the installation, you must update the Oracle
Engineered Systems Hardware Manager configuration with the new passwords. This action is
required so that service personnel can use the tool to ensure optimum problem resolution and
health of the system.
In addition, Oracle Engineered Systems Hardware Manager must be configured with the
passwords for the root account on all the Oracle ILOMs in the system.
Note - The utility does not need to know the passwords for the OS, database, applications, or
VMs.
Related Information
■ “Access Oracle Engineered Systems Hardware Manager” on page 168
■ “Update Component Passwords” on page 170
■ “Configure the Utility's Password Policies and Passwords” on page 169
■ “Configure Certificates and Port Numbers” on page 171
You can access this tool from a browser as described in this procedure, or you can launch
the tool from the MCMU BUI. See “Access Oracle Engineered Systems Hardware
Manager” on page 197.
Tip - For assistance, refer to the online help that is displayed on each page.
Related Information
■ “Configure the Utility's Password Policies and Passwords” on page 169
■ “Configure Certificates and Port Numbers” on page 171
This procedure describes how to manage the passwords and policies for the user accounts.
Note - You can also change the admin password using an alternative procedure described in
“Configure the Oracle Engineered System Hardware Manager Password” in Oracle MiniCluster
S7-2 Installation Guide.
c. Click Edit.
Related Information
■ “Oracle Engineered Systems Hardware Manager Overview” on page 167
■ “Access Oracle Engineered Systems Hardware Manager” on page 168
■ “Update Component Passwords” on page 170
■ “Configure Certificates and Port Numbers” on page 171
You must perform this procedure whenever the Oracle ILOM root password is changed.
Keeping Oracle Engineered Systems Hardware Manager up to date ensures that Oracle Service
personnel can use the utility to manage MiniCluster components.
For details on which component passwords are required see “Oracle Engineered Systems
Hardware Manager Overview” on page 167.
b. Click the check boxes for ILOM (user root), and click Provide Credentials.
c. Enter the password that you have already set in the ILOM.
Select the compute server (MiniCluster nodes), and click Provide Credentials. Enter the node's
Oracle ILOM passwords.
4. Restart Oracle Engineered Systems Hardware Manager for the changes to take
effect:
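The restart command is not shown here. Purely as a hedged sketch, assuming the utility runs under the eshm/omc SMF service referenced later in this guide, a restart from a root shell might look like the following (confirm the actual service name with svcs before acting):

# svcs | grep eshm
# svcadm restart eshm/omc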
Related Information
■ “Oracle Engineered Systems Hardware Manager Overview” on page 167
■ “Access Oracle Engineered Systems Hardware Manager” on page 168
■ “Configure the Utility's Password Policies and Passwords” on page 169
■ “Configure Certificates and Port Numbers” on page 171
Perform the relevant steps in this procedure to configure these items used by Oracle Engineered
Systems Hardware Manager:
■ Certificates – Use your own certificates instead of the site- and instance-specific
certificates that the utility generates.
■ Ports – If an application running on MiniCluster uses the same port that the utility uses
(8001), you or Oracle Service can configure Oracle Engineered Systems Hardware Manager
to use a different port.
Related Information
■ “Oracle Engineered Systems Hardware Manager Overview” on page 167
■ “Access Oracle Engineered Systems Hardware Manager” on page 168
■ “Update Component Passwords” on page 170
■ “Configure the Utility's Password Policies and Passwords” on page 169
The virtual tuning assistant is used to keep MiniCluster automatically tuned according to best practices.
Note - This section describes how to administer the virtual tuning assistant using the MCMU
BUI. For instructions on how to administer the virtual tuning assistant (mctuner) using the
MCMU CLI, see “Administering the Virtual Tuning Assistant (CLI)” on page 327.
These topics describe how to obtain information from the virtual tuning assistant.
By default, the virtual tuning assistant is enabled on the system to ensure that the system is
running with optimal tuning parameters. There is a tuning instance running on the global and
kernel zones on each node.
By default, the tuning assistant sends notices to root@localhost. To change the email
notification email address, see “Configure the mctuner Notification Email Address
(CLI)” on page 327.
2. In the Home page, scroll down to the Virtual Tuning Assistant Status panel.
For example:
For example:
The Virtual Tuning Assistant Status Information panel provides this information:
■ Virtual Machine – For each VM, this column indicates the type of zone that the VM is
based on.
■ Status – Indicates if the VM is online or offline.
■ Issues – Displays any issues that the virtual tuning assistant detects.
■ Notices – Displays virtual tuning assistant notices.
Updates for MiniCluster are issued on a periodic basis. The bundled updates are available
for download from My Oracle Support (http://support.oracle.com). Search for Doc ID
2153282.1.
There are a number of MiniCluster software and firmware components that can be updated
using MCMU. At any given time, updates might be available for one component, and not
others.
Software Components
This table lists components that can be updated (subject to update availability):
Note - The list of components is subject to change for different releases of MiniCluster. To see
the exact list for your system, view the current MCMU versions as described in “View Software
Component Versions (BUI)” on page 181.
This table lists the approximate duration for updating various components on
a two-node cluster. The duration varies depending on the number of VMs and the current
workload. As a best practice, perform upgrades during low or no workload periods.
Related Information
■ “Software Upgrade Requirements” on page 179
■ “Check for and Obtain the Latest Updates” on page 183
■ “Update MiniCluster Software (BUI)” on page 188
■ “Updating MiniCluster Software (CLI)” on page 331
This section describes the requirements that apply when you upgrade a software component to
a new major revision. For example, when you upgrade the Grid Infrastructure from 12c to 18c.
These requirements do not apply to PSUs or Proactive patches.
Note - When upgrading the grid infrastructure or shared storage, the current release must be
updated to the latest proactive patch level before the upgrade. For example, a system running
Oracle Database 12.1 with the April 2018 Proactive patch must be updated with the October
2018 Proactive patch (assuming that is the latest available) before the system can be upgraded
to Oracle Database 18c. Also, the system must be idle with no database or applications running.
DB VM Home Updates
Existing DB VM homes can only be updated with the same major release. For example, you can
update an Oracle DB home from 12c April 2018 Proactive patch to 12c October 2018 Proactive
patch. You cannot upgrade an existing DB home from one major release to another.
However, you can install the Oracle Database of your choice (12.2 and 18c, for example) and
then create new DB homes and instances for the DB VMs if you follow these guidelines:
1. Ensure that the Grid infrastructure and Shared filesystem components are running Grid
Infrastructure 18c. Use these procedures to view, and if needed, upgrade those components:
■ “Software Upgrade Requirements” on page 179
■ “View Software Component Versions (BUI)” on page 181
■ “Check for and Obtain the Latest Updates” on page 183
■ “Extract the Patch Bundle” on page 185
■ “Install the Component Bundle” on page 187
The MCMU BUI provides a list of MCMU software versions currently installed on the system.
The page shows current software versions on your system. If you have recently installed the
component bundle (see “Install the Component Bundle” on page 187), the Latest Level
column shows the latest updates that are available and an Apply button is enabled.
Note - Do not click Check Status unless you have installed the latest updates in the /var/
opt/oracle.minicluster/patch directory as described in “Check for and Obtain the Latest
Updates” on page 183. The Check Status feature compares component versions of the system
against what is in the /var/opt/oracle.minicluster/patch directory.
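For example, from a shell on node 1 you can confirm that the extracted update files are in place before clicking Check Status:

% ls /var/opt/oracle.minicluster/patch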
Some updates require you to download multiple large zip files. Depending on your network
capabilities and the size of the zip files, the download can take a significant amount of time.
IMPORTANT – Information in the Release Notes supersedes instructions in this guide. There
are multiple software update scenarios based on the state of MiniCluster (before initial setup,
after initial setup) and based on the version currently running on MiniCluster. Depending on
your situation, you might need to perform updates in a specific way. For further details, refer to
the MiniCluster Release Notes for your release of the software. Go to MOS (http://support.
oracle.com), and search for MiniCluster Release Notes.
IMPORTANT – Update the MCMU component before you update any other component. If
you follow the steps in this procedure, you are directed to do so.
1. Follow best practices and back up the system before updating software.
3. In the upper right corner, click your user name and select My Oracle Support.
5. Search for Doc ID 2153282.1.
This MOS document is the MiniCluster S7-2: Software Download Center and provides access
to MiniCluster downloads.
Provides the initial configuration tool used to create configuration files required at installation time.
■ During the initial installation. Refer to the Oracle MiniCluster S7-2 Installation Guide.
■ Any time that there are updated versions and you want to use those updated versions to configure VMs. This
procedure explains how to perform this activity.
MiniCluster Core Software
Provides MiniCluster management software (MCMU), Solaris OS and SRU repository files.
Download and install the core software if you see a screen telling you that your core software is out of date when you
run the installmc --deploy command as part of your initial installation.
Note - MiniCluster systems ship from the factory with the core software installed. You usually only need this
download if instructed to obtain it during the installation. You do not use this download for routine software updates.
In those cases use the Patch Bundle and Component bundle.
MiniCluster Patch Bundle
■ MCMU
■ GI and DB patches for all supported DB releases (applied to existing DB VMs)
■ Oracle Solaris SRUs (applied to existing VMs)
■ Compute node firmware (Oracle ILOM)
■ Storage array firmware
Download and install the Patch Bundle to determine if any of the MiniCluster software components are out of date
and to update out of date components.
Note - Oracle Solaris 10 branded zones are updated outside of the MiniCluster update feature. If you have Oracle
Solaris 10 branded zones, apply patches to them separately. Review the knowledge articles titled How to find the
Oracle Solaris Critical Patch Update (CPU) Patchsets, Recommended OS Patchsets for Oracle Solaris and Oracle
Solaris Update Patch Bundles (Doc ID 1272947.1) and How Patches and Updates Entitlement Works (Doc ID
1369860.1). Both articles are available at My Oracle Support. Take any actions necessary to patch applicable Oracle
Solaris 10 Branded Zone virtual machines.
The procedure in this chapter describes how to download the Patch Bundle.
MiniCluster Factory Reset ISO
Download and install the factory reset ISO if you want to reset your MiniCluster system back to the original factory
settings.
For information on downloading and installing the factory reset ISO, refer to the MOS article titled Oracle
MiniCluster S7-2: How to factory reset the entire system (Doc ID 2151620.1).
8. In Doc ID 2153282.1, begin the download process for one of these bundles:
■ Patch Bundle – For updating MCMU, existing VMs GI, DB, and OS, compute node's OS,
GI, and firmware, and storage array firmware.
■ Component Bundle – (Recommended when you plan to update MCMU) This bundle is for
installing the latest releases of the Oracle Database in the DB repository (used to create and
update DB VMs).
To begin the download process, click the patch number for the bundle you want to download.
See “Check for and Obtain the Latest Updates” on page 183.
2. Log into the compute node 1 MCMU CLI as the primary admin such as mcinstall.
See “Log in to the MCMU CLI” on page 31.
% cd /var/opt/oracle.minicluster/patch
% /bin/unzip '*zip'
% ls
MC-README.txt p25218297_100_SOLARIS64_2of4.zip
mc-1.1.21.4-patch.tar.aa p25218297_100_SOLARIS64_3of4.zip
mc-1.1.21.4-patch.tar.ab p25218297_100_SOLARIS64_4of4.zip
mc-1.1.21.4-patch.tar.ac
mc-1.1.21.4-patch.tar.ad
Note - After the extraction, you can delete the tar and zip files.
■ Update components using the MCMU BUI – See “Update MiniCluster Software
(BUI)” on page 188.
■ Update components using the MCMU CLI – See “Update the MCMU Component
(CLI)” on page 333.
This procedure is also required when you update the MCMU component as described in
“Update the MCMU Component (CLI)” on page 333. Install the Component Bundle after
updating the MCMU component.
2. Log into the compute node 1 MCMU CLI as the primary admin such as mcinstall.
See “Log in to the MCMU CLI” on page 31.
% cd /var/opt/oracle.minicluster/patch
% /bin/unzip '*zip'
% ls
MC-README.txt
mc-1.1.21.4-sfw.tar.ad
mc-1.1.21.4-sfw.tar.aa
mc-1.1.21.4-sfw.tar.ae
mc-1.1.21.4-sfw.tar.ab
% cd /var/opt/oracle.minicluster/patch
% cat mc-version_no-sfw.tar.a? | sh ./import.sh
% rm mc-version_no-sfw.tar.a?
6. If the previous step caused the eshm/omc service to transition into a maintenance
state, clear the service on both nodes.
% su - root
# svcadm clear eshm/omc
■ Update components using the MCMU BUI – See “Update MiniCluster Software
(BUI)” on page 188.
■ Update components using the MCMU CLI – See “Update the MCMU Component
(CLI)” on page 333.
This procedure describes how to apply updates to MiniCluster components using the MCMU
BUI after the system's initial setup is done. Always use the MCMU BUI to update the
MiniCluster components. Do not apply patches manually unless you are instructed to do so by
authorized service personnel.
Caution - The MCMU component must be updated before you update any other component.
(see “Update the MCMU Component (CLI)” on page 333).
1. Ensure that you have downloaded the latest Patch Bundle as described in
“Check for and Obtain the Latest Updates” on page 183.
2. If you plan to update the grid infrastructure in the kernel zone or in a DB VM,
ensure that the system is idle with no database or applications running.
7. In the Patches and Updates page, identify what updates are available.
Review the Current Level and Latest Level information.
These buttons indicate which components can be updated:
Updating components marked with two asterisks requires the entire system to be offline.
Caution - The MCMU component must be updated before you update any other component.
(see “Update the MCMU Component (CLI)” on page 333).
■ Click Apply for an individual component – MCMU applies the update for that
component.
When you update individual software components, the MCMU software ensures that
any prerequisite updates are applied. For example, when applying the Shared Filesystem
Software update, MCMU first updates the Solaris repository, then the Shared Storage OS.
■ Click Apply All – MCMU automatically applies available component updates in this order:
1. MCMU
2. Storage tray firmware
3. Solaris repository
4. Solaris in kernel zones
5. ACFS
6. Solaris in global zones (updates node 2 first, reboots node 2, then updates node 1 and
reboots node 1)
Note - Apply All does not automatically apply the compute node firmware (Oracle ILOM),
Grid infrastructure, or Oracle DB home software. Those component updates must be applied
individually.
10. When the dialog window indicates that the update process is complete, click OK
(or Confirm).
The dialog window is dismissed, and you can run other MCMU BUI functions.
If you are updating the MiniCluster Configuration Utility component, web services are restarted
and you might need to refresh the browser cache (shift-reload).
The MCMU BUI provides access to several features that enable you to check system states.
The system readiness check feature checks to ensure that the MiniCluster hardware and
software are configured properly and at expected levels. This check is normally performed
before the system is configured, but you can run this feature any time.
Use this task to check the I/O card locations and to verify network connectivity.
This example shows the topology of a system with one storage array. A second storage
array appears in the topology, but because it has no connections to the nodes, the system most
likely has only one storage array installed.
Use this procedure to check the health of the drives in the system. This feature performs read
and write tests on a reserved area of each drive. The check is not destructive to data.
Note - Access to MOS requires an Oracle support agreement and MOS login credentials.
2. In the upper right corner, click your user name and select My Oracle Support.
3. Sign in to MOS.
Note - Oracle Engineered Systems Hardware Manager must be configured before it is accessed.
See “Configuring Oracle Engineered Systems Hardware Manager” on page 167.
Note - For storage space efficiency, the utility only supports the existence of one support file
bundle per component. If a support file bundle exists, it is automatically replaced when a new
bundle is generated.
1. Log into Oracle Engineered Systems Hardware Manager through the MCMU BUI.
Log in as the admin user. See “Access Oracle Engineered Systems Hardware
Manager” on page 197.
4. In the Create Bundle dialog box, select one of the compute servers.
5. Click Create.
The utility creates a support file bundle.
When you have completed the initial installation of the system, you can use MCMU to activate
Oracle ASR software for the system.
Oracle ASR software provides the ability to resolve problems quickly by automatically opening
service requests for Oracle's qualified server, storage, and Engineered System products when
specific faults occur. Parts are dispatched upon receipt of a service request sent by Oracle ASR.
In many cases, Oracle engineers are already working to resolve an issue before you are aware
that a problem exists.
Oracle ASR securely transports electronic fault telemetry data to Oracle automatically to help
expedite the diagnostic process. The one-way event notification requires no incoming Internet
connections or remote access mechanism. Only the information needed to solve a problem is
communicated to Oracle.
Oracle ASR is a feature of the Oracle hardware warranty, Oracle Premium Support for Systems,
and Oracle Platinum Services. To learn more, go to https://www.oracle.com/support/
premier/index.html.
5. Click Configure.
Previous sections in this document describe how to administer MiniCluster using the MCMU
BUI, which is a good interface to use for guided visual procedures. The majority of BUI
procedures can also be performed using the MCMU CLI. The remainder of this document
covers the MCMU CLI procedures.
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
These topics describe how to use the mcmu command and how to display mcmu help.
To perform mcmu commands, you must log into the mcmu CLI with a valid MCMU account such
as the mcinstall user account. See “Log in to the MCMU CLI” on page 31.
where:
This example creates a DB instance using the tenant subcommand with -I (instance) and -c
(create) options.
% mcmu tenant -I -c
For the latest CLI information, additional details, and valid options, use the mcmu help option.
See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu Help
for a Specific Subcommand (CLI)” on page 205.
Use this procedure to display the mcmu CLI syntax for all the mcmu subcommands and options.
2. Type:
% mcmu -h
Usage: mcmu [Sub-Command][Sub-command options]
Sub-Commands:
/var/opt/oracle.minicluster/bin/mcmu [setupmc|patch|tenant|status|start|stop|
compliance|sshkey|user|readiness|mctuner|asr|security|diskutil]
MCMU Options:
-h, --help Show supported options
-V, --version Print version string
.
<output omitted>
.
% mcmu mctuner -h
Usage: mcmu mctuner < -h | -S | -P <options> >
Options:
-h, --help show this help message and exit
-S, --status show mctuner status in all zones
-P, --property set mctuner property in one zone
For example:
Options:
-h, --help show this help message and exit
-k NODENUM, --kernelzone=NODENUM Show kernel zone status, specified by node number (node1
or node2)
-n ZONENAME, --zonename=ZONENAME Show tenant zone status, specified by zone name
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
Note - For information about displaying the status of VMs and zones, see “Obtaining Status
(CLI)” on page 223.
These topics describe how to display information about the MCMU version, VM group profiles,
and VMs.
Description Links
Determine the version of the MCMU software. “List the MCMU Version (CLI)” on page 208
List information about DB VMs. “List a Summary of All DB VM Groups (CLI)” on page 208
List VM IP addresses and hostnames. “List the IP and Hostname Entries for DNS (CLI)” on page 222
2. Type:
% mcmu -V
Oracle MiniCluster Configuration Utility
MCMU v1.3.0
This procedure also lists DB VMgroupIDs, which are required to perform other CLI commands.
2. Type:
% mcmu tenant -G -l
Listing DB VM Group...
Status : Active
Description :
VMgroupName : dbzg2
editable : True
deletable : True
progress : False
VMgroupID : 1
This procedure also lists DB VMgroupIDs, which are required to perform other CLI commands.
2. Type:
% mcmu tenant -P -l
Examples:
■ This is an example of a system that does not have any DB VM group profiles configured:
% mcmu tenant -P -l
Listing DB VM Group Profile..
No VM Group Profiles available yet
■ This is an example of a system with one DB VM group profile:
% mcmu tenant -P -l
Listing DB VM Group Profile..
Status : Active
EditStatus :
Description : Initial DB VM Group
- NORMAL redundancy
- Shared Storage
- CIS
deletable : True
progress : False
VMgroupName : dbgp1
editable : True
VMgroupID : 1
where VMgroupID is the ID of the DB VM group profile. To determine the VMgroupID, see
“List a Summary of a DB VM Group Profile (CLI)” on page 208.
% mcmu tenant -P -L 1
Getting DB VM Group Profile...
GRID DEFINITION
Status : Active
inventoryLocation : /u01/app/oraInventory
gridLocation : /u01/app/12.1.0.2/grid
redoDiskGroup : HIGH
dataDiskGroup : NORMAL
recoDiskGroup : NORMAL
SCAN_name : dbgp1-scan
SCAN_ip : 192.0.2.4,192.0.2.5,192.0.2.6
STORAGE DEFINITION
redundancy : NORMAL
numberOfDisks : None
storageArrays :
DB VM GROUP DEFINITION
status : Active
VMGroupDesc : Initial DB VM Group
- NORMAL redundancy
- Shared Storage
- CIS
VMGroupType : database
VMGroupName : dbgrp1
operationType : DBZoneGroup_MapIP
VMGroupID : 1
globalName : mc3-n1,mc3-n2
compliance benchmark : No
shared storage : Yes
DB VM DEFINITIONS
VM 1
status : Active
id : 1
name : dbgp1-vm1-mc3-n1
globalName : mc3-n1
cores : 0
DNSServers : 192.0.2.7,192.0.2.8
memory : 522496
virtualNetworks
private_ip : 192.0.2.10
private_hostname : mc3-n1vm1-z1-priv
private_mask : 24
public_ip : 192.0.2.11
public_hostname : mc3-n1vm1-z1
public_mask : 20
public_gateway : 192.0.2.1
virtual_ip : 192.0.2.13
virtual_hostname : mc3-n1vm1-z1-vip
VM 2
status : Active
id : 2
name : dbgp1-vm1-mc3-n2
globalName : mc3-n2
cores : 3
DNSServers : 192.0.2.7,192.0.2.8
memory : 522496
virtualNetworks
private_ip :192.0.2.14
private_hostname : mc3-n2vm1-z1-priv
private_mask : 24
public_ip : 192.0.2.15
public_hostname : mc3-n2vm1-z1
public_mask : 20
public_gateway : 192.0.2.1
virtual_ip : 192.0.2.15
virtual_hostname : mc3-n2vm1-z1-vip
VM 3
status : Active
id : 3
name : dbgp1-vm2-mc3-n1
globalName : mc3-n1
cores : 0
DNSServers : 192.0.2.7,192.0.2.8
memory : 522496
virtualNetworks
private_ip : 192.0.2.16
private_hostname : mc3-n1vm1-z2-priv
private_mask : 24
public_ip : xx.xxx.xxx..198
public_hostname : mc3-n1vm1-z2
public_mask : 20
public_gateway : 192.0.2.1
virtual_ip : 192.0.2.17
virtual_hostname : mc3-n1vm1-z2-vip
VM 4
status : Active
id : 4
name : dbgp1-vm2-mc3-n2
globalName : mc3-n2
cores : 2
DNSServers : 192.0.2.7,192.0.2.8
memory : 522496
virtualNetworks
private_ip : 192.0.2.18
private_hostname : mc3-n2vm1-z2-priv
private_mask : 24
public_ip : 192.0.2.19
public_hostname : mc3-n2vm1-z2
public_mask : 20
public_gateway : 192.0.2.1
virtual_ip : 192.0.2.20
virtual_hostname : mc3-n2vm1-z2-vip
where VMgroupID is the ID of the DB VM group profile. To determine the VMgroupID, see
“List a Summary of All DB VM Groups (CLI)” on page 208.
For example:
% mcmu tenant -G -L 1
Getting DB VM Group Profile...
GRID DEFINITION
Status : Active
inventoryLocation : /u01/app/oraInventory
gridLocation : /u01/app/12.1.0.2/grid
redoDiskGroup : HIGH
dataDiskGroup : NORMAL
recoDiskGroup : NORMAL
SCAN_name : dbgp1-scan
SCAN_ip : 192.0.2.2,192.0.2.3,192.0.2.4
STORAGE DEFINITION
redundancy : NORMAL
numberOfDisks : None
storageArrays :
DB VM GROUP DEFINITION
status : Active
VMGroupDesc : DB MVM Group 1 - NORMAL - SHARED - CIS
VMGroupType : database
VMGroupName : dbgp1
operationType : DBZoneGroup_MapIP
VMGroupID : 1
globalName : mc3-n1,mc3-n2
compliance benchmark : No
shared storage : Yes
DB VM DEFINITIONS
VM 1
status : Active
id : 1
name : dbgp1-vm1-mc3-n1
globalName : mc3-n1
cores : 4
DNSServers : 192.0.2.7,192.0.2.8
memory : 522496
virtualNetworks
private_ip : 192.0.2.6
private_hostname : mc3-n1vm1-z1-priv
private_mask : 24
public_ip : 192.0.2.9
public_hostname : mc3-n1vm1-z1
public_mask : 20
public_gateway : 192.0.2.1
virtual_ip : 192.0.2.10
virtual_hostname : mc3-n1vm1-z1-vip
VM 2
status : Active
id : 2
name : dbgp1-vm2-mc3-n1
globalName : mc3-n1
cores : 3
DNSServers : 192.0.2.7,192.0.2.8
memory : 522496
virtualNetworks
private_ip : 192.0.2.11
private_hostname : mc3-n1vm1-z2-priv
private_mask : 24
public_ip : 192.0.2.12
public_hostname : mc3-n1vm1-z2
public_mask : 20
public_gateway : 192.0.2.1
virtual_ip : 192.0.2.13
virtual_hostname : mc3-n1vm1-z2-vip
VM 3
status : Active
id : 3
name : dbgp1-vm1-mc3-n2
globalName : mc3-n2
cores : 0
DNSServers : 192.0.2.7,192.0.2.8
memory : 522496
virtualNetworks
private_ip : 192.0.2.14
private_hostname : mc3-n2vm1-z1-priv
private_mask : 24
public_ip : 192.0.2.15
public_hostname : mc3-n2vm1-z1
public_mask : 20
public_gateway : 192.0.2.1
virtual_ip : 192.0.2.16
virtual_hostname : mc3-n2vm1-z1-vip
VM 4
status : Active
id : 4
name : dbgp1-vm2-mc3-n2
globalName : mc3-n2
cores : 0
DNSServers : 192.0.2.7,192.0.2.8
memory : 522496
virtualNetworks
private_ip : 192.0.2.17
private_hostname : mc3-n2vm1-z2-priv
private_mask : 24
public_ip : 192.0.2.18
public_hostname : mc3-n2vm1-z2
public_mask : 20
public_gateway : 192.0.2.1
virtual_ip : 192.0.2.19
virtual_hostname : mc3-n2vm1-z2-vip
where VMgroupID is the DB VM group ID. To determine the VMgroupID, see “List a Summary
of a DB VM Group Profile (CLI)” on page 208.
In this example, the home_ID is listed in the left column (ID: 1, ID: 9, ID: 2, and so on).
% mcmu tenant -H -l 1
LIST OF DB HOMES IN DB VM GROUP 1
where home_ID is the ID of the DB home. To determine the home_ID, see “List All DB Homes
in a Group (CLI)” on page 214.
For example:
% mcmu tenant -H -L 2
DB HOME INFORMATION
ID: 2
VM_ID: 2
VMGROUP_ID: 1
DB_HOME: /u01/app/oracle/product/12.1.0/db_12c
VERSION: 12.1.0.2
TYPE: RAC
PATCH: 12.1.0.2.160419
STATUS: Active
where VMgroupID is DB VM group ID. To determine the VMgroupID, see “List a Summary of
a DB VM Group Profile (CLI)” on page 208.
In this example, the instance_ID is listed in the left column (ID: 3, ID: 4, ID: 7, and so on).
% mcmu tenant -I -l 1
LIST OF DB INSTANCES IN DB VM GROUP 1
where instance_ID is the ID of the instance. To determine the instance_ID, see “List All DB
Instances in a Group (CLI)” on page 216.
For example, to list details on a DB instance with an ID of 3, type:
% mcmu tenant -I -L 3
Setting ssh timeout before carrying out further operations. Please wait..
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
tenant_cli_100316_155137.log
This procedure also lists App VMgroupIDs, which are required to perform other CLI
commands.
2. Type:
% mcmu tenant -A -l
For example:
% mcmu tenant -A -l
Listing APP VM Group...
Status : Active
EditStatus :
Description : Drama App VM Group
- shared
- multiple
- CIS
deletable : True
progress : False
VMgroupName : avm1
editable : True
VMgroupID : 2
Status : Active
EditStatus :
Description : Thriller App VM Group - Multiple
- shared
- PCI-DSS
deletable : True
progress : False
VMgroupName : avm2
editable : True
VMgroupID : 3
Status : Active
EditStatus :
Description : Documentary App VM Group
- single
- no shared storage
- pci-dss
deletable : True
progress : False
VMgroupName : avm3
editable : True
VMgroupID : 4
Status : Active
EditStatus :
Description : Sci-Fi App VM Group
- single
- no shared storage
- CIS
deletable : True
progress : False
VMgroupName : avm5
editable : True
VMgroupID : 5
where VMgroupID is the App group profile ID. To determine the VMgroupID, see “List a
Summary of All App VM Group Profiles (CLI)” on page 217.
For example:
% mcmu tenant -A -L 2
Getting APP VM Group...
APP VM DEFINITION
APPVM 1
id : 5
status : Active
name : avm1-vm1-mc3-n1
globalName : mc3-n1
cores : 0
DNSServers : 192.0.2.7,192.0.2.8
memory : 522496
virtualNetworks
private_ip : 192.0.2.2
private_hostname : mc3-n1vm2-az1-priv
private_mask : 24
public_ip : 192.0.2.3
public_hostname : mc3-n1vm2-az1
public_mask : 20
public_gateway : 192.0.2.1
virtual_ip :
virtual_hostname : mc3-n1vm2-az1-vip
APPVM 2
id : 6
status : Active
name : avm1-vm1-mc3-n2
globalName : mc3-n2
cores : 2
DNSServers : 192.0.2.7,192.0.2.8
memory : 522496
virtualNetworks
private_ip : 192.0.2.4
private_hostname : mc3-n2vm2-az1-priv
private_mask : 24
public_ip : 192.0.2.5
public_hostname : mc3-n2vm2-az1
public_mask : 20
public_gateway : 192.0.2.1
virtual_ip :
virtual_hostname : mc3-n2vm2-az1-vip
This procedure also lists App VMgroupIDs, which are required to perform other CLI
commands.
2. Type:
% mcmu tenant -V -l
Listing APP VM Group...
Status : Active
VMgroupName : mc12appzg2
Description : zonegroup description
VMgroupID : 2
where VMgroupID is the App group profile ID. To determine the VMgroupID, see “List a
Summary of All App VM Group Profiles (CLI)” on page 217.
% mcmu tenant -V -L 2
Setting ssh timeout before carrying out further operations. Please wait..
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
tenant_cli_100316_161932.log
EXTERNAL NFS
APP VM DEFINITION
APPVM 1
id : 5
status : Active
name : mc12appzg2n1
globalName : mc12-n1
cores : 3
DNSServers : 192.0.2.7,192.0.2.8
memory : 522496
virtualNetworks
private_ip : 192.0.2.2
private_hostname : mc12appzg2n1-pub-priv
private_mask : 24
public_ip : 192.0.2.3
public_hostname : mc12appzg2n1-pub
public_mask : 22
public_gateway : 192.0.2.1
virtual_ip :
virtual_hostname : mc12appzg2n1-pub-vip
APPVM 2
id : 6
status : Active
name : mc12appzg2n2
globalName : mc12-n2
cores : 3
DNSServers : <valid_IP_addr>,<valid_IP_addr>,<valid_IP_addr>
memory : 522496
virtualNetworks
private_ip : 192.0.2.4
private_hostname : mc12appzg2n2-pub-priv
private_mask : 24
public_ip : 192.0.2.5
public_hostname : mc12appzg2n2-pub
public_mask : 22
public_gateway : 192.0.2.1
virtual_ip :
virtual_hostname : mc12appzg2n2-pub-vip
Use this procedure to see a list of hostnames and IP addresses that should be mapped in DNS.
2. Type:
% mcmu tenant -M -n
IP | HOSTNAME
-------------+-----------------------------
192.0.2.2 | mc12dbzg1-zone-3-mc12-n1
192.0.2.3 | mc12dbzg1-zone-3-mc12-n1-vip
192.0.2.4 | mc12dbzg1-zone-3-mc12-n2
192.0.2.5 | mc12dbzg1-zone-3-mc12-n2-vip
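You can spot-check the mappings from the global zone or from any host that uses the same DNS servers. This is a minimal sketch, assuming the hostnames from the example output above; getent is a standard Oracle Solaris command.
% getent hosts mc12dbzg1-zone-3-mc12-n1
% getent hosts mc12dbzg1-zone-3-mc12-n1-vip
Each command should return the IP address shown by mcmu tenant -M -n. If a name returns nothing, add the mapping to your DNS server.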
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
These topics describe how to view various aspects of the system status:
■ “Show the Status of Zones and DB VMs (CLI)” on page 223
■ “Show the Kernel Zone GI Status (CLI)” on page 224
■ “Show the GI Status of a DB VM (CLI)” on page 226
■ “Show Kernel Zone Status (CLI)” on page 228
■ “Show the VM Status (CLI)” on page 228
■ “Check the Status of the GI on the Kernel Zone (CLI)” on page 228
■ “Run orachk Health Checks (CLI)” on page 230
Note - For mcmu commands that list information about zones and VMs, see “Listing Version,
Group, and VM Details (CLI)” on page 207.
2. Type:
% mcmu status -Z -a
% mcmu status -Z -a
[INFO ] Zone status on node1
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 acfskz running - solaris-kz excl
7 dbgp1-vm1-mc3-n1 running /mcpool/dbgp1-vm1-mc3-n1zroot solaris excl
8 dbgp1-vm2-mc3-n1 running /mcpool/dbgp1-vm2-mc3-n1zroot solaris excl
- appzonetemplate installed /mcpool/appzonetemplate solaris excl
- dbzonetemplate installed /mcpool/dbzonetemplate solaris excl
2. Type:
% mcmu status -G -k
--------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------
Local Resources
--------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE mc2ss01 STABLE
ONLINE ONLINE mc2ss02 STABLE
ora.OCRVOTE.dg
ONLINE ONLINE mc2ss01 STABLE
ONLINE ONLINE mc2ss02 STABLE
ora.SHARED.COMMONVOL.advm
ONLINE ONLINE mc2ss01 STABLE
where VMgroupname is the name of the DB VM group. To determine the VMgroupname, see
“List a Summary of All DB VM Groups (CLI)” on page 208.
For example:
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE dbzg2-zg2zone-1-mc2-n1 STABLE
ONLINE ONLINE dbzg2-zg2zone-1-mc2-n2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE dbzg2-zg2zone-1-mc2-n1 STABLE
ONLINE ONLINE dbzg2-zg2zone-1-mc2-n2 STABLE
ora.RECO.dg
ONLINE ONLINE dbzg2-zg2zone-1-mc2-n1 STABLE
where x is either 1 or 2.
For example:
% mcmu status -Z -k node1
[INFO ] Log file path :
/var/opt/oracle.minicluster/setup/logs/mcmu_050616_112555.log
ID NAME STATUS PATH BRAND IP
2 acfskz running - solaris-kz excl
where VMname is the name of the VM. To determine the name of a DB VM, see “List Details
of a DB VM Group Profile (CLI)” on page 209. For an App VM, see “List Details of an App
Group Profile (CLI)” on page 218.
For example:
% mcmu status -Z -n dbgp1-vm1-mc3-n1
ID NAME STATUS PATH BRAND IP
7 dbgp1-vm1-mc3-n1 running /mcpool/dbgp1-vm1-mc3-n1zroot solaris excl
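If you want the status of several VMs in one pass, you can wrap the same command in a small shell loop. This is a sketch, assuming a bash or ksh shell in the global zone and the example VM names used earlier in this guide.
% for vm in dbgp1-vm1-mc3-n1 dbgp1-vm2-mc3-n1; do
>   mcmu status -Z -n $vm
> done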
2. Type:
% mcmu status -G -k
INFO:MCMU.controllers.common.pexpect_util:su to user root successfully.
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE mc3ss01 STABLE
ONLINE ONLINE mc3ss02 STABLE
ora.OCRVOTE.dg
ONLINE ONLINE mc3ss01 STABLE
ONLINE ONLINE mc3ss02 STABLE
ora.SHARED.COMMONVOL.advm
ONLINE ONLINE mc3ss01 STABLE
ONLINE ONLINE mc3ss02 Volume device /dev/a
sm/commonvol-377 is
online,STABLE
ora.SHARED.SSVOL.advm
ONLINE ONLINE mc3ss01 STABLE
ONLINE ONLINE mc3ss02 Volume device /dev/a
sm/ssvol-377 is onli
ne,STABLE
ora.SHARED.dg
ONLINE ONLINE mc3ss01 STABLE
ONLINE ONLINE mc3ss02 STABLE
ora.asm
ONLINE ONLINE mc3ss01 Started,STABLE
ONLINE ONLINE mc3ss02 Started,STABLE
ora.net1.network
ONLINE ONLINE mc3ss01 STABLE
ONLINE ONLINE mc3ss02 STABLE
ora.ons
ONLINE ONLINE mc3ss01 STABLE
ONLINE ONLINE mc3ss02 STABLE
ora.shared.commonvol.acfs
ONLINE ONLINE mc3ss01 mounted on /commonfs
,STABLE
ONLINE ONLINE mc3ss02 mounted on /commonfs
,STABLE
ora.shared.ssvol.acfs
ONLINE ONLINE mc3ss01 mounted on /sharedst
ore,STABLE
ONLINE ONLINE mc3ss02 mounted on /sharedst
ore,STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE mc3ss02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE mc3ss01 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE mc3ss01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE mc3ss01 xxx.xxx.xxx.144 192.
xxx.xx.250,STABLE
ora.commonfs.export
1 ONLINE ONLINE mc3ss01 STABLE
ora.cvu
1 ONLINE ONLINE mc3ss01 STABLE
ora.mc3ss01.vip
1 ONLINE ONLINE mc3ss01 STABLE
ora.mc3ss02.vip
1 ONLINE ONLINE mc3ss02 STABLE
ora.mgmtdb
1 ONLINE ONLINE mc3ss01 Open,STABLE
ora.oc4j
1 ONLINE ONLINE mc3ss01 STABLE
ora.omcss.havip
1 ONLINE ONLINE mc3ss01 STABLE
ora.scan1.vip
1 ONLINE ONLINE mc3ss02 STABLE
ora.scan2.vip
1 ONLINE ONLINE mc3ss01 STABLE
ora.scan3.vip
1 ONLINE ONLINE mc3ss01 STABLE
ora.sharedstore.export
1 ONLINE ONLINE mc3ss01 STABLE
--------------------------------------------------------------------------------
Before you can run ORAchk, you must download it and install it in the database VMs.
For more information about ORAchk, refer to “ORAchk Overview” on page 18.
3. Run orachk.
root@mc1dbzg1-mc1zg1zone1:~# ./orachk
CRS stack is running and CRS_HOME is not set. Do you want to set
CRS_HOME to /u01/app/12.1.0.2/grid?[y/n][y]y
Checking for prompts on mc1dbzg1-mc1zg1zone1 for oracle user...
Checking ssh user equivalency settings on all nodes in cluster
Node mc1dbzg1-mc1zg1zone2 is not configured for ssh user equivalency and the script uses
ssh to execute checks on remote nodes.
Without this facility the script cannot run audit checks on the remote nodes.
If necessary due to security policies the script can be run individually on each node.
Do you want to configure SSH for user root on mc1dbzg1-mc1zg1zone2 [y/n][y]y
Enter root password on mc1dbzg1-mc1zg1zone2 :-
Verifying root password.
. .
Checking for prompts for oracle user on all nodes...
=============================================================
Node name - mc1dbzg1-mc1zg1zone1
=============================================================
Collecting - ASM Disk Groups
Collecting - ASM Disk I/O stats
Collecting - ASM Diskgroup Attributes
Collecting - ASM disk partnership imbalance
Collecting - ASM diskgroup attributes
Collecting - ASM diskgroup usable free space .
.
<output omitted>
.
Detailed report (html) -
/root/orachk_mc1dbzg1-mc1zg1zone1_rac12c1_061716_150741/orachk_mc1dbzg1-
mc1zg1zone1_rac12c1_061716_150741.html
UPLOAD(if required) - /root/orachk_mc1dbzg1-mc1zg1zone1_rac12c1_061716_150741.zip
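To review the HTML report in a browser on another host, you can copy the report directory out of the VM with scp. This is a sketch, assuming the report path from the example above; the user and destination host are placeholders that you replace with values from your environment.
# scp -r /root/orachk_mc1dbzg1-mc1zg1zone1_rac12c1_061716_150741 myuser@myworkstation:/var/tmp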
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
Use the MCMU CLI to start and stop individual VM and zone components.
Typically, the system is started and stopped using Oracle ILOM, which provides a lights-out
method for controlling the system. For Oracle ILOM starting instructions, see “Starting and
Stopping the System” on page 57. However, there can be situations where you need to start or
stop individual MiniCluster components such as the kernel zones.
Note - These topics assume that power is applied to the system, but the particular component
you plan to start is stopped.
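Before you start or stop a component, it can be useful to confirm its current state with the status commands described earlier in this guide. This is a sketch that reuses those commands; the VM name is the example name used elsewhere in this guide.
% mcmu status -Z -a
% mcmu status -Z -n dbgp1-vm1-mc3-n1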
where x is 1 or 2.
For example:
where VMgroupname is the name of the VM group. To determine the name, see “List a
Summary of All DB VM Groups (CLI)” on page 208.
For example:
where VMname is the name of the VM. To determine the name of a DB VM, see “List Details
of a DB VM Group Profile (CLI)” on page 209. For an App VM, see “List Details of an App
Group Profile (CLI)” on page 218.
For example:
where VMgroupname is the name of the DB VM group. To determine the group name, see “List
a Summary of All DB VM Groups (CLI)” on page 208.
For example:
Caution - To properly shut down the system, follow the instructions in “Shut Down, Reset, or
Power Cycle the System” on page 58. If the system is not properly shut down, data corruption
can occur.
where VMgroupname is the name of the DB VM group. To determine the name, see “List a
Summary of All DB VM Groups (CLI)” on page 208.
For example:
where x is 1 or 2.
For example:
where VMgroupname is the name of the VM group. To determine the name, see “List a
Summary of All DB VM Groups (CLI)” on page 208.
For example:
MCMU stops each VM in the group one by one. You are prompted to confirm the stopping of
each VM in the group.
where VMname is the name of the VM. To determine the name of a DB VM, see “List Details
of a DB VM Group Profile (CLI)” on page 209. For an App VM, see “List Details of an App
Group Profile (CLI)” on page 218.
For example:
Tip - To restart the node, connect to the management console and manually start the node with
the OpenBoot boot command.
where x is 1 or 2.
For example, to stop the kernel zone on each node, type:
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
Before you can create VMs, all of the system setup steps must be complete, and the system
software, drives, and connectivity must be in the expected healthy state. The MCMU CLI
provides a number of commands that enable you to verify various aspects of the system setup.
Note - To install and set up the system, refer to the Oracle MiniCluster S7-2 Installation Guide.
These topics describe how to verify the setup, and run readiness checks through the CLI.
■ “List the System Setup Steps (CLI)” on page 239
■ “(If Needed) Run or Rerun System Setup Steps (CLI)” on page 240
■ “Verify the System Setup (CLI)” on page 241
■ “Verify the System, Topology, and Disk Readiness (CLI)” on page 242
■ “Ensure IP Addresses are Available in MCMU for Future VMs” on page 245
2. Display the list of setup steps and the status of each step.
This example indicates that all the system setup steps have been performed and completed with
a status of OK. The log file of the setup process is also displayed.
% mcmu setupmc -a
[INFO ] Log file path : mc_name-n1:/var/opt/oracle.minicluster/setup/logs/mcmu_082216_160419.log
+-----------------------------------------------------------------------------------------------------+
| STEP | DESCRIPTION | STATUS |
+-----------------------------------------------------------------------------------------------------+
| 1 | Check Package Version and Gather User Input | OK |
| 2 | Prepare for System Install | OK |
| 3 | Interconnect Setup | OK |
| 4 | Configure Explorer | OK |
| 5 | Check System Readiness | OK |
| 6 | Verify Topology | OK |
| 7 | Prepare Network Interfaces | OK |
| 8 | Configure Client Access Network on Node 1 | OK |
| 9 | Configure Client Access Network on Node 2 | OK |
| 10 | Configure NTP Client, Set Password Policy and Setup Apache Web Server | OK |
| 11 | Check Configuration and IP Mappings | OK |
| 12 | Configure ILOM Network | OK |
| 13 | Storage: Create Storage Alias, Reset JBOD(s) and Partition All Disks in All JBOD(s) | OK |
| 14 | Calibrate Disks in All JBOD(s) | OK |
| 15 | Shared Storage Setup: Configure and Secure All Kernel Zones | OK |
| 16 | Shared Storage Setup: Install Oracle Grid Infrastructure 12c in Kernel Zones | OK |
| 17 | Shared Storage Setup: Apply GI PSU | OK |
| 18 | Shared Storage Setup: Configure ACFS and Mount Shared Filesystem in Global Zones | OK |
| 19 | Apply Global Zone Security Settings | OK |
+-----------------------------------------------------------------------------------------------------+
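If you only want to see steps that did not finish cleanly, you can filter the same output with standard shell tools. This is a minimal sketch, assuming a POSIX shell; it hides rows that report an OK status, so only the header, the separators, and any problem steps remain.
% mcmu setupmc -a | grep -v '| OK'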
Use this procedure to run any system setup steps that have not been completed, or that require
rerunning due to a possible problem. To determine the state of the system setup steps, see “List
the System Setup Steps (CLI)” on page 239.
Note - The setup steps are normally run when the system is initially set up at installation time.
Verify the System, Topology, and Disk Readiness (CLI)
2. Type:
■ System readiness – Checks to ensure that the MiniCluster hardware and software are
configured properly and at expected levels. This check is normally performed before the
system is configured, but you can run this feature any time.
■ Topology verification – Checks the I/O card locations and verifies network connectivity.
■ Disk calibration – Checks the health of the drives in the system. This feature performs read
and write tests on a reserved area of each drive. The check is not destructive to data.
% mcmu readiness -a
[INFO ] Log file path : mc3-n1:/var/opt/oracle.minicluster/setup/logs/
mcmu_082216_171559.log
[INFO ] Log file path : mc3-n1:/var/opt/oracle.minicluster/setup/logs/
omc_sysready_combined_v2_082216_171559.log
[INFO ] Checking for System Readiness..
Aug 22 17:16:00 mccn su: 'su root' succeeded for mcinstall on /dev/pts/2
[INFO ] ___________________________REPORT________________________________
[INFO ] Description : Checking if aggrpvt0 aggregated link exists... OK
[INFO ] Description : Each node should be able to ping the other node over private
network.....OK
[INFO ] Description : Both nodes should have identical physical device - vanity name
mapping...OK
[INFO ] Description : Both nodes should have the physical devices on the same
slots...OK
[INFO ] Description : Checking INT and EXT HBA firmware version on mc3-n1.. ...OK
[INFO ] Description : Checking INT and EXT HBA firmware version on mc3-n2.. ...OK
[INFO ] Description : Checking System firmware version on mc3-n1.. ...OK
[INFO ] Description : Checking System firmware version on mc3-n2.. ...OK
Aug 22 17:17:50 mccn su: 'su root' succeeded for mcinstall on /dev/pts/1
[INFO ] Invoked by OS user: root
[INFO ] Find log at: mc3-n1:/var/opt/oracle.minicluster/setup/logs/
omc_verifytopology_082216_171750.log
[INFO ] ---------- Starting Verify Toplogy
[INFO ] Check PCI Layout of Network Cards started.
[INFO ] Check PCI Layout of Network Cards succeeded.
[INFO ] Check PCI Layout of Estes Cards started.
[INFO ] Check PCI Layout of Estes Cards succeeded.
[INFO ] Check JBOD Disk Arrays started.
[INFO ] Check JBOD Disk Arrays succeeded.
.
<output omitted>
.
[INFO ] Log file path : mc3-n1:/var/opt/oracle.minicluster/setup/logs/
omc_diskcalib_v2_082216_171755.log
[INFO ] Calibrating all disks ..
[ HDD ] /dev/chassis/JBODARRAY1/HDD0/disk c0t5000CCA23B0FBDA4d0
[ HDD ] /dev/chassis/JBODARRAY1/HDD1/disk c0t5000CCA23B12B068d0
[ HDD ] /dev/chassis/JBODARRAY1/HDD2/disk c0t5000CCA23B12DA48d0
[ HDD ] /dev/chassis/JBODARRAY1/HDD3/disk c0t5000CCA23B12D4A4d0
[ HDD ] /dev/chassis/JBODARRAY1/HDD4/disk c0t5000CCA23B12C030d0
[ HDD ] /dev/chassis/JBODARRAY1/HDD5/disk c0t5000CCA23B12F358d0
[ SSD ] /dev/chassis/JBODARRAY1/HDD6/disk c0t5000CCA0536CA820d0
[ SSD ] /dev/chassis/JBODARRAY1/HDD7/disk c0t5000CCA0536CA788d0
[ SSD ] /dev/chassis/JBODARRAY1/HDD8/disk c0t5000CCA0536CB3ACd0
[ SSD ] /dev/chassis/JBODARRAY1/HDD9/disk c0t5000CCA0536CA818d0
.
<output omitted>
.
S U M M A R Y R E P O R T
Refer to one of these sections based on the version of MCMU software on your
system:
■ “Managing Networks for v1.2.4 or Later Software (CLI)” on page 291
■ “Managing Networks for v1.2.2 or Earlier Systems (CLI)” on page 295
Configuring DB VMs (CLI)
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
These topics provide CLI procedures for the DB VM groups and their associated components
(VMs, DB home, and DB instances).
■ “Creating DB VMs (CLI)” on page 247
■ “Update a DB VM Group (CLI)” on page 259
■ “Deleting DB VM Group Components (CLI)” on page 270
% mcmu tenant -P -c
Virtual Machine 1
Virtual Machine 2
Node 2 : mc3-n2
Virtual Machine 1
Virtual Machine 2
Define Cluster
Enter SCAN Name : dbgp1-scan
Select GRID Infrastructure Patch Level [12.1.0.2.160419]
(12.1.0.2.160419): 12.1.0.2.160419
MCMU creates the DB VM group profile according to the parameters you supplied.
PROFILE INFORMATION
VMGroupName : dbgp1
IP pool name : example_pool
SCAN_name : dbgp1-scan
SCAN_ip : xx.xxx.73.204,xx.xxx.73.205,xx.xxx.73.206
VM DEFINITIONS
VM 1
name : dbgp1-vm1-mc3-n1
globalName : mc3-n1
public_ip : <valid_VLAN_IP_addr1>
public_hostname : mc3-n1vm1-z1
virtual_ip : <valid_VLAN_IP_addr2>
virtual_hostname : mc3-n1vm1-z1-vip
VM 2
name : dbgp1-vm2-mc3-n1
globalName : mc3-n1
public_ip : <valid_VLAN_IP_addr3>
public_hostname : mc3-n1vm1-z2
virtual_ip : <valid_VLAN_IP_addr4>
virtual_hostname : mc3-n1vm1-z2-vip
VM 3
name : dbgp1-vm1-mc3-n2
globalName : mc3-n2
public_ip : <valid_VLAN_IP_addr5>
public_hostname : mc3-n2vm1-z1
virtual_ip : <valid_VLAN_IP_addr6>
virtual_hostname : mc3-n2vm1-z1-vip
VM 4
name : dbgp1-vm2-mc3-n2
globalName : mc3-n2
public_ip : xx.xxx.73.130
public_hostname : mc3-n2vm1-z2
virtual_ip : 192.0.2.2
virtual_hostname : mc3-n2vm1-z2-vip
Please insert the IP-mappings in the DNS Server if not already done.
3. Enter all VM and SCAN public IP addresses and public hostnames into your
DNS.
Ensure that you complete this step before you deploy the DB VM group.
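A quick way to confirm the DNS entries is to resolve the SCAN name and each public hostname from a host that uses the same DNS servers. This is a sketch, assuming the names from the profile output above; the SCAN name should resolve to all three SCAN addresses.
% nslookup dbgp1-scan
% nslookup mc3-n1vm1-z1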
Status : Active
EditStatus :
Description : Initial DB VM Group
- NORMAL redundancy
- Shared Storage
- CIS
deletable : True
progress : False
VMgroupName : dbgp1
editable : True
VMgroupID : 1
Syntax:
mcmu tenant -G -D VMgroupID
where VMgroupID is the ID of the DB VM group profile that you just created.
Caution - Ensure that you use the uppercase D option for the command. Using the lowercase d
option for this command deletes that VM group.
For example:
% mcmu tenant -G -D 1
Setting ssh timeout before carrying out further operations. Please wait..
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
tenant_cli_082316_040823.log
Aug 23 04:08:23 mccn su: 'su root' succeeded for mcinstall on /dev/pts/2
Deploying DB VM Group...
[23/Aug/2016 04:08:28] INFO [dbzonegroup_install:122] Added zonegroup to action data.
updated message, old: Initializing with Insert IP Mapping
[23/Aug/2016 04:08:28] INFO [dbzonegroup_install:1467] Add zonegroup and operation type
to action.
.
<output omitted>
.
updated message, old: Finish adding zonegroup information to database. with GI Post
Installation Finished.
[23/Aug/2016 05:23:22] INFO [dbzonegroup_install:93] Method: do performed
[23/Aug/2016 05:23:22] INFO [dbzonegroup_install:132] Add Node to GRID Cluster ends...
updated message, old: GI Post Installation Finished. with Add Node to GRID Cluster
ends...
[23/Aug/2016 05:23:22] INFO [dbzonegroup_install:98] Action Ends at: 2016-08-23 12:23:22
[23/Aug/2016 05:23:22] INFO [dbzonegroup_install:100] Elapsed Time: 1277.46536207 (secs)
[23/Aug/2016 05:23:22] INFO [dbzonegroup_install:102] Performing method: do finished
Status: 0
Message: Deploying DB VM Group Profile succeed
% mcmu tenant -H -c
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
tenant_cli_082316_184339.log
2. (If desired) Repeat Step 1 to install another version of the Oracle Database in the
DB VMs.
For example, if you originally installed Oracle Database 12c in /u01/app/oracle/
product/12.1.0/dbhome_12c, you can then install Oracle Database 11g in another home, such
as /u01/app/oracle/product/11.2.0/dbhome_11g.
Caution - Wait until you see the message Database home installation succeeded before you
repeat Step 1. Do not repeat Step 1 to install another version of the Oracle Database in the DB
VMs until the process completes for the previous installation.
4. Create DB instances.
Go to “Create DB Instances (CLI)” on page 255.
Create at least one instance in each DB VM. You can create multiple DB instances for each
DB Home. The total number of instances you can create is limited by the amount of disk space
available.
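Because the instance count is bounded by disk space, it can help to check the free space in a DB VM before creating another instance. This is a minimal sketch, assuming you are logged in to the DB VM; /u01 is the file system that holds the DB homes in the examples in this guide.
% df -h /u01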
1. Create a DB instance.
% mcmu tenant -I -c
Status : Active
EditStatus :
Description : DB MVM Group 1 - NORMAL - SHARED - CIS
deletable : True
progress : False
VMgroupName : dbgp1
editable : True
VMgroupID : 1
status : Active
name : dbgp1-vm1-mc3-n1
globalName : mc3-n1
id : 1
memory : 522496
cores : 4
status : Active
name : dbgp1-vm2-mc3-n1
globalName : mc3-n1
id : 2
memory : 522496
cores : 3
status : Active
name : dbgp1-vm1-mc3-n2
globalName : mc3-n2
id : 3
memory : 522496
cores : 0
status : Active
name : dbgp1-vm2-mc3-n2
globalName : mc3-n2
id : 4
memory : 522496
cores : 0
status : Active
VM_id : 1
version : 12.1.0.2
home : /u01/app/oracle/product/12.1.0/dbhome_12c
type : RAC
id : 1
status : Active
VM_id : 1
version : 11.2.0.4
home : /u01/app/oracle/product/11.2.0/dbhome_11g
type : RAC
id : 5
status : Active
name : dbgp1-vm1-mc3-n2
globalName : mc3-n2
id : 3
memory : 522496
cores : 0
status : Active
name : dbgp1-vm2-mc3-n2
globalName : mc3-n2
id : 4
memory : 522496
cores : 0
% mcmu tenant -I -l 1
Setting ssh timeout before carrying out further operations. Please wait..
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
tenant_cli_082416_162942.log
% mcmu status -Z -a
[INFO ] Log file path : mc3-n1:/var/opt/oracle.minicluster/setup/logs/
mcmu_082416_170213.log
[INFO ] Zone status on node1
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 acfskz running - solaris-kz excl
7 dbgp1-vm1-mc3-n1 running /mcpool/dbgp1-vm1-mc3-n1zroot solaris excl
8 dbgp1-vm2-mc3-n1 running /mcpool/dbgp1-vm2-mc3-n1zroot solaris excl
11 avm1-vm1-mc3-n1 running /mcpool/avm1-vm1-mc3-n1zroot solaris excl
14 avm2-vm1-mc3-n1 running /mcpool/avm2-vm1-mc3-n1zroot solaris excl
17 avm4-vm1-mc3-n1 running /mcpool/avm4-vm1-mc3-n1zroot solaris excl
20 avm5-vm1-mc3-n1 running /mcpool/avm5-vm1-mc3-n1zroot solaris excl
- appzonetemplate installed /mcpool/appzonetemplate solaris excl
- dbzonetemplate installed /mcpool/dbzonetemplate solaris excl
[INFO ] Zone status on node2
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 acfskz running - solaris-kz excl
7 dbgp1-vm1-mc3-n2 running /mcpool/dbgp1-vm1-mc3-n2zroot solaris excl
8 dbgp1-vm2-mc3-n2 running /mcpool/dbgp1-vm2-mc3-n2zroot solaris excl
11 avm1-vm1-mc3-n2 running /mcpool/avm1-vm1-mc3-n2zroot solaris excl
14 avm2-vm1-mc3-n2 running /mcpool/avm2-vm1-mc3-n2zroot solaris excl
17 avm6-vm1-mc3-n2 running /mcpool/avm6-vm1-mc3-n2zroot solaris excl
20 avm7-vm1-mc3-n2 running /mcpool/avm7-vm1-mc3-n2zroot solaris excl
- appzonetemplate installed /mcpool/appzonetemplate solaris excl
- dbzonetemplate installed /mcpool/dbzonetemplate solaris excl
Note - When this command completes, the updates are saved, but not applied.
% mcmu tenant -P -u
Listing DB VM Group Profile..
Status : Active
EditStatus :
Description : DBVM Group 1 - NORMAL - SHARED - CIS
deletable : True
progress : False
VMgroupName : dbgp1
editable : True
VMgroupID : 1
Enter u01 size (in GB, 100 to max 2182) (165): 200
Node 1 : mc3-n1
Virtual Machine 1
Virtual Machine 2
Virtual Machine 3
Node 2: mc3-n2
Virtual Machine 1
Virtual Machine 2
Virtual Machine 3
Cluster Information
PROFILE INFORMATION
VMGroupName : dbgp1
SCAN_name : dbgp1-scan
SCAN_ip : 192.0.2.10,192.0.2.11,192.0.2.12
VM DEFINITIONS
VM 1
name : dbgp1-vm1-mc3-n1
globalName : mc3-n1
public_ip : xx.xxx.73.113
public_hostname : mc3-n1vm1-z1
virtual_ip : 192.0.2.14
virtual_hostname : mc3-n1vm1-z1-vip
VM 2
name : dbgp1-vm2-mc3-n1
globalName : mc3-n1
public_ip : xx.xxx.73.115
public_hostname : mc3-n1vm1-z2
virtual_ip : 192.0.2.16
virtual_hostname : mc3-n1vm1-z2-vip
VM 3
name : dbgp1-vm1-mc3-n2
globalName : mc3-n2
public_ip : xx.xxx.73.117
public_hostname : mc3-n2vm1-z1
virtual_ip : 192.0.2.18
virtual_hostname : mc3-n2vm1-z1-vip
VM 4
name : dbgp1-vm2-mc3-n2
globalName : mc3-n2
public_ip : xx.xxx.73.119
public_hostname : mc3-n2vm1-z2
virtual_ip : 192.0.2.20
virtual_hostname : mc3-n2vm1-z2-vip
VM 5
name : dbgp1-vm3-mc3-n1
globalName : mc3-n1
public_ip : xx.xxx.73.120
public_hostname : mc3-n1vm1-z3
virtual_ip : 192.0.2.22
virtual_hostname : mc3-n1vm1-z3-vip
VM 6
name : dbgp1-vm3-mc3-n2
globalName : mc3-n2
public_ip : xx.xxx.73.121
public_hostname : mc3-n2vm1-z3
virtual_ip : 192.0.2.24
virtual_hostname : mc3-n2vm1-z3-vip
Please insert the IP-mappings in the DNS Server if not already done.
Aug 24 17:17:29 mccn su: 'su root' succeeded for mcinstall on /dev/pts/2
3. Enter the new public IP addresses and public hostnames into your DNS.
Status : Active
EditStatus : edited
Description : DB MVM Group 1 - NORMAL - SHARED - CIS
deletable : True
progress : False
VMgroupName : dbgp1
editable : True
VMgroupID : 1
Enter ID of the VM Group Profile that you want to edit[1] (1): <Return>
Do you want to "[E]dit & Save" or "[A]pply previously saved changes"?
Enter E/A (E): A
.
<output omitted>
.
INFO:MCMU.controllers.dbzonegroupmanager:Zonegroup is updated with profile changes.
status: 0
message: Updating DB VM Group succeeded.
Getting DB VM Group Profile....
.
<output omitted>
.
% mcmu status -Z -a
[INFO ] Log file path : mc3-n1:/var/opt/oracle.minicluster/setup/logs/
mcmu_082416_180834.log
[INFO ] Zone status on node1
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 acfskz running - solaris-kz excl
7 dbgp1-vm1-mc3-n1 running /mcpool/dbgp1-vm1-mc3-n1zroot solaris excl
8 dbgp1-vm2-mc3-n1 running /mcpool/dbgp1-vm2-mc3-n1zroot solaris excl
11 avm1-vm1-mc3-n1 running /mcpool/avm1-vm1-mc3-n1zroot solaris excl
14 avm2-vm1-mc3-n1 running /mcpool/avm2-vm1-mc3-n1zroot solaris excl
17 avm4-vm1-mc3-n1 running /mcpool/avm4-vm1-mc3-n1zroot solaris excl
20 avm5-vm1-mc3-n1 running /mcpool/avm5-vm1-mc3-n1zroot solaris excl
23 dbgp1-vm3-mc3-n1 running /mcpool/dbgp1-vm3-mc3-n1zroot solaris excl
- appzonetemplate installed /mcpool/appzonetemplate solaris excl
- dbzonetemplate installed /mcpool/dbzonetemplate solaris excl
[INFO ] Zone status on node2
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 acfskz running - solaris-kz excl
% mcmu tenant -I -l 1
Aug 24 18:10:01 mccn su: 'su root' succeeded for mcinstall on /dev/pts/2
% mcmu tenant -I -c
Database Instance Profile Description
Select Database Instance Type [SINGLE/RAC/RACONE] : rac
Select Database Instance Template: Data Warehouse(DW) / Online Transaction
Processing(OLTP) [DW/OLTP] : oltp
List of Character Set
[1] AL32UTF8 [2] AR8ADOS710 [3] AR8ADOS710T
.
<output omitted>
.
[112] ZHT16HKSCS [113] ZHT16MSWIN950 [114] ZHT32EUC
Status : Active
EditStatus :
Description : DB MVM Group 1 - NORMAL - SHARED - CIS
deletable : True
progress : False
VMgroupName : dbgp1
editable : True
VMgroupID : 1
status : Active
name : dbgp1-vm1-mc3-n1
globalName : mc3-n1
id : 1
memory : 522496
cores : 4
status : Active
name : dbgp1-vm2-mc3-n1
globalName : mc3-n1
id : 2
memory : 522496
cores : 3
status : Active
name : dbgp1-vm1-mc3-n2
globalName : mc3-n2
id : 3
memory : 522496
cores : 0
status : Active
name : dbgp1-vm2-mc3-n2
globalName : mc3-n2
id : 4
memory : 522496
cores : 0
status : Active
name : dbgp1-vm3-mc3-n1 <<=== NEW VM
globalName : mc3-n1
id : 13
memory : 522496
cores : 0
status : Active
name : dbgp1-vm3-mc3-n2 <<=== NEW VM
globalName : mc3-n2
id : 14
memory : 522496
cores : 0
status : Active
VM_id : 13
version : 12.1.0.2
home : /u01/app/oracle/product/12.1.0/dbhome_12c
type : RAC
id : 9
status : Active
VM_id : 13
version : 11.2.0.4
home : /u01/app/oracle/product/11.2.0/dbhome_11g
type : RAC
id : 11
status : Active
name : dbgp1-vm1-mc3-n2
globalName : mc3-n2
id : 3
memory : 522496
cores : 0
status : Active
name : dbgp1-vm2-mc3-n2
globalName : mc3-n2
id : 4
memory : 522496
cores : 0
status : Active
name : dbgp1-vm3-mc3-n2
globalName : mc3-n2
id : 14
memory : 522496
cores : 0
7. List the DB instances to verify the presence and status of the new DB VM
instances.
% mcmu tenant -I -l 1
Aug 24 18:43:12 mccn su: 'su root' succeeded for mcinstall on /dev/pts/2
where VMgroupID is the ID of the DB VM group profile that you want to delete.
For example, to delete a DB VM group profile with an ID of 1:
% mcmu tenant -P -d 1
Setting ssh timeout before carrying out further operations. Please wait..
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
tenant_cli_082316_034336.log
% mcmu tenant -P -l
Setting ssh timeout before carrying out further operations. Please wait..
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
tenant_cli_082316_034511.log
Caution - Deleting a DB VM group deletes all the VMs, applications, and data associated with
the VM group. The deletion cannot be undone. Proceed with caution.
Delete a DB VM group.
To delete a RAC or RAC One Node instance for Oracle Database 12.2 and 18.3, you must
provide the SYS user password.
Delete a DB instance.
% mcmu tenant --dbinstance -d home_ID
where home_ID is the ID of the DB home that is associated with the DB instance that you want
to delete.
For example, to delete a DB instance that is associated with a DB home with an ID of 3:
% mcmu tenant --dbinstance -d 3
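To confirm that the instance was removed, you can list the DB instances in the group again, as shown earlier. This sketch assumes a VMgroupID of 1, matching the examples in this guide.
% mcmu tenant -I -l 1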
Delete a DB VM (CLI)
Use this procedure to delete DB VMs using the CLI.
% mcmu tenant -P -u
Virtual Machines Information
Node 1 : mc13-n1
Virtual Machine 1
% mcmu tenant -P -u
Listing DB VM Group Profile..
<output omitted>
Do you want to "[E]dit & Save" or "[A]pply previously saved
changes"?
Enter E/A (E): A
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
These topics provide CLI procedures for configuring App VM groups and VMs.
Description Links
(Optional) Add IP addresses to the system for future VMs. “Managing Networks (CLI)” on page 291
Create App VMs. “Create an APP VM Group (CLI)” on page 275
The process of creating App VMs involves creating an App VM group. Each group can contain
one or two App VMs. Once the groups are created, you deploy the groups, which makes the
App VMs available for use.
For information about planning for VMs, see “Planning to Create VMs” on page 77. For
details about the information you provide when creating App VMs, see “App VM Group
Parameters” on page 94.
% mcmu tenant -A -c
For example:
% mcmu tenant -A -c
Application Virtual Machine Group Profile Description
Enter Virtual Machine Group Profile Name : avm1
Enter Description : Drama App VM Group
Enter Type [Single,Multiple] (Multiple): multiple
Shared Storage [Yes,No] (No): yes
CIS Equivalent Security Settings are default. Do you want to enable PCI DSS Security
Settings [Yes,No] (No)? yes
mc3-n1
Virtual Machine 1
mc3-n2
Virtual Machine 1
.<output omitted>
.
Successfully Created Application VM Group Profile
Getting APP VM Group...
PROFILE INFORMATION
VMGroupName : avm1
IP pool name : example_pool
VM DEFINITIONS
VM 1
name : avm1-vm1-mc3-n1
globalName : mc3-n1
public_ip : <valid_VLAN_IP_addr1>
public_hostname : mc3-n1vm2-az1
VM 2
name : avm1-vm1-mc3-n2
globalName : mc3-n2
public_ip : <valid_VLAN_IP_addr2>
public_hostname : mc3-n2vm2-az1
Please insert the IP-mappings in the DNS Server if not already done.
Aug 23 16:32:12 mccn su: 'su root' succeeded for mcinstall on /dev/pts/2
5. Enter all new App VM public hostnames and public IP addresses into your DNS.
Use this procedure to deploy an App VM group. Once deployed, the VMs are available for
configuration and use.
Caution - Ensure that you use the uppercase D option. Using the lowercase d option deletes that
VM group.
where VMgroupID is the App VM group profile ID that was assigned by mcmu when the group
was created. To determine the VMgroupID, see “List a Summary of All App VM Group Profiles
(CLI)” on page 217.
For example:
% mcmu tenant -V -D 2
Setting ssh timeout before carrying out further operations. Please wait..
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
tenant_cli_082316_164849.log
You can change parameters such as the number of cores assigned to each VM. You can also
mount an NFS. For undeployed VM groups, you can change IP addresses and hostnames.
If you need to transmit binaries or other files that are larger than 130 MB, you can update the
App VM group profile to increase the allowable file size. The maximum file size varies and
is displayed at the appropriate prompt. You cannot decrease the allowable size after you have
increased it. You will need to perform this change on both nodes.
For details about the information you provide when creating App VMs, see “App VM Group
Parameters” on page 94.
% mcmu tenant -A -u
For example:
% mcmu tenant -A -u
Listing APP VM Group...
Status : Active
EditStatus :
Description :
deletable : True
progress : False
VMgroupName : ff18
editable : True
VMgroupID : 2
Enter u01 size (in GB, 100 to max 2182) (165): 200
Node 1 : mc5qt-n1
Enter Cores [0 to max 28] (0):2
public_hostname : ff18-vm1-mc5qt-n1
private_hostname : ff18-vm1-mc5qt-n1-priv
public_ip : xx.xxx.73.131
private_ip : 192.0.2.1
Node 2 : mc5qt-n2
Enter Cores [0 to max 28] (0):2
public_hostname : ff18-vm1-mc5qt-n2
private_hostname : ff18-vm1-mc5qt-n2-priv
public_ip : xx.xxx.73.132
private_ip : 192.0.2.2
Updating APP VM Group Profile...
start to update profile
status: 0
message: Update APP VM Group Profile succeeded.
Getting APP VM Group...
PROFILE INFORMATION
VMGroupName : ff18
VM DEFINITIONS
VM 1
name : ff18-vm1-mc5qt-n1
globalName : mc5qt-n1
public_ip : xx.xxx.73.133
public_hostname : ff18-vm1-mc5qt-n1
VM 2
name : ff18-vm1-mc5qt-n2
globalName : mc5qt-n2
public_ip : xx.xxx.73.134
public_hostname : ff18-vm1-mc5qt-n2
Please insert the IP-mappings in the DNS Server if not already done.
Use this procedure to enable or disable shared storage for the App group. To see the
current state of shared storage, use the MCMU BUI, and see “Enable or Disable NFS
(BUI)” on page 149.
where VMgroupID is the ID of the App VM group that you want to delete. To determine the
VMgroupID, see “List a Summary of All App VM Group Profiles (CLI)” on page 217.
where VMgroupID is the ID of the App VM group that you want to delete. To determine the
VMgroupID, see “List a Summary of All App VM Group Profiles (CLI)” on page 217.
For example, to delete an App VM group with an ID of 2:
% mcmu tenant -A -d 2
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
You can use the MCMU CLI to manage MCMU user accounts. If you use the CLI to create a
user account, the subsequent user approvals must be performed using the CLI.
Note - To manage user accounts using the MCMU BUI, see “Managing MCMU User Accounts
(BUI)” on page 39.
where:
■ username is a unique name for the new user. The name cannot be root or mcadmin. It must
start with an alpha character. The name can contain alpha and numeric characters, and can
include the '.', '-', or '_' characters.
■ email is the email address for the new user.
■ fullname is the first and last name for the new user.
■ phonenumber is the new user's phone number (digits only, no special characters).
■ role is one of these values:
■ primary
■ secondary
■ tenant_admin
■ auditor
For role descriptions, see “User Roles” on page 39.
For example:
% mcmu user -c -u jsmith -e joe.smith@example.com -n Joe Smith -p 8881112222 -r primary
[INFO ] Log file path : mc3-n1:/var/opt/oracle.minicluster/setup/logs/
mcmu_082216_193715.log
[INFO ] User jsmith has been created, please ask the admin and supervisor to run the
command in New User Approval Request email to approve
An email is sent to the primary admin and supervisor accounts. The email contains a secure key
that is required to approve the new user.
Note - The user account is created, but not activated until the primary admin and the supervisor
approve the new user.
Alternatively, the admin and supervisor can copy the approval command from the New User
Approval Request email and paste the command line into mcmu to immediately approve the user.
If that doesn't work, perform this task.
Both the primary admin and the supervisor must approve the new user before the user account
is activated.
To see the status of approvals and rejections, see “List MCMU User Approval and Rejection
Status (CLI)” on page 286.
1. From the primary admin’s or supervisor's email account, obtain the secure key.
Open the email and copy the secure key. The email is sent from mcinstall@company-name.
where:
■ role is the role of the person approving the user. Specify one of these roles:
■ admin
■ supervisor
■ username is the name of the new user who is seeking approval.
■ key Paste the secure key string that was sent to the admin and supervisor as part of the
preliminary approval process.
The jsmith user account still requires the approval of the supervisor before the account is
activated.
When a user is created using the CLI, the MCMU admin and supervisor are sent an email
requesting approval of the user. The admin and supervisor must both approve the new user for
the account to be activated. If either the admin or the supervisor rejects the new user or fails to
approve, the account is not activated. After a new account is rejected, it cannot be approved.
To see the status of approvals and rejections, see “List MCMU User Approval and Rejection
Status (CLI)” on page 286.
1. From the primary admin's or supervisor's email account, obtain the secure key.
When a new user account is created, MCMU emails the primary admin and supervisor an email
that contains a secure key which is needed to approve or reject the user. The email is sent from
mcinstall@company-name.
Open the email and copy the secure key.
where:
■ role is the role of the person rejecting the new user. Specify one of these roles:
■ admin
■ secondary
■ username is the name for the new user that you are rejecting.
■ key is the secure key string that was emailed to the admin and supervisor. Paste the string
into the command line.
Note - Do not use this procedure to view all users because as soon as a user is approved by the
admin and supervisor, the user is removed from the list. To see a list of approved users, use the
MCMU BUI. See “Display MCMU Users (BUI)” on page 43.
% mcmu user -l
[INFO ] Log file path : mc3-n1:/var/opt/oracle.minicluster/setup/logs/
mcmu_082216_194010.log
In this example, the user jsmith is no longer in the list because the jsmith user has been
approved by the admin and supervisor. The user bbaker was approved by the supervisor, but
is waiting for approval from the admin. The user tenadm has been rejected by the admin and
supervisor.
% mcmu user -l
[INFO ] Log file path : mc3-n1:/var/opt/oracle.minicluster/setup/logs/
mcmu_082316_011656.log
Use this procedure to delete a user account. The primary admin and supervisor must approve
the deletion through email sent from MCMU.
where username is the user name of the user that you are deleting from the system.
For example:
Once the deletion request is approved by the primary admin and supervisor, the account is
deleted.
Use this procedure to change an MCMU user's password. The new password is governed by the
password policies. See “MCMU Password Policies” on page 41.
where username is the user name for the user whose password you want to change.
For example:
2. Type:
where username is the user name for the profile you want to change.
The utility prompts you for changes. For parameters you do not want to change, press Return.
For example:
Username: user500
Email address: ray.ray@example.com
Full Name: Raymond Ray
Phone Number: 123456789
Title:
Organization:
Department:
Address:
Type of User: Primary Admin
Supervisor Username: mc-super
Supervisor FullName: Mr Smith
Supervisor email: mr.smith@example.com
Do you want to edit the user information? [yes/no] (no): yes
Please press ENTER to keep current value, or provide new value if you want to update
Enter email address [ray.ray@example.com]:
Enter full name [Raymond Ray]:
Enter phone number [123456789]: 408777888
Enter title []:
Enter organization []:
Enter department []:
Enter address []:
Enter supervisor username [mc-super]:
Enter supervisor full name [Mr Smith]:
Enter supervisor email address [mr.smith@example.com]:
[INFO ] User profile has been successfully updated
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
Use one of these sections based on the version of MiniCluster software running on your system:
An IP pool is a range of IP addresses. Each IP pool is a separate subnet. As of v1.2.4, you can
create multiple IP pools then assign different VM groups to different IP pools. You can also
assign a VLAN ID to an IP pool.
Use this procedure for MiniCluster systems running v1.2.4 or later. To determine your version,
see “List the MCMU Version (CLI)” on page 208.
# mcmu ippool -l
ID: 1
Name: default
Status: assigned
DNS servers: 192.x.x.x, 192.x.x.x
Address: 192.x.x.x
NTP servers: 192.x.x.x
CIDR prefix: 22
Gateway: 192.x.x.x
VLAN ID:
Domain name: example.com
IP range:
Start IP: 192.x.x.x
Size: 52
Use this procedure for MiniCluster systems running v1.2.4 or later. To determine your version,
see “List the MCMU Version (CLI)” on page 208.
3. Edit an IP pool.
Syntax:
mcmu ippool -e POOL_ID
where POOL_ID is the IP Pool ID.
You are prompted to make changes to the current values, which are shown in parentheses. Press
Return to accept the current value, or enter a new value.
# mcmu ippool -e 2
Do you want to edit the above information? [yes/no] (no): yes
Enter IP pool name (new): example_pool
Enter DNS servers, delimited by comma (192.x.x.x, 192.x.x.x): <Return>
Enter address (192.x.x.x): <Return>
Enter NTP servers, delimited by comma (192.0.2.1): <Return>
Enter CIDR prefix (22): <Return>
Enter gateway (192.0.2.1): <Return>
Enter VLAN ID (13): 24
Enter domain name (example.com): <Return>
IP range:
Start IP: 192.0.2.0
Size: 2
Do you want to [E]dit or [D]elete this IP range? Enter E/D (E): d
Do you want to add another IP range? [yes/no] (no): yes
Enter start IP: 192.x.x.x
Enter size: 2
Do you want to add another IP range? [yes/no] (no): yes
Enter start IP: 192.x.x.x
Enter size: 5
Do you want to add another IP range? [yes/no] (no): <Return>
[INFO ] IP pool has been updated successfully
Use this procedure for MiniCluster systems running v1.2.4 or later. To determine your version,
see “List the MCMU Version (CLI)” on page 208.
Add additional IP pools with the required network parameters before creating VM groups.
2. Create an IP pool.
Example:
# mcmu ippool -c
Enter IP pool name: app_pool
Enter DNS servers, delimited by comma: 192.x.x.x, 192.x.x.x
Enter address: 192.x.x.x
Enter NTP servers, delimited by comma: 192.x.x.x
Enter CIDR prefix: 22
Enter gateway: 192.x.x.x
The new IP pool can now be assigned to App and DB VM groups during the creation of new
VM groups.
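To confirm the new pool, list the IP pools again with the command shown earlier. The new pool should appear with a Status of free until a VM group is assigned to it.
# mcmu ippool -l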
Use this procedure for MiniCluster systems running v1.2.4 or later. To determine your version,
see “List the MCMU Version (CLI)” on page 208.
3. Delete an IP pool.
Syntax:
mcmu ippool -D POOL_ID
where POOL_ID is the IP pool ID.
Ensure that you use the uppercase -D option.
You can only delete an IP pool that is not in use. This example shows that the IP pool status is
free, and can be deleted.
# mcmu ippool -D 2
ID: 2
Name: example_pool
Status: free
DNS servers: 198.51.100.197, 198.51.100.198
Address: 192.0.2.110
NTP servers: 192.0.2.1
CIDR prefix: 22
Gateway: 192.0.2.1
VLAN ID: 13
Domain name: example.com
IP range:
Start IP: 192.0.2.110
Size: 2
Use this procedure for MiniCluster systems running v1.2.2 or earlier. To determine your
version, see “List the MCMU Version (CLI)” on page 208.
Use one of these network interfaces to connect to the client access network:
■ Through the 10GbE NIC, using the first two ends of the four-ended splitter cable
■ Through the NET 2 and NET 3 ports
You can now configure additional networks on unused network interface slots for existing VMs,
either in the same subnet or on a different subnet.
1. Determine which network interface slots are unused and are therefore available
for you to configure as an additional network.
The network interface slots that are available for you to configure as an additional network
depend on how your MiniCluster is connected to the client access network:
■ If your MiniCluster is connected through the 10GbE NIC, through a QSFP to 4x SFP+
or MPO to 4x LC duplex splitter cable – You have the first two ends of the splitter
cable (labeled A and B, or 1 and 2) connected to the client access network through 10GbE
switches. The following network interface slots are therefore available on both compute
nodes for you to configure as additional networks:
■ The other two ends of the splitter cable (labeled C and D, or 3 and 4) connected to the
10GbE NIC
■ The NET 2 and NET 3 ports
■ If your MiniCluster is connected through the NET 2 and NET 3 ports – You are using
those two ports on both compute nodes to connect to the client access network through
10GbE switches. The P 0 port (rightmost port, or port A) on the 10GbE NIC is therefore
available for you to configure as additional networks. You can connect a QSFP to 4x SFP+
or an MPO to 4x LC duplex splitter cable to the P 0 port on the 10GbE NIC,
which enables you to connect the four ends of the splitter cable (labeled A through D, or 1
through 4) to the additional network.
Refer to the Oracle MiniCluster S7-2 Installation Guide for more information on the connection
options for the client access network.
% mcmu tenant -G -l
Listing DB VM Group...
Status : Active
Description :
VMgroupName : dbzg2
editable : True
deletable : True
progress : False
VMgroupID : 1
% mcmu tenant -A -l
Listing APP VM Group...
Status : Active
EditStatus :
Description : Drama App VM Group
- shared
- multiple
- CIS
deletable : True
progress : False
VMgroupName : avm1
editable : True
VMgroupID : 2
5. Enter this CLI command to begin the configuration process for the additional
network.
ID: 2
Interfaces: net6,net7
8. Select the network interface pairs that you want to use for the additional
network.
These are the network interface pairs that you can choose from:
A series of messages appear after you enter all the remaining necessary information for the
additional network, providing information on the additional network that is being configured.
The following message appears at the conclusion, which confirms that the additional network
was configured successfully.
Use this procedure to add IP addresses to MiniCluster so they can be applied to VMs as they are
created.
2. Type:
% mcmu tenant -M -i
Setting ssh timeout before carrying out further operations. Please wait..
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
tenant_cli_061217_111547.log
Use this procedure to remove an available IP address or a range of IP addresses from the IP
pool.
2. Type:
% mcmu tenant -M -r
List of All Free IPs
[1] 192.0.2.12
[2] 192.0.2.13
[3] 192.0.2.14
[4] 192.0.2.15
[5] 192.0.2.16
[6] 192.0.2.17
Enter IP number or IP number range separated by comma (e.g. "1,3", "1-3", "1,2,3-5"): 6
[INFO ] Successfully removed IP from MiniCluster system
3. When prompted, type the number for the IP address or the range of IP
addresses, separated by a comma.
Enter IP number or IP number range separated by comma (e.g. "1,3", "1-3", "1,2,3-5"): 6
[INFO ] Successfully removed IP from MiniCluster system
Use this procedure for MiniCluster systems running v1.2.2 or earlier. To determine your
version, see “List the MCMU Version (CLI)” on page 208.
When the system was installed, IP addresses of available DNS and NTP servers were added to
the system. If you need to change or remove those IP addresses, perform these steps.
% mcmu tenant -M -i
Setting ssh timeout before carrying out further operations. Please wait..
3. Change an IP Address.
% mcmu tenant -M -d
Enter Comma Separated List of Maximal 3 unique IP Addresses of DNS Servers
(192.0.2.7,192.0.2.8): 192.0.2.9
[INFO ] Successfully updated IP range to IPADDRESS table
[INFO ] Successfully updated MiniCluster system
% mcmu tenant -M -t
Enter Comma Separated List of Maximal 3 unique IP Addresses of NTP Servers
(192.0.2.20,192.0.2.21): 192.0.2.22
[INFO ] Successfully updated IP range to IPADDRESS table
[INFO ] Successfully updated MiniCluster system
4. Remove an IP Address.
See “Remove an IP Address (CLI, v1.2.2 or earlier)” on page 299.
5. Verify that the IP Addresses and host names are mapped correctly in DNS.
% mcmu tenant -M -n
Setting ssh timeout before carrying out further operations. Please wait..
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
tenant_cli_060117_133005.log
IP | HOSTNAME
---------------+-----------------
192.0.2.12 | aagt2-vm1-cc1-n1
192.0.2.13 | aagt3-vm1-cc1-n1
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
These topics provide CLI procedures for viewing and changing your security configuration.
■ “View and Change the Global Zone Password Policy (CLI)” on page 303
■ “Show Compliance Information (CLI)” on page 304
■ “Schedule a Compliance Run (CLI)” on page 304
■ “Set SSH Key Options (CLI)” on page 305
■ “Show Encryption Keys (CLI)” on page 305
■ “Back Up the Encryption Keystore (CLI)” on page 307
Note - The security -p command only changes the password policy in the global zone.
2. Type:
% mcmu security -p
-----+---------------
cis | CIS Equivalent
stig | DISA-STIG
none | None
pci | PCI-DSS
2. Type:
% mcmu compliance -l
INFO SSH login to mc2-n1 successfully.
.
<output omitted>
.
INFO SSH login to mc2-n1 successfully.
INFO:MCMU.controllers.common.pexpect_util:SSH login to mc2-n1 successfully.
where:
■ nodex is the node (node1 or node2).
■ VMname is the VM name. To determine VM names, see “List Details of an App Group
Profile (CLI)” on page 218. Compliance benchmarks are not supported on the kernel zones.
■ time is the time that you want the compliance benchmark to run, in 24-hour format (for
example, 13:01). The default is the current time.
■ frequency is the frequency that you want the compliance benchmark to run (once or
monthly).
3. Set the source zone from which the key file is copied.
where destination_VM is one or more destination VMs to which the key is copied, separated by commas.
% mcmu security -b
Note - An encrypted .tar file cannot be extracted with the tar command.
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
These topics describe how to use the CLI to manage system storage.
Description Links
Enable or disable shared storage for a VM group. “Enable or Disable Shared Storage (CLI)” on page 309
View the status of a storage drive. “List Drive Status” on page 312
Add an External NFS to an App VM. “Add External NFS (CLI)” on page 315
View the status of the file systems. “Check the File Systems Status” on page 317
Enable a new storage array. “Configure an Added Storage Array (CLI)” on page 318
Manage the replacement of a drive. “Prepare a Drive for Removal (CLI)” on page 321
Shared storage provides general-purpose storage space and is available to all VMs
within a group.
Caution - Systems deployed in highly secured environments should disable shared storage. For
more information, refer to the Oracle MiniCluster S7-2 Security Guide.
Note - You can also enable or disable shared storage from the corresponding MCMU BUI pages.
3. Identify the VMgroupID of the group for which you plan to enable or disable shared storage.
Perform one of these commands:
■ To obtain the VMgroupID for an App VM group:
% mcmu tenant -A -l
Listing APP VM Group...
Status : Active
EditStatus :
Description :
deletable : True
progress : False
VMgroupName : ff18
editable : True
VMgroupID : 2
■ To obtain the VMgroupID for a DB VM group:
% mcmu tenant -P -l
Listing DB VM Group Profile..
Status : Active
EditStatus :
Description : Initial DB VM Group
- NORMAL redundancy
- Shared Storage
- CIS
deletable : True
progress : False
VMgroupName : dbgp1
editable : True
VMgroupID : 1
For example, for the App VM group with VMgroupID 2:
% mcmu tenant -V -t 2
Getting APP VM Group...
6. Access the shared file system by logging in to the VM and running Oracle Solaris
commands.
To access the file system:
% cd /sharedstore
Note - The /sharedstore directory is empty until you put software in the directory.
% ls /sharedstore
Downloads Music Pictures Presentations Templates Texts Videos
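For example, you might copy application media into the shared file system so that it is visible to every VM in the group (the file name is only an illustration):
% cp /tmp/app_media.zip /sharedstore/
% ls /sharedstore
app_media.zip Downloads Music Pictures Presentations Templates Texts Videos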
Related Information
■ Securing Files and Verifying File Integrity in Oracle Solaris 11.3 (https://docs.oracle.
com/cd/E53394_01/html/E54827/index.html)
■ Managing File Systems in Oracle Solaris 11.3 (http://docs.oracle.com/cd/E53394_01/
html/E54785/index.html)
■ Oracle Solaris 11.3 Information Library (https://docs.oracle.com/cd/E53394_01/)
Use this procedure to view the status of all disks in the cluster. You can view all information for
a specific disk, a quick status for all disks, or detailed status for all disks.
2. Determine how much information you want to retrieve and perform one of these
commands.
■ Get a quick view of the status and names of all disks.
% mcmu diskutil -l
For example:
% mcmu diskutil -l
[INFO ] Log file path : mc7-n1:/var/opt/oracle.minicluster/setup/logs/
mcmu_042617_143016.log
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
omc_diskutil_functionality_042617_143017.log
.
<output omitted>
.
DISK STATE
SYS//SYS/HDD0 OK
SYS//SYS/HDD1 OK
SYS//SYS/HDD2 OK
SYS//SYS/HDD3 OK
SYS//SYS/HDD4 OK
SYS//SYS/HDD5 OK
SYS//SYS/HDD6 OK
SYS//SYS/HDD7 OK
SYS//SYS/MB/EUSB-DISK OK
ORACLE-DE3-24C.1524NMQ001/HDD0 OK
ORACLE-DE3-24C.1524NMQ001/HDD1 OK
ORACLE-DE3-24C.1524NMQ001/HDD2 OK
ORACLE-DE3-24C.1524NMQ001/HDD3 OK
ORACLE-DE3-24C.1524NMQ001/HDD4 OK
ORACLE-DE3-24C.1524NMQ001/HDD5 OK
ORACLE-DE3-24C.1524NMQ001/HDD6 OK
ORACLE-DE3-24C.1524NMQ001/HDD7 OK
ORACLE-DE3-24C.1524NMQ001/HDD8 OK
ORACLE-DE3-24C.1524NMQ001/HDD9 OK
ORACLE-DE3-24C.1524NMQ001/HDD10 OK
ORACLE-DE3-24C.1524NMQ001/HDD11 OK
ORACLE-DE3-24C.1524NMQ001/HDD12 OK
ORACLE-DE3-24C.1524NMQ001/HDD13 OK
ORACLE-DE3-24C.1524NMQ001/HDD14 OK
ORACLE-DE3-24C.1524NMQ001/HDD15 OK
ORACLE-DE3-24C.1524NMQ001/HDD16 OK
ORACLE-DE3-24C.1524NMQ001/HDD17 OK
ORACLE-DE3-24C.1524NMQ001/HDD18 OK
ORACLE-DE3-24C.1524NMQ001/HDD19 OK
ORACLE-DE3-24C.1524NMQ001/HDD20 OK
ORACLE-DE3-24C.1524NMQ001/HDD21 OK
ORACLE-DE3-24C.1524NMQ001/HDD22 OK
ORACLE-DE3-24C.1524NMQ001/HDD23 OK
■ View detailed status of all disks, including path, state, and fault error.
% mcmu diskutil -s
For example:
% mcmu diskutil -s
[INFO ] Log file path : mc7-n1:/var/opt/oracle.minicluster/setup/logs/mcmu_042617_141349.log
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
omc_diskutil_functionality_042617_141349.log
.
<output omitted>
.
■ View all information for a specific disk.
% mcmu diskutil -i diskname
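For example, using one of the disk names reported by mcmu diskutil -l (the name shown here is taken from the earlier listing; names on your system can differ):
% mcmu diskutil -i SYS//SYS/HDD0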
The NFS service must be NFSv4. The NFS share that you add can be any whole or partial directory
tree or file hierarchy, including a single file, that is shared by an NFS server.
When you add external NFS to a group, the remote file system is immediately accessible to all
the VMs in the group. External NFS is only made available to VMs in a group if shared storage
is enabled. See “Enable or Disable NFS (BUI)” on page 149.
b. To check the version of the NFS service provided by the NFS server, type:
% rpcinfo -p NFSserver_name_or_IPaddress | egrep nfs
100003 4 tcp 2049 nfs
The second column displays the version number. You might see several lines of output.
One of them must report version 4.
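For example, output similar to the following indicates that the server supports NFS versions 3 and 4 (illustrative output; the address is a documentation example):
% rpcinfo -p 192.0.2.30 | egrep nfs
    100003  3   tcp   2049  nfs
    100003  4   tcp   2049  nfs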
Related Information
■ Securing Files and Verifying File Integrity in Oracle Solaris 11.3 (https://docs.oracle.
com/cd/E53394_01/html/E54827/index.html)
■ Managing File Systems in Oracle Solaris 11.3 (http://docs.oracle.com/cd/E53394_01/
html/E54785/index.html)
■ Oracle Solaris 11.3 Information Library (https://docs.oracle.com/cd/E53394_01/)
Use this procedure to check the status of all the file systems.
For example:
% mcmu diskutil -f
[INFO ] Log file path : mc51-n1:/var/opt/oracle.minicluster/setup/logs/
mcmu_042517_154050.log
[INFO ] Log file path : /var/opt/oracle.minicluster/setup/logs/
omc_diskutil_functionality_042517_154050.log
Related Information
■ Managing File Systems in Oracle Solaris 11.3 (http://docs.oracle.com/cd/E53394_01/
html/E54785/index.html)
Note - When MiniCluster is installed, the installation process automatically detects all attached
storage (including multiple storage arrays), configures the storage, and makes the storage
available for use. This procedure is intended for situations when a storage array is added to the
system after the installation.
After you add a JBOD, ASM might need to rebalance and reach a stable state before the added
storage is available for use.
For example:
% mcmu diskutil -e
[INFO ] Log file path : mc12-n1:/var/opt/oracle.minicluster/setup/logs/
mcmu_100416_160829.log
[INFO ] Ensure that fmd Service is Functional and the System Utilities have
consistent view of JBODs ..
[INFO ] Ensure that fmd Service is Functional succeeded.
[INFO ] Cross-check the number of disks reported by diskinfo and format utilities
succeeded.
[INFO ] Compare the disks in all JBODs across both compute nodes succeeded.
[INFO ] Ensure that fmd Service is Functional and the System Utilities have
consistent view of JBODs .. Completed
Oracle Corporation SunOS 5.11 11.3 June 2016
Minicluster Setup successfully configured
Unauthorized modification of this system configuration strictly prohibited
[INFO ] Invoked by OS user: mcinstall
[INFO ] Find log at: mc12-n1:/var/opt/oracle.minicluster/setup/logs/
omc_node1exec_100416_160835.log
[INFO ] ---------- Starting Executing Script on the 2nd Node
[INFO ] Executing Script on the 2nd Node started.
[INFO ] Check the existence of the script on the 2nd node
[INFO ] Execute the script on the 2nd node
c0t5000CCA23B0B3508d0 HDD-8 OK
c0t5000CCA23B0BA71Cd0 HDD-8 OK
c0t5000CCA23B0BB1D4d0 HDD-8 OK
c0t5000CCA23B0BA6E0d0 HDD-8 OK
c0t5000CCA23B0BA768d0 HDD-8 OK
c0t5000CCA23B0B906Cd0 HDD-8 OK
c0t5000CCA0536C9078d0 SSD-1.6 OK
c0t5000CCA0536CAB44d0 SSD-1.6 OK
c0t5000CCA0536CAA48d0 SSD-1.6 OK
c0t5000CCA0536CA7D0d0 SSD-1.6 OK
c0t5000CCA0536CB368d0 SSD-1.6 OK
c0t5000CCA0536CB530d0 SSD-1.6 OK
c0t5000CCA0536C90D4d0 SSD-1.6 OK
c0t5000CCA0536CAB70d0 SSD-1.6 OK
c0t5000CCA0536C8BB0d0 SSD-1.6 OK
c0t5000CCA0536CB510d0 SSD-1.6 OK
c0t5000CCA0536CB518d0 SSD-1.6 OK
c0t5000CCA0536CB3A8d0 SSD-1.6 OK
c0t5000CCA0536CB498d0 SSD-1.6 OK
c0t5000CCA0536C90FCd0 SSD-1.6 OK
c0t5000CCA04EB4A994d0 SSD-200 OK
c0t5000CCA04EB47CB4d0 SSD-200 OK
c0t5000CCA04E0D6CD4d0 SSD-200 OK
c0t5000CCA04E0D65E4d0 SSD-200 OK
[INFO ] Verifying the JBOD(s).. Completed
[INFO ] Log file path : mc12-n1:/var/opt/oracle.minicluster/setup/logs/
omc_partitiondisk_100416_160906.log
[INFO ] Partitioning disk..
[INFO ] Erasing the disks, creating EFI labels,setting volume name...
[INFO ] Creating partitions...
[INFO ] Partitioning disk.. Completed
Storage alias for JBOD ORACLE-DE3-24C:1621NMQ005 was already created. Skipping ..
Creating alias JBODARRAY2 for JBOD ORACLE-DE3-24C.1539NMQ00D ..
Log file location: /var/opt/oracle.minicluster/setup/logs/omc-
crstoragealias.20161004.1609.log
Use this procedure to logically remove a storage array drive from the system before you
physically remove the drive.
The length of time that it takes to complete this procedure before you can physically remove the
drive depends on the type of drive you are removing:
■ SSD – The detach operation completes quickly and the drive can be removed immediately.
■ HDD – The detach operation takes several minutes to complete. Do not remove the drive
before the detach operation completes.
/dev/chassis/JBODARRAY1/HDD5/disk c0t5000CCA254964E3Cd0
/dev/chassis/JBODARRAY1/HDD6/disk c0t5000CCA0536CA5E4d0
/dev/chassis/JBODARRAY1/HDD7/disk c0t5000CCA0536CA7B0d0
/dev/chassis/JBODARRAY1/HDD8/disk c0t5000CCA23B0BF34Cd0
/dev/chassis/JBODARRAY1/HDD9/disk c0t5000CCA0536CB828d0
/dev/chassis/JBODARRAY1/HDD10/disk c0t5000CCA0536CB308d0
/dev/chassis/JBODARRAY1/HDD11/disk c0t5000CCA0536CAF2Cd0
/dev/chassis/JBODARRAY1/HDD12/disk c0t5000CCA0536CABE4d0
/dev/chassis/JBODARRAY1/HDD13/disk c0t5000CCA0536CB684d0
/dev/chassis/JBODARRAY1/HDD14/disk c0t5000CCA0536CA870d0
/dev/chassis/JBODARRAY1/HDD15/disk c0t5000CCA0536CAB88d0
/dev/chassis/JBODARRAY1/HDD16/disk c0t5000CCA0536CA754d0
/dev/chassis/JBODARRAY1/HDD17/disk c0t5000CCA0536CAD10d0
/dev/chassis/JBODARRAY1/HDD18/disk c0t5000CCA0536CAEF8d0
/dev/chassis/JBODARRAY1/HDD19/disk c0t5000CCA0536CA83Cd0
/dev/chassis/JBODARRAY1/HDD20/disk c0t5000CCA04EB272E8d0
/dev/chassis/JBODARRAY1/HDD21/disk c0t5000CCA04EB27234d0
/dev/chassis/JBODARRAY1/HDD22/disk c0t5000CCA04EB27428d0
/dev/chassis/JBODARRAY1/HDD23/disk c0t5000CCA04EB272A0d0
5. When the ASM rebalance is complete, you can remove the drive.
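One way to confirm that no rebalance is still running is to query the V$ASM_OPERATION view from a SQL*Plus session connected to the Oracle ASM instance (this is a generic Oracle ASM check, not a MiniCluster-specific command; no rows returned means no operation is in progress):
SQL> SELECT group_number, operation, state FROM v$asm_operation;
no rows selected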
After the new drive is installed, reattach the drive. See “Reattach a Replaced Disk
(CLI)” on page 323.
In this example, HDD8 was replaced, and diskinfo shows that the full drive name for HDD8 is
c0t5000CCA0536CA710d0.
Also note that the storage array drives are identified by a JBODARRAY string.
% diskinfo
D:devchassis-path c:occupant-compdev
---------------------------------- ---------------------
/dev/chassis/SYS/HDD0/disk c0t5000CCA02D1EE2A8d0
/dev/chassis/SYS/HDD1/disk c0t5000CCA02D1E7AACd0
/dev/chassis/SYS/HDD2/disk c0t5000CCA02D1EDCECd0
/dev/chassis/SYS/HDD3/disk c0t5000CCA02D1ED360d0
/dev/chassis/SYS/HDD4/disk c0t5000CCA02D1EE6D8d0
/dev/chassis/SYS/HDD5/disk c0t5000CCA02D1EE6CCd0
/dev/chassis/SYS/HDD6 -
/dev/chassis/SYS/HDD7 -
/dev/chassis/SYS/MB/EUSB-DISK/disk c1t0d0
/dev/chassis/JBODARRAY1/HDD0/disk c0t5000CCA25497267Cd0
/dev/chassis/JBODARRAY1/HDD1/disk c0t5000CCA2549732B8d0
/dev/chassis/JBODARRAY1/HDD2/disk c0t5000CCA254974F28d0
/dev/chassis/JBODARRAY1/HDD3/disk c0t5000CCA254965A78d0
/dev/chassis/JBODARRAY1/HDD4/disk c0t5000CCA254978510d0
/dev/chassis/JBODARRAY1/HDD5/disk c0t5000CCA254964E3Cd0
/dev/chassis/JBODARRAY1/HDD6/disk c0t5000CCA0536CA5E4d0
/dev/chassis/JBODARRAY1/HDD7/disk c0t5000CCA0536CA7B0d0
/dev/chassis/JBODARRAY1/HDD8/disk c0t5000CCA0536CA710d0
/dev/chassis/JBODARRAY1/HDD9/disk c0t5000CCA0536CB828d0
/dev/chassis/JBODARRAY1/HDD10/disk c0t5000CCA0536CB308d0
/dev/chassis/JBODARRAY1/HDD11/disk c0t5000CCA0536CAF2Cd0
/dev/chassis/JBODARRAY1/HDD12/disk c0t5000CCA0536CABE4d0
/dev/chassis/JBODARRAY1/HDD13/disk c0t5000CCA0536CB684d0
/dev/chassis/JBODARRAY1/HDD14/disk c0t5000CCA0536CA870d0
/dev/chassis/JBODARRAY1/HDD15/disk c0t5000CCA0536CAB88d0
/dev/chassis/JBODARRAY1/HDD16/disk c0t5000CCA0536CA754d0
/dev/chassis/JBODARRAY1/HDD17/disk c0t5000CCA0536CAD10d0
/dev/chassis/JBODARRAY1/HDD18/disk c0t5000CCA0536CAEF8d0
/dev/chassis/JBODARRAY1/HDD19/disk c0t5000CCA0536CA83Cd0
/dev/chassis/JBODARRAY1/HDD20/disk c0t5000CCA04EB272E8d0
/dev/chassis/JBODARRAY1/HDD21/disk c0t5000CCA04EB27234d0
/dev/chassis/JBODARRAY1/HDD22/disk c0t5000CCA04EB27428d0
/dev/chassis/JBODARRAY1/HDD23/disk c0t5000CCA04EB272A0d0
3. Attach a disk.
For example:
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
These topics describe how to use the MCMU CLI to check the status of mctuner (the virtual
tuning assistant).
Note - For instructions on how to use the MCMU BUI to obtain virtual tuning information, see
“Checking the Virtual Tuning Status (BUI)” on page 173.
Note - For the most thorough notifications, configure the tuning assistant email address in the
global and kernel zones on both nodes.
2. Check the current email address that is configured in the tuning assistant.
In this example, the address is configured as root@localhost, which is the factory default, and
should be changed to an email address of an administrator.
This procedure shows the mctuner status for all enabled mctuner instances on the system.
In this example, the status of mctuner is online for the global and kernel zones on both nodes.
2. Type:
% mcmu mctuner -S
[INFO ] Log file path : mc3-n1:/var/opt/oracle.minicluster/setup/logs/
mcmu_082216_172246.log
INFO SSH login to mc3-n1 successfully.
INFO:MCMU.controllers.common.pexpect_util:SSH login to mc3-n1 successfully.
Aug 22 17:22:50 mccn su: 'su root' succeeded for mcinstall on /dev/pts/2
INFO su to user root successfully.
INFO:MCMU.controllers.common.pexpect_util:su to user root successfully.
INFO zlogin to acfskz successful.
INFO:MCMU.controllers.common.pexpect_util:zlogin to acfskz successful.
INFO SSH login to mc3-n2 successfully.
INFO:MCMU.controllers.common.pexpect_util:SSH login to mc3-n2 successfully.
INFO su to user root successfully.
INFO:MCMU.controllers.common.pexpect_util:su to user root successfully.
INFO zlogin to acfskz successful.
INFO:MCMU.controllers.common.pexpect_util:zlogin to acfskz successful.
node zone status issues notices
---------- ---------- ---------- ------------------------------ ----------
mc3-n1 global Online
mc3-n1 acfskz Online
mc3-n2 global Online
mc3-n2 acfskz Online
Note - Different versions of the MiniCluster software offer different mcmu commands and
options. For the most accurate CLI information for the MiniCluster you are using, use mcmu
help. See “Display mcmu Help For All Subcommands (CLI)” on page 204 and “Display mcmu
Help for a Specific Subcommand (CLI)” on page 205.
Only use the CLI commands to update MiniCluster if you are familiar with the updating process
and concepts. Otherwise, use the MCMU BUI. The MCMU BUI and updating concepts are
covered in “Updating and Patching MiniCluster Software (BUI)” on page 177.
Use this procedure to display the version status of the components through the CLI.
Alternatively, you can use the BUI. The BUI provides component version numbers; the CLI
does not. See “View Software Component Versions (BUI)” on page 181.
% mcmu patch -l
COMPONENT-------------------------------- | STATUS--------
MiniCluster Configuration Utility | CURRENT
Storage Tray firmware | CURRENT
Shared Filesystem software | CURRENT
Operating System package repository | CURRENT
Shared Storage Operating System | CURRENT
Compute Nodes Operating System | CURRENT
Compute Node firmware | UPGRADE_NEEDED
Grid Infrastructure | CURRENT
Oracle db home /u01/.../11.2.0.4/dbhome_3 | CURRENT
Oracle db home /u01/.../11.2.0.4/dbhome_4 | CURRENT
Oracle db home /u01/.../12.1.0.2/dbhome_1 | UPGRADE_NEEDED
Oracle db home /u01/.../12.1.0.2/dbhome_2 | UPGRADE_NEEDED
Use this procedure to update the MCMU software on a fully installed MiniCluster. This
procedure only updates the MCMU software. To update other software components, see
“Update Other MiniCluster Software Components (CLI)” on page 336 and “Update
MiniCluster Software (BUI)” on page 188.
The system can be updated while DB and App VMs are running.
Caution - The MCMU component must be updated before you update any other component
(see “Update the MCMU Component (CLI)” on page 333).
Caution - For systems running MCMU v1.1.21 and earlier, you must update the MCMU
software through the MCMU CLI as described in this procedure. Do not attempt to update
MCMU through the BUI because the update might fail. If you experience this problem, follow
the instructions in the MiniCluster Release Notes (Doc ID 2214746.1) available at http://
support.oracle.com, under the heading Upgrading Fully Configured MiniCluster to 1.1.21.4.
Note - For the latest information about what updates are available, refer to the MiniCluster
Release Notes document that is available in MOS Doc ID 2153282.1 at: http://support.
oracle.com.
a. Ensure that the Patch Bundle and Component Bundle are downloaded to
MiniCluster.
See “Check for and Obtain the Latest Updates” on page 183.
b. Ensure that the Patch Bundle is unzipped and extracted in the /var/opt/
oracle.minicluster/patch directory.
See “Extract the Patch Bundle” on page 185.
Note - You can install the Component Bundle before performing this procedure, or at the
end of this procedure. For instructions, see “Install the Component Bundle” on page 187.
2. Log into the MCMU CLI on compute node 1 as a primary admin, such as
mcinstall.
See “Log in to the MCMU CLI” on page 31.
Syntax:
mcmu patch -p update_omctoolkit [path_to_omctoolkit.p5p]
By default, this command expects to find the omctoolkit.p5p file in the /var/opt/
oracle.minicluster/sfw directory.
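For example, if the file was staged in a different directory, you can pass the path explicitly (the path shown is only an illustration):
% mcmu patch -p update_omctoolkit /var/tmp/omctoolkit.p5p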
■ If you ran the mcmu patch -p upload command (a command listed in “View Software
Component Versions (CLI)” on page 331), perform this command:
% cd /var/opt/oracle.minicluster/patch
% ls
README.txt                mc-1.1.21.4-patch.tar.ac  omc
beadm.py                  mc-1.1.21.4-patch.tar.ad  patch-1.1.21.4
mc-1.1.21.4-patch.tar.aa  mc-1.1.21.4-patch.tar.ae  patch.json
mc-1.1.21.4-patch.tar.ab  mc_patch.py               scripts
In this command line, replace patch-version_no with the directory name you identified.
Note - When the MCMU component is updated, web services are restarted and you might need
to refresh the browser cache (shift-reload) to use the MCMU BUI.
% mcmu -V
Oracle MiniCluster Configuration Utility
MCMU v1.1.21.4
% su - root
# svcadm clear apache22
# exit
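If the MCMU BUI is still unavailable, you can check the state of the web service with the standard Oracle Solaris SMF commands before clearing it (the FMRI, state, and time shown are illustrative and can differ on your system):
% svcs apache22
STATE          STIME    FMRI
maintenance    14:05:32 svc:/network/http:apache22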
Alternatively, you can perform the same updates using the BUI, which is less prone to human
error. See “Update MiniCluster Software (BUI)” on page 188.
The system can be updated while DB and App VMs are running.
Note - The components and options differ from version to version. To see the full list of patch
options for your particular version, perform the help command: mcmu patch -p -h.
Component Syntax
Storage array firmware mcmu patch -p update_jbod
GI and ACFS in kernel zones mcmu patch -p update_acfs
OS repository mcmu patch -p update_repo
(both nodes)
Note - Depending on the POST configuration,
this update can take 40 - 50 minutes to complete.
GI in DB VMs mcmu patch -p update_gi -z DBgroup_name
Oracle DB home mcmu patch -p update_oh -z DBgroup_name --oh DBhome_full_path
3. Display the help output to see which update options are available on your
system.
% mcmu patch -h
■ Some component update options can be included on a single command line, separated with
commas (see the illustration that follows this list).
■ These component options can only be updated individually (not combined on a command
line): update_gz, update_ilom, and update_omctoolkit
■ A system reboot (one node at a time) is automatically performed after updating each of
these component options: update_gz and update_ilom
■ The GI and Oracle DB homes must be at the same revision levels. When you patch these
components, patch the GI before you patch a DB home.
■ Some component options require additional command line arguments, as shown in the
examples.
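For illustration, two options that support combining might be run together as follows (the pairing shown here is an assumption, not a documented combination; confirm with mcmu patch -h which options can be combined in your software version):
% mcmu patch -p update_jbod,update_repo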
Examples:
% mcmu tenant -G -l
Listing DB VM Group...
Status : Active
Description :
VMgroupName : mc5dbzg1
editable : True
deletable : True
progress : False
VMgroupID : 1
% mcmu tenant -G -l
Listing DB VM Group...
Status : Active
Description :
VMgroupName : mc5dbzg1
editable : True
deletable : True
progress : False
VMgroupID : 1
% mcmu tenant -H -L 2
DB HOME INFORMATION
ID: 2
VM_ID: 2
VMGROUP_ID: 1
DB_HOME: /u01/app/oracle/product/12.1.0/db_12c
VERSION: 12.1.0.2
TYPE: RAC
PATCH: 12.1.0.2.160419
STATUS: Active
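Putting the values from these listings together, an Oracle DB home update for this group would look similar to the following (shown for illustration only; verify the exact options for your version with mcmu patch -h):
% mcmu patch -p update_oh -z mc5dbzg1 --oh /u01/app/oracle/product/12.1.0/db_12c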
A
ASR Auto Service Request. A feature of Oracle or Sun hardware that automatically opens service
requests when specific hardware faults occur. ASR is integrated with MOS and requires a
support agreement. See also MOS.
C
compute server Shortened name for the SPARC server, a major component of MiniCluster.
G
GB Gigabyte. 1 gigabyte = 1024 megabytes.
H
HMAC Hashed Message Authentication Code. An algorithm used to generate one-time passwords.
I
ILOM See Oracle ILOM.
IPMP
M
MOS My Oracle Support.
N
NIC Network interface card.
O
Oracle ASM Oracle Automatic Storage Management. A volume manager and a file system that supports
Oracle databases.
Oracle ILOM Oracle Integrated Lights Out Manager. Software on the SP that enables you to manage a server
independently from the operating system.
OTP One-time password. A MiniCluster administrator in the tenant admin role can enable two-
factor authentication for a specific user.
P
POST Power-on self-test. A diagnostic that runs when the compute server is powered on.
Q
QSFP Quad small form-factor pluggable. A transceiver specification for 10GbE technology.
R
RAC Real Application Clusters.
S
SCAN Single Client Access Name. A feature used in RAC environments that provides a single name
for clients to access any Oracle Database running in a cluster. See also RAC.
SFP and SFP+ Small form-factor pluggable standard. SFP+ is a specification for a transceiver for 10GbE technology.
SPARC server A major component of MiniCluster that provides the main compute resources. Referred to in
this documentation as compute server.
T
two-factor authentication Strong authentication that is enforced with OTP.
Z
ZFS A file system with added volume management capabilities. ZFS is the default file system in
Oracle Solaris 11.
Index
A
accessing
  administration resources, 19
  MCMU BUI, 28
  MCMU CLI, 31
  MCMU user registration page, 44
  My Oracle Support (BUI), 197
  Oracle Engineered Systems Hardware Manager, 168
  the system, 27
adding
  a DB VM to a group (BUI), 123
  IP address for a DNS server, 72, 300
  IP address for an NFS server, 72
  IP address for an NTP server, 300
  IP address for VMs, 299
  IP addresses, 70
adding an IP address for an NTP server, 73
adding IP pools (CLI), 293
administering system security (BUI), 159
administration resources, 17, 19
App VM cores, changing, 144
App VM group profile, creating, 135
App VM groups
  configuration parameters, 94
  deleting (BUI), 146, 146
  deploying (BUI), 141
  overview, 23
  viewing, 133
App VMs
  configuring (BUI), 133
  creation task overview, 135
  editing (BUI), 144
  overview, 23
  planning worksheets, 93
Application Virtual Machine Group Profiles Summary page, 133
Application Virtual Machine Group Profiles tab, 133
Approval Board
  approving a new user, 47
approving
  new users (BUI), 47
  new users (CLI), 284
approving a new user
  Approval Board, 47
assigning disk redundancy, 103
authentication
  enabling, 50

B
backing up global zone boot environments, 156
booting the system, 57
BUI session timeout configuration, 165
bundles, 183

C
calibrating drives, 195
certificates and ports used by Oracle Engineered Systems Hardware Manager, 171
changing
  App VM cores, 144
  DB VM cores, 121
  MCMU passwords (CLI), 288
changing a password
  in the BUI, 48
character sets, 93

D
defining
  App group profiles (BUI), 135
  DB group profiles (BUI), 103
deleting
  a DB VM group (BUI), 130, 131
  App VM groups (BUI), 146, 146
  App VM groups (CLI), 281
  DB components (BUI), 126
  DB homes (BUI), 130
  DB homes (CLI), 272
  DB instances (BUI), 126
  DB instances (CLI), 272
  DB VM group (CLI), 271
  DB VM group profile (CLI), 270
  DB VMs (BUI), 127
  DB VMs (CLI), 272
  users (CLI), 287
deleting an IP address for a DNS server, 72
deleting an IP address for an NTP server, 73
deleting IP pools (CLI), 294
deploying
  a DB VM group (BUI), 112
  App VM groups (BUI), 141
  App VM groups (CLI), 277
Deployment Review page, 112
DISA STIG security profiles, 82
disabling MCMU user accounts, 55
displaying
  full help (CLI), 204
  MCMU users, 43
  MCMU version (BUI), 63
  MCMU version (CLI), 208
  partial help (CLI), 205
displaying a connect string, 125
downloads, 183
drives
  preparing for removal, 321
  reattaching, 323
drives, calibrating, 195

E
edit user profile (BUI), 56
editing
  App VMs (BUI), 144
  DB VM group profiles (BUI), 121
  IP address for a DNS server, 72
  IP address for an NFS server, 73
editing IP pools (CLI), 292
enable security setting, 97
encryption keys, showing, 305
encryption keystore, backing up, 307
external NFS, 149
  adding (BUI), 152
  deleting (BUI), 155

F
Factory Reset ISO, 183
firewall for network security, 159
firewall protection, 159

G
GI patch level, 87
global zones overview, 21
group description field, 85

H
Home tab, 29, 64
home_ID, determining, 214
hostname, viewing (BUI), 70

I
IDs, VM, 99, 133
import existing instance, 90
initialization steps, reviewing and rerunning, 74
instance name, 93
instance type, 91
instance_ID, determining, 216
internal NFS, 149
IP address
  adding for VMs, 299
  removing, 299
IP addresses and hostnames, listing (CLI), 222
IP addresses, viewing and adding (BUI), 66, 70
IP pools, 66

K
kernel zone
  checking the status of the GI (CLI), 228
kernel zones overview, 21

L
listing
  all instances (CLI), 216
  App group profiles (CLI), 217, 220
  App VM groups (CLI), 275
  DB home details (CLI), 215
  DB homes (CLI), 214
  DB instance details (CLI), 216
  DB VM group details (CLI), 209
  DB VM group profile details (CLI), 208
  IP and hostname entries (CLI), 222
  setup steps (CLI), 239
listing IP pools (CLI), 291
locked accounts, unlocking, 48
logging into the
  MCMU BUI, 28
  MCMU CLI, 31
  VM, 32, 33, 35, 37
logging out of the
  MCMU BUI, 31
  MCMU CLI, 31
  VM, 34, 36, 38

M
managing
  MCMU user accounts (BUI), 39
  user accounts (CLI), 283
mcbackup, 156
mcinstall user account, 40
MCMU BUI
  accessing, 28
  adding IP address for a DNS server, 72
  adding IP address for an NTP server, 73
  Application Virtual Machine Group Profiles Summary page, 133, 135
  Application Virtual Machine Group Profiles tab, 133, 141
  Current Action Queue page, 76
  Database tab, 103
  Database Virtual Machine Group page, 115
  Database Virtual Machine Group Profiles Summary page, 99, 103
  Database Virtual Machine Group Summary page, 99, 117
  deleting an IP address for a DNS server, 72
  deleting an IP address for an NTP server, 73
  Deployment Review page, 112, 141
  editing an IP address for a DNS server, 72
  editing an IP address for an NTP server, 73
  Home tab, 29, 64
  logging into, 28
  logging out of, 31
  Network page, 66
  overview, 17, 29
  registration page, 44
  Software and OS Information page, 64
  System Information page, 70
  System Setup tab, 74
  System Status page, 29, 64
  Tasks tab, 76
  user approval page, 47
  User Input Summary tab, 70
  viewing IP address allocations, 66, 70
  Viewing running tasks, 76
  viewing the version of, 64
  Virtual Machine Group Profiles tab, 99, 112
  Virtual Machines tab, 99, 115, 117
MCMU CLI
  accessing, 31
  adding IP address for a DNS server, 300
  approving new users, 284
  backing up the encryption keystore, 307
  changing MCMU passwords, 288
  checking a kernel zone, 228
  configuring additional networks, 295
  creating a DB group, 247
  creating a DB homes, 252
  creating a DB instance, 255
  creating App VM groups, 275
  creating new users, 283
  deleting a DB home, 272
  deleting a DB instance, 272
  deleting a DB VM group, 270, 271
  deleting App VM groups, 281
  deleting users, 287
  displaying full help, 204
  displaying partial help, 205
  listing all DB homes, 214
  listing all instances, 216
  listing App group profiles, 217, 220
  listing DB home details, 215
  listing DB instance details, 216
  listing DB VM group details, 209
  listing DB VM group profile details, 208
  listing IP and hostname entries, 222
  listing setup steps, 239
  logging into, 31
  logging out of, 31
  managing user accounts, 283
  patching the system before initial setup, 333
  rejecting new users, 285
  running setup steps, 240
  scheduling a security compliance run, 304
  setting SSH Key options, 305
  setting up the system, 239
  showing encryption keys, 305
  showing mctuner status, 328
  showing security compliance information, 304
  showing the DB GI status, 226
  showing the kernel zone GI status, 224
  showing the kernel zone status, 228
  showing the system status, 223
  showing the VM status, 228
  starting a kernel zone, 234
  starting a VM, 234
  starting all VMs in a group, 234
  starting the grid infrastructure for application VM groups, 235
  starting the grid infrastructure for the DB VM group, 235, 236
  stopping a DB VM, 237
  stopping a kernel zone, 238
  stopping a node, 238
  stopping all VMs in a group, 237
  stopping the grid infrastructure for an application VM group, 236
  toggling shared storage, 280
  updating App VM groups, 278
  updating IP address or range for a DNS server, 300
  verifying the system setup, 241
  viewing an IP address or range for a DNS server, 300
MCMU CLI procedures
  displaying the MCMU version, 208
  performing, 203
MCMU user accounts, 40
MCMU user accounts (BUI)
  managing, 39
MCMU users
  approval process, 42
  approving, 47
  changing passwords, 48
  creating, 44
  disabling, 55
  displaying, 43
  rejecting, 47
  resetting passwords, 48
MCMU version, displaying, 63
MiniCluster
  resources, 17
  tuning, 173
  updating MCMU software, 177
My Oracle Support, accessing, 197

N
network information (BUI), viewing, 66, 70
new instance, 90
NFS