QUESTION: 1
B. Ansible is an OCI provided service for CM; Terraform is a third-party tool for
infrastructure as code.
Answer(s): C,E
Explanation:
The two correct explanations for the difference between Ansible and Terraform are:
Ansible automates software installation and application deployment, while Terraform
manages infrastructure as code. This highlights the primary focus of each tool. Ansible
is mainly used for automating tasks related to software installation, application
deployment, and configuration management. It is well-suited for managing the
software stack and ensuring consistency across systems. On the other hand, Terraform
specializes in infrastructure provisioning and management, allowing users to define and
manage their infrastructure resources using code. Ansible focuses on infrastructure
configuration, while Terraform specializes in infrastructure provisioning. This highlights
the different aspects of infrastructure management that each tool addresses. Ansible is
designed to handle configuration management tasks, such as setting up software,
managing files, and applying configuration changes across systems. It excels at
ensuring the desired state of the infrastructure. In contrast, Terraform is focused on
provisioning infrastructure resources, such as virtual machines, networks, and storage.
It provides a way to define and manage these resources in a declarative manner,
allowing for infrastructure as code. It's worth noting that while Ansible is supported
and provided by OCI as a configuration management tool, Terraform is a third-party
tool that has gained popularity for managing infrastructure across multiple cloud
providers, including OCI.
QUESTION: 2
A. Add an imagePullSecrets section to the manifest file that specifies the name of
the Docker secret you created to access OCIR
C. Add a containers section that specifies the name and location of the images you
want to pull from OCIR, along with other deployment details.
D. Add an Auth section to the manifest file that specifies the name of the Docker
secret you create using Auth Token to access OCIR.
E. Add an image section that specifies the name and location of the images you
want to pull from OCIR along with other deployment details.
Answer(s): A,B,C
Explanation:
The three statements that are true regarding managing an application deployed in
Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) and pulling images
from Oracle Cloud Infrastructure Registry (OCIR) are: Use kubectl to create a Docker
registry secret: To access images from OCIR, you need to create a Docker registry
secret in Kubernetes. This can be done using the kubectl create secret docker-registry
command. Add a containers section that specifies the name and location of the images
you want to pull from OCIR, along with other deployment details: In your deployment
manifest (e.g., YAML file), you need to define a containers section that specifies the
image names and locations from OCIR. This section includes other deployment details
such as resource limits and environment variables. Add an imagePullSecrets section
to the manifest file that specifies the name of the Docker secret you created to access
OCIR: To authenticate and pull images from OCIR, you need to specify the name of the
Docker registry secret in the imagePullSecrets section of your manifest file. This
ensures that the appropriate credentials are used to authenticate with OCIR and pull the
required images. These steps enable your application deployed in OKE to pull the
necessary container images from OCIR during deployment, ensuring smooth and secure
deployment of your application.
Reference:
https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengpullingimagesfromocir.htm
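For illustration, a minimal sketch of these steps; the secret name, region key, tenancy
namespace, repository, and credentials below are placeholders:

    kubectl create secret docker-registry ocir-secret \
      --docker-server=<region-key>.ocir.io \
      --docker-username='<tenancy-namespace>/<username>' \
      --docker-password='<auth-token>' \
      --docker-email='user@example.com'

    # Pod spec excerpt from the deployment manifest, referencing the secret
    spec:
      containers:
        - name: app
          image: <region-key>.ocir.io/<tenancy-namespace>/app-repo/app:v1
      imagePullSecrets:
        - name: ocir-secret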
QUESTION: 3
You are a developer who has made a mistake when adding variables to
your build_spec.yaml file. This mistake resulted in a failed build pipeline.
Which is a possible error you could have made?
B. used vaultVariable to hold the content of the vault secrets in OCID format
Answer(s): D
Explanation:
The possible error you could have made when adding variables to your build_spec.yaml
file that resulted in a failed build pipeline is assuming that a non-exported variable
would be persistent across multiple stages of the build pipeline. In a build pipeline,
variables need to be properly exported and managed to ensure their availability and
persistence across different stages. If you mistakenly assumed that a non-exported
variable would persist across stages, it could lead to issues where the variable is not
available or its value is not maintained as expected, causing the build pipeline to fail.
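As a rough sketch of the distinction, assuming placeholder variable names and a
placeholder secret OCID: in a build_spec.yaml, only names listed under
exportedVariables carry forward to later stages, while ordinary env variables do not.

    version: 0.1
    component: build
    shell: bash
    env:
      variables:
        BUILD_MODE: "release"          # visible within this managed build only
      vaultVariables:
        DB_PASSWORD: "ocid1.vaultsecret.oc1..example"  # secret OCID, resolved at build time
      exportedVariables:
        - APP_VERSION                  # exported, so available to later pipeline stages
    steps:
      - type: Command
        name: "Set version"
        command: |
          export APP_VERSION=1.0.0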
QUESTION: 4
Answer(s): C,D
Explanation:
The two changes to infrastructure with the Oracle Cloud Infrastructure (OCI) Provider
for Terraform that will not result in any resources being destroyed or provisioned are:
Adding a CIDR block to a VCN: This change does not affect any existing resources. It
simply expands the IP address range of the VCN, allowing for the creation of additional
subnets within the VCN. Adding a subnet to a VCN:
Similar to adding a CIDR block, adding a subnet to a VCN does not affect any existing
resources. It defines a new subnet within the VCN's IP address range, allowing for the
allocation of resources to that subnet. Changing the image OCID of a compute
instance, changing the shape of a compute instance, and changing the Display Name of
a compute instance can potentially result in resource destruction and provisioning.
These changes may require the creation of new instances, reconfiguration of existing
instances, or termination of existing instances, depending on the specifics of the
changes.
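A hedged Terraform sketch of such in-place changes, assuming a provider version that
supports the cidr_blocks list and using illustrative names: appending a second CIDR
and adding a subnet resource both plan as updates or additions, not destroy-and-create.

    resource "oci_core_vcn" "example" {
      compartment_id = var.compartment_ocid
      display_name   = "example-vcn"
      cidr_blocks    = ["10.0.0.0/16", "10.1.0.0/16"] # second CIDR appended in place
    }

    resource "oci_core_subnet" "extra" {
      compartment_id = var.compartment_ocid
      vcn_id         = oci_core_vcn.example.id
      cidr_block     = "10.1.1.0/24" # new subnet; existing resources untouched
    }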
QUESTION: 5
A. Users can migrate workloads from on-premises, but not from other cloud
platforms.
B. Users can avoid downtime during deployments and automate the complexity of
updating applications.
D. Users can only store code on public repositories and cannot access internal code
repositories.
Answer(s): B
Explanation:
The correct statement is: Users can avoid downtime during deployments and automate
the complexity of updating applications. The Oracle Cloud Infrastructure (OCI) DevOps
service provides a set of tools and services that help automate and streamline the
software development and deployment processes. One of the key benefits of OCI
DevOps is the ability to avoid downtime during deployments by implementing strategies
such as blue-green deployments or rolling deployments. By using OCI DevOps, users
can automate the complexity of updating applications by defining CI/CD (Continuous
Integration/Continuous Deployment) pipelines. These pipelines can include steps for
building, testing, and deploying applications, allowing for efficient and reliable updates
without disrupting the availability of the application. The other statements provided are
not accurate: OCI DevOps allows users to migrate workloads from on-premises
environments as well as from other cloud platforms. Users can store code in both
public and private repositories, including internal code repositories. OCI DevOps
provides visibility into the full lifecycle phases of applications, allowing users to assess
performance and make informed decisions.
Reference:
https://docs.oracle.com/en-us/iaas/Content/devops/using/devops_overview.htm
QUESTION: 6
What are the two items required to create a rule for the Oracle Cloud
Infrastructure Events Service? (Choose two.)
A. Service Connector
B. Rule Conditions
C. Actions
Answer(s): B,C
Explanation:
To create a rule for the Oracle Cloud Infrastructure Events Service, the following two
items are required: Rule Conditions: Rule conditions define the criteria or triggers that
need to be met for the rule to be activated. These conditions can be based on various
factors such as resource changes, time schedules, event types, or custom attributes.
Actions: Actions specify the operations or tasks to be performed when the rule is
triggered. These actions can include sending notifications, invoking functions,
publishing to streaming services, or triggering service connectors. The other options
mentioned are not directly related to creating a rule for the Events Service. The
Management Agent Cloud Service is used for managing on-premises resources, the
Install Key is used for accessing instances remotely, and Service Connector is used for
integrating different Oracle Cloud Infrastructure services.
Reference:
https://docs.oracle.com/en-us/iaas/Content/Events/Concepts/eventsoverview.htm
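For example, a rule's condition is a JSON filter on an event type plus optional
attribute matches, which is then paired with one or more actions (Notifications topic,
Function, or stream); the compartment name below is a placeholder:

    {
      "eventType": "com.oraclecloud.objectstorage.createbucket",
      "data": {
        "compartmentName": "production"
      }
    }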
QUESTION: 7
You are a Site Reliability Engineer (SRE) and are new to Oracle Cloud
Infrastructure (OCI) DevOps. You need help tracking the performance of
your cloud native applications.
Which group of OCI services can help you get application insights?
A. Oracle Container Engine for Kubernetes (OKE), Instance Groups, and Functions
Answer(s): D
Explanation:
The group of OCI services that can help you get application insights is OCI Logging,
Monitoring, and Application Performance Monitoring (APM). OCI Logging allows you to
collect and analyze log data from your applications, infrastructure, and other resources.
It helps you track and troubleshoot issues by providing visibility into the performance
and behavior of your applications. OCI Monitoring enables you to monitor the health,
performance, and availability of your cloud resources, including your applications. It
allows you to set up metrics, alarms, and notifications to proactively monitor and
respond to any issues or anomalies. OCI Application Performance Monitoring (APM) is
specifically designed to provide insights into the performance of your applications. It
helps you identify and diagnose performance bottlenecks, track user experiences, and
optimize the overall performance of your applications. By using OCI Logging,
Monitoring, and APM together, you can gain comprehensive visibility into your cloud
native applications and effectively monitor their performance and behavior.
QUESTION: 8
You have been asked to provision a new production environment on
Oracle Cloud Infrastructure (OCI). After working with the solution
architect you decide that you are going to automate this process.
Which OCI service can help automate the provisioning of this new
environment?
B. Oracle Functions
Answer(s): C
Explanation:
The OCI service that can help automate the provisioning of a new environment is OCI
Resource
Manager. OCI Resource Manager is a service provided by Oracle Cloud Infrastructure
that enables you to automate the process of provisioning, updating, and managing
infrastructure resources. It allows you to define your infrastructure as code using tools
like Terraform, and then use Resource Manager to create and manage stacks. Stacks
are the deployment units that contain the infrastructure resources defined in your code.
By leveraging OCI Resource Manager, you can automate the provisioning of a new
production environment by defining the required infrastructure resources in a stack
using Terraform code. Resource Manager will then handle the creation and
management of these resources, ensuring that your environment is provisioned
consistently and according to the defined infrastructure as code. Therefore, OCI
Resource Manager is the recommended service to automate the provisioning of a new
environment in Oracle Cloud Infrastructure.
QUESTION: 9
A. The user must create a compute instance to run the secret service.
Answer(s): B
Explanation:
The correct answer is: You must have a Vault managed key to encrypt the secret. A
prerequisite for creating a secret in the Oracle Cloud Infrastructure (OCI) Vault service
is having a Vault managed key. The Vault service allows you to securely store and
manage sensitive information such as passwords, API keys, and other secrets. To
ensure the confidentiality of the stored secrets, they are encrypted using encryption
keys. In OCI Vault, the encryption keys used for encrypting secrets are managed by the
Vault service itself, and you need to have a Vault managed key available to encrypt the
secret before creating it.
QUESTION: 10
You are using the Oracle Cloud Infrastructure (OCI) DevOps service and
you have successfully built and tested your software applications in your
Build Pipeline. The resulting output needs to be stored in a container
repository.
Which stage should you add next to your Build Pipeline?
A. Deliver artifacts
B. Trigger deployment
C. Export packages
D. Managed build
Answer(s): A
Explanation:
To store the resulting output of your software applications in a container repository, you
should add the "Deliver artifacts" stage next to your Build Pipeline in the Oracle Cloud
Infrastructure (OCI) DevOps service. The "Deliver artifacts" stage is responsible for
packaging and delivering the build artifacts to the desired destination, such as a
container repository. It allows you to define the target location for storing the build
artifacts and configure the necessary credentials or access controls to authenticate
and authorize the delivery. By adding the "Deliver artifacts" stage, you ensure that the
output of your build process is securely and reliably transferred to the container
repository, making it available for deployment and further distribution as needed.
Reference:
https://docs.oracle.com/en-us/iaas/Content/devops/using/managing_build_pipelines.htm
QUESTION: 11
Answer(s): D
Explanation:
The correct answer is: Continuous delivery is a process that initiates deployment
manually, while continuous deployment is based on automating the deployment
process. In the DevOps lifecycle, continuous delivery and continuous deployment are
both approaches to software release and deployment, but they differ in the level of
automation and manual intervention involved. Continuous delivery refers to the practice
of continuously preparing software releases in a way that they could be deployed to
production at any time. It focuses on automating the build, test, and deployment
processes, ensuring that software is always in a deployable state. However, the
decision to actually deploy the software to production is made manually, typically by a
human operator or team. On the other hand, continuous deployment takes the
automation one step further. With continuous deployment, the software is automatically
deployed to production as soon as it passes all the necessary tests and checks. There
is no manual intervention in the deployment process, and it is fully automated. This
approach allows for faster and more frequent deployments, reducing the time between
developing new features and making them available to users. So, the main difference is
that continuous delivery requires a manual trigger for deployment to production, while
continuous deployment automates the deployment process without the need for manual
intervention.
QUESTION: 12
A. When a cluster's Kubernetes API endpoint has a public IP address, you can
access the cluster in Cloud Shell by setting up a kubeconfig file.
B. You cannot set up Cloud Shell access to the cluster if the cluster's Kubernetes API
endpoint has a private IP address.
C. Generating an API signing key pair is a mandatory step while setting up cluster
access using local machine if the public key is not already uploaded in the
console.
E. To access the cluster using kubectl you have to set up a Kubernetes manifest file
for the cluster.
The kubeconfig file by default is named config and stored in the
$HOME/.manifest directory
Answer(s): A,C,D
Explanation:
The three statements that are true regarding setting up cluster access for an Oracle
Cloud Infrastructure Container Engine for Kubernetes (OKE) cluster are: When a
cluster's Kubernetes API endpoint has a public IP address, you can access the cluster in
Cloud Shell by setting up a kubeconfig file. This allows you to authenticate and interact
with the cluster using kubectl. Generating an API signing key pair is a mandatory step
when setting up cluster access using a local machine if the public key is not already
uploaded in the console. This key pair is used for authentication and securing the
connection to the cluster. To access the cluster using kubectl, you need to set up a
Kubernetes configuration file (kubeconfig) for the cluster. By default, the kubeconfig
file is named "config" and is stored in the $HOME/.kube directory. This file contains the
necessary information and credentials to authenticate and communicate with the
cluster. These steps enable the DevOps engineer to access and manage the OKE
cluster, deploy new applications, and manage existing ones using kubectl or other
Kubernetes management tools.
Reference:
https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengdownloadkubeconfigfile.htm
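For instance, the OCI CLI can generate the kubeconfig file in its default location; the
cluster OCID and region below are placeholders:

    oci ce cluster create-kubeconfig \
      --cluster-id ocid1.cluster.oc1..example \
      --file $HOME/.kube/config \
      --region us-ashburn-1 \
      --token-version 2.0.0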
QUESTION: 13
Which is NOT a valid log category for the Oracle Cloud Infrastructure
Logging service?
A. Custom Logs
B. Hybrid Logs
C. Audit Logs
D. Service Logs
Answer(s): B
Explanation:
"The option ""Hybrid Logs"" is NOT a valid log category for the Oracle Cloud
Infrastructure Log-ging service. The Logging service in OCI provides the ability to
collect, search, and analyze logs generated by various OCI services and resources. The
valid log categories include: Service Logs: These are the logs generated by various OCI
services, such as Compute, Networking, Database, and Storage services. Custom Logs:
These are user-defined logs that can be sent to the Logging service using the Logging
SDK or APIs. These logs can be from applications or resources running in OCI. Audit
Logs:
These logs capture the activity and events related to the management of OCI resources,
such as API calls, user access, and policy changes. The ""Hybrid Logs"" option is not a
recognized log category in the OCI Logging service."
Reference:
https://docs.oracle.com/en-us/iaas/Content/Logging/Concepts/loggingoverview.htm
QUESTION: 14
A. When naming a container repository, you may use capital letters but not hyphens.
For example, you may use BGdevops-storefront, but not bgdevops/storefront.
C. You must use a separate container repository for each image, but multiple
versions of that image can be in a single repository.
D. You must use the OCI DevOps Managed Build stage to define artifacts in the
artifact and container repositories and map the build pipeline outputs to them.
Answer(s): C
Explanation:
The proper rule to follow when creating container repositories inside the Oracle Cloud
Infrastructure (OCI) Registry is: You must use a separate container repository for each
image, but multiple versions of that image can be in a single repository. This means
that each distinct image should have its own repository, but different versions of the
same image can be stored within that repository. This allows for better organization
and management of container images. The other options mentioned are not correct:
Checking the "Immutable Artifacts" box does not exist as a requirement when creating a
container repository. Immutable artifacts refer to the immutability of the container
images themselves, not a setting in the repository. There are no restrictions on using
capital letters or hyphens in the naming of container repositories. Both capital letters
and hyphens are allowed in the repository name. The OCI DevOps Managed Build stage
is not directly related to defining artifacts in the artifact and container repositories. The
Managed Build stage is responsible for building and packaging application artifacts, but
it does not define the repositories themselves.
QUESTION: 15
Answer(s): E
Explanation:
A reasonable expectation for Configuration Management (CM) as it pertains to
applications in the Oracle Cloud Infrastructure (OCI) DevOps process is consistency in
performance, function, design, and implementation. Configuration Management ensures
that the application's configuration, settings, and dependencies are managed
consistently across different environments and deployments. It helps maintain the
desired state of the application and ensures that it behaves consistently in terms of
performance, functionality, design, and implementation. By using CM practices,
developers can ensure that the application's configurations and dependencies are
accurately managed and deployed, minimizing variations and inconsistencies that could
lead to unexpected behavior or performance issues. This helps maintain a consistent
experience for users and facilitates smooth and reliable operation of the application.
QUESTION: 16
You are part of the cloud DevOps team managing thousands of compute
Instances running in Oracle Cloud Infrastructure (OCI). The OCI Logging
service is configured to collect logs from these Instances using a Unified
Monitoring Agent. A requirement has been created to archive logging data
into OCI Object Storage.
What OCI capability can help you achieve this requirement?
A. IAM policy
B. Logging Query
D. ObjectCollectionRule
Answer(s): C
Explanation:
"The OCI capability that can help achieve the requirement of archiving logging data into
OCI Object Storage is the ""Service Connector Hub."" The Service Connector Hub in OCI
enables you to configure connections and workflows between different OCI services. In
this case, you can create a connection between the OCI Logging service and OCI Object
Storage using the Service Connector Hub. By setting up a connection between these
services, you can define a workflow to automatically transfer the logs collected by the
Logging service to Object Storage for archiving. This ensures that the logging data is
securely stored and easily accessible for future analysis or compliance purposes. The
Logging Query capability allows you to search and analyze logs, but it does not directly
address the requirement of archiving the logging data into Object Storage.
ObjectCollectionRule is not a valid OCI capability and does not pertain to the archiving
of logging data into Object Storage. IAM policies are used to manage access and
permissions within OCI, but they do not directly pro-vide the capability to archive
logging data into Object Storage."
Reference:
https://docs.oracle.com/en-us/iaas/Content/service-connector-hub/archivelogs.htm
QUESTION: 17
What is the DevOps lifecycle, and how does it help businesses succeed?
Answer(s): C
Explanation:
The DevOps lifecycle is a multi-phased development cycle that focuses on rapid-release
and continuous delivery to unite team infrastructure and maximize the quality of
software. It encompasses the collaboration between development and operations
teams, emphasizing communication, automation, and continuous improvement. The
DevOps lifecycle typically includes the following phases: Plan: Teams identify business
goals, plan feature development, and prioritize tasks. Code: Developers write code and
apply version control practices to manage changes. Build:
The code is built into executable artifacts, which are often stored in a repository. Test:
Automated testing is performed to validate the functionality and quality of the software.
Deploy: The software is deployed to the target environment, following consistent and
repeatable processes. Operate: The application is monitored and managed in the
production environment, with continuous feedback loops. Monitor: Metrics and logs are
collected to monitor performance, identify issues, and optimize the system. By adopting
the DevOps lifecycle, businesses can benefit in several ways: Increased efficiency:
Automation and collaboration reduce manual efforts, enabling faster and more reliable
software delivery. Faster time to market: Continuous integration and continuous
delivery (CI/CD) practices enable frequent releases, allowing businesses to quickly
respond to market demands. Improved quality: Continuous testing and feedback loops
help catch and address issues earlier in the development cycle, improving the overall
quality of the software. Enhanced collaboration: DevOps promotes cross-functional
collaboration, breaking down silos between development, operations, and other teams,
leading to better communication and alignment. Greater stability and reliability:
Continuous monitoring and feedback loops help identify and resolve issues proactively,
resulting in more stable and reliable systems. Scalability and flexibility: DevOps
practices enable businesses to scale their infrastructure and adapt to changing
requirements more easily. Overall, the DevOps lifecycle helps businesses succeed by
fostering a culture of collaboration, automation, and continuous improvement, leading
to faster delivery, higher quality software, and better alignment between teams.
QUESTION: 18
C. Dynamic Groups access and OCI IAM policies to the code repository are not set.
D. Artifacts and build spec are removed before running the build.
Answer(s): C
Explanation:
The configuration error that could lead to the "unable to clone the repository" error is:
Dynamic Groups access and OCI IAM policies to the code repository are not set: This
error suggests that the necessary permissions and access controls have not been
properly configured for the OCI Code Repository. Dynamic Groups and IAM policies
control user access and permissions to various OCI resources, including code
repositories. Without the correct configuration, the build pipeline is unable to clone the
repository and retrieve the source code. The other options mentioned are not directly
related to the error mentioned: More stages were added than required to the build
pipeline: Adding more stages to the build pipeline than necessary would not cause an
error related to cloning the repository. It might impact the overall flow and logic of the
pipeline, but it is not directly related to the repository cloning process. Source files
connected directly to the build pipeline: Connecting source files directly to the build
pipeline is a typical setup and would not cause an "unable to clone the repository" error.
Artifacts and build spec being removed before running the build: Removing artifacts and
build specifications before running the build could impact the build process and result
in other errors, but it would not specifically cause an error related to cloning the
repository.
Reference:
https://docs.oracle.com/en-us/iaas/Content/devops/using/troubleshooting.htm
QUESTION: 19
You have just run the managed build stage of an Oracle Cloud
Infrastructure (OCI) DevOps Build Pipeline. The pipeline failed, because
the code repository could not be accessed.
What might the problem be?
D. More than one code repository was assigned to the DevOps project
Answer(s): A
Explanation:
The possible problem that caused the failure of the managed build stage in an Oracle
Cloud Infra- structure (OCI) DevOps Build Pipeline could be that a vault secret has an
incorrect OCID assigned to it. In the context of the question, the issue is related to the
code repository not being accessible. This suggests that there might be a problem with
the authentication or credentials used to access the code repository. One possible
cause is that a vault secret, which typically stores sensitive information such as
credentials or access tokens, has an incorrect OCID (Oracle Cloud Identifier) assigned
to it. If the secret's OCID is incorrect or doesn't match the expected value, it can result
in authentication failures and the inability to access the code repository, leading to a
pipeline failure. To resolve this issue, the administrator or developer should verify the
OCID assigned to the vault secret and ensure it is correct. They should also check the
code repository configuration and ensure that the correct credentials are being used to
access the repository.
QUESTION: 20
A. An ENV instruction sets the environment value to the key, and it is available for
the subsequent build steps and in the running container as well.
B. The RUN instruction will execute any commands in a new layer on top of the
current image and commit the results.
C. WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT
instructions and not for COPY and ADD instructions in the Dockerfile.
E. The COPY instruction copies new files, directories, or remote file URLs from
<src> and adds them to the filesystem of the image at the path <dest>.
Answer(s): C,E
Explanation:
The WORKDIR command is used to define the working directory of a Docker container at
any given time. The command is specified in the Dockerfile. Any RUN , CMD , ADD ,
COPY , or EN-TRYPOINT command will be executed in the specified working directory.
Reference:
https://www.geeksforgeeks.org/difference-between-the-copy-and-add-commands-in-a-dockerfile/
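A small illustrative Dockerfile showing these instructions together; the base image,
package, and paths are arbitrary choices for the sketch:

    FROM oraclelinux:8-slim
    ENV APP_HOME=/opt/app      # set for later build steps and for the running container
    WORKDIR $APP_HOME          # working directory for RUN, CMD, ENTRYPOINT, COPY, ADD
    COPY target/app.jar .      # copied from the build context into the image at $APP_HOME
    RUN microdnf install -y java-17-openjdk-headless  # runs in a new layer, result committed
    ENTRYPOINT ["java", "-jar", "app.jar"]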
QUESTION: 21
Answer(s): D
Explanation:
OCI DevOps deployment pipelines can work across OCI regions. From a single
deployment pipe-line, deployments can be executed into multiple regions, in parallel or
sequentially. To efficiently deploy an application in the Japan Central (ap-osaka-1)
region using an existing deployment pipeline set up in the US East (us-ashburn-1)
region, the recommended approach is: Deploy directly in ap-osaka-1 from the us-
ashburn-1 deployment pipeline. OCI DevOps allows you to deploy applications across
regions, and you can leverage this capability to deploy your application in a different
region than where the deployment pipeline is set up. You can configure the deployment
stage in your deployment pipeline to target the ap-osaka-1 region, specifying the
appropriate resources and settings for deployment in that region. This way, you can
achieve efficient deployment to the desired region without the need to create a separate
deployment pipeline. The other options mentioned are not the most efficient
approaches: Creating another deployment pipeline in ap-osaka-1: While it is possible to
create another deployment pipeline in the ap-osaka-1 region, it would introduce
additional complexity and management overhead. It is more efficient to leverage the
existing deployment pipeline and configure it to deploy in the desired region. Deploying
the application in us-ashburn-1 and duplicating it in ap-osaka-1: This approach would
involve deploying the application separately in both regions, which can lead to
duplication of efforts and increased maintenance complexity. It is more efficient to use
a single deployment pipeline and configure it to deploy in the target region directly.
QUESTION: 22
Answer(s): C
Explanation:
To automate infrastructure and configure Oracle Cloud Infrastructure (OCI) resources,
the recommended tool to use is Ansible. Ansible is a popular automation tool that
focuses on provisioning, configuring, and managing IT infrastructure. It uses a
declarative language called YAML to describe the desired state of the infrastructure,
allowing you to define and automate the configuration of OCI resources such as
Compute, Load Balancing, and Database services. Ansible provides a collection
specifically designed for OCI, called the "Ansible Collection," which includes modules
and playbooks for interacting with OCI APIs. By utilizing Ansible in OCI, you can easily
automate the provisioning and configuration of your infrastructure, ensuring
consistency and reproducibility. Ansible's simplicity and agentless architecture make it
a flexible and efficient choice for managing OCI resources and automating
infrastructure tasks in the context of OCI DevOps.
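As a minimal sketch, assuming the oracle.oci Ansible collection is installed and OCI
API credentials are configured locally, a playbook task can query or manage OCI
resources directly; the compartment OCID is a placeholder:

    - name: Query OCI compute instances
      hosts: localhost
      tasks:
        - name: List instances in a compartment
          oracle.oci.oci_compute_instance_facts:
            compartment_id: "ocid1.compartment.oc1..example"
          register: instances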
QUESTION: 23
A. The resources in the stack can still be edited or destroyed through the OCI
console, causing Resource Manager's state to be out of sync.
B. The resources in the stack can no longer be edited or destroyed through the
Terraform CLI on a local machine.
D. The Terraform state may become corrupted if multiple people attempt Apply jobs
in Resource Manager simultaneously.
Answer(s): C
Explanation:
The correct statement is: Resources provisioned by Resource Manager can only be
managed through Resource Manager, preventing the state from becoming out of sync.
When a stack is co-managed by multiple teams in Oracle Cloud Infrastructure (OCI)
Resource Manager, the resources provisioned by Resource Manager can only be
managed through Resource Manager itself. This ensures that the state of the stack
remains in sync and prevents conflicts that may arise from multiple teams making
changes simultaneously. Managing the resources through Resource Manager helps
maintain control and consistency over the stack deployment and configuration.
Reference:
https://docs.oracle.com/en-us/iaas/Content/ResourceManager/Concepts/resource-manager-and-terraform.htm
QUESTION: 24
A. Output artifacts aren't permanent. If they are to be used in the Deliver Artifacts
stage, they need to be exported as output artifacts to a registry.
C. Deliver Artifacts is a required stage of the build pipeline, and the entire pipeline
won't work if it is not included in order to extract artifacts after the Managed
Build stage.
D. All artifacts are permanently stored in the build pipeline. Extracting just the ones
required for deployment tells the deployment pipeline which artifacts to use.
Answer(s): C
Explanation:
This is because output artifacts are temporary files generated by the build process that
are needed to deploy an application. Since these artifacts are not permanent, they need
to be extracted from the build pipeline and stored in an Artifact Registry repository for
easy distribution, versioning, and management. The Deliver Artifacts stage in the build
pipeline is responsible for this task, which ensures that the correct artifacts are used
for each deployment. Here is the reference link for more information on Oracle Cloud
Infrastructure (OCI) DevOps build pipeline and Artifact Registry
Reference:
https://docs.oracle.com/en-us/iaas/developer-tutorials/tutorials/devops/01oci-devops-overview-contents.html#artifact-registry-overview
QUESTION: 25
E. Build the application as a single unit and use container technology to deploy it.
Answer(s): C,D
Explanation:
The two approaches that can be used to build the e-commerce website and achieve
deployment independence, easier technology upgrades, and resiliency to architecture
changes are: Implement each module as an independent service/process: This
approach is aligned with the microservices architecture, where each module or
functionality is developed and deployed as a separate service. This allows for
independent updates, replacements, or deletions of specific modules without disrupting
the rest of the application. It provides flexibility, scalability, and easier technology
upgrades by enabling the use of different technologies and frameworks for different
services. Use microservices architecture: Microservices architecture involves breaking
down the application into smaller, loosely coupled services that communicate with each
other through APIs. This architecture promotes independent deployment of services,
making it easier to update or modify specific services without affecting the entire
application. It allows for better scalability, fault isolation, and resiliency to architecture
changes. The monolithic approach, where the entire application is built as a single unit,
is not suitable for achieving the mentioned goals. It can lead to challenges in
deployment independence, technology upgrades, and adaptability to newer
technologies.
QUESTION: 26
A DevOps team has 50 web servers under their purview and they want to
patch a server application.
Which element of Ansible can be leveraged for this task and how would it
help?
A. A playbook could be leveraged and executed against the group of web servers,
as defined in the Inventory. Then, Ansible would connect to each server and
apply the same set of configurations.
B. A playbook could be leveraged to explain the series of plays and tasks that need
to be run per server. Then, Ansible would connect with and configure each
server's infrastructure automatically using YAML.
C. A playbook could be leveraged and executed against the group of web servers,
as defined in the task list. Then, Ansible would connect to each server and apply
the same set of commands.
D. A playbook could be leveraged to perform ad hoc commands per server. Then,
Ansible will automatically communicate with the servers and execute the ad hoc
commands in the order defined.
Answer(s): A
Explanation:
The correct option is: A playbook could be leveraged and executed against the group of
web servers, as defined in the Inventory. Then, Ansible would connect to each server
and apply the same set of configurations. In Ansible, a playbook is a YAML file that
describes a series of plays and tasks to be executed against a group of hosts. The
inventory file defines the group of web servers that need to be patched. By leveraging a
playbook, the DevOps team can define the desired configuration or patching tasks once
and apply them consistently across all the web servers in the group. Ansible will
connect to each server in the inventory and execute the tasks defined in the playbook,
ensuring that the desired configurations or patches are applied uniformly. This
approach simplifies the management of multiple servers as the same playbook can be
executed against the entire group, eliminating the need to manually configure each
server individually. It also allows for automation and repeatability, ensuring that the
desired changes are applied consistently and efficiently.
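A minimal sketch of that setup; the hostnames and package name are placeholders:

    # inventory.ini
    [webservers]
    web01.example.com
    web02.example.com

    # patch.yml
    - name: Patch the server application on all web servers
      hosts: webservers
      become: true
      tasks:
        - name: Ensure the application package is at the latest version
          ansible.builtin.yum:
            name: httpd
            state: latest

Running ansible-playbook -i inventory.ini patch.yml then applies the same play to every
host in the webservers group.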
QUESTION: 27
Answer(s): C
Explanation:
The correct Ansible AdHoc command to update to the newest version of Apache on all
defined web servers is: $ ansible webservers -m yum -a "name=httpd state=latest" In
this command: "ansible" is the command to execute Ansible. "webservers" is the name
of the group defined in the inventory file that includes all the web servers. "-m yum"
specifies the module to use, which in this case is "yum" for package management. "-a"
is used to pass arguments to the module. "name=httpd" specifies the name of the
package to update, in this case, "httpd" (Apache). "state=latest" instructs Ansible to
update the package to the latest version available. By running this command, Ansible
will connect to each server in the "webservers" group and use the "yum" module to
update the "httpd" package to the latest version.
QUESTION: 28
A. Code Repo: Allow dynamic group <Code Repository> to manage all resources in
compartment <compartment name>; Build Pipeline: Allow dynamic-group
<BuildPipeline> to manage all-resources in compartment <compartment name>
Answer(s): D
Explanation:
The correct DevOps IAM policy statements required for the CI/CD automation process
are: Code Repo: Allow dynamic-group <Code Repository> to manage all resources in
compartment <compartment name> Build Pipeline: Allow dynamic-group
<BuildPipeline> to manage all resources in compartment <compartment name>
Deployment Pipeline: Allow dynamic-group <Deployment Pipeline> to manage all
resources in compartment <compartment name> These policy statements ensure that
the specified dynamic groups have the necessary permissions to manage all resources
within the specified compartments. The Code Repository dynamic group should have
permissions to manage resources in the Code Repository compartment, the
BuildPipeline dynamic group should have permissions to manage resources in the Build
Pipeline compartment, and the Deployment Pipeline dynamic group should have
permissions to manage resources in the Deployment Pipeline compartment. This allows
for the automation process to trigger builds and deployments as code is pushed to the
Code Repository.
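Laid out as individual policy statements, the set described above looks like this; the
dynamic group and compartment names are placeholders:

    Allow dynamic-group <code-repo-group> to manage all-resources in compartment <compartment-name>
    Allow dynamic-group <build-pipeline-group> to manage all-resources in compartment <compartment-name>
    Allow dynamic-group <deployment-pipeline-group> to manage all-resources in compartment <compartment-name>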
QUESTION: 29
A. Manually add approvers names and email addresses in the Deployment Pipeline
page.
B. Add approvers to the buildspec file before pushing the code to the OCI Code
Repository.
D. Add approvers to the Deployment Pipeline and give them access via OCI IAM
policy.
Answer(s): D
Explanation:
To add approvers to the approval workflow in the Deployment Pipeline of Oracle Cloud
Infrastructure (OCI) DevOps service, you can follow this approach: Add approvers to the
Deployment Pipeline and give them access via OCI IAM policy. OCI DevOps allows you
to define an approval workflow as part of the Deployment Pipeline. To add approvers,
you can configure the appropriate settings in the Deployment Pipeline configuration.
This typically involves specifying the names or email addresses of the individuals who
should review and approve the deployment. These approvers will be notified when a
deployment is triggered and will be able to review and provide their approval. In order to
give the approvers access to the Deployment Pipeline, you can use OCI's Identity and
Access Management (IAM) service. By creating IAM policies, you can grant the
necessary permissions and access control to the designated approvers, ensuring that
they have the appropriate level of access to review and approve deployments. The other
options mentioned are not the correct approaches:
Manually adding approvers' names and email addresses in the Deployment Pipeline
page: This would not provide the necessary access and permissions for the approvers
to review and approve the deployment. IAM policies should be used for access control.
Emailing approvers before running the Deployment Pipeline: Emailing approvers
separately would not integrate with the automated approval workflow in the
Deployment Pipeline. The approval process should be handled within the OCI DevOps
service. Adding approvers to the buildspec file: The buildspec file is primarily used for
defining the build stages and actions, not for managing the approval workflow.
QUESTION: 30
Your organization needs to design and develop a containerized
application that requires a connection to an Oracle Autonomous
Transaction Processing (ATP) Database. As a DevOps engineer, you have
decided to use Oracle Container Engine for Kubernetes (OKE) for the
container app deployment and you need to consider options for
connecting to ATP.
Which connection option is NOT valid?
A. Enable Oracle REST Data Services for the required schemas and connect via
HTTPS.
B. Create a Kubernetes secret with contents from the ATP instance Wallet files. Use
this secret to create a volume mounted to the appropriate path in the application
deployment manifest.
C. Install the OCI Service Broker on the Kubernetes cluster and deploy
ServiceInstance and ServiceBinding resources for ATP. Then use the specified
binding name as a volume in the application deployment manifest.
Answer(s): A
Explanation:
The connection option that is NOT valid for connecting to an Oracle Autonomous
Transaction Processing (ATP) Database from Oracle Container Engine for Kubernetes
(OKE) is: Enable Oracle REST Data Services for the required schemas and connect via
HTTPS. Enabling Oracle REST Data Services (ORDS) is not a valid method for
connecting to an ATP Database from OKE. ORDS is a tool that allows you to create
RESTful web services against Oracle databases. However, it is not the recommended
method for establishing a connection between a containerized application in OKE and
an ATP Database. The other three options mentioned are valid approaches: Creating a
Kubernetes secret with contents from the ATP instance Wallet files: This allows you to
securely store and use the necessary credentials to connect to the ATP Database in
your application's deployment manifest. Using Kubernetes secrets to configure
environment variables on the container with ATP instance OCID and OCI API
credentials: This approach allows you to set environment variables within your
application container to provide the necessary connection details to the ATP Database.
Installing the OCI Service Broker and deploying ServiceInstance and ServiceBinding
resources for ATP: The OCI Service Broker simplifies the process of provisioning and
managing Oracle Cloud Infrastructure (OCI) services, including ATP, from within
Kubernetes. This option allows you to create a service binding that provides the
necessary connection information to your application as a volume in the deployment
manifest. These options provide more direct and secure connections between the
containerized application in OKE and the ATP Database, ensuring proper integration and
data access.
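For the wallet-based option, a hedged sketch; the secret name, wallet directory, and
mount path are illustrative:

    kubectl create secret generic atp-wallet --from-file=wallet/

    # Pod spec excerpt from the deployment manifest, mounting the wallet
    spec:
      volumes:
        - name: atp-wallet
          secret:
            secretName: atp-wallet
      containers:
        - name: app
          volumeMounts:
            - name: atp-wallet
              mountPath: /app/wallet
              readOnly: true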
QUESTION: 31
A. The storage administrator forgot to select "Oracle Managed" while creating the
bucket.
B. The resource bucket policy lacks the necessary Access Control List (ACL).
D. There is no Identity and Access Management (IAM) policy allowing the Object
Store service to use the Vault key.
Answer(s): C
Explanation:
The reason why the storage administrator cannot associate an encryption key from OCI
Vault to an Object Storage bucket in a new compartment could be: There is no Identity
and Access Management (IAM) policy allowing the Object Store service to use the Vault
key. This is because an IAM policy is required to authorize the Object Storage service to
use the encryption key from OCI Vault. The IAM policy should allow the service to use
the key and also give permission to access the Vault resource. Without the appropriate
IAM policy in place, the storage administrator will not be able to associate the
encryption key with the Object Storage bucket. Here is the link to the official
documentation on associating an Oracle-managed encryption key from OCI Vault with
an Object Storage bucket:
Reference:
https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/managingencryptionkeys.htm#associating-oci-vault-managed-keys-with-object-storage-buckets
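The missing policy would look roughly like the following; the service name embeds the
region, and both the region and compartment name here are placeholders:

    Allow service objectstorage-us-ashburn-1 to use keys in compartment <compartment-name>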
QUESTION: 32
A. Configure both 3rd party monitoring tool and OCI Compute Agent on OCI
compute instances to collect required resource metrics. Use OCI Events service
(com.oraclecloud.devopsdeploy.createdeployment) with Notifications service to
track and notify all changes occurring in the target OCI environment.
B. Configure OCI Compute agent on on-premises VMs and OCI compute instances
to collect required resource metrics. Use OCI Events service to track the end-to-
end deployment process (com.oraclecloud.devopsdeploy.createdeployment) and
creation of new bucket (com.oraclecloud.objectstorage.createbucket). Use OCI
Notifications and Events services to notify these changes.
Answer(s): D
Explanation:
The recommended solution to achieve the requirements mentioned would be: Configure
OCI Compute agent on OCI compute instances to collect the required resource
metrics. Use OCI Events and Functions services to track the end-to-end deployment
pipeline (com.oraclecloud.devopsdeploy.createdeployment) and the creation of new
OCI Object Storage buckets (com.oraclecloud.objectstorage.createbucket). Finally,
utilize OCI Notifications and Events services to notify these changes. Continuous
monitoring with resource metrics: Install and configure the OCI Compute agent on the
OCI compute instances to collect the required resource metrics such as CPU utilization,
memory utilization, and disk IOPS. This ensures continuous monitoring of the VMs as
they are migrated to OCI. Monitoring the deployment pipeline: Utilize the OCI Events
service to track the end-to-end deployment pipeline. Specifically, use the event type
"com.oraclecloud.devopsdeploy.createdeployment" to monitor the deployment process.
This allows you to track the progress and status of the migration workflow. Notification
for new OCI Object Storage buckets: Leverage the OCI Notifications service in
conjunction with the OCI Events service. Set up a notification rule to trigger an email
notification whenever a new OCI Object Storage bucket is created. Use the event type
"com.oraclecloud.objectstorage.createbucket" to identify the creation of new buckets.
By combining the OCI Compute agent, OCI Events service, Functions service, and
Notifications service, you can ensure continuous monitoring of resource metrics, track
the deployment pipeline, and receive email notifications for any new OCI Object Storage
bucket creations during the migration workflow.
QUESTION: 33
A. Enter the necessary vault secret variable OCIDs into the vaultVariables section.
B. Enter the variables you would like to use in later build steps into the
localVariables section.
C. Enter the details for binaries used in later pipeline stages into the outputArtifacts
section.
D. Enter the artifacts the build pipeline should permanently save into the
storeArtifacts section.
E. Enter the vault secrets needed for the deployment pipeline into the
exportedVariables section.
Answer(s): A,C
Explanation:
As a developer working on the Oracle Cloud Infrastructure (OCI) DevOps service, when
creating a build spec YAML file for the build pipeline, the following two actions are part
of the proper creation of the file: Enter the details for binaries used in later pipeline
stages into the outputArtifacts section:
In the outputArtifacts section, you specify the artifacts or files generated during the
build process that should be saved for future use. These artifacts can include compiled
binaries, libraries, configuration files, or any other relevant files that need to be
preserved. Enter the necessary vault secret variable OCIDs into the vaultVariables
section: In the vaultVariables section, you define the variables that correspond to the
Vault OCIDs (Oracle Cloud Infrastructure Vault service). These variables are used to
securely store and retrieve sensitive information, such as API keys, passwords, or other
secrets, required by the build pipeline or later stages of the deployment process. By
including these actions in the build spec YAML file, you ensure that the necessary
artifacts are properly saved and that the required vault secret variables are available for
secure access during the build and deployment pipeline execution.
Reference:
https://docs.oracle.com/en-us/iaas/Content/devops/using/build_specs.htm
QUESTION: 34
B. RBAC Roles
C. Network Policies
D. IAM Policies
Answer(s): C
Explanation:
As the OKE cluster administrator, you can define permissions to restrict pod-to-pod
communications except as explicitly allowed by using Network Policies. Network
Policies are a Kubernetes feature that allows you to define rules for network traffic
within the cluster. They provide fine-grained control over ingress (incoming) and egress
(outgoing) traffic between pods. By creating Network Policies, you can specify the
allowed communication paths between pods based on various criteria such as source
and destination pods, namespaces, IP addresses, ports, and protocols. This allows you
to enforce security and isolation within your OKE cluster, ensuring that pods can only
communicate with authorized pods or services. RBAC Roles and IAM Policies are used
to manage access control and permissions for managing and interacting with the
cluster itself, but they do not directly control pod-to-pod communications. Security
Lists, on the other hand, are associated with VCN (Virtual Cloud Network) resources and
control traffic at the subnet level, not at the pod level within the OKE cluster.
Reference:
https://docs.oracle.com/en-us/iaas/Content/Security/Reference/oke_security.htm
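A representative NetworkPolicy permitting only labeled frontend pods to reach backend
pods on one port; the names, labels, and port are illustrative:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend
    spec:
      podSelector:
        matchLabels:
          app: backend          # policy applies to backend pods
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend # only frontend pods may connect
          ports:
            - protocol: TCP
              port: 8080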
QUESTION: 35
A. Add Rescue and Trigger stages to automatically trigger the failed deployment.
C. Rollback the failed stage in the pipeline to the previous successful released
version.
D. Automate backup and use the rerelease stage in the Deployment Pipeline.
Answer(s): C
Explanation:
In the Deployment Pipeline, when a deployment stage fails during the deployment of an
update to production, the recommended action is to: Rollback the failed stage in the
pipeline to the previous successful released version. By rolling back the failed stage to
the previous successful version, you can revert the deployment to a known stable state
and prevent the failed changes from being released to production. This helps to
maintain the stability and integrity of the application. It allows you to address the issues
encountered in the failed stage, make necessary fixes or adjustments, and then proceed
with the deployment again once the issues are resolved. The other options mentioned
are not the most appropriate actions to perform in this scenario: Automating backup
and using the rerelease stage: While backups are important for data protection,
automating backup and using a rerelease stage would not directly address the failure in
the deployment stage. It is more focused on data backup and recovery. Adding Rescue
and Trigger stages: Adding Rescue and Trigger stages might help in certain situations,
but they are not the primary solution for handling a failed deployment stage. They are
more related to error recovery mechanisms or additional deployment steps, rather than
specifically addressing the failed stage. Using OCI DevOps Trigger and Rerun tool: While
OCI DevOps Trigger and Rerun tool can be useful for automating and managing
deployments, it is not specifically designed to handle a failed deployment stage. It is
more focused on triggering and rerunning pipelines or stages based on specific
conditions or events.
QUESTION: 36
A. Dimensions
B. Metric
C. Grouping Function
D. Statistic
E. Interval
Answer(s): A,C
Explanation:
When creating Monitoring Query Language (MQL) expressions in Oracle Cloud
Infrastructure Monitoring service, the optional components are: Dimensions:
Dimensions provide additional context or filters for the metrics being queried. They
allow you to narrow down the scope of the query by specifying specific resources,
regions, or other properties. Grouping Function: The grouping function is used to
aggregate or group the data based on specified dimensions. It allows you to perform
calculations or analysis on a subset of data and present the results in a summarized
form. The components that are not optional when creating MQL expressions are:
Metric: The metric component is mandatory and refers to the specific metric you want
to monitor or analyze, such as CPU utilization, network traffic, or a custom metric.
Interval: The interval is also mandatory; it is the aggregation window, specified in
square brackets immediately after the metric name. Statistic: The statistic component
is mandatory as well and represents the aggregation applied to the metric data, such
as mean, sum, or count. For example, in the expression
CpuUtilization[1m].grouping().mean(), CpuUtilization is the metric, 1m is the interval,
grouping() is an optional grouping function, and mean() is the statistic; an optional
dimension filter could be added in braces after the interval.
Reference:
https://docs.oracle.com/en-us/iaas/Content/Monitoring/Reference/mql.htm
QUESTION: 37
What cannot be specified in a Schema Document for Oracle
Cloud Infrastructure (OCI) Resource Manager?
A. Dependency relationships between variables.
D. Information about the application such as its name, description, and version.
Answer(s): A
Explanation:
The correct answer is: dependency relationships between variables. In an Oracle Cloud
Infrastructure (OCI) Resource Manager Schema Document, you can specify various
aspects of your template, such as information about the application, permissions, logo,
and pattern validations for string-type variables. However, dependency relationships
between variables cannot be specified in a Schema Document. Dependency
relationships between variables are typically defined in the Terraform configuration
files themselves rather than in the Schema Document. Terraform allows you to express
dependencies between resources and variables directly within the configuration using
features like interpolation and variable references. The Schema Document in OCI
Resource Manager primarily focuses on providing metadata and validation rules for the
template inputs, but it does not include features for defining dependencies between
variables.
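A minimal schema document sketch illustrating the kinds of things that can be
specified (application metadata and a pattern validation); the field values here are
assumptions for illustration only:

    title: "Sample application"
    description: "Provisions a VCN and a compute instance."
    schemaVersion: 1.1.0
    version: "20190404"
    logoUrl: "https://example.com/logo.png"
    variableGroups:
      - title: "Networking"
        variables:
          - vcn_cidr
    variables:
      vcn_cidr:
        type: string
        title: "VCN CIDR block"
        # Pattern validation for a string-type variable.
        pattern: "^([0-9]{1,3}\\.){3}[0-9]{1,3}/[0-9]{1,2}$"
        required: true
    # Note there is no construct here for declaring dependencies between
    # variables; those live in the Terraform configuration itself.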
QUESTION: 38
A. The OKE cluster needs to have a secret with credentials of their OCIR repository
and use that secret in the Kubernetes deployment manifest.
B. They need to add IAM credentials for each user that deploys applications to the
OKE cluster.
C. The VCN hosting the OKE cluster worker nodes needs to have a NAT gateway to
access OCIR repositories.
D. They need to add a security list rule for TCP port 22 to connect to the OCIR
service.
Answer(s): A
Explanation:
A valid concern that needs to be further investigated in this scenario is whether the OKE
cluster has a secret with credentials of the Oracle Cloud Infrastructure Registry (OCIR)
repository and if that secret is being used in the Kubernetes deployment manifest.
When deploying an application on OKE and pulling images from OCIR, the cluster needs
to authenticate and authorize access to the OCIR repository. This is typically done by
creating a Kubernetes secret that contains the credentials (authentication token or
username/password) required to access the repository. The secret is then referenced in
the Kubernetes deployment manifest to allow the cluster to pull the images. If the
images are not getting pulled from the designated OCIR repository, it suggests that the
OKE cluster might be missing the necessary secret with the OCIR credentials or the
secret is not properly referenced in the deployment manifest. Further investigation
should focus on ensuring the existence and correct configuration of the secret and its
usage in the deployment process.
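A sketch of what to verify, with placeholder secret name, region key, and tenancy
namespace; the kubectl command that creates the Docker registry secret is shown as a
comment:

    # Create the registry secret first (placeholders throughout):
    #   kubectl create secret docker-registry ocir-secret \
    #     --docker-server=iad.ocir.io \
    #     --docker-username='<tenancy-namespace>/<username>' \
    #     --docker-password='<auth-token>'
    # Then confirm the deployment manifest references it:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: iad.ocir.io/<tenancy-namespace>/my-app:latest
          imagePullSecrets:
            - name: ocir-secret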
QUESTION: 39
A. Their source code and Kubernetes manifest are in different Git repositories.
B. The build_spec.yaml file is in the root directory of their Git repository, and they
didn't specify a path to it.
D. Their build specification file is available in a different directory of their Git
repository, and there is no reference to its location.
E. They did not export a vault variable in the vaultVariables section of the
build_spec.yaml file.
Answer(s): D,E
Explanation:
The two situations that might be causing the problem with the build pipeline in the
Oracle Cloud Infrastructure DevOps service are: They did not export a vault variable in
the vaultVariables section of the build_spec.yaml file.
When using vault variables in the build specification file (build_spec.yaml), it is
necessary to export the vault variables in the vaultVariables section of the file. If this
step is missed, the pipeline may fail due to missing or inaccessible vault variables.
Their build specification file is available in a different directory of their Git repository,
and there is no reference to its location. The build specification file (build_spec.yaml)
should be properly referenced in the pipeline configuration. If the file is located in a
different directory than the default location or if its location is not specified correctly in
the pipeline configuration, the pipeline may fail to find and execute the build
specification, leading to a failure. To resolve these issues, the development team
should ensure that they export the required vault variables in the build specification file
and correctly reference the location of the build specification file in the pipeline
configuration.
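To make these failure modes concrete, here is a build_spec.yaml fragment (placeholder
OCID and variable names) showing a vault variable declared in vaultVariables and a
variable exported for later stages:

    env:
      vaultVariables:
        # Omitting this mapping leaves the secret unavailable to the pipeline.
        API_KEY: "ocid1.vaultsecret.oc1..exampleuniqueID"
      exportedVariables:
        - BUILD_VERSION
    steps:
      - type: Command
        name: "Set version"
        command: |
          export BUILD_VERSION=1.0.$(date +%Y%m%d%H%M)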
QUESTION: 40
You host your application on a stack in Oracle Cloud Infrastructure (OCI)
Resource Manager. Due to recent growth in your user base, you decide to
add a CIDR block to your VCN, add a subnet, and provision a compute
instance in it.
Which statement is true?
A. You need to provision a new stack because Terraform uses immutable infrastructure.
B. You can provision the new resources in the OCI console and add them to the
stack with Drift Detection.
C. You cannot provision the new resources in the OCI console first, then later add
them to the Terraform configuration and state.
D. You can make the changes to the Terraform code, run an Apply job, and Resource
Manager will provision the new resources.
Answer(s): A
Explanation:
The correct statement is: You need to provision a new stack because Terraform uses
immutable infrastructure. In Oracle Cloud Infrastructure (OCI) Resource Manager,
Terraform uses the concept of immutable infrastructure, which means that any changes
to the infrastructure are managed through the Terraform code. In this scenario, if you
want to add a CIDR block, subnet, and compute instance to your VCN, you would need to
make the necessary changes to your Terraform code, create a new stack in Resource
Manager, and deploy the updated code. This ensures that the infrastructure is created
consistently and according to the desired state defined in the Terraform code. Simply
provisioning the new resources in the OCI console and later adding them to the
Terraform configuration and state would not be the recommended approach in this
case.
QUESTION: 41
B. Use the default dashboard that comes configured with the Kubernetes
implementation on the Oracle Cloud Infrastructure Container Engine for
Kubernetes (OKE).
Answer(s): C,D,F
Explanation:
The three actions that the DevOps Engineer must perform to easily manage and
troubleshoot applications on Oracle Cloud Infrastructure Container Engine for
Kubernetes (OKE) are: Create a service account and the clusterrolebinding, obtain an
authentication token for the service account using the kubectl command, and run a
kubectl proxy command to enable the Kubernetes dashboard. This allows for easy
access to the dashboard and management of deployed applications. Automatically
deploy the Kubernetes dashboard during cluster creation, create the cluster using the
API, and set the isKubernetesDashboardEnabled attribute to true. This ensures that the
Kubernetes dashboard is automatically deployed and accessible. Manually deploy the
Kubernetes dashboard on an existing cluster and access it using the appropriate URL.
This involves deploying the dashboard manually and accessing it through the specified
URL, which allows for management and troubleshooting of applications. Using these
actions, the DevOps Engineer can effectively manage and troubleshoot applications
deployed on OKE, leveraging the Kubernetes dashboard for enhanced visibility and
control.
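A sketch of the service account approach (names are illustrative; the token command
assumes kubectl 1.24 or later):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: oke-dashboard-admin
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: oke-dashboard-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: oke-dashboard-admin
        namespace: kube-system
    # Obtain a login token and start a local proxy to reach the dashboard:
    #   kubectl -n kube-system create token oke-dashboard-admin
    #   kubectl proxy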
QUESTION: 42
You are a DevOps engineer who has recently joined a new department.
You have created 10 Terraform stacks using Oracle Cloud Infrastructure
(OCI) Resource Manager. Each stack creates a different set of resources
in OCI for your development team.
What determines the cost of these Terraform stacks?
A. Resource Manager stacks are free but you are charged for the resources they
create.
B. The cost depends on the number of lines of text in your Terraform configuration
files.
C. The cost for each stack will be higher for a Pay As You Go subscription than for
monthly flex billing.
D. The cost depends on the length of time it takes to build each resource using
these Terraform stacks.
Answer(s): A
Explanation:
The correct answer is: Resource Manager stacks are free but you are charged for the
resources they create.
When it comes to the cost of using Terraform stacks created with Oracle Cloud
Infrastructure (OCI) Resource Manager, it's important to note that Resource Manager
stacks are free to use. However, you will be charged for the resources that are
provisioned by these stacks. The cost of these resources will depend on factors such
as the types of resources created, their configurations, usage duration, and the pricing
model associated with your subscription. The pricing model you choose, such as Pay As
You Go or monthly flex billing, will affect the cost of the resources provisioned by your
Terraform stacks. Pay As You Go typically incurs usage-based charges, where you pay
for the actual consumption of resources. Monthly flex billing, on the other hand,
provides predictable costs based on fixed monthly commitments. The number of lines
of text in your Terraform configuration files or the time it takes to build resources does
not directly determine the cost. It's the actual usage and configuration of provisioned
resources that impact the cost.
QUESTION: 43
A. OCI DevOps Triggers feature can be used to automate deployment.
B. Application code can be pushed to the Resource Manager Stack for automatic
deployment.
C. Terraform code can be packaged and pushed to the OCI Code Repository to
deploy the changes.
D. Manual builds can be run from the Build Pipelines to deploy the changes.
Answer(s): A
Explanation:
The company can use the OCI DevOps Triggers feature to automate deployment of their
application code changes to the production server. Therefore, the correct answer is: OCI
DevOps Triggers feature can be used to automate deployment. OCI DevOps Triggers
allow for automatic builds and deployments based on changes to the code repository.
When a new commit is pushed to the repository, the trigger can initiate a build pipeline
that creates an artifact and deploys the new version of the application to the production
server. Here is the link to the official documentation on using triggers in OCI DevOps to
automate application deployment:
Reference:
https://docs.cloud.oracle.com/en-us/iaas/devops/using/using-triggers.htm
QUESTION: 44
In OCI Secret Management within a Vault, you have created a secret and
rotated the secret one time. The current version state shows:
Version Number: 2 (latest) - Status: Current
Version Number: 1 - Status: Previous
In order to roll back to version 1, what should the Administrator do?
A. Create a new secret version 3 (Pending) and copy the contents of version 1 into
version 3.
B. From the version menu, select "Promote to current."
C. From the version 2 (latest) menu, select Rollback and select version 1 when given
the option.
D. Deprecate version 2 (latest), create a new secret version 3, and create a soft link
from version 3 to version 1.
Answer(s): B
Explanation:
The correct answer is: From the version menu, select "Promote to current." To rollback
to a previous version in OCI Secret Management within a Vault, the administrator should
select the desired version from the version menu and choose the option "Promote to
current." In this scenario, the administrator wants to rollback to version 1, so they would
select version 1 from the menu and promote it to the current version. This action will
make version 1 the active and current version of the secret, replacing version 2. The
"Promote to current" option allows administrators to switch between different versions
of a secret and make a specific version the active one.
QUESTION: 45
C. When the custom metrics from the services exceed a configured threshold.
Answer(s): A
Explanation:
The Kubernetes Cluster Autoscaler increases or decreases the size of a node pool
automatically based on resource requests, rather than on resource utilization of nodes
in the node pool.
Reference:
https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengusingclusterautoscaler.htm
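For reference, the node pool bounds are passed to the autoscaler as
--nodes=<min>:<max>:<node-pool-OCID> arguments; a fragment of the autoscaler
Deployment's container command might look like the sketch below (the cloud-provider
value and the OCID are placeholders; check the referenced documentation for the exact
flags):

    # Container command fragment from a cluster-autoscaler Deployment (sketch).
    command:
      - ./cluster-autoscaler
      - --cloud-provider=oci
      - --nodes=1:5:ocid1.nodepool.oc1..exampleuniqueID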
QUESTION: 46
A. OCI DevOps helps with security issues and ensures integrated security through
automated Jira notifications.
B. OCI DevOps assists with high failure rate and outages through Anomaly
Detection, Monitoring Services, and Cloud Analytics.
C. OCI DevOps helps with deployment delays by ensuring rapid and continuous
integration and delivery through CI/CD pipelines.
D. OCI DevOps helps with erratic code issues by ensuring speedy code execution
through shared repos and tight operational feedback loops.
Answer(s): C
Explanation:
The correct answer is: OCI DevOps helps with deployment delays by ensuring rapid and
continuous integration and delivery through CI/CD pipelines. Oracle Cloud Infrastructure
(OCI) DevOps provides a set of tools and services that support the implementation of
DevOps practices. One of the key capabilities of OCI DevOps is enabling rapid and
continuous integration and delivery through CI/CD (Continuous Integration/Continuous
Delivery) pipelines. CI/CD pipelines automate the build, testing, and deployment
processes, allowing developers to quickly and efficiently deliver software updates and
new features. By using OCI DevOps, companies can streamline their development and
deployment processes, reducing deployment delays and improving their ability to keep
up with competitors. OCI DevOps provides features such as source code management,
build automation, artifact management, and release automation, all of which contribute
to faster and more reliable software delivery.
QUESTION: 47
Answer(s): C
Explanation:
When creating stages for an Oracle Kubernetes Engine (OKE) deployment pipeline, you
are able to include the following actions within the pipeline stages themselves: Add a
stage to apply the Kubernetes manifest to the Kubernetes cluster: This stage allows you
to apply the desired Kubernetes manifest files that define the deployment configuration
of your application to the OKE cluster. It ensures that the desired state of your
application is reflected in the cluster. Add a stage to apply the container image to the
Kubernetes cluster: This stage involves deploying the container image of your
application to the OKE cluster. It ensures that the latest version of your application's
container image is deployed and running in the cluster. Add a stage to deploy
incrementally to multiple OKE target environments: This stage allows you to deploy your
application incrementally to multiple OKE target environments, such as staging and
production. It enables you to control the deployment process and ensure that the
application is rolled out smoothly across different environments. Add a stage to deliver
artifacts to an Oracle Cloud Infrastructure (OCI) Artifact Registry: This stage involves
delivering the artifacts generated during the build process, such as container images or
other artifacts, to the OCI Artifact Registry. The Artifact Registry provides a centralized
location for storing and managing artifacts, making them easily accessible for
deployment to OKE or other environments. Including these actions within the pipeline
stages themselves helps streamline the deployment process and ensures that all
necessary steps are included within the automated pipeline, minimizing the need for
manual intervention or external processes.
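For the manifest-apply stage, the artifact is typically a Kubernetes manifest in which
the pipeline can substitute parameters at deploy time; a fragment is sketched below
(the image path and the ${IMAGE_TAG} parameter are illustrative assumptions):

    # Fragment of a manifest artifact used by an OKE deploy stage (sketch).
    spec:
      containers:
        - name: my-app
          # The deployment pipeline substitutes ${IMAGE_TAG} if the artifact
          # is marked for parameterization.
          image: iad.ocir.io/<tenancy-namespace>/my-app:${IMAGE_TAG}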
QUESTION: 48
A. Initiate the automated upgrade process using the OCI Console, CLI, or API.
B. Upgrade the node pools one at a time, then once all node pools are upgraded,
upgrade the control plane.
C. Upgrade the control plane first, then upgrade the node pools.
Answer(s): C
Explanation:
The correct approach to upgrade an Oracle Container Engine for Kubernetes (OKE)
cluster to a newer version of Kubernetes is to first upgrade the control plane and then
upgrade the node pools. Here are the steps to follow: Upgrade the control plane: Initiate
the upgrade process for the control plane using the OCI Console, CLI, or API. This
upgrade will update the Kubernetes control plane components running in the master
nodes of the cluster. Upgrade the node pools: After the control plane upgrade is
completed, you can proceed to upgrade the node pools. A node pool is a set of worker
nodes within the cluster. You can upgrade the node pools one at a time or
simultaneously, depending on your requirements and cluster configuration. By following
this approach, you ensure that the control plane is upgraded first, which ensures
compatibility and stability with the new version of Kubernetes. Afterward, you can
upgrade the node pools to ensure all worker nodes are running the latest version of
Kubernetes. It is important to note that before performing any upgrades, it is
recommended to review the release notes and upgrade documentation provided by
Oracle for any specific instructions or considerations related to the version you are
upgrading to.
Reference:
https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengaboutupgradingclusters.htm
QUESTION: 49
B. Ansible Collection
D. Terraform
Answer(s): B,D
Explanation:
To use a Kubernetes cluster in your deployment architecture on Oracle Cloud
Infrastructure (OCI) with OCI DevOps service, the two recommended tools or services
are: Terraform: Terraform is a widely used Infrastructure-as-Code (IaC) tool that allows
you to define and manage your infrastructure resources in a declarative way. You can
use Terraform to define and provision your Kubernetes cluster on OCI, including the
necessary networking, compute resources, and container services. Ansible Collection:
Ansible is an open-source automation tool that helps with configuration management,
application deployment, and orchestration. The Ansible Collection for OCI provides
modules and playbooks specifically designed to manage and interact with OCI
resources, including Kubernetes clusters. You can use Ansible Collection to automate
the deployment and management of your Kubernetes cluster on OCI. The other options
mentioned are not directly related to managing Kubernetes clusters on OCI: Compute
Jenkins Plug-in: Jenkins is a popular open-source automation server used for
Continuous Integration/Continuous Deployment (CI/CD) processes. The Compute
Jenkins Plug-in is specific to managing OCI compute resources using Jenkins but does
not directly address Kubernetes cluster deployment. Chef Knife Plug-in: Chef is a
configuration management tool that helps with managing infrastructure as code. The
Chef Knife Plug-in is used to interact with the Chef tool, but it does not directly address
Kubernetes cluster deployment on OCI. OCI Resource Manager: OCI Resource Manager
is a service that helps you automate the process of deploying infrastructure resources
on OCI.
While it can be used to manage various OCI resources, including compute instances, it
does not specifically focus on Kubernetes cluster deployment.
QUESTION: 50
B. Create a reference to a secret in the OCI Vault.
C. Configure the SSH file so that their SSH key is used when connecting to OCI
Code Repositories.
Answer(s): B
Explanation:
After triggering a Build Pipeline in the OCI DevOps service, you can perform the
following action:
Create a reference to a secret in the OCI Vault: This allows you to securely store and
manage secrets, such as API keys or passwords, that are required for the deployment
process. By referencing the secret in the OCI Vault, you can ensure that sensitive
information is protected and easily accessible during the deployment. The other options
mentioned are not directly related to actions that can be performed after triggering a
Build Pipeline: Setting up a Kubernetes cluster as an environment for deployment is
typically done before triggering the Build Pipeline. It involves provisioning the necessary
infrastructure to support the deployment of containerized applications. Configuring an
OCI compartment for storing DevOps resources is a configuration step that is done
independently of the Build Pipeline. Compartments are used to organize and manage
resources in OCI, but they are not directly related to the Build Pipeline process.
Configuring the SSH file for OCI Code Repositories is not an action that is performed
after triggering the Build Pipeline. SSH keys are typically configured before interacting
with code repositories, and they are not specific to the Build Pipeline process.
QUESTION: 51
A company uses OCI logging service to collect logs. You need to move the
archive log data to OCI Object storage.
Which OCI feature should you use to achieve the goal? (Choose two.)
B. Service Connector Hub
C. Compartments
D. IAM policy
Answer(s): B,D
Explanation:
To move the archive log data to OCI Object Storage, you should use the following OCI
features:
Service Connector Hub: The Service Connector Hub allows you to create a service
connector between OCI Logging and OCI Object Storage. You can configure the
connector to automatically export log data from the Logging service to Object Storage.
IAM Policy: IAM policies are used to define permissions and access control for
resources in OCI. You need to configure the appropriate IAM policies to allow the
necessary permissions for the Logging service to write data to Object Storage.
Compartments and Oracle Digital Assistant are not directly related to the task of
moving archive log data to OCI Object Storage.
QUESTION: 52
What is the correct logging CLI syntax for the log search with a query for
REST call responses having status code 400, within a Log Group "web"
and the Log "application"?
Answer(s): A