Working With Oracle Autonomous Database
Notes:
• The exam is based on the 2019 version. At that time, the Autonomous services ran on shared Exadata servers with Oracle Database 18c, so they had fewer features than they do now. Likewise, some documents describe current features that may not apply to the exam. For example, they state that database links cannot be created, but a package now provides this functionality (see the sketch after this list).
• The current service family has four variants: the same two original services in shared mode, plus the option of running each of them in dedicated mode, so each one has its own features and capabilities.
• The videos do not put much emphasis on analytic views, but the exam has several questions on this topic.
• The exam also emphasizes the features removed in Autonomous and the packages available; the PDFs on the shared architecture of ADW and ATP have all the necessary detail.
• The references used to answer the practice exam are included. That exam is really a badly answered brain dump that I have tried to correct so it can serve as a reference, but there are things I am not certain apply to the exam version.
• I tried to base all the questions on the services in shared Exadata mode.
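As a hedged sketch of the database-link capability mentioned in the first note (host, service, and credential names are hypothetical; DBMS_CLOUD_ADMIN is the package referenced in the blog post below), an outbound link can be created like this:

-- Assumes a credential for the target database and its SSL certificate DN.
BEGIN
  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name       => 'SALES_LINK',
    hostname           => 'target-db.example.com',
    port               => 1522,
    service_name       => 'sales_service.example.com',
    ssl_server_cert_dn => 'CN=target-db.example.com',
    credential_name    => 'DB_LINK_CRED');
END;
/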
References:
• https://docs.oracle.com/en/cloud/paas/atp-cloud/atpug/using-oracle-autonomous-transaction-processing-shared-exadata-infrastructure.pdf
• https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/using-oracle-autonomous-data-warehouse-shared-exadata-infrastructure.pdf
• https://www.oracle.com/a/ocom/docs/database/oracle-autonomous-database-strategy-wp.pdf
• https://www.oracle.com/database/technologies/datawarehouse-bigdata/adb-faqs.html#GENERAL-BOOKMARK
• https://blogs.oracle.com/datawarehousing/making-database-links-from-adw-to-other-databases
• https://docs.cloud.oracle.com/en-us/iaas/Content/Database/Tasks/adbcreating.htm
• http://www.oracle.com/us/products/database/autonomous-database-self-securing-5116048.pdf
• https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dwhsg/part-analytic-views.html#GUID-AC5ACD4F-69F3-4C32-B7A1-EABF93639BEC
Welcome to the Oracle Autonomous Database technical overview. Let's start with what you will
learn and the course objectives we will cover in this video. We will start with a quick overview
of the Oracle Cloud Infrastructure, followed by an overview of the autonomous database. We
will go over key features of the Oracle Autonomous Database in the areas of self-driving, self-
securing, and self-healing. We will learn how the Oracle Autonomous Database integrates and
fits with the Oracle Cloud Infrastructure. And finally, we will put it all together with a typical
workflow on deploying an autonomous database.
The Oracle Autonomous Database is the integration of the Oracle Database running on the
Exadata platform with our complete infrastructure automation and our fully automated data
center operations. These automated operations include provisioning, patching, upgrading,
online backups, monitoring, scaling, diagnosing, performance tuning, optimizing, testing, and
change management of complex applications and workloads, and automatically handling
failures and errors.
Let's start with the Oracle Cloud Infrastructure. Because the Oracle Autonomous Database
services are hosted as part of the Oracle Cloud Infrastructure, it is important to understand what
OCI is, its key features, and how it integrates. So let's proceed with an overview of the OCI
infrastructure.
Oracle built an enterprise cloud capable of running the most demanding and most innovative
workloads. And we followed three key design principles. We knew that to be effective in
supporting the systems of record that run our customers' businesses, we needed our infrastructure to be compatible with the critical and complex workloads our customer base cares about, as well as to provide the same level of performance as they have gotten on premises, or better.
That entails, first, industry-leading performance stats in terms of compute power and storage
IOPS capability.
But even more importantly, in many ways, is the consistency of this performance. To effectively
run stateful systems of record, performance can't be reduced by what's happening next to the
customer. And it can't vary from moment to moment, day to day, or month to month. To get this, we eliminated resource oversubscription from compute, memory, and network
resources. This makes our cloud more expensive to build, but it gives our cloud the ability to run
enterprise workloads more effectively than any other cloud today.
Oracle Cloud Infrastructure provides low, predictable pricing. We made the pricing of our cloud
components low so that our customers could save money by moving to the cloud. But almost
more importantly, we made the economics of our cloud far more predictable by making services
all-inclusive and pushing autonomous services.
Oracle Cloud Infrastructure makes it easier to deploy Oracle products. Our existing customers
can deploy faster and more easily and focus on using the product rather than the mundane tasks
of managing and continuously upgrading the infrastructure. With tools and processes to help
migrations, it makes it easier for Oracle customers to migrate to Oracle's cloud.
We built our cloud to support all the functionality and performance available in customer data
centers, but with the benefits of increased agility, elimination of mundane tasks like managing
hardware and facilities upgrades, patches, and capacity forecasting. We have deep expertise in
cloud-specific automation. To make the migration possible without risk or high cost, we offer
tools to connect your data center to our cloud to enable the migration itself. Everything
we run in our cloud is consistent with what you run in your own data center, including the Oracle
Database itself, the surrounding ecosystem of tools like RAC, Data Guard, GoldenGate and all
the third-party and management tools our customers use.
And we built it so that customers wouldn't take a step backwards in terms of performance when
they move to the cloud. We give them the ability to run Exadata-engineered systems as a cloud
service, offering the highest level of performance and scalability for Oracle workloads,
something that is widely used on premises environments and not available in any other cloud.
We also built a cloud network with massive interconnect bandwidth and no resource oversubscription to ensure that noisy neighbors aren't an issue and that the high performance we deliver doesn't vary with external factors.
These are often complex, customized environments that can't easily move to vanilla SaaS
environments. We give these customers an easy path to move apps, as they run in their own
data centers, to the cloud, where they get the same performance or better than on premises
while no longer wasting time on hardware refreshes, system upgrades, or other mundane tasks.
And they often save significant money as well. They get to bring all their customizations and
easily integrate with other applications that also run in our cloud.
The next category is custom and ISV applications that run on the Oracle Database. There are
thousands of enterprise organizations and software companies that use Oracle Database as a
key foundation for applications they build. Oracle has made it easier for these organizations to
build services that take advantage of our managed cloud database as well as the infrastructure
optimized around this stack to reduce the level of effort they undertake in deploying these
applications in our cloud. We can eliminate many of the mundane tasks of standing up and
maintaining the database and the underlying hardware.
Our platform is also a great fit for performance- and data-intensive workloads. This includes true
high-performance compute workloads of multiple varieties as well as data lake and other
compute- and storage-intensive workloads where data access and consistent performance are
key success criteria. Our design principle for cloud-native applications is to focus on leveraging
the industry-leading development streams in open source and elsewhere, making our cloud
compatible with what customers are already using with success.
With Oracle Kubernetes engine and registry for containers, customers can deploy the industry
standard in container deployment and management on top of our predictable and performant
bare metal infrastructure that avoids the conflict and performance degradation of hypervisors
and server agents. We built and open-sourced our Fn project for serverless architectures, which
can be downloaded and run anywhere, also available as a highly flexible and reliable cloud
service. For management, we are heavily supporting Terraform from HashiCorp, a widely used
infrastructure automation framework that can be used to program infrastructure deployments
in our cloud as well as on premises and in other clouds.
Our approach is extremely comprehensive. Our cloud infrastructure provides all the core
services to build and deploy production applications. Oracle has been building our PaaS services
in our own infrastructure.
Services include compute, containers, storage, database, autonomous database, security, and
integration. We also have an extensive SaaS offering, including CX, HCM, SCM, EPM, ERP, and
data as a service. Since our hardware selection and design choices were focused on a
dependable performance production applications need, it was easy for us to also cover
performance-intensive workloads, including any HPC workload, even those requiring specialized
hardware. And if you're building new cloud-native applications utilizing functions, Docker, or
Kubernetes, we have those services as well.
Oracle Cloud Infrastructure is hosted in regions and availability domains. A region is a localized
geographic area. And an availability domain is one or more data centers located within a region.
A region is composed of one or more availability domains. Most cloud infrastructure resources are either region-specific, such as virtual cloud networks, or availability-domain-specific, such as compute instances.
Traffic between availability domains and between regions is encrypted. Availability domains are
isolated from each other, fault tolerant, and very unlikely to fail simultaneously. Because
availability domains do not share infrastructure such as power or cooling, or the internal
availability domain network, a failure at one domain within a region is unlikely to impact the
availability of the others within the same region. The availability domains within the same region
are connected to each other by a low-latency, high-bandwidth network, which makes it possible
for you to provide high-availability connectivity to the internet and on premises and to build
replicated systems in multiple availability domains for both high availability and disaster
recovery. Regions are completely independent of other regions and can be separated by vast
distances across countries or even continents.
A fault domain is a grouping of hardware and infrastructure within an availability domain. Each
availability domain contains three fault domains. Fault domains let you distribute your instances
so that they are not on the same physical hardware within a single availability domain. A
hardware failure or compute hardware maintenance event that affects one fault domain does not affect instances in other fault domains.
To control the placement of your compute bare metal DB system or virtual machine DB system
instances, you can optionally specify the fault domain for a new instance at launch time. If you
do not specify the fault domain, the system selects one for you. To change the fault domain for
an instance, terminate it and launch a new instance in the preferred fault domain. Use fault
domains to, number one, protect against unexpected hardware failures, and number two, protect against planned outages due to compute hardware maintenance.
Oracle offers a broad variety of compute solutions from small and virtualized to very large and
dedicated, from web servers to high-performance application servers, with either network block
storage or local non-volatile memory. These options enable you to build a range of applications
on the same high-performance network from traditional enterprise to modern scale-out, from
unpredictable to steady state. Virtual machines and bare metal compute with predictable IOPS,
block storage for general purpose needs-- these standard options include new instances based
on AMD EPYC processors, which cost less than half of our other VM offering, and higher bare
metal core counts.
Dense I/O virtual machines with local non-volatile memory storage provide a range of compute and capacity options with high IOPS; bare metal GPUs with two or eight P100 GPUs and 28 to 52 cores; virtual machine GPU options; predictable-IOPS block storage; and bare metal compute with 52 cores, high memory, and optional local non-volatile SSD, provisioning in under five minutes. And there's a specialty HPC SKU with higher all-core Turbo frequencies and RDMA capabilities.
Oracle Cloud Infrastructure also provides a production-ready RDMA network in the cloud, enabling us to serve tightly coupled HPC workloads as well as easily parallelizable ones. The cluster network is an RDMA-based network that lets you form clusters of compute, storage, GPU, or hybrid nodes that use secure, ultra-low-latency networking between cluster nodes. This allows complex CFD or simulation workloads to run on OCI, targeted at the hardest product development workloads such as CFD, crash simulations, reservoir modeling, or DNA sequencing.
Oracle Cloud provides optimized storage for nearly any use case. Local, non-volatile SSD provides
the fastest performance for transactional database and HPC use cases. File storage offers a
managed file storage service that scales from just kilobytes of data to exabytes, making it ideal
for enterprise applications, big data, analytics, scale-out applications, and container-based
applications. Block storage is the most flexible for application development and deployment in
classic tiered applications. Object storage provides great economics for backup and archive as
well as big data lakes. Whether your application prefers a tiered storage strategy with snapshots,
backups, and replications, or more of a scale-up model, OCI offers a wide range of highly
performant options.
Oracle Cloud Services make security a top priority. Security is broken down into four areas--
number one, deeper customer isolation to prevent customer peering or accidental data sharing;
number two, data encryption end to end, so data cannot be viewed by unauthorized users; number
three, network protection to prevent access to applications and data; number four, verifiable
security for full accountability of access to any resource to comply with regulations and for
forensic analysis.
In this slide, you can see a brief overview of our extensive services. This is the true enterprise cloud you've heard mentioned. We don't focus on micro-instances or VMs with time-sliced, fractional CPU allocations. We focus on providing what businesses need to run real production workloads: workloads that have to scale up as well as out, workloads that may require the reliability of a solid, traditional hardware infrastructure in addition to the plentiful capacity of cloud, and workloads that need low-latency access to storage and networks. And we provide
businesses with simple pricing and predictable costs instead of an arcane system that penalizes
you for running the high-performance production applications you depend on.
And once you get to our cloud, then the innovations kick into high gear. Customers have a full
range of options to deprecate and eliminate their data centers if they choose to or to keep them
running for some workloads with deep compatibility and connectivity options with the Oracle
Cloud. We allow customers to expand their curation of data with deep analytics and integration
options to get into Oracle's new Autonomous Database Cloud service that eliminates tedious
management tasks and represents the future of enterprise data management.
Customers can augment their own data with data we own in our Oracle Data as a Service Cloud.
Customers can also expand their network of applications, surrounding their data with cloud-
native functionality that allows them to build new, innovative approaches to managing and
making use of data, including our Kubernetes-based container service, our flexible Fn serverless
capabilities, as well as a broad ecosystem of third-party options that unlock new value from
data.
Let's look at the Autonomous Database. Transform from building and maintaining databases to using autonomous services in modern clouds. This allows you to, one, innovate faster with lower costs, developing and optimizing new applications faster, cutting runtime costs by up to 90%, and eliminating full-stack administration costs, and, two, ensure data safety by eliminating cyber-attack vulnerabilities and attaining service-level objectives of 99.95%.
Starting in Oracle Database 9i, we began to introduce, and matured, many sophisticated
automation capabilities from memory management to workload monitoring and tuning, all of
which are used in the Autonomous Database. But it's not just the database management that
Oracle has been automating. We have also spent the last decade working on the database
infrastructure with our engineered systems, which provide the best platform for the Oracle Database, as they are the only preconfigured, pretested, and optimized platforms for the database.
Oracle Autonomous Database is actually a family of cloud services with each member of the
family optimized by workload. The first member of the family is the Autonomous Data
Warehouse, which has been optimized for analytic workloads such as data warehouse, data
marts, or as part of a data lake. The second member is the Autonomous Transaction Processing.
ATP is optimized for transaction processing or mixed workload environments and makes an
excellent platform for new application development.
All members of the Autonomous Database family share the same fully automated, high-
performance Exadata infrastructure that provides world-class availability and scalability. They
also share complete automation of all database administration tasks, such as provisioning,
patching, securing, backups, et cetera. Autonomous Transaction Processing differs from Autonomous Data Warehouse only in how each specific workload is optimized within the database.
In terms of memory, Autonomous Data Warehouse assumes the data set is too large to cache, so memory is used to speed up large joins and aggregations such as group-by operations. On Autonomous Transaction Processing, we use the majority of the memory to cache the active data set to avoid any IO. We also use RDMA to access data directly in memory on the other servers in the RAC cluster.
Regardless of the workload, we need to keep optimizer statistics current to ensure we get
optimal execution plans. With ADW, we're able to achieve this by gathering statistics as part of
the bulk load activities. With ATP, where data is added using more traditional insert statements,
statistics are gathered automatically periodically. As the data volume changes, or new access
structures are created, there is the potential for execution plans to change. And any change
could result in performance regression. So we use Oracle SQL plan management to ensure that
plans only change for the better.
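As a small illustration (DBA_SQL_PLAN_BASELINES is the standard data dictionary view for SQL plan management, not something specific to autonomous database), the maintained plan baselines can be inspected like this:

-- List stored plan baselines; ACCEPTED = YES means the plan may be used.
SELECT sql_handle, plan_name, enabled, accepted, fixed
FROM   dba_sql_plan_baselines
ORDER  BY created DESC;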
Now that you have learned how autonomous came to be, let's take a deeper look at the key
features. Oracle Autonomous Database provides many benefits for business users and
administrators, starting with lower operating costs due to optimized, on-demand sizing configurations that are billed by the hour only when used. It provides entry-level configurations of just one CPU and 1 terabyte of allocated database space, and includes backups, analytics and development tools, and full management in the service price. There is substantial risk reduction
by running an Oracle Autonomous Database due to the several risk mitigation strategies
included with the service.
All data is encrypted at rest and in transit. Defined user roles ensure no accidental data inspection by unauthorized users. Application of security patches as soon as they are
available and robust security around the Oracle Cloud operations combine to reduce many areas
of risk normally associated with on-premises installations. The very fast provisioning and
availability of an Oracle Autonomous Database and its ease of access from anywhere with
internet connectivity makes it an excellent platform for innovation, much faster than installing
and implementing an Oracle database on premises. We will discuss this in more detail in another
module.
Instantiating an Oracle Database on the Oracle Cloud takes only a few steps and minutes,
making it simple to implement. If you have an existing database application, it can be exported
and imported into autonomous database through a fully automated process in the included SQL Developer tool, allowing you to re-point application servers to the autonomous database and
have a quick migration to the cloud. Autonomous database runs on Oracle's highly optimized
database infrastructure, Exadata, which makes it the fastest Oracle Database platform in the
cloud while providing on-demand elasticity by allowing customers to add both computing and
storage as needed, when needed, without sustaining an outage. This topic will be further
covered in detail in a future module.
The mission of the autonomous database is to provide a service that is self-driving, which will
automatically take care of all database and infrastructure management as well as monitoring
and tuning. So the user will simply specify the service-level agreement, and Oracle will make it
happen. We believe this will help reduce costs and improve productivity by automating the
mundane tasks of having to provision, patch, and back up databases, freeing up IT teams to focus on the tasks that bring value to the business.
We also want the database to be self-securing, protecting itself from both external and internal
malicious attacks. We do this by automating encryption of all data, whether it's at rest or in flight, and by automatically applying security updates with no downtime. Finally, we want the
autonomous database to be self-repairing. And by that, we mean it will automatically recover
from any failure and minimize all kinds of downtime, including planned maintenance, with an
SLA guarantee of 99.95% availability. That's less than 30 minutes downtime per year, including
planned maintenance. It will also elastically scale compute or storage.
It is easy to provision a new, industry-proven database that uses RAC and Exadata in minutes. Oracle applies all the best practices of 40 years in this database. You don't have to worry about configuring, patching, tuning, or installing the hardware or software, or anything else. It's all taken care of for you. You select CPU and storage separately, so you can scale independently when you need more CPU resources or just additional storage.
In autonomous database, optimizer statistics are gathered automatically during direct-path load
operations. If users need additional statistics, they can gather stats manually at any time.
Machine learning also allows autonomous database to optimize executions based on usage
patterns of each database. Because autonomous database and the Exadata platform it runs on
are so efficient at running the Oracle Database, by default, optimizer and parallel hints are
ignored. Parallelism generally is determined by defined services in the autonomous database--
more of this in future modules.
Users have the ability to explicitly re-enable hint processing if it is required for specific reasons.
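A minimal sketch of both operations, assuming a hypothetical SALES table owned by ADMIN (the OPTIMIZER_IGNORE_HINTS parameters are the documented switches for the shared services):

-- Gather optimizer statistics manually on a hypothetical table.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'ADMIN', tabname => 'SALES');

-- Re-enable optimizer and parallel hint processing for the current session.
ALTER SESSION SET OPTIMIZER_IGNORE_HINTS = FALSE;
ALTER SESSION SET OPTIMIZER_IGNORE_PARALLEL_HINTS = FALSE;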
Although autonomous database is designed to completely automate and provide the best
environment for running the Oracle database applications, Oracle realizes there may be specific
reasons, such as application compatibility or referential integrity, where items such as indexes
may be required. So let's review where Oracle recommends self-tuning services provided by the
autonomous database.
Number one, tables do not need to be partitioned. And partitioning should not be used as a
performance-enhancing design objective in autonomous database deployments. Databases that
are being migrated to autonomous database should have partitioning removed unless there's a specific operational reason for its use.
Number two, in general, indexes should not be used on tables for performance reasons.
Autonomous database and the Exadata platform it runs on provide automatic enhanced
indexing for data retrieval that, in most cases, performs better than manual indexing. Number
three, autonomous databases use compression of data in the database. So additional
compression does not need to be used.
Number four, in-memory tables cannot be used in autonomous database. And number five, tablespaces do not need to be created. Manual tuning of partitioning, indexes, materialized views, and compression is available, but should only be used with careful consideration, such as when migrating an existing system whose data loading scripts rely on partitioning, or where indexing is used for referential integrity.
Oracle execution plans are like driving directions. They will change as the data distribution
changes-- data volumes and statistics. Indexes can be thought of as roads and bridges. With
auto-indexing, new roads will be added as the workload evolves. Changes in data volume and SQL workloads are continuously captured. And a machine learning algorithm processes the changes to find new optimal plans and indexes.
An expert system that implements indexes based on what a skilled performance engineer would
do is part of the environment. It first captures, periodically, the application's SQL history into a
SQL repository and includes SQL plans, bind values, execution statistics, et cetera. It then
identifies candidates for indexes that may benefit the newly-captured SQL statements. It creates
the index candidates as unusable and invisible indexes, metadata only. And it drops indexes obsoleted by the newly created indexes, performing a logical merge.
The third step is to verify these new indexes. At this point, the index candidates will be used only for the captured SQL statements: the indexes are materialized, and the SQL is run to validate that the indexes improve performance. All verification is done outside the application workflow. At that point, there's a decision to be made. If performance is better for all statements, the indexes are marked visible. If performance is worse for all statements, the indexes remain invisible. If performance is worse for some statements, the indexes are marked visible except for the SQL statements that regressed.
There is a monitoring capability that tracks index usage continuously; indexes that have not been used in a long time will be dropped automatically. You can switch this service off. It is resource-controlled, so it will only use one CPU for auto-indexing. Verification runs inside your own database because, if we took a copy of your data and ran it on our CPUs, we could run into security and data privacy issues.
Automatic indexing creates secondary indexes that are used to improve SQL performance other
than primary key and foreign key indexes. It applies to tuned and untuned applications. For
tuned applications, existing secondary indexes may be outdated, or an important one can be
missing. Some secondary indexes may also be dropped if they are no longer useful.
For untuned applications, development frameworks and object relational mappers often only
generate primary key indexes, and sometimes, foreign key indexes. Auto-indexing augments
existing primary and/or foreign key constraints to improve performance. It supports single-column and concatenated indexes, function-based indexes, and advanced low compression. In
this example, you can see how ATP created 43 auto-indexes in just 30 minutes to improve
performance.
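A hedged sketch of controlling the feature (the DBMS_AUTO_INDEX package is documented for Oracle Database 19c and later, so it may not apply to the 18c exam version):

-- Turn automatic indexing on; qualifying indexes are created visible and usable.
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

-- Switch the service off again.
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'OFF');

-- Report recent auto-indexing activity.
SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;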
Two functions of the automated management feature of ADB are backups and patching.
Backups are scheduled on a nightly basis to Database Backup Cloud Service with a retention
period of 60 days. The cost of backup and storage is included with the price of ADB. The GUI
console shows detailed information about backups that have been taken and allows restores
from any of them. Full-stack patching is done once a quarter, in rolling fashion across the nodes of the cluster to maintain the availability of the service. The time to apply patches is automatically selected
by Oracle Cloud operations, but customers can override the selection and select an alternate
time.
Let's move to self-securing and dive into what helps the database to be self-securing to protect
all your data. This section highlights the benefits of self-securing and the key Oracle technologies
and capabilities that enable them. This is certainly not an exhaustive list. But these are the key
capabilities that we will be diving into in this section and understanding how they work to make
the database self-securing.
Autonomous database stores all data in encrypted format in the Oracle Database. Only
authenticated users and applications can access the data when they connect to the database.
All connections to the autonomous database use certificate-based authentication and Secure Sockets Layer (SSL). This ensures that there is no unauthorized access to the autonomous database
and that communications between the client and server are fully encrypted and cannot be
intercepted or altered. Certificate-based authentication uses an encrypted key stored in a wallet
on both the client, where the application is running, and the server, where your database service
on the autonomous database is running.
The key on the client must match the key on the server to make a connection. A wallet contains
a collection of files, including the key and other information, needed to connect to your database
service in autonomous database. For data encryption, Oracle allows separation of the security keys. For encryption at rest, Oracle allows you to enable and disable encryption, and delivers it switched on by default.
Patching is very expensive, because it requires downtime and many man-hours to patch all the databases in your environment. And since patches may be applied once a quarter, it's an ongoing effort. Autonomous database will patch your systems for you while the database is running. It needs no downtime and no manual effort. So it can never happen that you forget to
apply the patch or didn't have time to do so. It ensures that you're always protected from known
cyber-attacks.
In autonomous database, no logins are allowed to the OS. No root or SYSDBA logins are allowed.
The only allowed logins are as admin, privileged default autonomous database user, or regular
database user. No call-outs to the OS are allowed from autonomous database. This prevents
installing or modifying any software on the system. Database clients can connect securely using
a TLS wallet. Databases in dedicated autonomous database run in customer private virtual cloud
networks to prevent network access by other customers or hackers.
Public IP is not required. Secure configurations are deployed at all levels of autonomous
database-- OS, database, storage, et cetera. Oracle automatically applies updates and the latest
security patches on a quarterly cycle, or off-cycle for high-impact security vulnerabilities. Native
encryption prevents data access from outside the database.
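Tying back to the allowed logins above, here is a minimal sketch of the ADMIN user creating a regular database user (user name and password are hypothetical; DWROLE is the default role documented for the shared services):

-- Connected as ADMIN, the privileged default autonomous database user.
CREATE USER app_user IDENTIFIED BY "ChangeMe#12345";
GRANT DWROLE TO app_user;                -- bundled role with common privileges
GRANT UNLIMITED TABLESPACE TO app_user;  -- let the user create data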
Oracle tools leveraged by autonomous database for security are Data Masking and Database Vault, which accomplish, first of all, no access to the database node or file system, so Oracle DBAs are separated from actual data; number two, dynamic redaction and masking of data, where Oracle can apply security policies as data leaves the database, for example, converting a social security number to a representation like xx plus the last four digits of the number (see the sketch below); number three, static masking for test/dev databases, which can simply convert sensitive fields; number four, metadata tagging, part of the Label Security option, where data can be marked as sensitive, confidential, et cetera, included for free in the Database Cloud Service. And number five, full defense in depth, built over 30 years of meeting the needs of the most demanding organizations, high-threat environments, security services, and financial institutions.
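As a hedged illustration of the dynamic redaction example above (schema, table, and policy names are hypothetical; DBMS_REDACT is the documented Data Redaction package):

-- Show only the last four digits of a Social Security number column.
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema       => 'APP_USER',
    object_name         => 'EMPLOYEES',
    policy_name         => 'redact_ssn',
    column_name         => 'SSN',
    function_type       => DBMS_REDACT.PARTIAL,
    function_parameters => DBMS_REDACT.REDACT_US_SSN_F5,  -- masks the first five digits
    expression          => '1=1');                        -- apply to every query
END;
/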
Database auditing is configured by default and customizable to meet your needs. Autonomous
database comes preconfigured using Oracle Unified Audit. This feature includes automated
auditing for privileged user activity and login failures and optional preconfigured policies for the
Center for Internet Security audit benchmarks, account management, and much more.
The audit trail is available through service REST call invocations. Database audit trails can also be retrieved. Future releases will include detailed auditing through additional security services.
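For instance (a hedged sketch; UNIFIED_AUDIT_TRAIL is the standard Unified Audit view), recent login failures can be reviewed with a query like this:

-- Failed logons captured by Unified Audit, most recent first.
SELECT event_timestamp, dbusername, action_name, return_code
FROM   unified_audit_trail
WHERE  action_name = 'LOGON'
AND    return_code <> 0
ORDER  BY event_timestamp DESC;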
The autonomous database provides preventive protection against all unplanned and planned
downtime and rapid, automatic recovery from outages without downtime.
There is a broad range of events that can cause database downtime, including component, storage, and server failures, database crashes, or even site-wide or regional outages due to a
natural or man-made disaster, data corruption that can cause incomplete backups or render the
data useless, human error, which plays a significant role in many cases, whether it's a database
table that was dropped, a cable that was accidentally unplugged, or a tape that was lost, planned
downtime for patching, upgrades and maintenance, which represents an increasingly
disproportionate percentage of overall downtime for many growing organizations.
Oracle has successfully addressed all of these causes of downtime and disruption in on-premises
environments for decades with the Oracle Maximum Availability Architecture, Oracle MAA.
Oracle MAA is a set of advanced technologies and best practices that can be deployed to handle
any service-level requirement, with solutions ranging from periodic backups to zero data loss
and zero-downtime-replication-based disaster recovery. The MAA portfolio is also available in
the Oracle Cloud and has been enhanced with automated functionality that minimizes-- in many
cases, eliminates-- human intervention.
Exadata not only continuously monitors for failing devices, it also provides redundant database servers in an active, highly available cluster, hot-swappable power supplies and fans, redundant power distribution units, redundant storage grids that mirror data across storage servers with redundant, non-blocking IO paths, and redundant networks that include redundant InfiniBand connections and switches. The self-healing software automatically
runs all monitoring and fault prevention tools in the background 24 hours a day, seven days a
week.
It uses Oracle's 40 years of experience to build machine learning models that monitor the system and make sure it is providing maximum availability and healing. It applies machine learning algorithms and Oracle's best practices to fully automate database operations. Oracle uses machine learning techniques like anomaly detection, pattern recognition, and problem signatures to detect and prevent issues and failures, fixing known issues or raising SRs on behalf of our customers. Bug detection should be our job, not yours.
It can correlate problems across different systems to give you a complete story of the fault that
occurs. The hardware supports itself and heals itself. With cloud-based, region-based duplication everywhere, database hardening, RAC redundant compute, and triple-mirrored storage, we can provide a 99.95% service-level objective through the stack. All these technologies that make the Oracle Database highly available are now provided to you with the autonomous database. So you don't have to think about network failures, hardware failures, failing disks, a failing host, taking backups, or even your entire region sinking in an earthquake.
We have now reviewed the Oracle Cloud Infrastructure and the autonomous database key
features. Now let's take a deeper look at the architectural components. The autonomous
database is placed on an Exadata system based on the region where the customer is located or
closest to. This placement, except for region location, is invisible to the customer, but is done to
minimize traffic latency and maximize data center efficiency.
Oracle completely manages and controls all operation aspects of the system, including patching,
software versions, isolation, backups, and other operational procedures. This provides the most
flexible and least obtrusive environment for the customer and the most effective environment
for Oracle Cloud Operations to manage. This also allows Oracle to offer this service with a very minimal required subscription of one OCPU and 1 terabyte of storage, which, while already providing a sizable development environment, is a very low cost of entry. The minimum commitment for a customer to the environment is one hour of billed usage time.
When a user or other process connects to the autonomous database, the connections are routed
through connection manager servers that distribute and manage connections into the databases
in the Exadata servers. These connection managers are attached and connected to the network
infrastructure. Oracle Cloud Infrastructure physical or virtual servers that run applications that
leverage databases in autonomous database are also a typical, but not required, component of
a fully integrated autonomous database cloud stack.
Connectivity to object storage such as Amazon S3 also leverages this architecture. Connectivity to the autonomous database for permitted access, such as that of Oracle Cloud Operations, virtual cloud networks, or shared services, is performed through a whitelisted IP service that guarantees that only designated IP addresses can access these services directly.
Supporting storage for databases on the autonomous database is provided by the Oracle Cloud
Infrastructure Object Storage service. Oracle Autonomous Database performs automatic
backups on provisioned databases. And those backups get stored in private storage defined in
the Oracle Object Store. Backups are automatic and non-optional in autonomous database. And
no setup is required by users.
However, autonomous database allows users to create their own additional backups for other operational purposes, including point-in-time recovery if needed. Those backups need to be stored in a user-defined OCI object store bucket. Buckets and credentials need to be defined by the user, and that set of backups is user-maintained, as sketched below. The backups can be accessed like any
other file on object store.
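A hedged sketch of that setup (bucket URL, user, and credential names are hypothetical; DBMS_CLOUD.CREATE_CREDENTIAL and the DEFAULT_CREDENTIAL and DEFAULT_BACKUP_BUCKET database properties follow the documented manual-backup procedure for the shared services):

-- Credential that can write to the user-defined bucket (OCI auth token as password).
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'BACKUP_CRED',
    username        => 'oci_user@example.com',
    password        => 'my_auth_token');
END;
/

-- Point manual backups at the bucket and credential.
ALTER DATABASE PROPERTY SET default_credential = 'ADMIN.BACKUP_CRED';
ALTER DATABASE PROPERTY SET default_backup_bucket = 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/mytenancy/backup_bucket';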
Staging dump files, Oracle external tables, and other objects used by the database are stored in
user-created object storage buckets. And the process for creating and maintaining these buckets
is the same as user-defined backup buckets. Oracle Database services are exposed in two
different ways. Most of the actions are exposed through the easy-to-use cloud user interface,
providing click-through screens for achieving most functions.
Because a lot of database applications are part of a much larger ecosystem controlled through
the scripting or other tools, autonomous database provides REST APIs to perform any supported
operation. For example, database creation, termination, backup, restore, start, and stop, or
scanning CPUs or storage can be performed either through the user interface or the REST APIs.
We will be discussing both of these in more detail in future modules.
Autonomous database includes monitoring capabilities available through the Cloud Service
dashboard with an easy-to-use UI and can also be performed through Enterprise Manager Cloud Control. Developers and DBAs can use the included SQL Developer tool for developing database
applications or performing DBA management operations. SQL Developer natively understands
how to interface with the autonomous database cloud credentials. So no in-depth knowledge of
how to connect to the Oracle Cloud Services is required.
Using Oracle REST Data Services, ORDS, developers can easily build REST APIs for data and
procedures in the database. Connecting to the autonomous database is done using credential
wallets via SQL*Net, JDBC, or ODBC. The wallet can be downloaded from the Service Console or
using REST APIs. We will cover this procedure in more detail in future modules.
Without the wallet and credentials, there is no easy way to access the autonomous database,
thus providing a secure, customer-managed process for allowing users to connect to the database. Applications that use the JDBC thin driver require the Oracle database credentials,
including the Oracle Wallet or Java Key Store, JKS, files when connecting to the autonomous
database. The wallet location can be included in a JDBC URL-- this requires Oracle JDBC thin
driver 18.1 or higher-- or in the ojdbc.properties file, which requires Oracle JDBC thin driver 18.1
or higher as well.
Java properties can be set prior to starting the application. This requires Oracle JDBC thin driver
12.2.0.1 or higher. If you connect to the autonomous database through HTTP proxy, you need
to update your tnsnames.ora file to add the HTTP proxy host name and port to the connection
definition. In addition, you need to add the https_proxy and https_proxy_port parameters in the address section of the connection definitions.
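A hedged example of such a connection definition in tnsnames.ora (host names, ports, and service names are hypothetical placeholders):

mydb_high =
  (description =
    (address =
      (https_proxy = proxy.example.com)(https_proxy_port = 80)
      (protocol = tcps)(port = 1522)(host = adb.us-phoenix-1.oraclecloud.com))
    (connect_data = (service_name = mydb_high.adb.oraclecloud.com))
    (security = (ssl_server_cert_dn = "CN=adb.us-phoenix-1.oraclecloud.com")))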
As with other Oracle Cloud Services, one of the design objectives of the Oracle Autonomous
Database is the ability to connect on-premises databases and applications to autonomous
databases. Typical scenarios include extract, transform, and load directly into ADB and business
intelligence applications accessing ADB. These applications access ADB directly to perform
analytics and visualization of data in the database.
The recommended connectivity for autonomous database is through Oracle's FastConnect
service, which creates a very fast private network link between the customer's data center and
Oracle's Public Cloud. FastConnect Public Peering enables you to access public services in the
Oracle Cloud without traffic traversing the internet path. Using FastConnect Public Peering, you
can connect to public services like the Oracle Object Storage, public load balancers in your VCN,
public IPs on compute, or supported SaaS services, as well as Oracle's Autonomous Database
service. FastConnect can be implemented as a co-location or provider model.
For third-party tools accessing Oracle Autonomous Databases, for example, in this graphic, Cognos, the recommended connectivity service is through Megaport cloud routers. Megaport makes it easy to connect to Oracle Cloud regions across the US, Europe, and Asia-Pacific. With Megaport, you can provision dedicated and private connections from 386-plus Megaport-enabled data centers to Oracle Cloud Infrastructure FastConnect and Oracle Cloud Infrastructure FastConnect Classic in less than 59 seconds. Scalable bandwidth enables you to
pay only for what you need, when you need it.
For workloads that have both an application server and an autonomous database in the same
Oracle Cloud Infrastructure region-- for example, a web-hosted application on OCI
infrastructure-- that accesses data in an autonomous database, access between the two is done
through the public IP address of each service. However, this traffic never leaves the OCI region
and is not routed through the external public internet. Instead, it is directly routed through a
service gateway that connects it to services. This provides higher security, because this traffic
will never leave the data center, and provides much lower latency and better bandwidth since
all the traffic is routed through the internal high-speed networks in the region.
The Oracle Autonomous Database leverages extended architecture components that enhance
its functionality for customer needs. It is integrated, at no cost, with Oracle SQL Developer,
which is an extensive development and management tool for the Oracle Database. Also included
and integrated into the autonomous database offering is Oracle Machine Learning, which is a
notebook-based environment that includes machine learning functionality built into
autonomous database.
A third tool included with the Oracle Autonomous Database is Oracle Data Visualization Desktop, which is an extremely capable business intelligence and analytics tool. In the graphic, the
components that are included with autonomous database are colored in salmon color. For data
movement in and out of autonomous database, the Oracle Cloud Platform provides object
storage that the autonomous database uses to stage files that are being loaded into it or for
external backups that users want to perform. However, autonomous database uses internal
Exadata storage for database object storage and does not require the use of cloud object store
for its operation. Many third-party applications are certified against Oracle Autonomous
Database and can be connected through OCI, JDBC, and ODBC protocols.
Let's look at the developer tools and some of the other features in the architecture in greater
detail. As we previously noted, an included component of the autonomous database service is
Oracle Machine Learning, also referred to as OML. OML is a web-based notebook environment
based on Apache Zeppelin.
With OML, users can quickly start running queries in an HTML environment without the need to
install a client query tool. OML is autonomous database-aware and makes it easy to leverage
functionality such as the defined resource services, in-database machine learning algorithms, SQL scripts, and graphical analytics tools that are part of OML. Notebooks can be saved and
shared with other OML users. And OML provides an easy, integrated SQL and analytic
development and runtime environment that can be accessed from anywhere, anytime.
Choose from a variety of visualizations to look at data in a specific way. Data Visualization
Desktop's benefits include a personal single-user desktop application, offline capability,
completely private analysis, full control of data source connections, direct access to on-premises
data sources, lightweight, single-file download, no remote server infrastructure, and no
administration tasks.
Oracle SQL Developer 17.4 and later provide support for wallets using the cloud [INAUDIBLE] connection type. Oracle recommends that you use version 18.2 or later; however, earlier versions may still work with autonomous database.
The console overview page shows real-time and historical information about the utilization of the service, and the activity page likewise shows real-time and historical utilization detail. The administration page allows you to download client credentials, set resource management rules, set the administrative password, manage OML users, download Oracle Instant Client, and send feedback to Oracle.
Oracle Autonomous Database is certified with many third-party vendors. And Oracle encourages
any vendor that wants to certify their application or tool against the Oracle Autonomous
Database to do so. Although autonomous database runs the latest version of the database, it is the exact same version that customers can run on premises or in any other cloud service. Oracle restricts some operations against its autonomous services to better control and maintain them and to provide a true hands-off autonomous experience to users. In this slide, you can see some of the
vendors that have certified their applications with Oracle's Autonomous Database.
Now let's go over typical considerations that a customer would evaluate when implementing an
autonomous database. When considering a move to, or a new deployment in, autonomous
database, what is a typical workflow of planning and deployment that occurs? Unlike on-
premises deployments, there are many steps that are not needed with autonomous database.
Because of its nature, it is meant to be an easy environment to deploy.
However, there still are several considerations to evaluate-- number one, determining the level
of automation and functionality required; number two, determine the main workload
characteristics for the database; number three, depending on the workload characteristics,
select Autonomous Data Warehouse or Autonomous Transaction Processing service; number
four, provision the selected service; number five, determine how to load data into the
autonomous database; and number six, determine what to do with the application. Let's
examine each of these considerations in more detail.
Oracle Autonomous Transaction Processing supports all operational business systems, including
both departmental as well as mission-critical applications. But unlike other cloud providers, ATP
doesn't just support one transaction processing use case, it can also support mixed workloads
where you have a mixture of transaction processing, reporting, and batch processing, making it
the perfect platform for real-time analytics based off operational databases. This enables users
to get immediate answers to any question.
Integrated machine learning algorithms make it the perfect platform for applications with real-
time predictive capabilities. Advanced SQL and PL/SQL support make it the perfect platform for
application developers, as developers can instantly create and effortlessly use Autonomous Transaction Processing, eliminating any dependence on, and delays from, others for hardware and software. The fact that it's self-tuning also eliminates any database tuning and accelerates developer productivity.
Oracle Autonomous Data Warehouse supports all types of analytical warehouse and decision
support database workloads. ADW is particularly well-suited for creating new dependent or
independent data marts that allow easy start of analytical projects. It is a good environment for
sandbox experimentation by data scientists sifting through data, and for storing large amounts of data in data lakes. Its included analytics and visualization tools, Oracle Machine Learning
and Oracle Data Visualization Desktop, provide an end-to-end environment for application
development, data analysis, and fast, flexible database services.
Once you decide which service better suits your needs, the next step is to provision the
database. Provisioning the database involves very few steps, but it's important to understand
the components that are part of the provisioning environment. When provisioning a database, the number of CPUs, in increments of one, and storage, in increments of 1 terabyte, are specified, and backups are automatically provisioned and enabled for the database. In the background, an Oracle pluggable database is added to the container database that manages all the user autonomous databases.
Because the autonomous database runs on Exadata systems, Real Application Clusters is also provisioned in the background to support the on-demand CPU scalability of the service. This is
transparent to the user and administrator of the service. But be aware, it is there. For higher-
end offerings, there is the option of creating an optional remote standby database for automatic
failover.
Loading and maintaining data in the autonomous database can be done as one-time loads, best staged through Oracle Object Store, or as continuous data ingestion or synchronization with other sources. Autonomous database supports three object stores and can read and write
directly to these three. The supported object stores are Oracle Object Store, Amazon S3, and
Azure Object Store. Object stores are ideal for staging export dump files that are going to be
imported into the autonomous database.
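A hedged sketch of a one-time load from object store using the documented DBMS_CLOUD package (bucket URL, file, credential, and table names are hypothetical, and the target SALES table is assumed to exist):

-- Credential for the object store (OCI auth token as password).
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_CRED',
    username        => 'oci_user@example.com',
    password        => 'my_auth_token');
END;
/

-- Load a staged CSV file into the existing SALES table.
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES',
    credential_name => 'OBJ_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mytenancy/b/staging/o/sales.csv',
    format          => json_object('type' value 'csv', 'skipheaders' value '1'));
END;
/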
The same staging approach applies to flat files that will be loaded into the database. Autonomous database supports the Oracle Database external tables feature, so flat files on object store can act as external tables of the autonomous database, as sketched below. Please note, it is best to host these files on an Oracle Object Store with a fast connection to the autonomous database to reduce latency and other issues around access time to database objects.
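Following the same pattern, a hedged sketch of an external table over a flat file on object store (reusing the hypothetical credential from the previous example):

-- Define an external table over the staged CSV file.
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'SALES_EXT',
    credential_name => 'OBJ_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mytenancy/b/staging/o/sales.csv',
    column_list     => 'product_id NUMBER, quantity NUMBER, sale_date DATE',
    format          => json_object('type' value 'csv', 'skipheaders' value '1'));
END;
/

-- Query the flat file in place, like any other table.
SELECT COUNT(*) FROM sales_ext;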
Also available for transaction and data replication in real or near-real time, or to maintain synchronized copies of databases, is Oracle GoldenGate, which can be configured with autonomous database as a target database. This allows ADB to become a full replica copy of
another database for uses such as reporting, disaster recovery, or development, testing, and
QA. Once a decision is made to move the database to autonomous database services, the next
step is to determine what to do with the application accessing that database.
Just like rehosting the database in the cloud, re-hosting the application to the cloud may have
its own benefits. If the application using the autonomous database is an existing application,
there are two preferred options for hosting the application. First option is to keep the
application in its existing environment, and replace the existing database with access to the
autonomous database. The second option is to rehost the application to the Oracle Cloud
Infrastructure. Rehosting the application may be straightforward or may require substantial
reconfiguration.
Oracle provides tools such as Ravello to assist in these migrations. If the application using the
autonomous database is a new application, then it is highly desirable to also develop the
application on the Oracle Cloud Infrastructure Development environment, which will benefit
from a robust infrastructure offering and close connectivity to the autonomous database.
We will be covering several topics in this section. We will start off by walking through the steps
to provision the Oracle Autonomous Database service. We will describe how to find and
download the database credentials and then show how to connect to the Oracle Autonomous
Database, and we will finish by navigating through the Oracle Cloud Infrastructure Console.
When you have completed this video, you will be able to articulate how to connect and navigate
the OCI Console, understand how to provision the Autonomous Database service. You will
understand how to find and download database credentials and how to connect to your
Autonomous Database service.
Let's get started with navigating the OCI Console. To create an Autonomous Database, navigate
to the Oracle Cloud login page at cloud.oracle.com and click Sign In at the top. The first
information to enter is your Oracle Cloud account name. This is sometimes also referred to as
the cloud tenancy. Please note you may not see the exact screenshot you see in these slides, as
the Oracle Cloud look and feel is continually being improved. But the latest process will be
very similar and will always require the same information.
Once you have entered your Cloud account name, click the Next button. In the Oracle Cloud
account sign in page, fill in your Cloud user account and password. And select Sign In. Notice this
account name is not the database account name for the database you will create, but the
account name for your Oracle Cloud account.
After logging in, you will be placed on the main Oracle Cloud Infrastructure Console. Depending
on whether this account has been used before and the main console page customized, you may
see different services listed on the bottom portion of the console screen. When you sign into
the console, you'll see the home page. In the upper left, you will find the navigation menu, which
you can use to navigate to the various cloud services pages. There you can create, manage, and
view your cloud resources. The help menu in the upper right provides links to documentation and to Oracle Support.
Next to the help menu, you will see your current region displayed. If your tenancy is subscribed to multiple regions, you can switch regions by selecting a different region from the region menu.
Note that the console displays resources that are in your current selected region. So if your
tenancy has instances in CompartmentA in us-phoenix-1 and instances in CompartmentA in us-
ashburn-1, you can only view the instances in one region at a time, even though they are in the
same compartment.
To check the overall status of your purchased services and manage your accounts or
subscriptions, you can use Oracle's My Services Dashboard. To quickly navigate to My Services,
open the Navigation menu, go to Administration, and click My Services Dashboard. This
action takes you to the Cloud account sign in page. Or if you are signed in with your Oracle
Identity Cloud Service credentials, you're automatically signed in to My Services.
Administrators can use My Services to monitor billing, metrics, and manage accounts. Important
notifications can also be found in the My Services Dashboard, including Security Alerts, Quota Breaches, or Scheduled Maintenance. To sign out of the console, open the user menu and then
click Sign Out.
To work with related services and resources, use the navigation menu in the upper left. Some of the services and resources you can work with include Archive Storage, which helps you store data that is accessed infrequently and requires long retention periods; Audit, which helps you track activity in your environment; Block Volume, which allows you to dynamically provision and manage block storage volumes; Compute, which helps you provision and manage compute hosts, known as instances, to meet your compute and application requirements; and Container Engine for Kubernetes, which allows you to define and create Kubernetes clusters to enable the deployment, scaling, and management of containerized applications.
Data Transfer helps you easily and securely migrate your data in an offline manner to Oracle Cloud Infrastructure. Database helps you provision and manage Oracle databases. DNS helps you create and manage your DNS zones. Email Delivery helps you send large volume email and ensure that your messages reach your recipients' inboxes. File Storage helps you manage shared file systems, mount targets, and snapshots. And IAM helps you set up administrators, users, and groups and specify their permissions.
Internet Intelligence provides you internet analytics with market performance and IP troubleshooting tools. Key Management helps you centrally manage the encryption of your data. Load Balancing helps you create a regional load balancer within your VCN. Networking helps you set up virtual versions of traditional network components. Object Storage helps you manage data as objects stored in containers. And finally, Registry helps you store, share, and manage development artifacts like Docker images in an Oracle-managed registry.
Let's start by looking at provisioning an Autonomous Database service. Once logged in, select Autonomous Database from the Cloud Console if the icon is displayed. If you don't see an Autonomous Database icon, there is a drop-down menu on the top left, and from that menu you can select Autonomous Database.
On the next page, you should select which workload type your database service needs to provide: transaction processing or data warehousing. Select the appropriate workload and then select Create Autonomous Database instance. This will take you to the screen where you can create your Autonomous Database and manage any that you previously created. Click on the Create Autonomous Database button. In this case, you will be creating an Autonomous Data Warehouse.
In this screen, you will be asked to provide the information we discussed earlier. Compartment allows different organizations or user groups to be segregated into specific resource and reporting groups. Display name is the name displayed for the database. Database name is the actual name of the database. CPU core count is the number of CPUs to provision the database with. Storage capacity is the number of terabytes to provision the database with. You also provide the database administrator password and the license type, which will determine the rate charged per use.
In the license type section, it is important to identify which license mechanism you will be using. If you already own licenses on premises and want to leverage Bring Your Own License (BYOL) pricing, then select the option stating that your organization already owns Oracle Database software licenses. In this case, the autonomous service is charged at a lower rate that reflects the existing database licenses. If no licenses are owned, or you do not wish to use them under the BYOL program, then select the second option, which charges based on the licenses and infrastructure used.
The tags section is optional but allows you to use metadata to organize and track resources within your tenancies. Tags are composed of keys and values that can be attached to resources. Once all the inputs are complete, select Create Autonomous Database.
At this point, you will be placed back in the Autonomous Database Details page, and the
database will appear in a provisioning state. The page will automatically update to available
status once the database creation process is complete. At this point, the database is provisioned
and running and ready to be connected to and used.
Let's look at what's involved in downloading the database credentials. The Autonomous Database only accepts secure connections, so you will need to download the wallet credentials before you can set up the connection. You can download the wallet from the Autonomous Database Service Console or from the DB Connection button on the Instance Details page. To access the service console, go to the table listing your Autonomous Database instances and click on the three vertical dots on the right-hand side. The wallet can also be downloaded through API calls.
The next slides show how to download the wallet through the console. In the Autonomous
Database Console, click on the name of the database you want to connect to. After selecting
your database, you're placed on the Database Console page. To download the connection
credentials, click on DB Connection button. This will bring up the connection information for the
database. You can directly copy the connect string and TNS name, as shown in the bottom of
the screen. You can also download the wallet, which contains several files in a compressed zip
file. To download the wallet, click on the Download button.
When downloading the wallet, a password must be provided; encrypted communication between the clients and the database can use this password as additional security. Make sure you remember this password. Please note, this password is not the same as the
database user or admin password or your Oracle Cloud user account. The same password could
be specified, but they are stored and used differently. After specifying the password, click on the
Download button. Make sure you know the location where your wallet was downloaded. Store
credential files in a secure location and share the files only with the authorized users to prevent
unauthorized access to the database.
After the download process is complete, you will be returned to the Autonomous Database
Console page. The wallet can also be downloaded from the Autonomous Database Console
under the Administration tab. Select Download The Client Credentials, also known as Wallet.
Now let's look at how to connect to an Autonomous Database service. Connections to the Oracle Autonomous Database use certificate authentication and Secure Sockets Layer (SSL). Certificate authentication uses an encrypted key stored in wallet files on both the client and the server. The Oracle client credentials, the wallet files, are downloaded from the Autonomous Database by a service administrator.
When you download the client credentials from the Download Wallet dialog, you must enter a
wallet password in the password field and confirm the password in the confirm password field.
The password must be at least eight characters long and must include at least one letter and
one numeric character or one special character. This password protects the download client
credentials wallet. This wallet is not the same as the transparent data encryption wallet for the
database. Therefore, a different password is used to protect the client credentials wallet.
Oracle Autonomous Databases can be connected to through SQL*Net, JDBC, or ODBC processes. Unlike other cloud services, you do not get a Unix command line interface on the system hosting your Autonomous Database instance. JDBC Thin connections use the 12.1 and 12.2 JDBC Thin drivers and use a Java KeyStore (JKS), which can be defined in the JKS connection properties. JDBC and ODBC use the Oracle Call Interface (OCI), and tools like SQL*Net and Data Pump use it to communicate with the database. All connections use SSL for encryption. No unsecured connections are allowed to the Autonomous Database. This is why clients require a security credentials wallet to connect.
In the previous slides, we went over the steps for downloading the wallet to use for secure connections. However, in order to execute API calls, you will need to upload your public key. From the console, navigate to Identity, select the Users panel, and select Add Public Key. There are two connectivity options to establish a connection to the Autonomous Database.
One is through the public internet directly, and the other is using Oracle's FastConnect service with public peering. This second option provides private connections from on-premises networks. To review the options to establish connectivity to an Autonomous Database: in this example, a connection is established between a customer's on-premises network and the Oracle data center through a public internet connection using SSL encryption.
To access the Autonomous Database from behind a firewall, the firewall must permit the use of the port specified in the database connection string when connecting to the servers in the connection. The default port number for Autonomous Data Warehouse and Autonomous Transaction Processing is 1522. In this second case, to establish connectivity to an Autonomous Database, a connection is established through the FastConnect with public peering service offered from the Oracle data center to a customer's on-premises network using SSL encryption.
Oracle Cloud also offers the Oracle Cloud Infrastructure Service Gateway, which provides private access to Oracle services deployed in the Oracle Services Network. This allows for additional levels of privacy and obfuscation for customers that require complete network isolation and private security.
The tnsnames.ora file provided with the credentials zip file contains five database service names. Sessions in these services may get disconnected if they stay idle for more than five minutes and other user sessions require the resources consumed by the idle session. This allows resources to be freed for other active users in your database.
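For reference, a representative entry from that tnsnames.ora file might look like the following sketch; the host, service name, and certificate DN shown here are placeholders, not values from a real instance:

    adwc1_high =
      (description =
        (address = (protocol = tcps)(port = 1522)(host = adb.us-phoenix-1.oraclecloud.com))
        (connect_data = (service_name = example_adwc1_high.adwc.oraclecloud.com))
        (security = (ssl_server_cert_dn = "CN=adwc.example.oraclecloud.com, O=Oracle Corporation, L=Redwood City, ST=California, C=US")))

The five entries differ only in the service name each one points to.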
One of the most productive tools to use with Autonomous Database is SQL Developer, which is included for free with Autonomous Database. In this section, we show how to connect to the Autonomous Database with SQL Developer. SQL Developer fully supports connectivity to Autonomous Database and is Autonomous Database aware. To establish a connection to
Autonomous Database, select the Connections green plus sign at the top left corner of the tool.
A new connection screen will appear, the same for an autonomous or non-autonomous
database. In the top three lines, you will want to name the connection and fill in the appropriate
username and password for the user you will be connecting to. Under Connection Type, the type
to use for Autonomous Database is Cloud Wallet. When using this connection type, which is the
only connection type supported for Autonomous Database, the location of the downloaded
wallet must be specified in the configuration file entry.
Navigate to and select the downloaded wallet file. If the correct wallet file is selected, the service
drop down will list all the services available to the Autonomous Database. Select the one that
most appropriately provides the service level your connection will require. Please refer to the
section on user connections and services in another module for more information on this.
Finally, to establish a connection, click Connect.
Applications that use the JDBC Thin driver require the Oracle database credentials, including the Oracle Wallet or Java KeyStore (JKS) files, when connecting to the Autonomous Database. You can
include the wallet location in a JDBC URL to make a connection. We will outline the process in
this section. If your application uses Oracle JDBC thin driver version 18.1 or higher, the
connection can be specified using the JDBC URL. It's recommended that you use version 18.3 or
later to make it easier to configure connections with the security credentials wallet. The 18.3
drivers can be downloaded from the Oracle Technology Network.
Once you have the necessary drivers, you can set up the connection by unzipping the security
credentials wallet zip file and then specifying the wallet directory in the JDBC URL. You can
download the JDBC thin driver, the universal connection pool, the READMEs, and the companion
JAR files as a whole or as individual downloads. Once you have the necessary drivers, locate and
unzip the Autonomous Database security credentials files. As a reference, we walked through
the credentials download process in the previous section.
How the JDBC URL is formed depends on whether the TNS_ADMIN environment variable is set on your computer. The TNS_ADMIN environment variable points to the directory where the Oracle SQL*Net configuration files are located. If TNS_ADMIN is not set in the environment, then you must include the wallet directory in the JDBC Thin URL. In this case, our unzip directory is /Users/ybaskan/ADW/wallets.
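As a minimal sketch, assuming a service named adwc1_high from the wallet's tnsnames.ora and the unzip directory mentioned above, an 18.3 JDBC Thin URL can embed the wallet location like this:

    jdbc:oracle:thin:@adwc1_high?TNS_ADMIN=/Users/ybaskan/ADW/wallets

If TNS_ADMIN is already set in the environment, the URL reduces to jdbc:oracle:thin:@adwc1_high.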
Here is what we covered today. We started by articulating how to connect to and navigate the OCI Console and how to provision the Autonomous Database service. Then we covered how to find and download database credentials and how to connect to your Autonomous Database service.
We will then cover migrating to ADB using Data Pump, followed by how to migrate or
synchronize ADB with GoldenGate. Once data is migrated into ADB, Oracle Data Sync can be
used for synchronization of ADB with source systems. In the last section, we will cover how to
use DBMS_CLOUD for loading data into ADB.
When you have completed this video, you will be able to describe the options and considerations
for migrating to ADB. You will be able to articulate how to migrate to Autonomous Database
with Data Pump and GoldenGate. Finally, you will be able to articulate how to load data to ADB
with Data Sync and DBMS_CLOUD.
Let's start with the migration options and considerations. The diagram above provides a high-level view of the different ways to load and maintain data in ADB. Data can be loaded directly into ADB through applications such as SQL Developer, which can read data files such as TEXT and XLS and load directly into tables in ADB.
A more efficient and preferred method for loading data into ADB is to stage the data into a cloud object store, preferably Oracle's, but Amazon S3 and Azure Blob Storage are also supported. Any file type can be staged in Object Store. Once the data is in Object Store, Autonomous Database can access it directly. Tools can be used to facilitate the data movement between Object Store and the database, for example, importing Data Pump export (DMP) files from other databases, or files in Parquet, JSON, or CSV format, among others. Depending on the amount of data to be loaded, either of these strategies would work.
A physical database can't simply be migrated to Autonomous Database because the database
must be converted to a pluggable database, upgraded to the current version of the database,
and encrypted. Any changes to Oracle shipped stored procedures or views must be found and
reverted. All uses of container database admin privileges must be removed. All legacy features
that are not supported must be removed, such as legacy large objects.
Data Pump must be used for migrating database versions 10.1 and above to Autonomous Database, as it addresses the issues just mentioned. For online migrations, GoldenGate can be
used to keep old and new databases in sync. More on this will be covered later in the module.
When considering the extended set of methods that can be used for migration and loading, it's
important to segregate the methods by functionality and limitations of use against Autonomous
Database. The considerations are as follows. How large is the dataset to be imported? What is
the input file format?
Does the method support non-Oracle database sources? Does the method support using Oracle and/or third-party object stores? Use the chart in this slide to go over the considerations
to decide which method is the most appropriate for any specific Autonomous Database loading
requirement.
You can load data into ADB Cloud using Oracle Database Tools and Oracle and third-party data
integration tools. You can load data from files local to your client computer or from files stored
in cloud-based object store. The Oracle Autonomous Database has built-in functionality called
DBMS_CLOUD, specifically designed so the database can move data back and forth with external sources through a secure and transparent process.
DBMS_CLOUD allows data movement from the object store, data from any applications or data sources, export to TEXT, CSV, or JSON, [INAUDIBLE] third-party data integration tools. DBMS_CLOUD can also access data stored on object storage from other clouds, AWS S3 and Azure Blob Storage. DBMS_CLOUD does not impose any volume limit, so it is the preferred method to use.
SQL Loader can be used for loading data located on local client file systems into Autonomous
Database. There are limits around OS and client machines when using SQL Loader. Data Pump is
the best way to migrate a full or part database into ADB, including databases from previous
versions.
Because Data Pump will perform the upgrade as part of the export/import process, this is the simplest way to get to ADB from any existing Oracle database implementation. SQL Developer provides a GUI front end for using Data Pump and can automate the whole export and import process from an existing database to ADB. SQL Developer also includes an import wizard that
can be used to import data from several file types into ADB. A very common use of the wizard is
to import Excel files into Autonomous Data Warehouse.
Oracle Object Store is directly integrated into Autonomous Database, and is the best option for
staging data that will be consumed by Autonomous Database. Any file type can be stored in
object store, including SQL Loader files, Excel, JSON, Parquet, and of course dump files. Flat files
stored in object store can also be used as Oracle database external tables. So they can be queried
directly from the database as part of a normal DML Operation.
Object store is separate from the storage allocated to the Autonomous Database for database objects, such as tables and indexes. That storage is part of the Exadata system and is automatically allocated and managed. Users do not have direct access to that storage.
One of the main considerations when loading and updating ADB is the network latency between
the data source and the Autonomous Database. Many ways to measure this latency exist. One
is the website cloudharmony.com, which provides many real-time metrics for connectivity
between the client and Oracle Cloud Services.
It's important to run these tests when determining which Oracle Cloud Service location will
provide the best connectivity. The Oracle Cloud dashboard has an integrated tool that will
provide real time and historic latency information between your existing location and any
specified Oracle data center. When planning for a migration, after performing network and data transfer analysis, the total data transfer time to ADB should be quantified to ensure operational continuity.
The chart in this slide includes a sample of how long it would take to transfer data into object
store, for then loading into ADB. The numbers here are representative of a specific location. If the transfer plus the import into the database will take long, and if it's important to keep the source and new ADB systems in sync, contingency plans such as using Oracle GoldenGate to synchronize the databases could be integrated into the migration plan.
Customers that have a large database they want to migrate to ADB, and customers that have slow connectivity to an Oracle data center, can leverage the data transfer service available to object store customers. This allows customers to ship their data on hard drives to Oracle
Cloud Services where they will be loaded into Oracle Object Store. For transfer operations that
could take days or weeks and risk being interrupted, this provides a safe and multi-use
movement method.
When migrating data to Autonomous Database, table statistics are gathered automatically
during direct path load operations. If direct path load operations are not used, such as with SQL
Developer loads, the user can gather statistics manually as needed. When migrating an existing database that uses optimizer and parallel hints, ADB will ignore those by default. Because ADB runs in a highly tuned Exadata environment, most database operations should run faster than, or at least as well as, a database tuned with lots of hints. Should hints be required for some reason, they can be explicitly re-enabled.
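As a sketch, hints can be re-enabled at the session level through the initialization parameters that control this behavior (parameter names as documented for 18c and later; verify against your database version):

    -- ADB ignores optimizer hints by default; re-enable them for this session.
    ALTER SESSION SET optimizer_ignore_hints = FALSE;
    -- Parallel hints are controlled by a separate parameter.
    ALTER SESSION SET optimizer_ignore_parallel_hints = FALSE;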
Autonomous Database uses hybrid columnar compression for all tables by default. This gives the
best compression ratio and optimal performance for direct path load operations like the loads
done using the DBMS_CLOUD package. If you perform DML operations like update and merge on your tables, these may cause the compression ratio for the affected rows to decrease, leading to larger table sizes. These operations may also perform slower compared to the same operations on
an uncompressed table. For best compression ratio and optimal performance, Oracle
recommends using bulk operations like direct path loads and create table select statements, but
if your work requires frequent DML operations like update and merge on large parts of the table,
you can create those tables as uncompressed tables to achieve better DML performance.
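For illustration, a minimal sketch of creating such an uncompressed table; the table and columns are hypothetical:

    -- Tables are created with Hybrid Columnar Compression by default in ADB;
    -- NOCOMPRESS opts out for tables with frequent UPDATE/MERGE activity.
    CREATE TABLE sales_staging (
      sale_id   NUMBER,
      amount    NUMBER(10,2),
      sale_date DATE
    ) NOCOMPRESS;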
Let's look at how to perform a migration using Data Pump. Data Pump (expdp and impdp) can be used for source database versions 10.1 and above, and will perform the upgrade steps necessary to convert the database from older versions to the current Autonomous Database version. Data Pump Import lets you import data from Data Pump files residing on Oracle Cloud Infrastructure Object Storage, Oracle Cloud Infrastructure Object Storage Classic, and AWS S3.
You can save your data to your cloud object store and use Oracle Data Pump to load data to the
Autonomous Database. Oracle Data Pump offers very fast bulk data and metadata movement
between Oracle databases and Autonomous Database. This is the preferred way to move
between an existing database implementation and an Autonomous Database. The primary
mechanism that Data Pump uses is data file copy, but it will also leverage direct path, external
tables, or network link imports.
Oracle Data Pump Export provides several export modes. Oracle recommends using the schema
mode for migrating to Autonomous Database. You can list the schemas you want to export by
using the schemas parameter. For a faster migration, export your schemas into multiple Data Pump files and use parallelism. You can specify the dump file name and format you want to use with the dumpfile parameter.
Set the parallel parameter to at least the number of CPUs you have in your Autonomous
Database. The exclude and data_options parameters ensure that object types not required in Autonomous Database are not exported and that table partitions are grouped together so that they
can be imported faster during the import to Autonomous Database. In this example, you see
under the exclude and data_options, the recommended objects that should not be exported, as
they are either not supported or not recommended for Autonomous Database.
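A sketch of such an export command, along the lines of the documentation's recommendations; the connect string and dump file name are placeholders:

    expdp sh/password@orcl \
      exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link \
      data_options=group_partition_table_data \
      parallel=16 \
      schemas=sh \
      dumpfile=export%u.dmp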
To use Data Pump with ADB, a credential identifying the Object Storage bucket to use must be defined with the DBMS_CLOUD.CREATE_CREDENTIAL procedure. This will allow ADB to access objects that are stored in the object store, including dump files. To export an existing database to prepare for import into ADB, use the expdp command and add the exclude option for database functionality that is not recommended or supported in ADB. This will prevent errors during the import process.
In this example, the exclude and data_options parameters ensure that object types not required in Autonomous Database are not exported, and that table partitions are grouped together so that they can be imported faster during the import of your schemas and data. If you want to migrate your existing indexes, materialized views, and materialized view logs to Autonomous Database and manage them manually, you can remove these objects from the exclude list, which will export those object types too. This example is exporting the SH schema and is creating 16 parallel threads.
To import a database into ADB, use the impdp command with the admin user, or a user with permission to import, and the connect string with the ADB wallet. This example shows importing using a defined credential and from a dump file in object store. In the previous export, we had created 16 parallel files, so we will import with the same parallel parameter. The same exclude options are specified.
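A sketch of the corresponding import, with placeholder credential name, service name, and object store URL:

    impdp admin/password@ADWC1_high \
      directory=data_pump_dir \
      credential=def_cred_name \
      dumpfile=https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o/export%u.dmp \
      parallel=16 \
      exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link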
For the best import performance, use the high database service for your import connection and set the parallel parameter to the number of CPUs in your Autonomous Database, as shown in
this example. If you are using Data Pump version 12.2 or older, the credential parameter is not supported. With an older version of Data Pump Import, you need to define the default credential property for the Autonomous Database and use the default_credential keyword in the dumpfile parameter.
An extra step is necessary in this case. The same process as before would be used to create the credential, but afterwards the created credential needs to be made the default credential. This is done with the ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL statement, as seen in this slide. In the impdp command, the keyword default_credential is specified before the location of the dump file in the dumpfile parameter. See the example in this slide.
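A sketch of those two steps, with placeholder names; the swift-style URL follows the format broken down later in this module:

    -- Make the previously created credential the database default.
    ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.DEF_CRED_NAME';

Then, in the impdp command, prefix the dump file location:

    dumpfile=default_credential:https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/mytenancy/mybucket/export%u.dmp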
The log files for Data Pump Import operations are stored in the directory DATA_PUMP_DIR. This is the only directory you can specify for the Data Pump directory parameter. To access the log file, you need to move the log file to your cloud object storage using the procedure DBMS_CLOUD.PUT_OBJECT.
This process is not automatic, and if the logs are not moved, you will receive a warning when running impdp that the logs are not there. In this example, we are moving the log import.log to object store with a DBMS_CLOUD.PUT_OBJECT command.
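A sketch of that call; the object URI is a placeholder:

    BEGIN
      DBMS_CLOUD.PUT_OBJECT(
        credential_name => 'DEF_CRED_NAME',
        object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o/import.log',
        directory_name  => 'DATA_PUMP_DIR',
        file_name       => 'import.log');
    END;
    /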
Now let's cover migrations to ADB using Oracle GoldenGate. Good uses for GoldenGate to replicate information to Autonomous Database include using Oracle GoldenGate On Premises to replicate data to Autonomous Database for real-time data warehousing, and replicating on-premises data to the Autonomous Database to set up a staging environment for downstream ETL or real-time data warehousing. For operational reporting, replicate real-time data from multiple on-premises data sources and deliver it to the Autonomous Database for creating reports. The Oracle GoldenGate Cloud Service can also be used to migrate data to the Autonomous Database.
There are some limitations of using the GoldenGate replicat process with Autonomous
Database. Currently, only non-integrated replicats are supported with Oracle Autonomous
Database. For the best compression ratio in your target tables in Autonomous Database, Oracle
recommends replicating changes, including updates and deletes, from your source system as
inserts into staging tables and using in-database batch operations to merge the changes into
your target table.
You can configure an Autonomous Database instance as a target database for Oracle GoldenGate On Premises. The source for replicating to Autonomous Database can be Oracle GoldenGate On Premises release 12.3.0.1.2 and later; those releases are certified with Oracle Autonomous Database for remote delivery using non-integrated Replicats only.
However, any supported release of Oracle GoldenGate, for any supported database and operating system combination, that can send trail data to Oracle GoldenGate for Oracle Database release 12.3.0.1.2 and later can be used as a source system. Oracle Autonomous Database cannot be a source database; it can only ingest data, in other words, be a target database.
The following data types are not supported for replicating data to the Oracle Autonomous Database: LONG, LONG RAW, XMLTYPE stored as object relational, XMLTYPE stored as binary, BFILE, MEDIA, and SPATIAL. To configure ADB for GoldenGate replication, use the pre-created ggadmin user in ADB.
Ggadmin has been granted the rights and privileges needed for the Oracle GoldenGate On Premises Replicat to work. By default, this user is locked. To unlock the ggadmin user, connect to your Oracle Autonomous Database as the admin user using any SQL client tool and issue the ALTER USER ggadmin IDENTIFIED BY password ACCOUNT UNLOCK command.
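A minimal sketch of that statement, with a placeholder password:

    -- Run as the ADMIN user; the password must meet the ADB password rules.
    ALTER USER ggadmin IDENTIFIED BY "Str0ng_Password#1" ACCOUNT UNLOCK;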
For the replication processes, a new target user must be created, which is different from the administration user just discussed. This user must be created and granted privileges to perform appropriate operations on the database. Once the user is created, connect to ADB as that user.
To prepare the on-premises database for synchronization, the following steps must be followed. Log in to your Oracle GoldenGate On Premises Oracle database and create a new Oracle GoldenGate On Premises user. You can do that with a CREATE USER username IDENTIFIED BY password statement, then grant DBA, CONNECT, and RESOURCE to the user just created. Create the tables needed for the process with the appropriate DROP TABLE and CREATE TABLE statements, and create your extract using commands such as EXTRACT, USERID, EXTTRAIL, and TABLE.
To prepare the on-premises databases for synchronization, these additional steps must be followed. Create your extract data pump with the EXTRACT command, specify the RMTHOST, select the appropriate tables, and add your extracts. Adding extracts is done with ADD EXTRACT, ADD EXTTRAIL, and ADD RMTTRAIL, specifying the DBLOGIN and adding the TRANDATA.
Connect to your Oracle GoldenGate On Premises instance using SSH and your private key. Once you are connected to your Oracle GoldenGate On Premises instance, change user to oracle and transfer the credentials zip file that you downloaded for Oracle Autonomous Database, which contains the wallet and connection information, as described in previous modules. Edit the tnsnames.ora file on the Oracle GoldenGate On Premises instance to include the connection details that are available in the tnsnames.ora file in your key directory, the directory where you unzipped the credentials file which you downloaded for Autonomous Database in the Connecting to Autonomous Database module.
Log in as your new replication user, create the replication tables, connect to GGSCI, and configure the Oracle GoldenGate Manager: open the Manager parameter file to edit it, which is done with the EDIT PARAM MGR command under GGSCI. Ensure that the Manager parameter file has the following information: port number, access rules, PURGEOLDEXTRACTS, MINKEEPFILES, and AUTORESTART. Add the GGSCHEMA ggadmin entry to your GLOBALS file, and then stop and restart the Manager.
You are now ready to configure the Replicat files, and as you can see in this slide, these are the steps to configure the Replicat file. At this point, the replication process should be running. Insert records into your source database and then ensure that the data is replicated into your Oracle Autonomous Database table using the STATS REPDWCS command.
Now let's look at what's entailed in migrating using Data Sync. Use Data Sync to upload and manage data in Autonomous Database, commonly in conjunction with Oracle Analytics Cloud. You can load data files (CSV and Excel), various relational sources (tables, views, SQL statements), OTBI, JDBC data sources, and Oracle Service Cloud. You can load into relational tables or datasets. Data Sync is capable of incremental extracts and loads, supports several update strategies, and is capable of de-duping source data. And you can replicate and transform data when the target is Autonomous Database.
Data Sync supports transformations, ELT, surrogate key generation, foreign key lookups, et cetera. Data Sync writes efficiently, parallelizing data loads and using parallel streams for a target table. Data Sync is capable of pre/post load processing with SQL and stored procedures. Data Sync job scheduling automates data extraction from sources, and Data Sync can easily integrate with third-party scheduling systems. Supported Data Sync sources are JDBC, MongoDB, Sybase, Hive, salesforce.com, AWS (both Redshift and PostgreSQL), MySQL, MS Access, Spark, and NetSuite. Supported targets include the Autonomous Database as well as others.
Once Data Sync is installed, to start using it, the steps to follow are, from the project menu, click
New, and create a new project. In the Connections view, specify connection details for your data
target and your data source if it's a database. In the Project view, specify loading options for
your project. In the Jobs view, create a new job. Then click Run job.
In the next few slides, we will show screenshots of the process. From the Projects menu, click New and create a new project. In the Connections view, specify connection details for your target and your data source if it's a database.
For the URL, you will specify the service name for your database. Normally, this will be
database_high, using the high service. You will also need the path to the directory where you unzipped your wallet file.
Please refer to the previous section where this was covered. Please note the example you see
here on the screenshot. When done, click Test Connection to save the connection.
From the main screen, you should see the connection set up in the previous slide, called TARGET. Highlight the connection and click on Project so you can create the objects and do the mapping. Use Data From Objects to import the source table definitions. You can manually create the objects, but that takes longer and is more prone to errors.
Select the source connection target to connect to the database. Click on Search to bring up a list of objects. If you get an error message, you will need to go back to your connection and click on Test to make sure it's working. If it is working, make sure your schema is in all uppercase letters. Select the Import Definition check box, in this case for all three tables on the source database. Then click on Import.
You should now see your source objects listed in the window. Now click on Data From Objects again so that you can add the three flat files that are the source. From the dropdown, choose File Source. For file location, specify your file location. The file name should already be populated from selecting the file, and the logical name should already be populated from the file name. Do not select Delete files if unsuccessful load. Click on Next to continue to Import Options.
Under Import Options, specify the information about the data in your source, then select Next. If you defined a source above, click Select An Existing; if you did not define one, click Create New, followed by Next. Then select how you want to map source columns to target columns.
In this case, the mapping is by position; the column order will be the same in the source and the target databases. Repeat the above steps to map every table in the source to the tables in the target. You should end up with a source and target entry for each table you have on your target database.
Once you have mapped all objects in the source table, in this case file.INVENTORY_EXTRACT, click on the Targets tab, and you will see a column that says Load Strategy. In this example, purge all the records before loading so you can reload. Double click on the file that says Add New, Update Existing.
In the pop up, set the Load strategy to replace data in table and click OK. Repeat this process for
every source object to import. Click on the Project Summary tab. A line showing each file loading into the correct target table should appear, in this case, three tables.
At the end of each record, the load strategy should be set to Replace Data In Table. Click on the Jobs button to create a job to run your project. Click on New to create a job. Fill in the parameters for your job.
Pick your ADB database connection. Once your job completes, it should show how many rows were loaded and a status of Completed. At this point, your ADB database should be loaded with
the information you selected from the source files. You can repeat this process for any source
that Data Sync supports.
For data loading from files in the cloud, Autonomous Database provides a new PL/SQL package
called DBMS_CLOUD. The PL/SQL package DBMS_CLOUD provides support for loading data files
in the cloud to your tables in Autonomous Database. This package supports loading from files in
the following cloud services-- Oracle Cloud Infrastructure Object Storage, Oracle Cloud
Infrastructure Object Storage Classic, Azure Blob Storage, and Amazon S3. For the fastest data
loading experience, Oracle recommends uploading the source files to a cloud-based object store
such as Oracle Cloud Infrastructure Object Storage, before loading data into your Autonomous
Database. Oracle provides support for loading files that are located in your data center, but when using this method of data loading, you should factor in the transmission speeds across the internet, which may be significantly lower.
Let's start by understanding how to set up object storage for Autonomous Database access. ADB uses authentication tokens to access Oracle Cloud Infrastructure Object Storage. The first step is to generate the authentication token for your cloud user in Oracle Cloud Infrastructure, then create a database credential in your Autonomous Database with this token.
In Oracle Cloud Infrastructure, on the top right under the User icon, select User Settings. After selecting User Settings, select Auth Tokens on the left, then select Generate Token. This will generate a token and display it.
Once the token is generated, click Copy to copy it to the clipboard, as you will need it to create the credentials, and the token won't be displayed again. In a SQL connection to ADB, connected as admin, run the DBMS_CLOUD.CREATE_CREDENTIAL command using the authentication token just created as the password.
In this example, we are creating a credential called OBJ_STORE_CRED with the Oracle Cloud Infrastructure user [INAUDIBLE]@oracle.com, and for the password, we're using the token generated in the previous step. OBJ_STORE_CRED is the name of the credential being created; you can specify any name you want. [INAUDIBLE]@oracle.com is the Oracle Cloud Infrastructure username, which is the user who owns the object store, not the database user that you're connected to. The password contains the token we generated and copied to the clipboard.
Once the credential is created, it can be used to access a file as an external table or to ingest data from the file into database tables. DBMS_CLOUD makes it easier to use external tables, and the ORGANIZATION EXTERNAL clause needed in other versions of the Oracle database is not needed.
To copy the files in object store into the database, use DBMS_CLOUD.COPY_DATA. In this case, we have a flat file in object store accessed by the credential we created in the previous step. This example takes the file in object store called channels.csv and maps it to an Oracle table called channels. When accessing a file in object storage, use the file_uri_list identifier to point to the file. There is a specific format the identifier needs to follow to make sure the file can be accessed.
In the next few slides, we break down the components of the identifier. The https:// URL always starts with the keyword swiftobjectstorage, followed by the data center region where the object store is. Next is the tenant name, which is the tenancy specified when logging into the Oracle Cloud, followed by the object store bucket that contains the file, and last, the actual name of the file in object store.
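Putting those components together, a representative URI, with hypothetical region, tenancy, and bucket names, looks like this:

    https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/mytenancy/mybucket/channels.csv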
The DBMS_CLOUD.COPY_DATA procedure loads source files into an existing target table, handles compression, parallelism, and logging automatically, and logs all loads in the table USER_LOAD_OPERATIONS. Several parameters need to be defined to use this functionality. Let's see them in more detail.
You will need an existing table in the database; that is the table_name. You will need a defined credential name, as discussed previously. You will need a file_uri_list, as defined before. You will need the schema name that owns the table to be inserted into. The column list in the source files needs to be identified, and the format of the source files needs to be specified.
In this example, a CSV file is being loaded into the channels table of the database. Note that not all parameters need to be defined.
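A sketch of the call for that example; the URI is a placeholder and the format shows only a minimal set of options:

    BEGIN
      DBMS_CLOUD.COPY_DATA(
        table_name      => 'CHANNELS',
        credential_name => 'OBJ_STORE_CRED',
        file_uri_list   => 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/mytenancy/mybucket/channels.csv',
        format          => json_object('delimiter' value ','));
    END;
    /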
To define a file in object store as an external table, use DBMS_CLOUD.CREATE_EXTERNAL_TABLE. You will need the external table name to be defined in the database, a defined credential name as discussed previously, a file_uri_list as defined before, the list of columns in the external table (this is the list of columns in the source files), and the format of the source files.
In this example, we're using a file called channels.csv residing on object store as an Oracle
database external table. And the database external table is called CHANNELS_EXT.
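A sketch of that definition, with hypothetical column definitions for channels.csv:

    BEGIN
      DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
        table_name      => 'CHANNELS_EXT',
        credential_name => 'OBJ_STORE_CRED',
        file_uri_list   => 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/mytenancy/mybucket/channels.csv',
        column_list     => 'channel_id NUMBER, channel_desc VARCHAR2(40)',
        format          => json_object('delimiter' value ','));
    END;
    /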
Once the table is defined, it can be accessed like any other table in the database. Here we are
running a select statement against a file in object store defined as an external table. All load operations are logged into the USER_LOAD_OPERATIONS and DBA_LOAD_OPERATIONS tables in the database. For troubleshooting or reporting, these tables can be queried, and they contain information such as how many records were rejected during a load.
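For example, a quick check of recent copy operations might look like the following sketch (column names as documented for USER_LOAD_OPERATIONS; verify against your version):

    -- Status, row counts, and log/bad file tables of recent DBMS_CLOUD loads.
    SELECT table_name, status, rows_loaded, logfile_table, badfile_table
      FROM user_load_operations
     WHERE type = 'COPY';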
SQL Developer provides easy-to-use data loading wizards for loading data into Autonomous
Database. Let's walk through an example of this functionality. As a reminder, SQL Developer is
a free tool included with Autonomous Database. The SQL Developer data import wizard enables
you to import data from files into tables.
To import data into a new table, right click the Tables node in the Connections navigator of SQL
Developer and select Import Data. To import into an existing table, right click the table and select
Import Data. Beginning with the 18.1 release, you can use the wizard to load data from files in
the cloud into tables in Autonomous Database. For loading data from the cloud, Autonomous Database uses the PL/SQL package DBMS_CLOUD just discussed.
In the Connections pane of SQL Developer in the Tables tab of your connected ADB database,
right click and select Import data. In the source selection, select the Oracle Cloud storage as a
source type. Provide the file URI, as described earlier, as defined in previous sections. In the
credential selection list, you should have already defined a credential as discussed in the
previous section.
In the dropdown list, you will see the name of the credentials you created. Select the one you
want to use. From this point on, the loading process is the same as any other loading process in SQL Developer. The main differences are the source file and the credential process used to load into an Autonomous Database.
You can use Oracle SQL Loader to load data from local files in your client machine into
Autonomous Database. Using SQL Loader may be suitable for loading small amounts of data, as
the load performance depends on the network bandwidth between your client and Autonomous
Database. Again, for large amounts of data, Oracle recommends loading data from the Oracle
Cloud Object Storage.
SQL*Loader would be the recommended path for loading from earlier releases such as 9.2 or 8i, and so on. Generate a flat file and load it using the recommended parameters from the documentation. Oracle recommends using the following SQL*Loader parameters for the best load performance: readsize=100M, bindsize=100M, direct=NO.
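A sketch of such an invocation; the connect string, control file, and data file are placeholders:

    sqlldr userid=admin/password@adwc1_high \
           control=channels.ctl data=channels.csv \
           readsize=100M bindsize=100M direct=NO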
ADB gathers optimizer statistics for your tables during load operations if you use the recommended parameters. If you do not use the recommended parameters, then you need to gather optimizer statistics manually. For loading multiple files at the same time, you can invoke a separate SQL*Loader session for each file.
Here is what we covered today. We started by describing the options and considerations for migrating to ADB. Then we articulated how to migrate to the Autonomous Database with Data Pump and GoldenGate. Finally, we articulated how to load data into the Autonomous Database with Data Sync and DBMS_CLOUD, followed by SQL Developer and SQL*Loader.
Let's start by understanding users in ADB. Creating a new user involves logging in as admin user
and then issuing the standard create user Oracle database command. The minimum required to
create a user is defining the user's name and the password. Once a user is created, roles must
be assigned to the user.
There are several pre-defined roles that grant the combination of individual roles to users. Users
can still be granted individual roles, such as connect, select, et cetera. But when granting roles,
such as dwrole, all roles necessary to perform user-level operations are granted to the user.
Users can also be created through frontend tools, such as SQL Developer, or through OCI calls, as described later in this module.
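A minimal sketch of the sequence just described, with a hypothetical user name and password:

    -- Run as ADMIN: create the user, then grant the predefined role.
    CREATE USER appuser IDENTIFIED BY "Str0ng_Password#1";
    GRANT dwrole TO appuser;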
Let's see how to start, stop, and scale an Autonomous Database. The Autonomous Database instance can be scaled on demand without tedious manual steps. Autonomous Database allows for independent scaling of compute or storage; resizing occurs instantly, fully online. Memory, IO bandwidth, and concurrency scale linearly with CPU, so when CPUs are scaled, the system automatically adjusts other resources to match. You can also stop an instance on demand to conserve resources and start an instance on demand. Starting, stopping, and scaling can be done by clicking through the console, as in the examples in the following slides, or through API calls, such as will be covered in the next module.
To scale the Autonomous Database from the console, select the database to be scaled. This will
place you in the database console. Select the Scale Up/Down button. The scaling window appears.
In the CPU count box, enter the number of CPUs you want provisioned in your environment. The
default entry has the current value of CPUs provisioned on the environment. In this case, there
is one CPU provisioned. So, in this example, we scaled the database up to four CPUs.
You can also scale the storage in terabytes of database space allocated by updating the storage
parameter. When the parameters are updated, select Update. And you will be placed back on
the database console where the database will show as Scaling in Progress. Please notice that the
color of the database icon remains green because database scaling operations do not require
system outages. When the scaling is complete, the status will change to Available, and the CPU
core count will be updated to the selected CPU account during the scaling process.
To stop the database, select the appropriate database from the autonomous console. In this
case, there is only one, Test EW. This will place you in the control console for that specific
database. Select the Stop button. This will place the database in stopping status. And when the
process is complete, the database will show as stopped. Once the database is stopped, the
button that was Stop is now Start.
To restart the database, select Start. During the startup operations, the status of the database will be Starting, and once it is fully up and operational, it will become Available. Stop and start operations are relatively quick; they do not depend on the size of the database or the number of CPUs allocated and are pretty consistent across instances.
Let's look at REST APIs and Autonomous Database. For DBAs and developers that would prefer
to interact with the Oracle Autonomous Database Cloud Services programmatically over REST
rather than log into the Cloud console and click through screens, Oracle's Cloud offers full REST
APIs. This provides a mechanism for developing customized deployment and management scripts that can be saved and reused for deployments, setting gold standards, and storing entire application infrastructure stacks as version-controlled code.
The Oracle Cloud Infrastructure APIs are typical REST APIs that use HTTPS requests and responses and support TLS 1.2, the most secure industry standard. Calls to the Oracle Cloud Infrastructure using REST APIs can be written in popular scripting languages, such as node.js, Python, Ruby, Perl, Java, C#, bash, or curl.
All Oracle Cloud Infrastructure API requests must be signed for authentication purposes. Oracle
Cloud Infrastructure APIs are the ones used to interact with the Autonomous Database. These avoid using usernames and passwords and are based on the draft-cavage-http-signatures-08 specification for secure communication over the internet.
The steps to create and sign API requests are: form the HTTPS request (TLS 1.2 is required); create the signing string, which is based on parts of the request; create the signature from the signing string using the private key and the RSA-SHA256 algorithm; and add the resulting signature and other required information to the Authorization header in the request. You will also need to generate an API signing key pair in PEM format.
This example creates an Autonomous Database in the Phoenix Data Center with a database
name of adatabasedb1, the specified password, 8 CPUs, and 1 terabyte of storage. The response
to an Autonomous Database API command, in this case, a create command, will include the
current status of the create process and other relevant information regarding the database
being created.
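As a sketch of what such a request looks like on the wire; the compartment OCID and password are placeholders, the signed Authorization header is omitted, and the autonomousDatabases resource shown here may appear as autonomousDataWarehouses in the API version this course is based on:

    POST /20160918/autonomousDatabases HTTP/1.1
    Host: database.us-phoenix-1.oraclecloud.com
    Content-Type: application/json

    {
      "compartmentId": "ocid1.compartment.oc1..<unique_id>",
      "dbName": "adatabasedb1",
      "adminPassword": "<password>",
      "cpuCoreCount": 8,
      "dataStorageSizeInTBs": 1
    }

The response, described next, echoes these parameters along with the lifecycle state of the database being created.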
In this example, we see the database parameters, such as the compartment or tenancy of the
database, the database display name, the name of the Oracle database, the current status of
the database, the time of creation process, the number of CPUs allocated, the number of
terabytes of storage allocated, the license model being used, and the direct link to the database
console.
In these examples, we see the basic API call for deleting an existing autonomous database,
starting and stopping an existing autonomous database. And again, the result will be similar to
what we saw in the previous slide. In previous sections, we saw how to scale the database from
the console. In this REST API call, the database has been scaled to a total of 20 CPUs allocated.
In the second example, a backup is initiated on the database.
Now let's look at how to manage a database using command line. The CLI is a small footprint
tool that you can use on its own or with the console to complete Oracle Cloud Infrastructure
tasks. The CLI provides the same core functionality as the console plus additional commands.
Some of these, such as the ability to run scripts, extend the console's functionality. The CLI is built in Python, version 2.7.5 or 3.5 or later, running on Mac, Windows, or Linux. The Python code makes calls to the Oracle Cloud Infrastructure APIs to provide the functionality; these are REST APIs that use HTTPS requests and responses.
To install and use the CLI, you must have an Oracle Cloud Infrastructure account, a user created
in that account, and a group with a policy that grants the desired permissions. This account user
can be you, another person, or a system that calls the API. You need a key pair used for signing API requests, with the public key uploaded to Oracle; only the user calling the API should possess the private key. You also need Python version 2.7.5 or 3.5 or later running on Mac, Windows, or Linux. Note that if you use the
OCI installer and do not have Python on your machine, the installer offers to automatically install
Python for you. If you already have Python installed on your machine, you can use the python --version command to find out which version you have installed.
The supported Autonomous Database commands in the OCI CLI are: create, to create a new Autonomous Database; delete, to delete an Autonomous Database; generate-wallet, which creates and downloads a wallet; get, which gets the details of the specified Autonomous Database; list, which gets a list of Autonomous Databases; restore, which restores an Autonomous Database; start, which starts an Autonomous Database; stop, which stops an Autonomous Database; update, which updates the specified Autonomous Database; and backup, which creates, gets, or lists backups.
This example OCI CLI call creates an Autonomous Database. Notice that variables are defined including all the parameters previously discussed that are required to create an Autonomous Database. Then the command oci db autonomous-data-warehouse create is run with those parameters.
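A sketch of such a call, with a placeholder compartment OCID and password; newer CLI versions expose the same operation under autonomous-database:

    oci db autonomous-data-warehouse create \
      --compartment-id ocid1.compartment.oc1..<unique_id> \
      --db-name adatabasedb1 \
      --admin-password '<password>' \
      --cpu-core-count 8 \
      --data-storage-size-in-tbs 1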
In this slide, you can see examples of calls for some of the other supported OCI-CLI commands
that were previously described. The format is always the same starting with the oci db call, then
specify the database service, in this case, autonomous-data-warehouse followed by the
command and parameters and associated variables to run.
Now let's look at backup and restores for Autonomous Database. Autonomous Database
automatically backs up your database for you. The retention period for backups is 60 days. You can restore and recover your database to any point in time in this retention period. You can
do manual backups using the Cloud console, for example, if you want to take a backup before a
major change to make restore and recovery faster.
The manual backups are put in your Cloud Object Storage Bucket. When you initiate a point in
time recovery, Autonomous Database decides which backup to use for faster recovery. You can
initiate recovery for your Autonomous Database using the Cloud console or the API call.
Autonomous Database automatically restores and recovers your database to the point in time
you specify.
From the Autonomous Database console, in the lower section under Backups, all the backups that have been taken on this instance are listed, going back up to 60 days, which is the retention period.
To restore from a backup, from the Database console Actions dropdown menu in the top left, select Restore. This will bring up the Restore prompt. In the Restore prompt, select Specify Timestamp to restore to a point in time, or select Backup to restore a specific backup. Select a backup from the list of backups. You can limit the number of backups you see by specifying a period using the From and To calendar fields.
When the appropriate backup to restore from is selected, proceed by clicking the Restore
button. When the restore operation finishes, your Autonomous Database instance opens in
read-only mode and the Instance Details page Lifecycle State shows Available, Needs Attention.
If the restore point you specified was correct and you want to open your database in read-write mode, click Stop, and after the database stops, click Start to start the database. After stopping and starting, the database opens in read-write mode.
In addition to automatic backups, Autonomous Database also allows you to take manual
backups to your Oracle Cloud Infrastructure Object Storage. To perform manual backups, you
need to find the Oracle Cloud Infrastructure Object Storage credentials and your Oracle Cloud
Infrastructure Object Storage tenancy URL. You also need to create a bucket to hold the backups.
The manual backup configuration tasks are a one-time operation.
After you define your credentials and your tenancy URL, you can initiate backups without doing the same operations again, unless the URL, the credentials, or the bucket change. For example, while connected to your database, to define your backup bucket, run the following command: set the database default bucket property to your Oracle Cloud Infrastructure Object Storage tenancy with the ALTER DATABASE PROPERTY SET default_bucket statement.
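A sketch of that statement, with a hypothetical region and tenancy name:

    ALTER DATABASE PROPERTY SET
      default_bucket = 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/mytenancy';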
If you have not already defined a credential for Object Storage access, such as for staging and
loading data into the database from the object store, you will need to create a credential for
accessing the object store. The object store is the location where the user-initiated backups are
stored. In this example, we created an object store credential named DEF_CRED_NAME and set
it as the default credential with the ALTER DATABASE PROPERTY SET command.
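A sketch of those two steps using the DBMS_CLOUD package; the username and auth token
values are placeholders:

    -- Store Object Storage credentials in the database (values are placeholders).
    BEGIN
      DBMS_CLOUD.CREATE_CREDENTIAL(
        credential_name => 'DEF_CRED_NAME',
        username        => 'oci_user@example.com',
        password        => 'auth_token_value');
    END;
    /

    -- Set it as the default credential used for manual backups.
    ALTER DATABASE PROPERTY SET default_credential = 'ADMIN.DEF_CRED_NAME';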
To create a manual backup, from the Database console page select Create Manual Backup. The
first time you click Create Manual Backup, you will have to provide the name of the object store
bucket you created on the previous slide, then proceed with the backup. Again, manual backups
are not required, since Autonomous Database automatically performs daily incremental
backups of the database.
Now let's explore Autonomous Database using SQL Developer. Oracle SQL Developer is a free,
integrated development environment that simplifies the development and management of
Oracle Database in both traditional and cloud deployments. SQL Developer offers complete end-
to-end development of your PL/SQL applications, a worksheet for running queries and scripts, a
DBA console for managing the database, a reports interface, a complete data modeling solution,
and a migration platform for moving your third-party databases to Oracle.
For DBAs, the DBA console provides a GUI and step-by-step guidance for many commonly
performed DBA tasks. SQL Developer behaves exactly the same for Autonomous Database as it
does with any other Oracle database. The nature of the Oracle Autonomous Database is to
automate and simplify many DBA tasks, so many DBA tasks may be disabled when using SQL
Developer for administration purposes against an Autonomous Database. These are account
permission restrictions, not SQL Developer restrictions. The following slides provide an overview
of the most common DBA tasks performed with SQL Developer against an Autonomous Database.
Connect to the Autonomous Database with SQL Developer, as described in a previous module,
with the admin account or another account if admin is not authorized. But note that some
commands are privileged and may not run. One of the most common questions around
Autonomous Database is: which database options are enabled and which are not? To answer
this question, run the following command in SQL Developer and examine the output in the query
results pane in the lower right: SELECT * FROM V$OPTION.
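As a minimal sketch of that query:

    -- V$OPTION lists each database option and whether it is enabled (TRUE/FALSE).
    SELECT parameter, value
    FROM   v$option
    ORDER  BY parameter;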
To perform and view DBA operations, from the View dropdown on the top left of SQL Developer,
select DBA to go into DBA view mode. Once in DBA view, click on the Connections button, the
green plus sign, and select your defined connection to the Autonomous Database. If you have
not defined the connection, go back to the Connections view and define one as described in the
connectivity module.
Once connected, open the selected connection, in this case, admin_high, and then open up the
Database Configuration. Then select Initialization Parameters, which will list all the Oracle
database initialization parameters and what their corresponding settings are.
Further down in the Database Configuration section, select Services, which will list all the
connection services defined for the autonomous instance you're connected to; selecting any
particular one displays the properties of that service.
To view which database features are in use in the current running instance, select Database
Feature Usage from the Database Configuration options. This will list in alphabetical order all the
database features and whether they are currently used or not. You can always sort by column
and value in the results set as needed.
To get detailed information about workloads and performance, under Database Status, select
Instance Viewer. This section shows top CPU usage, memory and storage usage, processes
running, sessions, and other important information about the current workload on the
database.
To view which resource plan is active on the instance, for example, to determine whether this is
an ADW or an ATP instance, select the Settings option under Resource Manager. In this case, the
active resource plan is OLTP_PLAN, indicating it is an Autonomous Transaction Processing
instance.
Any information related to users, roles, privileges, and more will be under the Users section. In
this case, we have selected Users under the DBA view and then selected the admin user. On the
right-hand screen, you can see all the information about the admin user. Here we selected Roles,
and you can see all the roles that have been granted to the admin user. You can view this
information for any user in the Autonomous Database. Notice that some predefined users are
locked; for example, the GGADMIN user, which is part of the GoldenGate implementation, is
locked by default in Autonomous Database.
To review storage information, select the Storage section. Note that although we are able to
view storage information about the autonomous instance, storage is automatically managed by
Autonomous Database; these allocations are visible but cannot be changed by the user or the
administrator.
Let's look at Autonomous Database for experienced users. Autonomous Database configures
and optimizes the database for you. You do not need to perform administration operations for
configuring the database. SQL commands used for database administration, such as CREATE
TABLESPACE, are not visible in the service. Similarly, other administration interfaces and utilities,
such as RMAN, are not available. The default data and temporary tablespaces for the database
are configured automatically. Adding, removing, or modifying tablespaces is not allowed.
Parallelism is enabled by default. Degree of parallelism for SQL statements is set based on the
number of OCPUs in the system and the database service the user is connecting to. Compression
is enabled by default. Autonomous Data Warehouse uses Hybrid Columnar Compression for all
tables by default. You can also use different compression methods for your tables by specifying
the compression clause in your CREATE TABLE and ALTER TABLE statements; see the sketch
after this paragraph. Autonomous databases use
Unicode AL32UTF8 as the database character set. Oracle Database Result Cache is enabled by
default for all SQL statements. Changing the result cache mode is not allowed.
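A sketch of the compression clause mentioned above; the table and column names are made up:

    -- Override the Hybrid Columnar Compression default; names are hypothetical.
    CREATE TABLE sales_archive (
      sale_id   NUMBER,
      sale_date DATE,
      amount    NUMBER
    ) COMPRESS FOR ARCHIVE LOW;

    -- Change the compression method used for subsequent direct loads.
    ALTER TABLE sales_archive COMPRESS FOR QUERY HIGH;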
In Autonomous Database, direct access to the database node and the local file system are not
allowed. Database links to other databases are not allowed for enhanced security. Database
links from other databases to Autonomous Database are allowed. Calling PL/SQL programs using
database links is not supported. Parallel DML is enabled by default. If you do not want to run
DML operations in parallel, you can disable parallel DML in your session using the following SQL
command-- ALTER SESSION DISABLE PARALLEL DML.
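For example:

    -- Disable parallel DML for the current session.
    ALTER SESSION DISABLE PARALLEL DML;

    -- Re-enable it later in the same session if needed.
    ALTER SESSION ENABLE PARALLEL DML;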
Autonomous Database configures the database initialization parameters based on the compute
and storage capacity you provision. You do not need to set any initialization parameters to start
using the service, but you can modify some parameters if you need to.
Many SQL statements used for resource management in the database are not available in ADB
because these functions are managed automatically by the Oracle Autonomous Database. If you
run a command that is restricted, you will get an "ORA-01031: insufficient privileges" error message.
The statements that are not allowed are ADMINISTER KEY MANAGEMENT, ALTER PROFILE,
ALTER TABLESPACE, CREATE DATABASE LINK, CREATE PROFILE, CREATE TABLESPACE, and DROP
TABLESPACE.
The CREATE TABLE and the other table statements have clauses that are not allowed. Most of
the clauses relate to storage and index management, as these are automatically managed by
ADB. If your database application uses any of these clauses, they will not work and need to be
modified.
Several data types are not supported in ADB, including LONG, LONG RAW, media types, and
spatial types. This is a point-in-time restriction and may change at any time. Several ALTER
statements and CREATE USER statements are restricted as well. Please refer to the list on this slide.
Here is what we covered today. We started out by going over how to manage users and
privileges, followed by how to start, stop, and scale ADB. We then had an overview of how to
use REST APIs to manage ADB, as well as Oracle Cloud Infrastructure to manage ADB. We then
covered how to do backups in ADB and how to explore ADB with SQL Developer. And we finished
off by understanding ADB for experienced users.
5. Monitoring ADB
Welcome to Monitoring Autonomous Database module. In this module, we will cover different
monitoring tasks for the Oracle Autonomous Database. When you have completed this video,
you will be able to describe how to monitor ADB using the OCI Service Console and understand
ADB consumer groups.
Let's start with monitoring options and considerations. The easiest way to monitor performance in
ADB is through the Cloud Console. When you log into the console, you will be presented with
the Overview page. The Overview page shows real time and historical information about the
utilization of the service. The components on this page are storage. This chart shows the total
and used storage capacity of the service. It indicates what percentage of the space is currently
in use.
CPU utilization. This chart shows the historical CPU utilization of the service. Running SQL
statements. This chart shows the average number of running SQL statements historically.
Average SQL statement response time. This chart shows the average response time of SQL
statements historically. And SQL statements executed per second. This chart shows the SQL
statements executed per second.
From the Oracle Cloud Infrastructure Console for your database, select the Service Console
button. Once in the Service Console, you will notice several metrics displayed in graphical form.
The first graphic displays the total amount of disk consumed out of the total allocated; in this
case, 24% of the one terabyte allocated is consumed.
The second graphic displays CPU utilization for the last eight days. However, the retention period
is modifiable by the user. The third graphic displays the average number of running SQL
statements for the last eight days. The retention period is also modifiable by users. The last
graphic displays the average SQL response time in seconds.
On the same page, there is an Activity button on the left. Click on the button. The Activity page
has two main tabs, Monitor and Monitored SQL. The Monitor tab, also the landing tab for
activity, shows real time and historical information about the utilization of the service. The
components on this page are database activity. This chart shows the average number of sessions
in the database using CPU or waiting on wait events.
CPU utilization. This chart shows the CPU utilization of each consumer group. Running
statements. This chart shows the average number of running SQL statements in each consumer
group. And queued statements. This chart shows the average number of queued SQL statements
in each consumer group.
In the Monitor screen, you can view real-time activity statistics or activity over a defined
historical time period. Select the type of monitoring you wish to do. Regardless of whether
you're analyzing real-time or time-period data, the type of information displayed in the graphics
is the same. The first chart shows database wait events, including application, concurrency, CPU,
scheduler, user IO, and other wait events.
The second chart shows CPU utilization by database defined service name. As a reminder,
specific service names are predefined in autonomous database. In this case, high, medium, and
low are autonomous database service names. The third and fourth charts display the number of
running and queued SQL statements, also by defined service name, showing the number of
statements running per service.
To analyze specific SQL statements, click on the Monitored SQL tab. This tab displays the SQL
statements completed or running in chronological order. Upon scrolling the table to the right, you can see
the consumer group and parallel degree that was used by the SQLs. Right click on any row and
select Show Details for more details on any executed SQL. Once you click on Monitored SQL,
information on execution, queued, and completed SQL is displayed in chronological order,
showing the status, the SQL text, duration, and other information.
From this screen, you have the ability to cancel any running statement by selecting the
statement and clicking on Cancel Execution. This will stop that running SQL statement. For
detailed offline analysis of the SQL statements that are running or ran on this instance, download
the report by clicking Download Report. For a detailed explanation of any particular SQL
statement, select that statement, running or complete, and then select Show Details. This will
show substantial detail about the running transaction.
Now let's look at ADB consumer groups. The priority of user requests in autonomous database
is determined by the database service the user is connected with. Users are required to select a
service when connecting to the database. The service names are in the format database name
followed by low, medium, high, tp, or tpurgent. These services map to the LOW, MEDIUM, HIGH,
TP, and TPURGENT resource manager consumer groups.
As a user, you need to pick the database service based on your performance and concurrency
requirements. When creating a user, the administrator assigns the user the connection
information corresponding to the resources they want the user to have. This is how you manage
available resources for users. An administrator can change the connection information for a user
to change the resources available for that user.
The following guidance distinguishes the services and helps determine which one to use.
TPURGENT: the highest priority application connection service, for time-critical transaction
processing operations. This connection service supports manual parallelism. TP: a typical
application connection service for transaction processing operations. This connection service
does not run with parallelism. HIGH: a high priority application connection service for reporting
and batch operations. All operations run in parallel and are subject to queuing. MEDIUM: a
typical application connection service for reporting and batch operations. All operations run in
parallel and are subject to queuing. LOW: the lowest priority application connection service for
reporting or batch processing operations. This connection service does not run with parallelism.
Notice that each service differs in how parallelism is allocated for transactions, with some being
manual, some being hard coded, and some pegged to the CPU count allocated to the
autonomous environment. Also notice that two of the services are for Autonomous Transaction
Processing only. If the total number of concurrent users or overall concurrency is an important
factor, note that concurrency is limited on certain Autonomous Data Warehouse services.
To check which consumer group you're connected to, run a select statement against
V$RSRC_CONSUMER_GROUP, which, along with your consumer group information, will also list
other resources consumed by each group. The first statement provides information on the
resource allocations of your connected session, and the second statement provides information
on all the groups listed in the IN clause.
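A sketch of the two queries described; the column selection is illustrative:

    -- Consumer group your current session is connected with.
    SELECT sid, resource_consumer_group
    FROM   v$session
    WHERE  sid = SYS_CONTEXT('USERENV', 'SID');

    -- Resource usage for all of the predefined groups named in the IN clause.
    SELECT name, active_sessions, queue_length, consumed_cpu_time
    FROM   v$rsrc_consumer_group
    WHERE  name IN ('HIGH', 'MEDIUM', 'LOW', 'TP', 'TPURGENT');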
Runaway query limits can be set using the ADB Service Console or the CS_RESOURCE_MANAGER
package. Users can specify the following limits for consumer groups: statement runtime greater
than a value in seconds, and amount of IO greater than a value in megabytes. Statements
exceeding the limits will be canceled; sessions stay connected.
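A sketch of setting a runaway limit through the package. The procedure and parameter names
below follow the current documentation and may not apply to the exam-era version, so treat
them as an assumption:

    -- Cancel HIGH-group statements running longer than 600 seconds
    -- (procedure and parameter names per current docs; an assumption here).
    BEGIN
      CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
        consumer_group     => 'HIGH',
        elapsed_time_limit => 600);
    END;
    /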
To change the parameters of any service, proceed to the Service Console as described in previous
modules. Once in the Service Console, select Administration on the left-hand side. On the
Administration page, select Set Resource Management Rules. A window pops up that lets you
change the parameters defined for each of the predefined groups. From the consumer group
dropdown, select which group you want to modify.
In this case, we're updating the runaway criteria, which will limit how long a transaction can run
or how much IO it can consume before it is terminated. In this example, the transaction can run
for 600 seconds and then will be terminated. Make the change and click Save Changes. After you
save the change, any transaction that reaches a 600 second mark will automatically terminate,
and the user will receive a transaction terminated error message.
Here is what we covered today. We went over how to monitor ADB using the OCI Service
Console, and we went over how to use consumer groups in Autonomous Database.
Let's get started. As was covered in provisioning and connectivity, here are some of the
supported connections to the Autonomous Data Warehouse. You should already know about
the credential wallet file from the section on provisioning and connectivity, as you will be using
it in various types of connections we will cover. The Oracle Call Interface (OCI), Open Database
Connectivity (ODBC), and Java Database Connectivity (JDBC) OCI drivers all have specific
requirements in order to connect to the Autonomous Data Warehouse.
First, you will need the credential wallet file and have it unzipped, as the Oracle Net Services
cannot read the zip file. In your sqlnet.ora you need to set the WALLET_LOCATION variable to
point to the wallet file. When the WALLET_LOCATION is set, then Oracle Net Services allows the
application to use the wallet file transparently.
Also, the TNS_ADMIN environment variable needs to point to the credentials file location. Both
of these settings allow connections to use the specific configuration files inside the wallet when
they connect; see the sketch after this paragraph. Oracle Client 11.2.0.4 or higher is required in
order to have the right drivers to connect to the Autonomous Data Warehouse.
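A minimal sketch of those two settings; the wallet directory path is a placeholder:

    # sqlnet.ora -- point WALLET_LOCATION at the unzipped wallet directory.
    WALLET_LOCATION = (SOURCE = (METHOD = file)
                       (METHOD_DATA = (DIRECTORY = "/home/oracle/wallet")))
    SSL_SERVER_DN_MATCH = yes

    # Shell: point TNS_ADMIN at the same directory.
    export TNS_ADMIN=/home/oracle/wallet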
.NET application requirements vary depending on how the application is deployed. If you're
using Visual Studio, you will need to download and install the ODT and managed ODP.NET files.
If you're using NuGet, then you can use the NuGet package manager to download and install the
ODT and managed ODP.NET files.
If you're using unmanaged ODP.NET, or the Oracle Universal Installer version of Oracle Data
Access components, then you will need to download the appropriate ODAC files instead of the
ODT files.
Once the appropriate files have been installed, you will need to set the WALLET_LOCATION in
the sqlnet.ora to point to where you extracted the wallet file. Depending on the version of
Windows, you may need to enable TLS 1.2.
Depending on how your JDBC or UCP connections are set up, you will need to do some of the
following. You will need JDK 8 or higher, specifically JDK 8u161 or higher, installed. Also, there
are specific JAR files that need to be in place. If you're using the JDBC Thin Driver 12.2.0.1
(ojdbc8.jar), then you will need the 12.2.0.1 ucp.jar. If you're using the patched JDBC Thin Driver
12.1.0.2 (ojdbc7.jar), then you will need the 12.1.0.2 ucp.jar.
Once you have the correct drivers installed, some configuration is needed. If the TNS_ADMIN
variable is set, copy the ojdbc.properties file from your wallet directory to the TNS_ADMIN
directory, and set oracle.net.wallet_location in the ojdbc.properties file. This will allow you to
use just the service name in the JDBC URL.
If TNS_ADMIN is not set, then you would need to specify TNS_ADMIN as part of the URL and
point it to your wallet directory. What this allows is for you to specify a service name that is in
your tnsnames.ora instead of having to fully spell out all the connection information in the JDBC
URL.
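A sketch of the two URL styles; the service name dbname_high and the wallet path are
placeholders, and the TNS_ADMIN URL parameter requires a sufficiently recent JDBC Thin driver:

    # With TNS_ADMIN set in the environment, the service name alone is enough.
    jdbc:oracle:thin:@dbname_high

    # Without it, pass TNS_ADMIN as a URL parameter pointing at the wallet directory.
    jdbc:oracle:thin:@dbname_high?TNS_ADMIN=/home/oracle/wallet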
If you need to use the HTTP proxy to get to your Autonomous Data Warehouse, then you will
need to edit the tnsnames.ora file and add the parameters https_proxy and https_proxy_port.
This will also require a JDBC Thin Client 18.1 or higher.
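A sketch of a tnsnames.ora entry with the proxy parameters added; host names, ports, and the
certificate DN are placeholders:

    dbname_high = (description=
      (address=(https_proxy=proxy.example.com)(https_proxy_port=80)
               (protocol=tcps)(port=1522)(host=adb.example.oraclecloud.com))
      (connect_data=(service_name=example_dbname_high.adb.oraclecloud.com))
      (security=(ssl_server_cert_dn="CN=adb.example.oraclecloud.com")))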
Let's get started with building analytic views. Now that we have looked at how to connect
applications to the Autonomous Data Warehouse, let's see how we can leverage the ADW using
analytic views. Oracle analytic views are new as of Oracle Database 12.2 and provide some
unique functionality. Analytic views allow complex business models to be presented through a
view, which greatly simplifies queries while improving performance.
Analytic views allow business logic to be stored in the database instead of across a variety of
tools, giving more consistency to your reporting. They allow simple SELECT statements against
views instead of massive, complex queries. Also, since the logic is stored in the database, the
smart query transformation engine can help with the performance of the view.
Here you can see just how powerful the analytic view can be. Notice it's not just tables in your
database that can be used in an analytic view. The analytic view can be built over a variety of
sources, allowing for simple SQL and MDX access to very complex data sets. Your applications
can then query the analytic views instead of trying to code all of the various sources.
Another way of thinking about analytic views is to compare them with the standard relational
views. While you can certainly create a standard view that returns just about any data,
aggregation, calculations, and so on, let's look at what is easy and what comes for free with each.
Both can select from tables, other views, external tables, and so on. Both can join dimensions
and fact tables.
Only analytic views define calculations using business model based syntax that works at any level
of aggregation and across any dimension; include system generated hierarchical columns that
simplify SQL generation; automatically return multiple levels of aggregation without multiple
passes, unions, or other complicated SQL; automatically expand filters to access data for prior
or future periods, parents, ancestors, children, and descendants; automatically densify time
hierarchies for time series calculations; and automatically order calculations.
Here is a listing of the various PL/SQL packages available to be used with analytic views, along
with the permissions for analytic views and the data dictionary views for analytic views. The
great part of analytic views is that they have descriptive metadata. You have the ability to have
descriptive names and measure formatting, and the metadata is user and application extensible.
So not only are you getting consistent calculations, but descriptions and formats as well.
Let's take a look at an example. If you wanted to look at health insurance coverage rates in the
United States, the analysis could include coverage rates by time, counties, and states. It would
need to allow for geographic comparisons, measuring improvement over time, and interactive
data visualization tools for end users.
With analytic views, the application could be quite simple-- just five SQL statements using two
hierarchies and one analytic view. Data could be leveraged by most any application, and it's all
stored in the database. Let's take a look at one of the SQL statements in more detail. First, the
fact data is selected from the analytic view. So we pull percent insured from the insured_av.
Notice we didn't have to specify how to do that calculation. Also, that data is not stored. It's
calculated. If you have ever worked with hierarchies, notice we just specified the dimensions for
the query without joining or grouping by. Again, this makes it very simple to state the levels at
which you want the data returned; we just named the time and geography hierarchies in the
query.
The member_name syntax makes it much easier to select instead of trying to figure out the
various columns you would need from the hierarchies. Instead of trying to do various group by
and joins, you can just specify the level in the WHERE clause. Since the data is calculated and not
stored, you can quickly adjust the level and let the Autonomous Data Warehouse quickly
determine the new values.
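A sketch of what such a statement might look like. The view name insured_av, the hierarchy
names, and the pct_insured measure are illustrative stand-ins modeled on the LiveSQL tutorial,
not the exact objects from the slide:

    -- Query an analytic view with two hierarchies; all names are hypothetical.
    SELECT time_hier.member_name      AS time_period,
           geography_hier.member_name AS geography,
           pct_insured
    FROM   insured_av HIERARCHIES (time_hier, geography_hier)
    WHERE  time_hier.level_name      = 'YEAR'
    AND    geography_hier.level_name = 'STATE'
    ORDER  BY time_hier.hier_order, geography_hier.hier_order;

Drilling down is then just a matter of changing the level_name values in the WHERE clause,
which is the point the transcript makes about passing parameters instead of rewriting SQL.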
If you wanted to drill further, you just specify a different level instead of rewriting the SQL
statement, making parameters much easier to pass in. No longer do you need to know the
format for a calculation. You can just select the column, and the database will process the data
accordingly and return the result.
If you want to dive deeper and get a chance to try our analytic views, you can go to
LiveSQL.Oracle.com and try out the tutorials yourself.
How to use machine learning. One of the features that comes with the Autonomous Data
Warehouse is Oracle Machine Learning. Let's look at how you can use Oracle Machine Learning.
Oracle Machine Learning, or OML, is a SQL notebook interface for data scientists to perform
machine learning in Oracle Autonomous Data Warehouse. OML is a good tool that allows
collaboration on building, evaluating, and deploying predictive models and analytical
methodologies.
In the Autonomous Data Warehouse, changes made by one user are immediately updated for
other team members, allowing for simultaneous collaboration. There are 30-plus parallel and
scalable built-in machine learning algorithms available to be used out of the box.
When using Oracle Machine Learning, there is a set of steps that need to be completed in order.
Some of these steps are done behind the scenes with defaults. But they can also be done
manually. We will go through these in more detail.
The first step is to create a user login. This can be done in the service console of your
Autonomous Data Warehouse. You have the ability to log in as admin, but you won't really be
able to use OML as admin, since the admin user cannot create notebooks. The best approach is
to create a new user to log in as.
After creating the user, you will need to log in to Oracle Machine Learning. The first option is
to click on the button on the same screen you used to create and manage OML users. The second
is via the email you received when you created your OML account. The third would be if your
admin provided you with a direct link to OML.
A workspace is an area where you can store your projects. Each workspace can be shared with
other users, so they can collaborate with you. For collaboration with other users, you can
provide different levels of permission such as viewer, developer, and manager. You can create
multiple workspaces. By default, you will get a workspace created for you.
You can manage your workspaces as well as permissions via the drop-down menu. When
managing workspaces, you can edit them and change the name or add a comment. You can
delete a workspace, or you can change the permissions on it.
When managing workspace permissions, you select a user and then give them one of three roles.
They can be a manager, with the ability to access the workspace and create, update, and delete
projects and notebooks; a developer, with the ability to access workspaces and projects and
create, update, and delete notebooks; or a viewer, with read access to workspaces, projects, and
notebooks. Keep in mind that any changes made in the workspace, project, or notebook will be
reflected immediately.
A project is a container for organizing your notebooks. You can create multiple projects.
Permissions cannot be assigned at the project level, only at the workspace level. By default, you
will get a project created for you. You can manage your projects from the drop-down menu.
When creating a project, you can give it a name and choose a workspace for it to reside in, as
well as a comment. The project will inherit the permissions from the workspace it's placed in,
and can only reside in one workspace.
Oracle Machine Learning uses Zeppelin notebooks, a collaborative interface where you can write
code, equations, and text, create visualizations, and perform data analytics. Notebooks work
with interpreters in the back end. In Oracle Machine Learning, notebooks are available in
projects, where you can create, edit, delete, and even save notebooks as templates.
There are two ways to create a notebook. First, you can just click on Run SQL Statements or
Scripts. This will give you a scratch pad to get in and start using OML. When you are done, you
will be able to save them by going under Notebooks, and drilling in, and clicking Save. The second
way to create notebooks is to click on Notebooks and then Create.
A notebook is separated into two sections. The top is the editor and the bottom is the output
window. The editor is where you put your SQL statements and run them. The output window is
for results of the SQL statement. You have the ability to create multiple editors and result
windows in a notebook. This comes in handy when looking at multiple data sets.
At the top of the editor, there is a bar with multiple icons on it. This bar allows you to run this
statement, hide the code editor, hide the output screen, clear the output, clear the notebook,
and export the notebook. In the output window, there is a bar that has several options available.
You can change the format of the output between table, bar chart, pie chart, area chart, line
chart, and scatter chart. Also, you can export the output to CSV or TSV.
When working with a notebook, you can export it to a JSON file and move it to another database
and import it. If you click on the Export button, you will get prompted to save the JSON files.
Save this to a location on your computer.
To import a notebook, go to your notebooks and click on the Import button. Find the JSON file
that contains your notebook. When finished, you will see that the notebook was imported
successfully, and it will appear in the window.
Now let's look at Analytics Cloud and Data Visualization. Two of the most common tools used
with the Autonomous Data Warehouse are Oracle Analytics Cloud and Data Visualization
Desktop. Let's take a look at how to set up these tools.
Oracle Analytics Cloud provides the industry's most comprehensive cloud analytics in a single
unified platform, including everything from self-service visualization and powerful in-line data
preparation to enterprise reporting, advanced analytics and self-service analytics that deliver
proactive insights. With support for more than 50 data sources and an extensible open
framework, Oracle Analytics Cloud gives you a complete, connected, collaborative platform that
brings the power of data and analytics to every process interaction and decision in every
environment-- cloud, on-premises, desktop, and data center.
The confusion comes in when someone asks, what is the difference between Oracle Analytics
Cloud and Data Visualization Desktop? Data Visualization Desktop, or DVD, is the desktop
extension of the Oracle Analytics Cloud. If you want to quickly try out Oracle Analytics or run it
isolated from others in the organization, Data Visualization Desktop is a good first step. If you're
looking to build an enterprise solution, Oracle Analytics Cloud would be a better approach, and
then leverage DVD where it makes sense.
Here are the requirements for a machine running DVD. A few things to take note of: you need
to run a machine with Windows or macOS (Sierra). Currently, DVD is not supported on other
operating systems like Linux. Also, the user will need admin privileges to install the application.
When using DVD, you may need additional memory and disk space as you work with larger data
sets; these are just the minimums.
To create an Oracle Analytics Cloud instance, click on Analytics, which will bring you to the
Analytics Cloud page. Then click on Create Instance. Creating an Oracle Analytics Cloud instance
is relatively simple. You just need to specify a name for the instance, then which region you want
to run it in. It's best to keep the region the same as where your data is stored, so you're not
traversing the network each time you run a query. Then select if you have licenses or need
licenses as part of the service.
The next two options we'll cover further in the next few slides. But they are edition and feature
set. Then specify the number of OCPUs you need. That's it. And you will have the Oracle Analytics
Cloud instance.
When choosing an edition, here are a few things to keep in mind when it comes to feature sets.
Standard Edition includes just Data Visualization. Essbase includes both Data Visualization and
Essbase. Enterprise edition includes Data Visualization, Essbase, and Business Intelligence.
You can see here highlighted what is included in each feature set. Data Visualization gives you
access to prepare and explore your data across your departments or organization. Business
Intelligence includes everything Data Visualization has, and then adds additional advanced
features like pixel-perfect reports, Day by Day, enterprise data models, and many others.
Essbase focuses on scenario-based and what-if analysis.
For a full listing of features, or if you're interested in learning more about these, visit the Oracle
Analytics Cloud documentation.
When you are on the Oracle Analytics Cloud instance page, you will see your instance. If you
click on the name, you will get the overview screen, which will have some important information
about your instance. However, if you're looking to log in to your instance, you will want to go to
the menu on the right and choose Oracle Analytics Cloud URL. This will bring up your Oracle
Analytics Cloud home page.
Let's look at connecting to Analytics Cloud and Data Visualization Desktop. Now that the tools
have been set up, let's look at how we can connect them to Autonomous Data Warehouse. On
the left is a screenshot of Oracle Analytics Cloud, and on the right is Data Visualization Desktop.
Once both products are installed and running, you can see from the Create menu, both tools
have some of the same functionality and look.
When connecting to the Autonomous Data Warehouse, we will use the Create Connection, Data
Set, and Project Features. Since the screens and workflows for what we will do will be almost
identical, we're just going to look at OAC. You can use the same steps for DVD.
To create a connection, click on the Create button, and then click on Connection. You will see
there are many more sources than just the Autonomous Data Warehouse, but for today, click
on Autonomous Data Warehouse. There aren't many settings needed to create a connection.
First, specify a name for the connection; you will use it when pulling objects from the
Autonomous Data Warehouse instance. Then provide the client credentials. If you already
extracted the wallet, just specify the cwallet.sso. If you still have the zipped wallet file, you would
just pull the cwallet.sso from there.
Then supply the username and password to a user that has access to the objects in the database.
For service name, the drop-down will be populated from the service names available. Choose a
correct service name based on what you learned from the monitoring module.
To import objects from the Autonomous Data Warehouse into Oracle Analytics Cloud, click on
Create Data Set. Then select the connection you created earlier. If you skipped the step on
creating a connection, click on the Create Connection button and follow the screens to create it.
Then continue.
After selecting your connection, you will be presented with all the schemas your user has access
to. Click on a schema, and you will see a list of tables under the schema your user has access to.
If you don't see a specific schema or table, then the user you specified in your connection needs
to be granted access to that object.
Once you select a table, you will be presented with the columns on the left. You can select
specific columns, or click the Add All button to select all the columns. You will see the columns
get moved to the right side. If you wanted all but one column, then you can add all, then go to
the right side and select the specific column, and choose Remove Selected.
At the top of the window, you will see a name box. This is where you specify the name of the
object you want created in Oracle Analytics Cloud. It can be different from the table name, and
should be meaningful based on what it represents. Once you name it, click Add. After adding the
object, you will get it added to the Data Elements window. Each of the columns will be on display
at the top. And at the bottom will be the details about the specific column.
Remember, when working with number data types, you need to review the Treat As attribute.
If it's set to Measure, then it can have functions applied, but it cannot be used in joins. So if it's
an ID column, make sure to change the Treat As attribute or change the data type. Once you
have done that, you're all set to use it like any other element in Oracle Analytics Cloud.
Here is what we covered today. We started by describing how to use ADW with third party
software; then describing how to build analytic views and explaining how to use Oracle Machine
Learning; and finally, we explained how to install and configure Oracle Analytics Cloud and Data
Visualization Desktop, and how to connect them to the Autonomous Data Warehouse service.
8. Certification Exam
8.1. Topics
# | Exam Objective | Concepts Possibly Tested in Section
1 | Cloud Concepts
• Understand terms like scalability, elasticity, agility, fault tolerance, disaster recovery, CapEx, and OpEx
• Understand and differentiate between IaaS, PaaS, and SaaS
• Understand public, private, and hybrid cloud models
• Reduce TCO with migrations (on-prem to cloud without re-architecting)
2 | Getting Started with Oracle Cloud Infrastructure
• Describe the key features and components of OCI
• Regions (multi-AD, one-AD), ADs, fault domains, Console, API, CLI, SDKs
3 | Core Oracle Cloud Infrastructure Services
• Understand core OCI services: IAM, Compute, Network, Storage, and Database
• Understand cloud native services: OKE, OCIR, Fn, Streaming, API Gateway
4 | Security and Compliance
• OCI security services (VCN, SL, NSG, WAF, Identity, MFA, KMS, Data Safe, Audit) and compliance
5 | Pricing, Support and Operations
• OCI pricing model: understand OCI subscription models, pricing calculator, Free Tier, and the OCI SLA
• OCI operations: OCI concepts like budget, quota, limits, and support
QUESTION 1
What are two advantages of using Data Pump to migrate your Oracle Databases to Autonomous
Database? (Choose two.)
A. Data Pump can exclude migration of objects like indexes and materialized views that are
not needed by Autonomous Database. (*)
B. Data Pump is platform independent - it can migrate Oracle Databases running on any
platform. (*)
C. Data Pump is faster to migrate database than using RMAN.
D. Data Pump creates the tablespaces used by your Autonomous Database.
Reference:
https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/load-data-
data-pump.html#GUID-30DB1EEA-DB45-49EA-9E97-DF49A9968E24
QUESTION 2
The default eight-day retention period for Autonomous Database performance data can be
modified using which DBMS_WORKLOAD_REPOSITORY subprogram procedure?
A. UPDATE_OBJECT_INFO
B. MODIFY_SNAPSHOT_SETTINGS (*)
C. CREATE_BASELINE_TEMPLATE
D. MODIFY_BASELINE_WINDOW_SIZE
Reference:
https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-
cloud/user/monitorperformance-intro.html#GUID-4A1E8FCA-FAF1-43F5-93E0-559C0145D6C3
QUESTION 3
Reference: https://www.oracle.com/database/what-is-autonomous-database.html
QUESTION 4
Which three statements are true about procedures in the DBMS_CLOUD package? (Choose
three.)
A. The DBMS_CLOUD.PUT_OBJECT procedure copies a file from Cloud Object Storage to the
Autonomous Data Warehouse.
B. The DBMS_CLOUD.CREATE_CREDENTIAL procedure stores Cloud Object Storage credentials
in the Autonomous Data Warehouse database. (*)
C. The DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE procedure validates the source files for an
external table, generates log information, and stores the rows that do not match the format
options specified for the external table in a badfile table on Autonomous Data Warehouse. (*)
D. The DBMS_CLOUD.DELETE_FILE procedure removes the credentials file from the
Autonomous Data Warehouse database.
E. The DBMS_CLOUD.CREATE_EXTERNAL_TABLE procedure creates an external table on files
in the cloud. You can run queries on external data from the Autonomous Data Warehouse. (*)
Reference:
https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/dbmscloud.
html#GUID-930632E1-B7BF-4ECA-8F78-5E5A205C0865
QUESTION 5
https://www.oracle.com/database/technologies/datawarehouse-bigdata/adb-
faqs.html#ARCHITECTURE-BOOKMARK
QUESTION 6
Which two statements are true with regards to Oracle Data Sync? (Choose two.)
A. Data Sync can connect to any jdbc compatible source like MongoDB, RedShift and Sybase.
B. Data Sync can use a normal OCI (thick) client connection to connect to an Oracle database.
C. Data Sync can load your data in parallel in order to speed up the loading process.
D. Data Sync has default drivers available that support loading data from DB2, Microsoft SQL
Server, MySQL and Teradata.
Reference:
https://www.oracle.com/technetwork/middleware/bicloud/downloads/bicsdatasync2-3-
readme-3715844.pdf (4)
QUESTION 7
Which statement is false about Autonomous Database Oracle Client Credentials (Wallets)?
A. In addition to the Oracle Client Credential Wallet, a user must have a username and password
in order to connect to the Autonomous Database.
B. The Oracle Client Credential file is downloaded as a ZIP file.
C. The Wallet for the Autonomous Database is the same as the Transparent Data Encryption
(TDE) wallet. (*)
D. You MUST have an Oracle Client Credential Wallet in order to connect to the Autonomous
Database.
QUESTION 8
What is the predefined role that exists in Autonomous Database that includes common
privileges that are used by a Data Warehouse developer?
A. ADBDEV
B. ADMIN
C. DWROLE (*)
D. ADWC
Reference: https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-
cloud/user/manage-usersprivileges.html#GUID-50450FAD-9769-4CF7-B0D1-EC14B465B873
QUESTION 9
Which two system privileges does a user need to create analytic views? (Choose two.)
A. CREATE ANALYTIC MEASURE
B. CREATE ANALYTIC LEVEL
C. CREATE ANALYTIC HIERARCHY
D. CREATE ANALYTIC VIEW (*)
E. CREATE ATTRIBUTE DIMENSION (*)
Reference: https://docs.oracle.com/en/database/oracle/oracle-database/19/dwhsg/overview-
analyticviews.html#GUID-6F948948-6AE6-4A89-8AAC-5B8952CEF41D
QUESTION 10
What are three methods to load data into the Autonomous Database? (Choose three.)
A. Oracle GoldenGate (*)
B. Transportable Tablespace
C. RMAN Restore
D. Oracle Data Pump (*)
E. SQL*Loader (*)
Reference: https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-
cloud/user/loaddata.html#GUID-1351807C-E3F7-4C6D-AF83-2AEEADE2F83E