Oracle Cloud
Using Oracle Autonomous Data Warehouse
E85417-50
September 2019
Copyright © 2018, 2019, Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify,
license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means.
Reverse engineering, disassembly, or decompilation of this software, unless required by law for
interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software,
any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are
"commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-
specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the
programs, including any operating system, integrated software, any programs installed on the hardware,
and/or documentation, shall be subject to license terms and license restrictions applicable to the programs.
No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of
their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron,
the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro
Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Preface
Audience x
Documentation Accessibility x
Related Documents x
Conventions xi
Prepare for JDBC Thin Connections 2-8
Using Applications with Support for Wallets 2-8
Download Client Credentials (Wallets) 2-9
Connect to Autonomous Data Warehouse Using Oracle Database Tools 2-10
Connect with Oracle SQL Developer (18.2 or later) 2-10
Connect with Oracle SQL Developer (earlier than Version 18.2) 2-12
Connect with SQL*Plus 2-14
Connect with Oracle SQLcl Cloud Connection 2-15
Connect with Built-in SQL Developer Web 2-16
About SQL Developer Web 2-16
Access SQL Developer Web as ADMIN 2-17
Provide SQL Developer Web Access to Database Users 2-18
JDBC Thin Connections and Wallets 2-20
JDBC Thin Driver Connection Prerequisites 2-20
Using a JDBC URL Connection String with JDBC Thin Driver 2-21
Using a JDBC Connection with 18.3 JDBC Driver 2-22
Connecting Using JDBC Thin Driver 12.2 or Older 2-24
JDBC Thin Connections with an HTTP Proxy 2-26
Oracle Call Interface (OCI) Connections and Wallets 2-27
Predefined Database Service Names for Autonomous Data Warehouse 2-27
Connect with Oracle Data Visualization Desktop 2-28
Connect with Oracle Analytics Cloud 2-28
Connect Applications to Autonomous Data Warehouse 2-28
Connect with Microsoft .NET and Visual Studio 2-28
Connect with JDBC Thin Driver and UCP 2-29
Connect with Python, Node.js, and other Scripting Languages 2-29
Connect with Oracle Cloud Infrastructure FastConnect 2-32
Access Autonomous Data Warehouse with Service Gateway 2-32
Access Autonomous Data Warehouse with VCN Transit Routing 2-33
Restrict Access Using a Network Access Control List 2-33
Enable and Disable Application Continuity 2-33
Using Database Links with Autonomous Data Warehouse 2-35
Create Database Links 2-35
Drop Database Links 2-38
Load Data – List Credentials 3-5
Load Data – Delete Credentials 3-5
Import Data Using Oracle Data Pump on Autonomous Data Warehouse 3-6
Export Your Existing Oracle Database to Import into Autonomous Data Warehouse 3-6
Import Data Using Oracle Data Pump Version 18.3 or Later 3-7
Import Data Using Oracle Data Pump (Versions 12.2.0.1 and Earlier) 3-9
Access Log Files for Data Pump Import 3-10
Use Oracle GoldenGate to Replicate Data to Autonomous Data Warehouse 3-11
Load Data from Local Files Using SQL*Loader 3-11
Managing DML Performance and Compression 3-11
Access Oracle Application Express App Builder 7-5
Use Web Services with Oracle Application Express 7-6
Send Email from Oracle Application Express 7-7
Restrictions and Limitations for Oracle Application Express with Autonomous Data Warehouse 7-8
Add Existing Database User Account to Oracle Machine Learning 11-7
About Work Requests 16-1
Managing Partitions, Indexes, and Materialized Views B-6
Restrictions for Database Features B-6
Restrictions for Oracle XML DB B-6
Restrictions for Oracle Text B-7
Restrictions for Oracle Spatial and Graph B-8
Restrictions for Oracle Application Express B-9
List of Restricted and Removed Oracle Features B-10
Preface
This document describes how to manage, monitor, and use Oracle Autonomous Data
Warehouse and provides references to related documentation.
Audience
This document is intended for Oracle Cloud users who want to manage and monitor
Oracle Autonomous Data Warehouse.
This document is also intended for developers and end users who want to load and
query data in Oracle Autonomous Data Warehouse.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Related Documents
Autonomous Data Warehouse is built using Oracle Database 18c. Many database
concepts and features of this service are further documented here:
Oracle Database 18c
For additional information, see these Oracle resources:
• Getting Started with Oracle Cloud
• Oracle Cloud Infrastructure Object Storage Documentation
• Oracle Data Integration Platform Cloud
• Oracle Cloud Infrastructure Object Storage
• GoldenGate Real-Time Data Replication in Cloud
• Using Oracle GoldenGate Cloud Service
• Getting Started with Oracle Analytics Cloud
• User’s Guide for Oracle Data Visualization Desktop
Conventions
The following text conventions are used in this document:
Convention   Meaning
boldface     Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic       Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace    Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
Part I
Using Autonomous Data Warehouse
This part provides information on using Autonomous Data Warehouse.
Topics
in a public cloud. Oracle Analytics Cloud and other Oracle Cloud services provide
support for Autonomous Data Warehouse connections.
Autonomous Data Warehouse is a completely elastic service. When you get started with Autonomous Data Warehouse, you simply specify the number of OCPUs and the storage capacity in TBs for the data warehouse. At any time, you can scale either the OCPUs or the storage capacity up or down. When you make resource changes for your Autonomous Data Warehouse, the data warehouse resources automatically shrink or grow, without requiring any downtime or service interruptions.
Autonomous Data Warehouse includes a cloud-based service console for managing
the service (for tasks such as stopping, starting, or scaling the service), and monitoring
the service (for tasks such as viewing the recent levels of activity on the data
warehouse).
Autonomous Data Warehouse also includes the following:
• Oracle Application Express (APEX): a low-code development platform that
enables you to build scalable, secure enterprise apps with world-class features.
• Oracle REST Data Services (ORDS): a Java Enterprise Edition based data service
that makes it easy to develop modern REST interfaces for relational data and
JSON Document Store.
• Oracle SQL Developer Web: a browser-based interface of Oracle SQL Developer.
• Oracle Machine Learning: a cloud-based notebook application which provides
simple querying, data-visualization, and collaboration capabilities. The notebook is
designed to be used alongside other business intelligence applications.
You can use Autonomous Data Warehouse with Oracle Data Visualization Desktop to
easily create visualizations and projects that reveal trends in your company’s data and
help you answer questions and discover important insights about your business.
The following figure shows the Autonomous Data Warehouse architecture.
[Figure: Autonomous Data Warehouse architecture. Client tools and services, including Oracle Data Visualization Desktop, SQL Developer, the Service Console, Oracle Machine Learning, SQL Developer Web, REST Data Services, Application Express, and third-party BI tools (on Oracle Cloud Infrastructure or on-premises), connect to the Autonomous Database. Data integration services on Oracle Cloud Infrastructure, Oracle Data Integration Platform Cloud, and third-party data integration tools (on Oracle Cloud or on-premises) load data through Oracle Object Storage Cloud, which holds data files for loading and external access.]
Key Features
• Managed: Oracle simplifies end-to-end management of the data warehouse:
– Provisioning new databases
– Growing or shrinking storage and compute resources
– Patching and upgrades
– Backup and recovery
• Fully Tuned: “Load and go”:
– Define tables, load data, run queries
– Provides good performance out of the box
– Run your queries using any business analytics tool or cloud service
– Built-in SQL worksheet and notebook also included
• Fully elastic scaling: Scale compute and storage independently to fit your data
warehouse workload with no downtime:
– Size the Autonomous Data Warehouse to the exact compute and storage
required
– Scale the Autonomous Data Warehouse on demand: Independently scale
compute or storage
– Shut off idle compute to save money
• Auto scaling: Enable auto scaling to allow Autonomous Data Warehouse to use
more CPU and IO resources automatically when the workload requires it:
– Specify the number of OCPUs for Autonomous Data Warehouse workloads.
– Enable auto scaling to allow the database to use up to three times more CPU
and IO resources depending on workload requirements.
– Enable auto scaling when you provision an Autonomous Data Warehouse
instance or using Scale Up/Down on the Oracle Cloud Infrastructure Console.
• Autonomous Data Warehouse supports:
– Existing applications, running in the cloud or on-premises
– Connectivity via SQL*Net, JDBC, ODBC
– Third-party data-integration tools
– Oracle cloud services: Oracle Analytics Cloud, Oracle GoldenGate Cloud
Service, Oracle Integration Cloud Service, and others
• High-performance queries and concurrent workloads: Optimized query
performance with preconfigured resource profiles for different types of users.
• Oracle SQL: Autonomous Data Warehouse is compatible with existing
applications that support Oracle Database.
• Built-in web-based data analysis tool: Web-based notebook tool for designing and sharing SQL-based, data-driven, interactive documents.
• Database migration utility: Easily migrate from Amazon AWS Redshift, SQL
Server, and other databases.
account details mailed to a new user. See Work with Oracle Machine Learning for
Data Access, Analysis, and Discovery for details on using the OML application.
Note:
Both SH and SSB are provided as schema-only users, so you cannot unlock
or drop those users or set a password. The storage of the sample data sets
does not count towards your database storage.
For more information on the SH schema see Sample Schemas and Schema
Diagrams.
For more information on database services, see Predefined Database Service Names
for Autonomous Data Warehouse.
Autonomous Data Warehouse uses strong password complexity rules for all users
based on Oracle Cloud security standards. For more information on the password
complexity rules see Create Users with Autonomous Data Warehouse.
You can further restrict connections by specifying a network Access Control List
(ACL). By specifying a network ACL, a specific Autonomous Data Warehouse database accepts connections only from addresses on the ACL and rejects all other client connections. See Set Access Control List with Autonomous Data Warehouse for
more information.
Quickstart Tutorials
Provides links to the Autonomous Data Warehouse quickstart tutorials.
2
Connecting to Autonomous Data Warehouse
Describes methods to securely connect to Autonomous Data Warehouse.
Topics
[Figure: A client computer connects to Oracle Autonomous Data Warehouse using either Oracle Call Interface (OCI) or JDBC Thin; each connection type uses a wallet/keystore.]
my_adwc_high = (description=
     (address=(protocol=tcps)(port=1522)(host=adwc.example.oraclecloud.com))
     (connect_data=(service_name=example_high.adwc.oraclecloud.com))
     (security=(ssl_server_cert_dn="CN=adwc.example.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))
Your firewall must allow access to servers within the .oraclecloud.com domain using
port 1522. To connect to Autonomous Data Warehouse, depending upon your
organization's network configuration, you may need to use a proxy server to access
this port or you may need to request that your network administrator open this port.
Note:
By default Application Continuity is disabled.
Topics
• About Connecting to Autonomous Data Warehouse Using a Client Application
• Prepare for Oracle Call Interface (OCI), ODBC, and JDBC OCI Connections
• Prepare for JDBC Thin Connections
• Using Applications with Support for Wallets
Prepare for Oracle Call Interface (OCI), ODBC, and JDBC OCI
Connections
Preparing for any type of Oracle Call Interface (OCI) connection requires installing client software, downloading client credentials, and configuring certain files and environment variables.
To (UNIX/Linux example):

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/adwc_credentials")))

To (Windows example):

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="D:\myapp\adwc_credentials")))
5. Create the TNS_ADMIN environment variable and set it to the location of the
credentials file.
Note:
Use this environment variable to change the directory path of Oracle Net
Services configuration files from the default location of ORACLE_HOME
\network\admin to the location of the secure folder containing the
credentials file you saved in Step 2. Set the TNS_ADMIN environment
variable to the directory where the unzipped credentials files are, not to
the credentials file itself.
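For example, a sketch of setting the variable; the wallet directory paths are placeholders:

Linux/UNIX:
export TNS_ADMIN=/home/myuser/wallet_dbname

Windows:
set TNS_ADMIN=C:\myapp\wallet_dbname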
Note:
Connections through an HTTP proxy are only available with Oracle Client
software version 12.2.0.1 or later.
1. Add the following line to the sqlnet.ora file to enable connections through an
HTTP proxy:
SQLNET.USE_HTTPS_PROXY=on
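After the edit, the sqlnet.ora from the downloaded wallet might look like the following; this is a sketch, where the WALLET_LOCATION and SSL_SERVER_DN_MATCH entries are the ones that ship in the wallet:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY = "?/network/admin")))
SSL_SERVER_DN_MATCH = yes
SQLNET.USE_HTTPS_PROXY = on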
2. Add the HTTP proxy hostname and port to the connection definitions in
tnsnames.ora. You need to add the https_proxy and https_proxy_port
parameters in the address section of connection definitions. For example, the
following sets the HTTP proxy to proxyhostname and the HTTP proxy port to 80;
replace these values with your HTTP proxy information:
ADWC1_high =
     (description=
         (address=
               (https_proxy=proxyhostname)(https_proxy_port=80)
               (protocol=tcps)(port=1522)(host=adwc.example.oraclecloud.com)
         )
         (connect_data=(service_name=adwc1_high.adwc.oraclecloud.com)
         )
         (security=(ssl_server_cert_dn="CN=adwc.example.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")
         )
     )
Note:
Configuring sqlnet.ora and tnsnames.ora for the HTTP proxy may not be
enough depending on your organization's network configuration and security
policies. For example, some networks require a username and password for
the HTTP proxy. In such cases contact your network administrator to open
outbound connections to hosts in the oraclecloud.com domain using port
1522 without going through an HTTP proxy.
2. Copy the entries in the tnsnames.ora file provided in the Autonomous Data
Warehouse wallet to your existing tnsnames.ora file.
Note:
This password protects the downloaded Client Credentials wallet. This
wallet is not the same as the Transparent Data Encryption (TDE) wallet
for the database; therefore, use a different password to protect the Client
Credentials wallet.
Notes:
• Wallet files, along with the database user ID and password, provide access to data in your Autonomous Data Warehouse database. Store
wallet files in a secure location. Share wallet files only with authorized
users. If wallet files are transmitted in a way that might be accessed by
unauthorized users (for example, over public email), transmit the wallet
password separately and securely.
• For better security, Oracle recommends using restricted permissions on
wallet files. This means setting the file permissions to 600 on Linux/Unix.
Similar restrictions can be achieved on Windows by letting the file owner
have Read and Write permissions while all other users have no
permissions.
• Autonomous Data Warehouse uses strong password complexity rules for
all users based on Oracle Cloud security standards. For more
information on the password complexity rules see Create Users with
Autonomous Data Warehouse.
Topics
Download the latest version of Oracle SQL Developer for your platform from the
Download link on this page: Oracle SQL Developer.
To create a new connection to Autonomous Data Warehouse, do the following:
Obtain your credentials to access Autonomous Data Warehouse. For more
information, see Download Client Credentials (Wallets).
1. Start Oracle SQL Developer and in the connections panel, right-click Connections
and select New Database Connection....
Note:
Versions of SQL Developer before 18.2 require that you enter a
Keystore Password. For more information, see Connect with
Oracle SQL Developer (earlier than Version 18.2).
Note:
Versions of SQL Developer starting with 18.2.0 do not require that you
enter a Keystore Password and do not provide this field. For more
information, see Connect with Oracle SQL Developer (18.2 or later).
• Username: Enter the database username. You can either use the default administrator database account (ADMIN) provided as part of the service or create a new schema and use it.
• Password: Enter the password for the database user.
• Connection Type: Select Cloud PDB.
• Configuration File : Click Browse, and select the client credentials zip file.
• Keystore Password: Enter the password generated while downloading the
client credentials from Autonomous Data Warehouse.
See Download Client Credentials (Wallets).
• Service: Enter the service name. The client credentials file provides the
service names.
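To connect with SQL*Plus, invoke it with a database user and a service alias from the wallet's tnsnames.ora; the following is a sketch, where admin and dbname_high are placeholders for your user and service alias:

sqlplus admin@dbname_high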
Enter password:
Last Successful login time: Wed Apr 03 2019 14:50:39 -07:00
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
SQL>
Note:
The Oracle Wallet is transparent to SQL*Plus because the wallet
location is specified in the sqlnet.ora file. This is true for any Oracle
Call Interface (OCI), ODBC, or JDBC OCI connection.
sql -oci
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
SQL>
When connecting using Oracle Call Interface, the Oracle Wallet is transparent to
SQLcl.
SQLcl with a JDBC Thin Connection
To connect using a JDBC Thin connection, first configure the SQLcl cloud
configuration and then connect to the Autonomous Data Warehouse database.
1. Start SQLcl with the /nolog option.
sql /nolog
2. Configure the SQLcl session to use your Oracle Wallet:
SQL> set cloudconfig directory/client_credentials.zip
Wallet Password: **********
3. Connect to the Autonomous Data Warehouse database:
For example:

sql /nolog
SQL> set cloudconfig directory/client_credentials.zip
SQL> connect username@dbname_high
Topics
4. In the SQL Developer Web Sign in page, enter your Username and Password.
5. Click Sign in.
This shows the SQL Developer Worksheet tab.
1. As the ADMIN user, run a PL/SQL block such as the following to enable a schema; this is a sketch of the ORDS_ADMIN.ENABLE_SCHEMA call with placeholder values:

BEGIN
    ORDS_ADMIN.ENABLE_SCHEMA(
        p_enabled => TRUE,
        p_schema => 'schema-name',
        p_url_mapping_type => 'BASE_PATH',
        p_url_mapping_pattern => 'schema-alias',
        p_auto_rest_auth => TRUE
    );
    COMMIT;
END;
/
where:
• schema-name is the database schema name in all-uppercase.
• schema-alias is an alias for the schema name to use in the URL to access
SQL Developer Web.
• p_auto_rest_auth specifies that the REST /metadata-catalog/ endpoint requires authorization. REST uses the metadata-catalog to get a list of published services on the schema. Set this parameter to TRUE.
Note:
Oracle recommends that you use the schema-alias and not the schema name itself to access SQL Developer Web, as described in the following step, as a security measure to keep the schema name from being exposed.
2. After enabling user access for the specified schema, the ADMIN provides a user
with the URL to access SQL Developer Web, as follows:
a. Select the Autonomous Data Warehouse instance.
b. On the instance details page click Service Console.
c. Click Development.
d. Right-click SQL Developer Web and choose Copy URL.
The copied URL is the same as the URL the ADMIN enters to access SQL
Developer Web. For example:
https://dbname.adb.us-ashburn-1.example.com/ords/admin/_sdw/?nav=worksheet
e. To provide a user with access to SQL Developer Web you need to edit the
copied URL to use the alias for the schema specified with the parameter
p_url_mapping_pattern in step 1.
For a user to access SQL Developer Web the part of the copied URL with
"admin" is replaced with the "schema-alias".
For example, after editing you would provide this URL for the user to login:
https://dbname.adb.us-ashburn-1.example.com/ords/schema-alias/_sdw/?nav=worksheet
To access SQL Developer Web a user pastes the URL into their browser and then
enters the schema's Username and Password in the Sign-in dialog.
Topics
dbname_high= (description=
(address=(protocol=tcps)(port=1522)
(host=adw.example.oraclecloud.com))
(connect_data=(service_name=adw_jdbctest_high.oraclecloud.com))
(security=(ssl_server_cert_dn="CN=adw.oraclecloud.com,OU=Oracle
US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))
Set the location of tnsnames.ora with the property TNS_ADMIN in one of the following
ways:
• As part of the connection string (only with the 18.3 or newer JDBC driver)
• As a system property, -Doracle.net.tns_admin
• As a connection property (OracleConnection.CONNECTION_PROPERTY_TNS_ADMIN)
Using the 18.3 JDBC driver, the connection string includes the TNS alias and the
TNS_ADMIN connection property.
DB_URL="jdbc:oracle:thin:@dbname_high?TNS_ADMIN=/Users/test/wallet_dbname"
DB_URL="jdbc:oracle:thin:@dbname_high?TNS_ADMIN=C:\\Users\\test\
\wallet_dbname"
Note:
If you are using 12.2.0.1 or older JDBC drivers, then the connection string
contains only the TNS alias. To connect using older JDBC drivers:
• Set the location of the tnsnames.ora, either as a system property with
-Doracle.net.tns_admin or as a connection property
(OracleConnection.CONNECTION_PROPERTY_TNS_ADMIN).
• Set the wallet or JKS related connection properties in addition to
TNS_ADMIN.
For example, in this case you set the TNS alias in the DB_URL without the
TNS_ADMIN part as:
DB_URL="jdbc:oracle:thin:@dbname_high"
DB_URL="jdbc:oracle:thin:@dbname_high?TNS_ADMIN=/Users/test/
wallet_dbname"
oracle.net.wallet_location=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=${TNS_ADMIN})))
Note:
You do not modify the file ojdbc.properties. The value of TNS_ADMIN
determines the wallet location.
4. Compile and Run: Compile and run the sample to get a successful connection. Make sure you have oraclepki.jar, osdt_core.jar, and osdt_cert.jar in the classpath. For example:

java -classpath ./lib/ojdbc8.jar:./lib/ucp.jar:./lib/oraclepki.jar:./lib/osdt_core.jar:./lib/osdt_cert.jar:. UCPSample
Note:
The auto-login wallet, part of the downloaded Autonomous Data Warehouse client credentials zip file, removes the need for your application to use username/password authentication.
DB_URL="jdbc:oracle:thin:@dbname_high?TNS_ADMIN=/Users/test/
wallet_dbname"
3. Set JKS related connection properties: Add the JKS related connection properties to the ojdbc.properties file. The keystore and truststore password is the password you specified when downloading the client credentials .zip file from the Autonomous Data Warehouse service console.
To use SSL connectivity instead of Oracle Wallet, specify the keystore and truststore files and their respective passwords in the ojdbc.properties file as follows:
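A sketch of the JKS entries follows; the property names are the standard JSSE ones, and the paths and passwords are placeholders:

# Comment out the wallet property when using JKS:
# oracle.net.wallet_location=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=${TNS_ADMIN})))
javax.net.ssl.trustStore=${TNS_ADMIN}/truststore.jks
javax.net.ssl.trustStorePassword=password
javax.net.ssl.keyStore=${TNS_ADMIN}/keystore.jks
javax.net.ssl.keyStorePassword=password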
Note:
Make sure to comment the wallet related property in
ojdbc.properties. For example:
4. Compile and Run: Compile and run the sample to get a successful connection.
For example:
DB_URL="jdbc:oracle:thin:@dbname_high”
3. Set the wallet location: Add the OraclePKIProvider at the end of the provider list
in the file java.security (this file is part of your JRE install located
at $JRE_HOME/jre/lib/security/java.security) which typically looks
like:
security.provider.14=oracle.security.pki.OraclePKIProvider
4. Compile and Run: Compile and run the sample to get a successful connection. Make sure to have oraclepki.jar, osdt_core.jar, and osdt_cert.jar in the classpath. Also, you need to pass the connection properties. Update the properties with the location where tnsnames.ora and wallet files are located.
java -classpath
./lib/ojdbc8.jar:./lib/ucp.jar:./lib/oraclepki.jar:./lib/osdt_core.jar:./lib/osdt_cert.jar:.
-Doracle.net.tns_admin=/users/test/wallet_dbname
-Doracle.net.ssl_server_dn_match=true
-Doracle.net.ssl_version=1.2 (Not required for 12.2)
-Doracle.net.wallet_location="(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/users/test/wallet_dbname)))"
UCPSample
Note:
These are Windows system examples. Add a \ continuation character if you are setting -D properties on multiple lines on UNIX (Linux or a Mac).
DB_URL="jdbc:oracle:thin:@dbname_high”
3. Compile and Run: Compile and run the sample to get a successful connection.
You need to pass the connection properties as shown below. Update the
properties with the location where tnsnames.ora and JKS files are placed. If
you want to pass these connection properties programmatically then refer to
DataSourceForJKS.java. For example:
java
-Doracle.net.tns_admin=/users/test/wallet_dbname
-Djavax.net.ssl.trustStore=truststore.jks
-Djavax.net.ssl.trustStorePassword=**********
-Djavax.net.ssl.keyStore=keystore.jks
-Djavax.net.ssl.keyStorePassword=************
-Doracle.net.ssl_server_dn_match=true
-Doracle.net.ssl_version=1.2 // Not required for 12.2
ADWC1_high =
(description=
(address=
(https_proxy=proxyhostname)(https_proxy_port=80)
(protocol=tcps)(port=1522)(host=adw.example.oraclecloud.com)
)
(connect_data=(service_name=adwc1_high.adw.oraclecloud.com)
)
(security=(ssl_server_cert_dn="adw.example.oraclecloud.com,OU=Oracle BMCS
US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")
)
)
Notes:
• JDBC Thin client versions earlier than 18.1 do not support connections
through HTTP proxy.
• Successful connection depends on specific proxy configurations and the
performance of data transfers would depend on proxy capacity. Oracle
does not recommend using this feature in Production environments
where performance is critical.
• Configuring tnsnames.ora for the HTTP proxy may not be enough
depending on your organization's network configuration and security
policies. For example, some networks require a username and password
for the HTTP proxy.
• In such cases, contact your network administrator to open outbound connections to hosts in the oraclecloud.com domain using the relevant port without going through an HTTP proxy.
When WALLET_LOCATION is used, Oracle Net Services automatically uses the wallet.
The wallet is used transparently to the application. See Prepare for Oracle Call
Interface (OCI), ODBC, and JDBC OCI Connections for information on setting
WALLET_LOCATION.
Choose whichever database service offers the best balance of performance and
concurrency.
Note:
When connecting for replication purposes, use the low database service
name. For example use this service with Oracle GoldenGate connections.
Topics
cd /home/myuser/instantclient_18_5
ln -s libclntsh.so.18.1 libclntsh.so
If there is no other Oracle software on your system that will be impacted, add
Instant Client to the runtime link path. For example:
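The following is a sketch, assuming root access and the standard ldconfig mechanism; the .conf file name is illustrative:

sudo sh -c "echo /home/myuser/instantclient_18_5 > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig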
Alternatively set the library path in each shell that runs your application. For
example:
export LD_LIBRARY_PATH=/home/myuser/instantclient_18_5:$LD_LIBRARY_PATH
Note:
The Linux Instant Client download files are available as .zip files
or .rpm files. You can use either version.
directory and use the instructions on the following page: Installing DBD-Oracle.
Enable Oracle Network Connectivity and Obtain the Security Credentials (Oracle
Wallet)
1. Obtain client security credentials to connect to an Autonomous Data Warehouse
instance. You obtain a zip file containing client security credentials and network
configuration settings required to access your Autonomous Data Warehouse
database. You must protect this file and its contents to prevent unauthorized
database access. Obtain the client security credentials file as follows:
• ADMIN user: Click DB Connection. See Download Client Credentials
(Wallets).
• Other user (non-administrator): Obtain the Oracle Wallet from the
administrator for your Autonomous Data Warehouse.
2. Extract the client credentials (wallet) files:
a. Unzip the client credentials zip file.
b. If you are using Instant Client, make a network/admin subdirectory
hierarchy under the Instant Client directory if necessary. Then move the files
to this subdirectory. For example, depending on the architecture of your client system and where you installed Instant Client, the files should be in the directory:
C:\instantclient_12_2\network\admin
or
/home/myuser/instantclient_18_5/network/admin
or
/usr/lib/oracle/18.5/client64/lib/network/admin
• If you are using a full Oracle Client, move the files to $ORACLE_HOME/network/admin.
c. Alternatively, put the unzipped wallet files in a secure directory and set the
TNS_ADMIN environment variable to that directory name.
Note:
From the zip file, only these files are required: tnsnames.ora,
sqlnet.ora, cwallet.sso, and ewallet.p12.
3. If you are behind a proxy follow the steps in “Connections with an HTTP Proxy”, in
Prepare for Oracle Call Interface (OCI), ODBC, and JDBC OCI Connections.
supply when you enable or disable Application Continuity using one of these two
techniques:
• Look in the tnsnames.ora file in wallet.zip file you use to connect to your
service. For example:
service_name=nvthp2ht_adb1_high.adwc.oraclecloud.com
• Use a command similar to the following on your database and identify the service
where you want to enable or disable Application Continuity:
Note:
By default Application Continuity is disabled.
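To enable Application Continuity, run DBMS_CLOUD_ADMIN.ENABLE_APP_CONT with the service name. For example: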
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_APP_CONT(
service_name => 'nvthp2ht_adb1_high.adwc.oraclecloud.com'
);
END;
/
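To disable Application Continuity, run DBMS_CLOUD_ADMIN.DISABLE_APP_CONT with the service name. For example: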
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_APP_CONT(
service_name => 'nvthp2ht_adb1_high.adwc.oraclecloud.com'
);
END;
/
Topics
To use database links with Autonomous Data Warehouse the target database must be
configured to use TCP/IP with SSL (TCPS) authentication. Autonomous Databases
use TCP/IP with SSL (TCPS) authentication by default, so you do not need to do any
additional configuration in your target database for an Autonomous Transaction
Processing or Autonomous Data Warehouse database. Other databases must be
configured to use TCP/IP with SSL (TCPS) authentication. See Configuring Secure
Sockets Layer Authentication for more information.
To ensure security, the database link port is restricted to the range 1521-1525. You
specify the target database port when you create a database link with
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
To use database links, the target database must be accessible. Some databases, including Autonomous Databases, may limit access, for example by using Access Control
Lists. If access to the target database is limited and does not include the database
where you are creating the database link, this would prevent using database links with
those target databases. See Set Access Control List with Autonomous Data
Warehouse for more information.
To create database links to a target database do the following:
1. Copy your target database wallet, cwallet.sso, containing the certificates for
the target database to Object Store.
Note:
The wallet file, along with the Database user ID and password provide
access to data in the target Oracle database. Store wallet files in a
secure location. Share wallet files only with authorized users.
2. Create credentials to access your Object Store where you store the cwallet.sso.
See CREATE_CREDENTIAL Procedure for information about the username and
password parameters for different object storage services.
3. Upload the target database wallet to the data_pump_dir directory, or to another
directory where you have write privileges, using DBMS_CLOUD.GET_OBJECT.
For example:
BEGIN
    DBMS_CLOUD.GET_OBJECT(
        credential_name => 'DEF_CRED_NAME',
        object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/cwallet.sso',
        directory_name => 'DATA_PUMP_DIR');
END;
/
Note:
The credential_name you use in this step is the credentials for the
Object Store. In the next step create the credentials to access the target
database.
Note:
Supplying the credential_name parameter is optional if you use the
same credentials on the target database. If credential_name is not
supplied in the following step, or is supplied with a NULL value, the
database link is created using the connected user credentials (the link is
created and uses the credentials, username and password, of the user
who is accessing the link).
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DB_LINK_CRED',
username => 'NICK',
password => 'password'
);
END;
/
For example:
BEGIN
    DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
        db_link_name => 'SALESLINK',
        hostname => 'adb.eu-frankfurt-1.oraclecloud.com',
        port => '1522',
        service_name => 'example_medium.adwc.example.oraclecloud.com',
        ssl_server_cert_dn => 'CN=adwc.example.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US',
        credential_name => 'DB_LINK_CRED',
        directory_name => 'DATA_PUMP_DIR');
END;
/
Note:
For complete information on supported versions, see Client Server
Interoperability Support Matrix for Different Oracle Versions (Doc ID
207303.1)
• To list the database links accessible to the connected user, use the ALL_DB_LINKS
view. See ALL_DB_LINKS for more information.
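For example, a minimal query against this standard dictionary view (a sketch):

SELECT db_link, host FROM all_db_links;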
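To drop a database link, run DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK with the link name. For example: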
BEGIN
DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK(
db_link_name => 'SALESLINK' );
END;
/
3
Loading Data with Autonomous Data Warehouse
Describes packages and tools to load data with Autonomous Data Warehouse.
Topics
For the fastest data loading experience Oracle recommends uploading the source files
to a cloud-based object store, such as Oracle Cloud Infrastructure Object Storage,
before loading the data into your Autonomous Data Warehouse. Oracle provides
support for loading files that are located locally in your data center, but when using this method of data loading you should factor in transmission speeds across the Internet, which may be significantly slower.
For more information on Oracle Cloud Infrastructure Object Storage, see Putting Data
into Object Storage and Overview of Object Storage.
For a tutorial on data loading using Oracle Cloud Infrastructure Object Storage, see
Loading Your Data.
Note:
If you are not using the ADMIN user, ensure the user has the necessary privileges
for the operations the user needs to perform. See Manage User Privileges
with Autonomous Data Warehouse for more information.
Topics
• Load Data - Create Credentials and Copy Data into an Existing Table
• Load Data – Monitor and Troubleshoot Loads
• Load Data – List Credentials
• Load Data – Delete Credentials
Load Data - Create Credentials and Copy Data into an Existing Table
For data loading from files in the Cloud, you need to first store your object storage
credentials in your Autonomous Data Warehouse and then use the procedure
DBMS_CLOUD.COPY_DATA to load data.
The source file in this example, channels.txt, has the following data:
S,Direct Sales,Direct
T,Tele Sales,Direct
C,Catalog,Indirect
I,Internet,Indirect
P,Partners,Others
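1. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL. The following is a sketch; the username and password are placeholders for your object storage credentials:

SET DEFINE OFF
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => 'adwc_user@example.com',
    password => 'password'
  );
END;
/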
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name. Note that this step is required only
once unless your object store credentials change. Once you store the credentials
you can then use the same credential name for all data loads.
For detailed information about the parameters, see CREATE_CREDENTIAL
Procedure.
Note:
Some tools like SQL*Plus and SQL Developer use the ampersand
character (&) as a special character. If you have the ampersand
character in your password use the SET DEFINE OFF command in those
tools as shown in the example to disable the special character and get
the credential created properly.
2. Load data into an existing table using the procedure DBMS_CLOUD.COPY_DATA. For
example:
BEGIN
DBMS_CLOUD.COPY_DATA(
table_name =>'CHANNELS',
credential_name =>'DEF_CRED_NAME',
file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/channels.txt',
format => json_object('delimiter' value ',')
);
END;
/
Query these tables to see information about ongoing and completed data loads. For
example:
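The following is a sketch, assuming the USER_LOAD_OPERATIONS view that tracks load operations in your schema:

SELECT table_name, type, status, logfile_table, badfile_table
  FROM user_load_operations
 WHERE type = 'COPY';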
Using this SELECT statement with a WHERE clause predicate on the TYPE column shows the load operations with type COPY.
The LOGFILE_TABLE column shows the name of the table you can query to look at the
log of a load operation. For example, the following query shows the log of the load
operation:
select * from COPY$21_LOG;
The column BADFILE_TABLE shows the name of the table you can query to look at the
rows that got errors during loading. For example, the following query shows the
rejected records for the load operation:
select * from COPY$21_BAD;
Depending on the errors shown in the log and the rows shown in the specified
BADFILE_TABLE table you can correct the error by specifying the correct format options
in DBMS_CLOUD.COPY_DATA.
Note:
The LOGFILE_TABLE and BADFILE_TABLE tables are stored for two days for
each load operation and then removed automatically.
Note:
For Parquet and Avro files, when the format parameter type is set to the
value parquet or avro, the BADFILE_TABLE table is always empty.
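To list the stored credentials, you can query the ALL_CREDENTIALS dictionary view; a sketch:

SELECT credential_name, username, comments FROM all_credentials;

The output looks similar to the following: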
CREDENTIAL_NAME   USERNAME                COMMENTS
----------------- ----------------------- --------------------------------------------------------
ADB_TOKEN         user_name@example.com   {"comments":"Created via DBMS_CLOUD.create_credential"}
DEF_CRED_NAME     user_name@example.com   {"comments":"Created via DBMS_CLOUD.create_credential"}
For example, to remove the credential named DEF_CRED_NAME, run the following
command:
BEGIN
  DBMS_CLOUD.DROP_CREDENTIAL('DEF_CRED_NAME');
END;
/
For more information about the DBMS_CLOUD procedures and parameters, see
Summary of DBMS_CLOUD Subprograms.
Topics
Oracle Data Pump Export provides several export modes; Oracle recommends using the schema mode for migrating to Autonomous Data Warehouse. You can list the schemas you want to export by using the schemas parameter.
For a faster migration, export your schemas into multiple Data Pump files and use
parallelism. You can specify the dump file name format you want to use with the
dumpfile parameter. Set the parallel parameter to at least the number of CPUs you
have in your Autonomous Data Warehouse database.
The exclude and data_options parameters ensure that the object types not required in
Autonomous Data Warehouse are not exported and table partitions are grouped
together so that they can be imported faster during the import to Autonomous Data
Warehouse. If you want to migrate your existing indexes, materialized views, and
materialized view logs to Autonomous Data Warehouse and manage them manually,
you can remove those object types from the exclude list which will export those object
types too. Similarly, if you want to migrate your existing partitioned tables as-is without
converting them into non-partitioned tables and manage them manually you can
remove the data_options argument which will export your partitioned tables as-is. For
more information, see Managing Partitions, Indexes, and Materialized Views.
The following example exports the SH schema from a source Oracle Database for
migration to an Autonomous Data Warehouse database with 16 CPUs:
expdp sh/sh@orcl \
  exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link \
  data_options=group_partition_table_data \
  parallel=16 \
  schemas=sh \
  dumpfile=export%u.dmp
You can use other Data Pump Export parameters, like compression, depending on
your requirements. For more information on Oracle Data Pump Export see Oracle
Database Utilities.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adwc_user@example.com',
password => 'password'
);
END;
/
For more information on the credentials for different Cloud Object Storage
services, see CREATE_CREDENTIAL Procedure.
2. Run Data Pump Import with the dumpfile parameter set to the list of file URLs on
your Cloud Object Storage and the credential parameter set to the name of the
credential you created in the previous step. For example:
impdp admin/password@ADWC1_high \
  directory=data_pump_dir \
  credential=def_cred_name \
  dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/adwc/b/adwc_user/o/export%u.dmp \
  parallel=16 \
  partition_options=merge \
  transform=segment_attributes:n \
  transform=dwcs_cvt_iots:y transform=constraint_use_default_index:y \
  exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link
For the best import performance use the HIGH database service for your import
connection and set the PARALLEL parameter to the number of CPUs in your
Autonomous Data Warehouse as shown in the example.
For information on which database service name to connect to run Data Pump
Import, see Managing Concurrency and Priorities on Autonomous Data
Warehouse.
For the dump file URL format for different Cloud Object Storage services, see
DBMS_CLOUD Package File URI Formats.
This example shows the recommended parameters for importing into your
Autonomous Data Warehouse.
In this example:
• Partitioned tables are converted into non-partitioned tables.
• Storage attributes for tables are ignored.
• Index-organized tables are converted into regular tables.
• Indexes created for primary key and unique key constraints will be renamed to
the constraint name.
• Indexes, clusters, indextypes, materialized views, materialized view logs, and
zone maps are excluded during Data Pump Import.
Note:
To perform a full import or to import objects that are owned by other
users, you need the DATAPUMP_CLOUD_IMP role.
Import Data Using Oracle Data Pump (Versions 12.2.0.1 and Earlier)
You can import data from Data Pump files into your Autonomous Data Warehouse
using Data Pump client versions 12.2.0.1 and earlier by setting the
default_credential parameter.
Data Pump Import versions 12.2.0.1 and earlier do not have the credential parameter.
If you are using an older version of Data Pump Import you need to define a default
credential property for Autonomous Data Warehouse and use the
default_credential keyword in the dumpfile parameter.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adwc_user@example.com',
password => 'password'
);
END;
/
For more information on the credentials for different Cloud Object Storage
services, see CREATE_CREDENTIAL Procedure.
2. Set the credential as the default credential for your Autonomous Data Warehouse,
as the ADMIN user. For example:
alter database property set default_credential = 'ADMIN.DEF_CRED_NAME'
3. Run Data Pump Import with the dumpfile parameter set to the list of file URLs on
your Cloud Object Storage, and set the default_credential keyword. For
example:
impdp admin/password@ADWC1_high \
  directory=data_pump_dir \
  dumpfile=default_credential:https://objectstorage.us-ashburn-1.oraclecloud.com/n/adwc/b/adwc_user/o/export%u.dmp \
  parallel=16 \
  partition_options=merge \
  transform=segment_attributes:n \
  exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link
For the best import performance use the HIGH database service for your import
connection and set the PARALLEL parameter to the number of CPUs in your
Autonomous Data Warehouse as shown in the example.
For information on which database service name to connect to run Data Pump
Import, see Managing Concurrency and Priorities on Autonomous Data
Warehouse.
For the dump file URL format for different Cloud Object Storage services, see
DBMS_CLOUD Package File URI Formats.
This example shows the recommended parameters for importing into your
Autonomous Data Warehouse.
In this example:
• Partitioned tables are converted into non-partitioned tables.
• Storage attributes for tables are ignored.
• Indexes, clusters, indextypes, materialized views, materialized view logs, and
zone maps are excluded during Data Pump Import.
Note:
To perform a full import or to import objects that are owned by other users,
you need the DATAPUMP_CLOUD_IMP role.
To access the log file you need to move the log file to your Cloud Object Storage using
the procedure DBMS_CLOUD.PUT_OBJECT. For example, the following PL/SQL block
moves the file import.log to your Cloud Object Storage:
BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'DEF_CRED_NAME',
    object_uri => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/adwc/b/adwc_user/o/import.log',
    directory_name => 'DATA_PUMP_DIR',
    file_name => 'import.log');
END;
/
At any point in time you can use the ALTER TABLE MOVE statement to compress these
tables without impacting queries accessing them. For example, the following statement
compresses the table SALES using Hybrid Columnar Compression.
ALTER TABLE sales MOVE COLUMN STORE COMPRESS FOR QUERY HIGH;
4
Querying External Data with Autonomous Data Warehouse
Describes packages and tools to query data with Autonomous Data Warehouse.
Although queries on external data will not be as fast as queries on database tables,
you can use this approach to validate and quickly start running queries on your source
files.
Note:
If you are not using the ADMIN user, ensure the user has the necessary privileges
for the operations the user needs to perform. See Manage User Privileges
with Autonomous Data Warehouse for more information.
Topics
The source file in this example, channels.txt, has the following data:
S,Direct Sales,Direct
T,Tele Sales,Direct
C,Catalog,Indirect
I,Internet,Indirect
P,Partners,Others
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adwc_user@example.com',
password => 'password'
);
END;
/
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name. Note that this step is required only
once unless your object store credentials change. Once you store the credentials
you can then use the same credential name for creating external tables.
See CREATE_CREDENTIAL Procedure for information about
the username and password parameters for different object storage services.
2. Create an external table on top of your source files using the procedure
DBMS_CLOUD.CREATE_EXTERNAL_TABLE.
For example:
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
table_name =>'CHANNELS_EXT',
credential_name =>'DEF_CRED_NAME',
file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/channels.txt',
format => json_object('delimiter' value ','),
column_list => 'CHANNEL_ID NUMBER,
CHANNEL_DESC VARCHAR2(20),
CHANNEL_CLASS VARCHAR2(20),
CHANNEL_CLASS_ID NUMBER,
CHANNEL_TOTAL VARCHAR2(13),
CHANNEL_TOTAL_ID NUMBER'
);
END;
/
If there are any rows in the source files that do not match the format options you
specified, the query reports an error. You can use DBMS_CLOUD parameters, like
rejectlimit, to suppress these errors. As an alternative, you can also validate the
external table you created to see the error messages and the rejected rows so that
you can change your format options accordingly.
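For example, a sketch of passing rejectlimit as a format option in the DBMS_CLOUD.CREATE_EXTERNAL_TABLE call above; the value 1000 is illustrative:

format => json_object('delimiter' value ',', 'rejectlimit' value '1000')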
Note:
The steps to use external tables are very similar for Parquet and Avro. These
steps show working with a Parquet format source file.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adwc_user@example.com',
password => 'password'
);
END;
/
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name. Note that this step is required only
once unless your object store credentials change. Once you store the credentials
you can then use the same credential name for creating external tables.
See CREATE_CREDENTIAL Procedure for information about
the username and password parameters for different object storage services.
2. Create an external table for Parquet or Avro on top of your source files using the
procedure DBMS_CLOUD.CREATE_EXTERNAL_TABLE. By default, the columns created
in the external table automatically map their data types to Oracle data types for the
fields found in the source file(s) and the external table column names match the
source field names:
BEGIN
   DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
     table_name =>'sales_extended_ext',
     credential_name =>'DEF_CRED_NAME',
     file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/sales_extended.parquet',
     format => '{"type":"parquet", "schema": "first"}'
   );
END;
/
Note:
If the column_list parameter is specified, then you provide the column names and data types for the external table, and the schema value, if specified, is ignored. Using column_list you can limit the columns in the
external table. If column_list is not specified then the schema default
value is first.
3. You can now run queries on the external table you created in the previous step:
DESC sales_extended_ext;
Name Null? Type
-------------- ----- --------------
PROD_ID NUMBER(10)
CUST_ID NUMBER(10)
TIME_ID VARCHAR2(4000)
CHANNEL_ID NUMBER(10)
PROMO_ID NUMBER(10)
QUANTITY_SOLD NUMBER(10)
AMOUNT_SOLD NUMBER(10,2)
GENDER VARCHAR2(4000)
CITY VARCHAR2(4000)
STATE_PROVINCE VARCHAR2(4000)
INCOME_LEVEL VARCHAR2(4000)
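For example, a sketch of a query against the external table:

SELECT prod_id, cust_id, quantity_sold, amount_sold
  FROM sales_extended_ext
 WHERE ROWNUM <= 10;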
This query shows values for rows in the external table. If you want to query this
data frequently, after examining the data you can load it into a table with
DBMS_CLOUD.COPY_DATA.
See CREATE_EXTERNAL_TABLE Procedure for Parquet or Avro Files and
COPY_DATA Procedure for Parquet or Avro Files for more information.
Before validating an external table you need to create the external table. To create an
external table use the procedure DBMS_CLOUD.CREATE_EXTERNAL_TABLE (see Query
External Data for more details):
BEGIN
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (
table_name => 'CHANNELS_EXT');
END;
/
This procedure scans your source files and validates them using the format options
specified when you create the external table.
The validate operation, by default, scans all the rows in your source files and stops
when a row is rejected. If you want to validate only a subset of the rows, use the
rowcount parameter. When the rowcount parameter is set, the validate operation scans
rows and stops either when a row is rejected or when the specified number of rows are
validated without errors.
For example, the following validate operation scans 100 rows and stops when a row is
rejected or when 100 rows are validated without errors:
BEGIN
  DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (
    table_name => 'CHANNELS_EXT',
    rowcount => 100);
END;
/
If you do not want the validate to stop when a row is rejected and you want to see all
rejected rows, set the stop_on_error parameter to FALSE. In this case
VALIDATE_EXTERNAL_TABLE scans all rows and reports all rejected rows.
If you want to validate only a subset of rows use the rowcount parameter. When
rowcount is set and stop_on_error is set to FALSE, the validate operation scans rows
and stops either when the specified number of rows are rejected or when the specified
number of rows are validated without errors. For example, the following operation
scans 100 rows and stops when 100 rows are rejected or when 100 rows are validated
without errors:
BEGIN
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (
table_name => 'CHANNELS_EXT',
rowcount => 100,
stop_on_error => FALSE);
END;
/
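You can list validate operations by querying the USER_LOAD_OPERATIONS view; a sketch of such a query (the view's LOGFILE_TABLE and BADFILE_TABLE columns are described below):
SELECT table_name, type, status, logfile_table, badfile_table
FROM user_load_operations
WHERE type = 'VALIDATE';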
Using this SQL statement with a WHERE clause on the TYPE column displays all of the
load operations with type VALIDATE.
The LOGFILE_TABLE column shows the name of the table you can query to look at the
log of a validate operation. For example, the following query shows the log of the
above validate operation:
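A sketch of such a query; VALIDATE$21_LOG is a hypothetical table name, substitute the value shown in the LOGFILE_TABLE column:
SELECT * FROM VALIDATE$21_LOG;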
4-6
Chapter 4
Validate External Data
The column BADFILE_TABLE shows the name of the table you can query to look at the
rows that got errors during validation. For example, the following query shows the
rejected records for the above validate operation:
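A sketch of such a query; VALIDATE$21_BAD is a hypothetical table name, substitute the value shown in the BADFILE_TABLE column:
SELECT * FROM VALIDATE$21_BAD;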
Depending on the errors shown in the log and the rows shown in the BADFILE_TABLE,
you can correct the error by dropping the external table using the DROP TABLE
command and recreating it by specifying the correct format options in
DBMS_CLOUD.CREATE_EXTERNAL_TABLE.
Note:
The LOGFILE_TABLE and BADFILE_TABLE tables are stored for two days for
each validate operation and then removed automatically.
5
Creating Dashboards, Reports, and
Notebooks with Autonomous Data
Warehouse
Autonomous Data Warehouse includes built-in support for data analytics using Oracle
Machine Learning (OML), a browser-based interactive data analysis environment for
data scientists.
Topics
• Open the Oracle Cloud Infrastructure console by clicking the navigation menu icon
next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation list click Autonomous Data
Warehouse.
1. Select an Autonomous Data Warehouse instance and on the details page click
Service Console.
2. Click Development.
3. On the Development page click OML Notebooks.
4. Enter your username and password.
The Administrator creates OML users. See Create and Update User Accounts for
Oracle Machine Learning for details.
5. Click Sign In.
This opens the Oracle Machine Learning user application.
Oracle Machine Learning allows you to access your data in Autonomous Data
Warehouse and build notebooks with the following:
• Data Ingestion and Selection
• Data Viewing and Discovery
• Data Graphing, Visualization, and Collaboration
• Data Analysis
You can also create and run SQL statements and create and run SQL scripts that
access your data in Autonomous Data Warehouse.
Topics
• Using Oracle Analytics Cloud with Autonomous Data Warehouse
• Working with Data Visualization Desktop in Autonomous Data Warehouse
• Data flow: Analysts can prepare, transform and aggregate data, and then run
machine-learning models at scale.
• Data discovery: Subject matter experts can easily collaborate with other business
users, blending intelligent analysis at scale, machine learning, and statistical
modeling.
• Data visualization: Analysts can visualize any data, on any device, on premises
and in the cloud.
• Data collaboration: Large organizations and small teams can share data more
simply, as you don't need to manage or consolidate multiple versions of
spreadsheets.
• Data-driven: Application developers can utilize interfaces that enable them to
extend, customize, and embed rich analytic experiences in the application flow.
For information on connecting from Oracle Analytics Cloud to Autonomous Data
Warehouse, see Connect with Oracle Analytics Cloud.
You can find out more information on Oracle Analytics Cloud in Visualizing Data and
Building Reports in Oracle Analytics Cloud.
6
Moving Data from Autonomous Data
Warehouse to Other Oracle Databases
Oracle Data Pump offers very fast bulk data and metadata movement between
Autonomous Data Warehouse and other Oracle databases.
To export data and move the data from Autonomous Data Warehouse to other Oracle
databases, do the following:
1. Use Data Pump Export to export to a directory on Autonomous Data Warehouse.
2. Move the dump file set from the directory on Autonomous Data Warehouse to your
Cloud Object Store.
3. Depending on the target database you may need to download the dump files from
the Cloud Object Store.
4. Run Data Pump Import with the dump files.
5. Perform any required clean up such as removing the dump file set from Cloud
Object Store.
Topics
• Use Data Pump to Create a Dump File Set on Autonomous Data Warehouse
• Move Dump File Set from Autonomous Data Warehouse to Your Cloud Object
Store
• Download Dump Files, Run Data Pump Import, and Clean Up Object Store
Note:
The Autonomous Data Warehouse Service Console provides a link for
Oracle Instant Client. To access this link from the Service Console click
Development and select Download Oracle Instant Client.
1. Run Data Pump Export with the dumpfile parameter set, the filesize parameter set
to less than 5G, and the directory parameter set. For example, the following shows
how to export a schema named SALES in an Autonomous Data Warehouse named
ADWC1 with 16 OCPUs:
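The expdp command here is a sketch; the user, password, and the ADWC1_high service name are placeholders, and the parameters follow the text (a dumpfile %u substitution, filesize under 5G, the directory parameter set, and parallel matching the 16 OCPUs):

expdp sales/password@ADWC1_high \
  schemas=sales \
  dumpfile=export%u.dmp \
  filesize=1G \
  directory=data_pump_dir \
  parallel=16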
For the best export performance use the HIGH database service for your export
connection and set the PARALLEL parameter to the number of OCPUs in your
Autonomous Data Warehouse. For information on which database service name
to connect to run Data Pump Export, see Managing Concurrency and Priorities on
Autonomous Data Warehouse.
After the export is finished you can see the generated dump files by running the
following query:
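A sketch of that query, using DBMS_CLOUD.LIST_FILES on the export directory:
SELECT object_name, bytes FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR');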
For example, the output from this query shows the generated dump files and the
export log file:
OBJECT_NAME BYTES
------------------------------ ----------
exp01.dmp 12288
exp02.dmp 8192
exp03.dmp 1171456
exp04.dmp 348160
export.log 1663
2. Store your Cloud Object Storage credentials using the procedure
DBMS_CLOUD.CREATE_CREDENTIAL. For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adwc_user@example.com',
password => 'password'
);
END;
/
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name. Note that this step is required only
once unless your object store credentials change. Once you store the credentials
you can then use the same credential name.
See CREATE_CREDENTIAL Procedure for information about
the username and password parameters for different object storage services.
3. Move the dump files from the Autonomous Data Warehouse database to your
Cloud Object Store by calling DBMS_CLOUD.PUT_OBJECT.
For example:
BEGIN
DBMS_CLOUD.PUT_OBJECT(credential_name => 'DEF_CRED_NAME',
  object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/exp01.dmp',
  directory_name => 'DATA_PUMP_DIR',
  file_name => 'exp01.dmp');
DBMS_CLOUD.PUT_OBJECT(credential_name => 'DEF_CRED_NAME',
  object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/exp02.dmp',
  directory_name => 'DATA_PUMP_DIR',
  file_name => 'exp02.dmp');
DBMS_CLOUD.PUT_OBJECT(credential_name => 'DEF_CRED_NAME',
  object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/exp03.dmp',
  directory_name => 'DATA_PUMP_DIR',
  file_name => 'exp03.dmp');
DBMS_CLOUD.PUT_OBJECT(credential_name => 'DEF_CRED_NAME',
  object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/exp04.dmp',
  directory_name => 'DATA_PUMP_DIR',
  file_name => 'exp04.dmp');
END;
/
1. Download the dump files from Cloud Object Store to the location where you will
run Data Pump Import.
Note:
This step is not needed if you are importing the data to an Autonomous
Data Warehouse database or to an Autonomous Transaction Processing
database.
2. Run Data Pump Import to import the dump file set to the target database.
If you are importing the data to another Autonomous Data Warehouse database,
see Import Data Using Oracle Data Pump on Autonomous Data Warehouse.
If you are importing the data to an Autonomous Transaction Processing database,
see Import Data Using Oracle Data Pump on Autonomous Transaction
Processing.
3. Perform post-import cleanup tasks. If you are done importing the dump files to
your target database(s) then drop the bucket containing the data or remove the
dump files from the Cloud Object Store bucket, and remove the dump files from
the location where you downloaded the dump files to run Data Pump Import.
For detailed information on Oracle Data Pump Import parameters see Oracle
Database Utilities.
7
Creating Applications with Oracle
Application Express in Autonomous
Database
You can create applications with Oracle Application Express on Autonomous Data
Warehouse.
Topics
Note:
If you already created a workspace, the Application Express workspace
sign-in page appears instead. To open Administration Services, click the
Administration Services link.
4. In the Password field, enter the password for the Autonomous Data Warehouse
ADMIN user.
See Change the Administrator Password in Autonomous Data Warehouse to
change the password.
3. On the Create Workspace page, in the Database User field, enter a new database
username or choose an existing user from the list.
4. In the Password field, provide a strong password if the database user is a new
user. If the user is an existing database user you do not enter a password. See
Create Users with Autonomous Data Warehouse to learn more about the default
password complexity rules.
5. (Optional) In the Workspace Name field, change the name of the workspace that
was automatically populated.
6. Click Create Workspace.
• All web services must be secured. Only HTTPS services are supported on the
default port (443). Your Application Express instance is pre-configured with an
Oracle Wallet that contains more than 90 of the most common trusted root and
intermediate SSL certificates. The APEX_WEB_SERVICE package automatically takes
advantage of this Oracle Wallet without additional configuration from application
developers. This Oracle Wallet is centrally managed and therefore you cannot
consume third-party web services that are protected using self-signed SSL
certificates.
• Each Autonomous Data Warehouse instance is preconfigured with a network
access control list (ACL) to permit outbound web service calls from Application
Express. No further configuration by application developers is necessary.
• Your Application Express instance does not require an outbound web proxy.
• There is a limit of 50,000 outbound web service requests per Application Express
workspace in a 24-hour period.
To learn more, see:
• APEX_WEB_SERVICE in Oracle Application Express API Reference
• Managing Web Source Modules in Oracle Application Express App Builder User's
Guide
For example:
BEGIN
APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_HOST_ADDRESS', 'smtp.us-phoenix-1.oraclecloud.com');
APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_USERNAME',
'ocid1.user.oc1.username');
APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_PASSWORD', 'password');
COMMIT;
END;
/
5. Send a test email using APEX SQL Workshop, SQL Commands, specifying one of
the approved senders from Step 3 as "From". For example:
BEGIN
APEX_MAIL.SEND(p_from => 'alice@example.com',
p_to => 'bob@example.com',
p_subj => 'Email from Oracle Autonomous Database',
p_body => 'Sent using APEX_MAIL');
END;
/
Note:
There is a limit of 5,000 emails per workspace in a 24-hour period. Oracle
Cloud Infrastructure Email Delivery may impose additional limitations.
8
Developing RESTful Services in
Autonomous Database
You can develop and deploy RESTful Services with native Oracle REST Data
Services (ORDS) support on an Autonomous Data Warehouse database. Autonomous
Data Warehouse also supports SODA for REST; this allows you to use the Autonomous
Data Warehouse database as a simple JSON document store.
Topics:
• About Oracle REST Data Services in Autonomous Database
• Develop Oracle REST Data Services with Autonomous Database
• Develop SODA for REST with Autonomous Database
Note:
The Oracle REST Data Services (ORDS) application in Autonomous Data
Warehouse is preconfigured and fully managed. It is not possible to connect
a customer-managed ORDS application to Autonomous Data Warehouse.
See Oracle REST Data Services for information on using Oracle REST Data Services.
• Oracle Application Express (APEX): With APEX you can use the RESTful Services
development pages to build and maintain your services and REST enabled
objects. You can use the APEX SQL Workshop to access your Oracle RESTful
Services and REST Enabled objects. See How to Access RESTful Services for
more information.
The Autonomous Data Warehouse ADMIN account is REST Enabled. This allows
REST Services to be published in the ADMIN schema and allows you to access SQL
Developer Web using the ADMIN database user account. Oracle recommends you
create an application schema account for your RESTful Services and REST Enabled
objects. Services are secured using Database Authentication and your REST Enabled
schema.
The authenticated database user is only permitted access if the schema is REST
enabled and the URL mapping for the request points to their own schema; the user is
not authenticated when a request points to any other database schema. For example,
the following request, authenticated as the REST enabled schema HR, is accessible:
GET /ords/hr/module/service/
However, the same request is not accessible when authenticated as the REST enabled
schema SCOTT:
GET /ords/hr/module/service/
Any database user whose credentials are correct and who meets the above rules is
authenticated and granted the ORDS mid-tier role SQL Developer. This enables the
user to access any endpoint that requires the SQL Developer role.
See REST-Enable a Database Table in Quick Start Guide for information on how to
enable a table for REST access.
SODA for REST is deployed at URLs with the following pattern:
/ords/schema/soda/latest/*
Where schema corresponds to the REST enabled database schema (for example,
"admin").
The following examples use the cURL command line tool (http://curl.haxx.se/) to
submit REST requests to the Autonomous Data Warehouse database. However, other
third-party REST clients and libraries should work as well.
This command creates a new collection named "fruit" in the ADMIN schema:
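A sketch of that command; the host name and the ADMIN password are placeholders for your instance's ORDS base URL and credentials:

curl -X PUT -u 'ADMIN:<password>' \
  "https://<your-instance-host>/ords/admin/soda/latest/fruit"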
These commands insert three JSON documents into the fruit collection:
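A sketch of those commands; the host and password are placeholders, and the document bodies are illustrative (the first matches the document shown further below):

curl -X POST -u 'ADMIN:<password>' -H 'Content-Type: application/json' \
  --data '{"name":"orange", "count":42}' \
  "https://<your-instance-host>/ords/admin/soda/latest/fruit"
curl -X POST -u 'ADMIN:<password>' -H 'Content-Type: application/json' \
  --data '{"name":"pear", "count":5}' \
  "https://<your-instance-host>/ords/admin/soda/latest/fruit"
curl -X POST -u 'ADMIN:<password>' -H 'Content-Type: application/json' \
  --data '{"name":"apple", "count":12, "color":"red"}' \
  "https://<your-instance-host>/ords/admin/soda/latest/fruit"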
{"items":[{"id":"6F7E5C60197E4C8A83AC7D7654F2E375"...
{"items":[{"id":"83714B1E2BBA41F7BA4FA93B109E1E85"...
{"items":[{"id":"BAD7EFA9A2AB49359B8F5251F0B28549"...
{
"items": [
{
"id":"6F7E5C60197E4C8A83AC7D7654F2E375",
"etag":"57215643953D7C858A7CB28E14BB48549178BE307D1247860AFAB2A958400E16",
"lastModified":"2019-07-12T19:00:28.199666Z",
"created":"2019-07-12T19:00:28.199666Z",
"value":{"name":"orange", "count":42}
}
],
"hasMore":false,
"count":1
}
And finally, the following sample SQL query accesses the fruit collection:
SELECT
f.json_document.name,
f.json_document.count,
f.json_document.color
FROM fruit f;
These examples show a small subset of the SODA and SQL/JSON features. For more
information see:
• SODA for REST for information on Simple Oracle Document Access (SODA).
• SODA for REST HTTP Operations for information on the SODA for REST HTTP
operations.
9
Creating and Managing Directories
Each Autonomous Data Warehouse database includes a predefined data_pump_dir
directory where you can place files. For example you can use this directory for Data
Pump import and export operations. To create additional directories use the database
CREATE DIRECTORY command. Use the database DROP DIRECTORY command to drop
directories. Use DBMS_CLOUD.LIST_FILES to list the contents of a directory.
Topics:
• Create Directory in Autonomous Database
• Drop Directory in Autonomous Database
• List Contents of Directory in Autonomous Database
• Copy Files Between Object Store and a Directory in Autonomous Database
CREATE DIRECTORY creates the database directory object and also creates the file
system directory if it does not already exist. If the file system directory exists then
CREATE DIRECTORY only creates the database directory object. For example, the
following command creates the database directory named staging and creates the
file system directory stage:
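A sketch of that command, using the names from the text:
CREATE DIRECTORY staging AS 'stage';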
You can also create subdirectories. For example, the following command creates the
database directory object sales_staging and the file system directory stage/
sales:
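A sketch of that command, again using the names from the text:
CREATE DIRECTORY sales_staging AS 'stage/sales';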
When you create subdirectories you do not have to create the initial file system
directory. For example, in the previous example if the directory stage does not exist
then the CREATE DIRECTORY command creates both directories stage and stage/
sales.
To add a directory, you must have the CREATE ANY DIRECTORY system privilege. The
ADMIN user is granted the CREATE ANY DIRECTORY system privilege. The ADMIN user
can grant CREATE ANY DIRECTORY system privilege to other users.
Notes:
• Directories you create reside in the database file system under a path similar to
the following:
/u03/dbfs/7C149E35BB1000A45FD/data/stage
• You can create a directory in the root file system to see all the files, using the
commands shown below. After you create the ROOT_DIR directory, use the
second command to list all files.
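A sketch of those commands, assuming an empty path maps ROOT_DIR to the root of the database file system:
CREATE DIRECTORY ROOT_DIR AS '';
SELECT * FROM DBMS_CLOUD.LIST_FILES('ROOT_DIR');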
For example, the following command drops the database directory object staging:
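A sketch of that command:
DROP DIRECTORY staging;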
The DROP DIRECTORY command does not delete files in the directory. If you want to
delete the directory and the files in the directory, first use the procedure
DBMS_CLOUD.DELETE_FILE to delete the files. See DELETE_FILE Procedure for more
information.
To drop a directory, you must have the DROP ANY DIRECTORY system privilege. The
ADMIN user is granted the DROP ANY DIRECTORY system privilege. The ADMIN user
can grant DROP ANY DIRECTORY system privilege to other users.
For example, to list the contents of the stage directory, run the following query:
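A sketch of that query, passing the directory object name to DBMS_CLOUD.LIST_FILES:
SELECT * FROM DBMS_CLOUD.LIST_FILES('STAGING');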
To run DBMS_CLOUD.LIST_FILES with a user other than ADMIN you need to grant read
privileges on the directory to that user. See LIST_FILES Function for more information.
For example, the following sketch copies a file from Object Store to the
DATA_PUMP_DIR directory (the object_uri value is illustrative):
BEGIN
DBMS_CLOUD.GET_OBJECT(
  credential_name => 'DEF_CRED_NAME',
  object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/cwallet.sso',
  directory_name => 'DATA_PUMP_DIR');
END;
/
To run DBMS_CLOUD.GET_OBJECT with a user other than ADMIN you need to grant write
privileges on the directory to that user.
To run DBMS_CLOUD.PUT_OBJECT with a user other than ADMIN you need to grant read
privileges on the directory to that user.
See GET_OBJECT Procedure and PUT_OBJECT Procedure for more information.
Part II
Managing and Monitoring Autonomous
Data Warehouse
This part provides information on managing Autonomous Data Warehouse.
Topics
• Open the Oracle Cloud Infrastructure console by clicking the navigation menu icon
next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation list click Autonomous Data
Warehouse.
1. Choose your region. See Switching Regions for information on switching regions
and working in multiple regions.
2. Click Create Autonomous Database.
3. Provide basic information for the Autonomous Database.
• Choose a compartment. See Compartments for information on using and
managing compartments.
• Display name Specify a user-friendly description or other information that
helps you easily identify the resource. The display name does not have to be
unique.
• Database name Specify the database name; it must consist of letters and
numbers only. The maximum length is 14 characters.
Note:
The same database name cannot be used for multiple Oracle
Autonomous Database databases in the same tenancy in the same
region.
4. Choose a workload type. Select the workload type for your database from the
choices:
• Data Warehouse
• Transaction Processing
To create an Autonomous Data Warehouse instance select Data Warehouse.
5. Configure the database.
• CPU core count Specify the number of CPU cores for your Autonomous Data
Warehouse database.
• Storage (TB) Specify the storage you wish to make available to your
Autonomous Data Warehouse database, in terabytes.
• Auto Scaling Select to allow the system to automatically use up to three times
more CPU and IO resources to meet workload demand. See Use Auto Scaling
for more information.
6. Create administrator credentials. Set the password for the Autonomous Data
Warehouse Admin user. The password must meet the strong password complexity
criteria based on Oracle Cloud security standards. For more information on the
password complexity rules see Create Users with Autonomous Data Warehouse.
• Username This is a read only field.
• Password Set the password for the Autonomous Data Warehouse Admin
user. The password must meet the strong password complexity criteria based
on Oracle Cloud security standards. For more information on the password
complexity rules see Create Users with Autonomous Data Warehouse.
• Confirm password Enter the same password again to confirm your new
password.
7. Choose a license type.
• Bring Your Own License
My organization already owns Oracle database software licenses. Bring my
existing database software licenses to the database cloud service (details).
• License Included
Subscribe to new database software licenses and the database cloud service.
(Optional) Click Show Advanced Options to enter additional options.
If you want to use Tags, enter the TAG KEY and VALUE. Tagging is a metadata
system that allows you to organize and track resources within your tenancy. Tags
are composed of keys and values which can be attached to resources.
8. Click Create Autonomous Database.
On the Oracle Cloud Infrastructure console the State shows Provisioning... until the
new Autonomous Data Warehouse database is available.
Note:
When an Autonomous Data Warehouse instance is started, Autonomous
Data Warehouse CPU billing is initiated based on full-hour cycles of usage.
Note:
When an Autonomous Data Warehouse instance is stopped, the following
details apply:
• Tools are no longer able to connect to a stopped instance.
• Autonomous Data Warehouse in-flight transactions and queries are
stopped.
• Autonomous Data Warehouse CPU billing is halted based on full-hour
cycles of usage.
Note:
Terminating an Autonomous Data Warehouse database permanently deletes
the instance and removes all automatic backups. You cannot recover a
terminated database.
• Open the Oracle Cloud Infrastructure console by clicking the navigation menu icon
next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation list click Autonomous Data
Warehouse.
• On the Autonomous Databases page select an Autonomous Data Warehouse
instance from the links under the Name column.
1. From the Details page click Scale Up/Down.
2. On the Scale Up/Down prompt, select the change in resources for your scale
request.
• Click the up arrow to select a value for CPU Core Count. The default is no
change.
• Click the up arrow to select a value for Storage (TB). The default is no change.
3. Select Auto Scaling to allow the database to use up to three times the CPU and
IO resources of the number of OCPUs it is operating with when auto scaling is
disabled. The additional resources are available until you disable auto scaling by
deselecting Auto Scaling. See Use Auto Scaling for more information.
4. Click Update to change your resources.
When you click Update with a resource change for CPU Core Count, Storage, or to
enable or disable Auto Scaling, the Lifecycle State changes to Scaling in Progress....
After the Lifecycle State changes to Available the changes apply immediately.
Note:
If auto scaling is disabled while more CPUs are in use than the specified
CPU Core Count, then Autonomous Data Warehouse scales the number of
CPU cores in use down to the CPU Core Count number.
To see the average number of OCPUs used during an hour you can use the "Number
of OCPUs allocated" graph on the Overview page on the Autonomous Data
Warehouse service console. See Console Overview for more information.
Enabling auto scaling does not change the concurrency and parallelism settings for
the predefined services. See Managing Concurrency and Priorities on Autonomous
Data Warehouse for more information.
See Add CPU or Storage Resources or Enable Auto Scaling for the steps to enable
auto scaling.
11
Managing Users on Autonomous Data
Warehouse
This section describes administration tasks for managing users on Autonomous Data
Warehouse.
Topics
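A sketch of creating a user with connect privileges, as described below; new_user and the password are placeholders (the password must satisfy the complexity rules described later in this section):
CREATE USER new_user IDENTIFIED BY "StrongPassword1234";
GRANT CREATE SESSION TO new_user;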
This creates new_user with connect privileges. This user can now connect to
Autonomous Data Warehouse and run queries. To grant additional privileges to users,
see Manage User Privileges with Autonomous Data Warehouse.
Note:
The administrator needs to provide the credentials wallet to the user
new_user. See Connecting to Autonomous Data Warehouse.
Autonomous Data Warehouse requires strong passwords; the password you specify
must meet the default password complexity rules.
• The password must be between 12 and 30 characters long and must include at
least one uppercase letter, one lowercase letter, and one numeric character.
Note, the password limit is shown as 60 characters in some help tooltip popups.
Limit passwords to a maximum of 30 characters.
For more information about the ALTER USER command, see SQL Language Reference.
See Provide SQL Developer Web Access to Database Users to add users for SQL
Developer Web.
See Create Oracle Application Express Workspaces in Autonomous Data Warehouse
for information on creating APEX workspaces.
See Create and Update User Accounts for Oracle Machine Learning to add user
accounts for Oracle Machine Learning.
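To remove a user and drop all of that user's objects, a sketch:
DROP USER user_name CASCADE;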
Note:
This removes all objects owned by user_name and deletes the data owned
by user_name.
Topics
2. On the Admin Password dialog box enter the new password and confirm.
3. Click Update.
This operation also unlocks the ADMIN account if it was locked.
Note:
The password for the default administrator account, ADMIN, has the same
password complexity rules mentioned in the section Create Users with
Autonomous Data Warehouse.
2. As the ADMIN user grant DWROLE. For example, the following command grants
DWROLE to the user ADBUSER:
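A sketch of that grant:
GRANT DWROLE TO ADBUSER;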
Note:
Granting UNLIMITED TABLESPACE privilege allows a user to use all the
allocated storage space. You cannot selectively revoke tablespace
access from a user with the UNLIMITED TABLESPACE privilege. You can
grant selective or restricted access only after revoking the privilege.
Topics:
• Create User
• Add Existing Database User Account to Oracle Machine Learning
Create User
An administrator creates a new user account and user credentials for Oracle Machine
Learning in the User Management interface.
Note:
You must have the administrator role to access the Oracle Machine Learning
User Management interface.
5. In the Username field, enter a username for the account. Using the username, the
user will log in to an Oracle Machine Learning instance.
6. Enter a name in the First Name field.
This creates a new database user and grants the required privileges to use Oracle
Machine Learning.
Note:
With a new database user, an administrator needs to issue grant commands
on the database to grant table access to the new user for the tables
associated with the user's Oracle Machine Learning notebooks.
Note:
You must have the administrator role to access the Oracle Machine Learning
User Management interface.
11-7
Chapter 11
Create and Update User Accounts for Oracle Machine Learning
Note:
Initially, the Role field shows the role None for existing database users.
After adding a user the role Developer is assigned to the user.
5. Select a user. To select a user select a name in the User Name column. For
example, select ANALYST1.
Selecting the user shows the Oracle Machine Learning Edit User page.
6. Enter a name in the First Name field. (Optional)
7. Enter the last name of the user in the Last Name field. (Optional)
8. In the Email Address field, enter the email ID of the user.
Making any change on this page adds the existing database user with the required
privileges as an Oracle Machine Learning user.
9. Click Save.
This grants the required privileges to use the Oracle Machine Learning application. In
Oracle Machine Learning this user can then access any tables the user has privileges
to access in the database.
12
Managing and Monitoring Performance of
Autonomous Data Warehouse
This section describes managing and monitoring the performance of Autonomous
Data Warehouse.
Topics
• Monitor the Performance of Autonomous Data Warehouse
• Managing Concurrency and Priorities on Autonomous Data Warehouse
• Manage CPU/IO Shares on Autonomous Data Warehouse
• Manage Runaway SQL Statements on Autonomous Data Warehouse
• Manage Optimizer Statistics on Autonomous Data Warehouse
• Monitor Autonomous Data Warehouse with Performance Hub
• Monitor the Performance of Autonomous Data Warehouse with Oracle
Management Cloud
Topics
Note:
You can bookmark the service console URL and go to that URL directly
without logging in to the Oracle Cloud Infrastructure console. If you log out
and use the bookmark, then to see the service console you need to enter the
ADMIN username, the password, and click Sign in. See Change the
Administrator Password in Autonomous Data Warehouse if you need to
change the password for the ADMIN user.
Console Overview
The Overview page shows real-time and historical information about the utilization of
the service.
– Auto Scaling Enabled: For databases with auto scaling enabled, for each
hour the chart shows the average number of OCPUs used during that hour if
that value is higher than the number of OCPUs provisioned. If the number of
OCPUs used is not higher than the number of OCPUs provisioned, then the
chart shows the number of OCPUs allocated for that hour.
– Stopped Database: If the database was stopped for the full hour the chart
shows 0 OCPUs allocated for that hour.
• SQL statement response time (s)
This chart shows the average response time, in seconds, of SQL statements
historically. This chart shows hourly data. A data point shows the average SQL
statement response time for that hour. For example, a data point at 10:00 shows
the average SQL statement response time, in seconds, for the hour from
9:00-10:00.
The default retention period for performance data is eight days. So, the CPU
utilization, running statements, and average SQL response time charts show data for
the last eight days by default.
Note:
The retention time can be changed by modifying the Automatic Workload
Repository retention setting with the PL/SQL procedure
DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS. Be aware that
increasing the retention time will result in more storage usage for
performance data. See Oracle® Database PL/SQL Packages and Types
Reference.
Console Activity
The Activity page shows real-time and historical information about the utilization of
the service.
To access detailed information about the service performance click the Activity tab in
the service console.
Note:
The default view in this tab is real-time. This view shows performance data
for the last hour.
• CPU utilization (%): This chart shows the CPU utilization of each consumer
group. For databases with auto scaling disabled, the utilization percentage is
reported with respect to the maximum number of CPUs the database is
allowed to use, which is two times the number of OCPUs. For example, if the
database has four (4) OCPUs, the percentage in this graph is based on 8 CPUs.
For databases with auto scaling enabled the utilization percentage is reported with
respect to the maximum number of CPUs the database is allowed to use, which is
six times the number of OCPUs. For example, if the database has four OCPUs
with auto scaling enabled the percentage in this graph is based on 24 CPUs.
See Managing Concurrency and Priorities on Autonomous Data Warehouse for
detailed information on consumer groups.
• Running Statements
This chart shows the average number of running SQL statements in each
consumer group. See Managing Concurrency and Priorities on Autonomous Data
Warehouse for detailed information about consumer groups.
• Queued Statements
This chart shows the average number of queued SQL statements in each
consumer group. See Managing Concurrency and Priorities on Autonomous Data
Warehouse for detailed information on consumer groups.
To see earlier data click Time period. The default retention period for performance
data is eight days. So, this view shows information for the last eight days by default.
Note:
The retention time can be changed by changing the Automatic Workload
Repository retention setting with the PL/SQL procedure
DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS. Be aware that
increasing the retention time results in more storage usage for performance
data. See Oracle® Database PL/SQL Packages and Types Reference.
In the time period view you can use the calendar to look at a specific time in the past
eight days.
You can also use the time slider to change the period for which performance data is
shown.
1. To see the detailed SQL Monitor report for a statement, select a statement and
click Show Details. The Overview tab in the pop-up shows general information
for that statement.
Click the Plan Statistics tab to see the runtime execution plan of the statement.
Click the Parallel tab to see information about the parallel processes, if the statement
uses parallelism.
If you want to download an active SQL Monitor report for a statement, select the
statement in the Monitored SQL page and click Download report. This will save the
active SQL Monitor report to your client. See About Monitoring Database Operations in
Database SQL Tuning Guide.
To cancel a running statement, select that statement in the Monitored SQL list and
click Cancel execution.
Overview
Users are required to select a service when connecting to the database. The service
names are in the format:
• databasename_high
• databasename_medium
• databasename_low
These services map to the LOW, MEDIUM, and HIGH consumer groups. For example, if
you provision an Autonomous Data Warehouse service with the name ADW1, your
service names are:
• adw1_high
• adw1_medium
• adw1_low
For example, a user connecting with the adw1_low service uses the consumer group
LOW.
Concurrency
The concurrency level of these consumer groups changes based on the number of
OCPUs you subscribe to. The HIGH consumer group’s concurrency is fixed and does
not change based on the number of OCPUs. The MEDIUM and LOW consumer
groups can run more concurrent SQL statements if you scale up the compute capacity
of your service.
Note:
The HIGH consumer group is configured for low concurrency, even a single
query in this consumer group can use all resources in your database. If your
workload has concurrent queries Oracle recommends using the MEDIUM
consumer group. If your concurrency requirements are not met with the
MEDIUM consumer group, you can use the LOW consumer group or you
can scale up your compute capacity and continue using the MEDIUM
consumer group.
For example, for an Autonomous Data Warehouse with 16 OCPUs, the HIGH
consumer group will be able to run 3 concurrent SQL statements when the MEDIUM
consumer group is not running any statements. The MEDIUM consumer group will be
able to run 20 concurrent SQL statements when the HIGH consumer group is not
running any statements. The LOW consumer group will be able to run 1600 concurrent
SQL statements. The HIGH consumer group can run at least 1 SQL statement when
the MEDIUM consumer group is also running statements. When these concurrency
levels are reached for a consumer group new SQL statements in that consumer group
will be queued until one or more running statements finish.
For example, the following creates a scheduler job in the HIGH job class, which runs
the job using the HIGH consumer group:
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'update_sales',
job_type => 'STORED_PROCEDURE',
job_action => 'OPS.SALES_PKG.UPDATE_SALES_SUMMARY',
start_date => '28-APR-19 07.00.00 PM Australia/Sydney',
repeat_interval => 'FREQ=DAILY;INTERVAL=2',
end_date => '20-NOV-19 07.00.00 PM Australia/Sydney',
auto_drop => FALSE,
job_class => 'HIGH',
comments => 'My new job');
END;
/
Note:
To use DBMS_SCHEDULER.CREATE_JOB additional grants for specific roles or
privileges might be required. The ADMIN user and users with DWROLE have the
required CREATE SESSION and CREATE JOB privileges. If a user does not have
DWROLE then grants are required for CREATE SESSION and CREATE JOB
privileges.
See Scheduling Jobs with Oracle Scheduler for more information on Oracle Scheduler
and DBMS_SCHEDULER.CREATE_JOB.
Follow these steps to set CPU/IO shares from the service console:
1. From the Autonomous Data Warehouse details page, click Service Console.
2. On the Service Console click Administration.
3. Click Set Resource Management Rules.
4. Select CPU/IO shares to set CPU/IO share values for consumer groups.
5. Set the CPU/IO share values.
6. Click Save changes.
The console displays the current CPU/IO share values for the consumer groups,
which you can modify.
Click Load Default Values to load the default values; then click Save changes to
apply the populated values.
You can also change the default values using the PL/SQL procedure
cs_resource_manager.update_plan_directive:
For example, running the following script with the ADMIN user sets CPU/IO shares to
8, 2, and 1 for consumer groups HIGH, MEDIUM, and LOW respectively. This will
allow the consumer group HIGH to use 4 times more CPU/IO resources compared to
the consumer group MEDIUM and 8 times CPU/IO resources compared to the
consumer group LOW:
BEGIN
cs_resource_manager.update_plan_directive(consumer_group => 'HIGH',
shares => 8);
cs_resource_manager.update_plan_directive(consumer_group => 'MEDIUM',
shares => 2);
cs_resource_manager.update_plan_directive(consumer_group => 'LOW',
shares => 1);
END;
/
1. From the Autonomous Data Warehouse details page, click Service Console.
2. On the Service Console click Administration.
3. Click Set Resource Management Rules.
4. Select the Run-away criteria tab to set rules for consumer groups.
5. Select the Consumer group: HIGH, MEDIUM, or LOW.
6. Set runaway criteria values:
• Query run time (seconds)
• Amount of IO (MB)
7. Click Save changes.
For example, you can set the runtime limit to 120 seconds and the IO limit to
1000 MB for the HIGH consumer group.
When a SQL statement in the specified consumer group runs more than the specified
runtime limit or does more IO than the specified amount, then the SQL statement will
be terminated.
Click Load Default Values to load the default values; then click Save changes to
apply the populated values.
You can also use the procedure cs_resource_manager.update_plan_directive to set
these rules. For example, to set a runtime limit of 120 seconds and an IO limit of
1000MB for the HIGH consumer group run the following command when connected to
the database as the ADMIN user:
BEGIN
cs_resource_manager.update_plan_directive(consumer_group => 'HIGH',
io_megabytes_limit => 1000, elapsed_time_limit => 120);
END;
/
To reset the values and lift the limits, you can set the values to null:
BEGIN
cs_resource_manager.update_plan_directive(consumer_group => 'HIGH',
io_megabytes_limit => null, elapsed_time_limit => null);
END;
/
BEGIN
DBMS_STATS.GATHER_SCHEMA_STATS('SH', options=>'GATHER AUTO');
END;
/
This example gathers statistics for all tables that have stale statistics in the SH schema.
Note:
For more information about direct-path loads see Loading Tables.
For more information on optimizer statistics see Database Concepts.
You can enable optimizer hints in your SQL statements by setting
OPTIMIZER_IGNORE_HINTS to FALSE at the session or system level. For example, the
following command enables hints in your session:
ALTER SESSION
SET OPTIMIZER_IGNORE_HINTS=FALSE;
You can also enable PARALLEL hints in your SQL statements by setting
OPTIMIZER_IGNORE_PARALLEL_HINTS to FALSE at the session or system level
using ALTER SESSION or ALTER SYSTEM. For example, the following command enables
PARALLEL hints in your session:
ALTER SESSION
SET OPTIMIZER_IGNORE_PARALLEL_HINTS=FALSE;
The Time Range selector is on the top of the Performance Hub page. Use the Select
Duration field to set the time duration. By default, Last hour is selected. You can
choose to view Last 8 hours, Last 24 hours, Last week, or specify a custom time
range using the Custom option.
The Time Range field shows active sessions in chart form for the time period selected.
The active sessions chart displays the average number of active sessions broken
down by CPU, User I/O, and Wait. The active sessions chart also shows the Max
CPU usage.
The sliding box on the time range chart is the time slider. Use the time slider to select
the exact period of time for which data is displayed in the Performance Hub tables and
graphs. This is a subsection of the period of time shown in the Time Range field.
You can slide the box to the left or the right to shift the time period under analysis. To
slide the entire box, left-click anywhere inside the box and drag the box to the left or
the right. Widen or narrow the box to increase or decrease the length of time under
analysis. To widen or narrow the box, left-click and hold the handlebar on either side of
the box, then drag to the left or the right to increase or decrease the size of the current
time range box.
Click Refresh to refresh the data in Performance Hub according to the time range
chosen.
Select either ASH Analytics or SQL Monitoring:
• Active Session History (ASH) Analytics:
This shows Active Session History (ASH) analytics charts to explore Active
Session History data. You can drill down into database performance across
multiple dimensions such as Consumer Group, Wait Class, SQL ID, and User
Name. Select an Average Active Sessions dimension and view the top activity for
that dimension for the selected time period.
• SQL Monitoring:
The SQL statements are only monitored if they've been running for at least five
seconds or if they're run in parallel. The table displays monitored SQL statement
executions by dimensions including Last Active Time, CPU Time, and Database
Time. The table displays currently running SQL statements and SQL statements
that completed, failed, or were terminated. The columns in the table provide
information for monitored SQL statements including Status, Duration, and SQL
ID.
The Status column has the following icons:
– A spinning icon indicates that the SQL statement is executing.
– A green check mark icon indicates that the SQL statement completed its
execution during the specified time period.
– A red cross icon indicates that the SQL statement did not complete, either due
to an error, or due to the session being terminated.
– A clock icon indicates that the SQL statement is queued.
To terminate a running or queued SQL statement, click Kill Session.
Select the link in the SQL ID column to go to the corresponding Real-time SQL
Monitoring page. This page provides additional details to help you tune the
selected SQL statement.
See Active Session History (ASH) in Oracle Database Concepts for more information
on Active Session History.
13
Backing Up and Restoring Autonomous
Data Warehouse
This section describes backup and recovery tasks on Autonomous Data Warehouse.
Topics
Manual Backups
You do not have to do any manual backups for your database as Autonomous Data
Warehouse backs up your database automatically. You can do manual backups using
the cloud console; for example, if you want to take a backup before a major change to
make restore and recovery faster. The manual backups are put in your Cloud Object
Storage bucket. When you initiate a point-in-time recovery Autonomous Data
Warehouse decides which backup to use for faster recovery.
Recovery
You can initiate recovery for your Autonomous Data Warehouse database using the
cloud console. Autonomous Data Warehouse automatically restores and recovers your
database to the point-in-time you specify.
Listing Backups
The list of backups available for recovery is shown on the Autonomous Data
Warehouse details page under Backups.
• From the Oracle Cloud Infrastructure left navigation list click Autonomous Data
Warehouse.
• On the Autonomous Databases page select an Autonomous Data Warehouse
instance from the links under the Name column.
1. On the details page, from the Actions drop-down list, select Restore to display
the Restore prompt.
• SELECT BACKUP: Select a backup from the list of backups. Limit the number
of backups you see by specifying a period using the FROM and TO calendar
fields.
3. Click Restore.
Note:
Restoring Autonomous Data Warehouse puts the database in the
unavailable state during the restore operation. You cannot connect to a
database in that state. The only lifecycle management operation
supported in unavailable state is terminate.
5. At this point you can connect to your Autonomous Data Warehouse instance and
check your data to validate that the restore point you specified was correct:
• If the restore point you specified was correct and you want to open your
database in read-write mode click Stop and after the database stops, click
Start to start the database. After stopping and starting, the database opens in
read-write mode.
Note:
After the instance is opened in read-write mode some of your
backups are invalidated.
• After checking your data if you find that the restore date you specified was not
the one you needed you can initiate another restore operation to another point
in time.
Note 1:
When you restore your database and open it in read-write mode, all backups
between the date the restore completes and the date you specified for the
restore operation - the restore time - are invalidated. After the database is
opened in read-write mode after a restore, you cannot initiate further restore
operations to any point in time between the restore time and restore
completion time. You can only initiate new restore operations to a point in
time older than the restore time or more recent than the time when the actual
restore succeeded.
For example, assume you initiate a restore operation on Oct 8, 2018, 2 pm
and specify Oct 1, 2018, 2 pm as the point in time to restore to and the
restore completes on Oct 8, 2018, 2:30 pm. If you open your database in
read-write mode at this point, backups between Oct 8, 2018, 2:30 pm
and Oct 1, 2018, 2 pm will be invalidated. You will not be able to restore to
any date between Oct 1, 2018, 2 pm and Oct 8, 2018, 2:30 pm. If you initiate
a restore to a point in time between these dates the restore will fail with an
error.
Note 2:
The restore operation also restores the DATA_PUMP_DIR directory and user
defined directories to the timestamp you specified for the restore; files that
were created after that timestamp would be lost.
Topics:
• Configure Manual Backups on Autonomous Data Warehouse
• Perform Manual Backups on Autonomous Data Warehouse
2. On your Oracle Cloud Infrastructure Object Storage, create a bucket to hold the
backups. The format of the bucket name is backup_databasename, where
databasename is the database name in lowercase.
a. Open the Oracle Cloud Infrastructure service console from MyServices
through the Services menu or the Services Dashboard tile.
b. Select Object Storage from the menu.
c. Select Object Storage from the submenu.
d. Create a bucket in a compartment by clicking Create Bucket.
Manual backups are only supported with buckets created in the standard storage
tier; make sure you pick Standard as the storage tier when creating your bucket.
For information on the Standard Object Storage Tier, see Overview of Object
Storage.
For example, if you provision an Autonomous Data Warehouse instance named
ADWC1, the bucket name should be backup_adwc1. Following the same example,
the URL of this bucket would be (the bucket name is lowercase):
https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/adwc/backup_adwc1
3. Create a credential for your Oracle Cloud Infrastructure Object Storage account
using DBMS_CLOUD.CREATE_CREDENTIAL. See CREATE_CREDENTIAL Procedure.
Note that you need to do this using the ADMIN user. For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adwc_user@example.com',
password => 'password'
);
END;
/
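The query below reads the DEFAULT_BUCKET database property, which must point at your Object Storage tenancy; a sketch of setting it, with the Swift URL as a placeholder for your region and tenancy:
ALTER DATABASE PROPERTY SET default_bucket='https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/adwc';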
To list the current value for the default bucket, use the following command:
SELECT PROPERTY_VALUE
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME='DEFAULT_BUCKET';
2. In the Create Manual Backup dialog enter a name in the Name field.
3. In the Create Manual Backup dialog click Create.
Note 1:
Each manual backup creates a full backup on your Oracle Cloud
Infrastructure Object Storage bucket and the backup can only be used by the
Autonomous Data Warehouse instance when you initiate a point-in-time
recovery.
Note 2:
The retention period for manual backups is the same as automatic backups
which is 60 days.
Note 3:
While backing up a database, the database is fully functional; however,
during the backup, lifecycle management operations are not allowed. For
example, stopping the database is not allowed during the backup.
14
Cloning or Moving a Database with
Autonomous Data Warehouse
Autonomous Data Warehouse provides cloning where you can choose to clone either
the full database or only the database metadata. You can also move an Autonomous
Data Warehouse database to a different Oracle Cloud Infrastructure compartment.
Topics
• Open the Oracle Cloud Infrastructure console by clicking the navigation menu icon
next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation list click Autonomous Data
Warehouse.
1. Choose your region. See Switching Regions for information on switching regions
and working in multiple regions.
2. Choose your Compartment. See Compartments for information on using and
managing compartments.
3. Select an Autonomous Data Warehouse instance from the list in your
compartment.
4. On the Details page, from the Actions drop-down list, select Create Clone.
Create Clone is enabled only when the instance state is Available or
Available Needs Attention.
5. In the Create Autonomous Database Clone page, enter the following
information:
6. Choose clone type from the choices:
• Full Clone: creates a new database with the source database’s data and
metadata.
• Metadata Clone: creates a new database with the source database’s
metadata without the data.
7. Provide basic information for the Autonomous Database.
Note:
The same database name cannot be used for multiple Oracle
Autonomous Database databases in the same tenancy in the same
region.
If you want to use Tags, enter the TAG KEY and VALUE. Tagging is a metadata
system that allows you to organize and track resources within your tenancy. Tags
are composed of keys and values which can be attached to resources.
On the Oracle Cloud Infrastructure console the State shows Provisioning... until the
new Autonomous Data Warehouse database is available.
For the clone operation, note the following:
• Oracle Machine Learning workspaces, projects, and notebooks of the source
database are not cloned to the new database.
• If there is an ongoing clone operation on a source database, you cannot initiate a
new clone operation on the database being cloned until the ongoing operation
completes.
• You can only clone an Autonomous Data Warehouse instance to the same
tenancy and the same region as the source database.
• If you define a network Access Control List (ACL) on the source database, the
network ACL is cloned to the new database.
See Cloning an Autonomous Database for information on using the API.
• Open the Oracle Cloud Infrastructure console by clicking the navigation menu icon
next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation list click Autonomous Data
Warehouse.
1. Choose your region. See Switching Regions for information on switching regions
and working in multiple regions.
2. Choose your Compartment. See Compartments for information on using and
managing compartments.
3. Select an Autonomous Data Warehouse instance from the list in your
compartment.
4. On the Details page, from the Actions drop-down list, select Move Resource.
5. In the Move Resource to a Different Compartment page, select the new
compartment.
15
Security with Access Control Lists and
Change License Type
This section describes setting an Access Control List (ACL) for Autonomous Data
Warehouse and describes how you can change your license type.
Topics
An IP address specified in a network ACL entry should be the public IP address of the
client, visible on the public internet, that you want to grant access to. For example,
for an Oracle Cloud Infrastructure VM, this is the IP address shown in the Public IP
field on the Oracle Cloud Infrastructure console for that VM.
Access Control List Notes:
• If you have an Oracle Cloud Infrastructure network that is configured to use a
service gateway to access your database, you cannot use the public IP addresses
of the client machines in that network in your ACL definition. If you want to allow
only connections coming through a service gateway, you need to add an ACL
definition with the CIDR source type and the value 240.0.0.0/4, which covers the
IP address range used by the service gateway.
See Access to Oracle Services: Service Gateway for more information.
• When you restore a database the existing ACLs are not overwritten by the restore.
• The network ACLs apply to database connections and Oracle Machine
Learning notebooks. If an ACL is defined and you try to log in to Oracle Machine
Learning from a client whose IP address is not specified on the ACL, the login fails
with the error "login rejected based on access control list set by the administrator".
• Oracle Application Express (APEX), RESTful services, and SQL Developer Web
are not subject to ACLs.
• The Autonomous Data Warehouse Service console is not subject to ACLs.
• If you have a private subnet in your VCN that is configured to access the public
internet through a NAT Gateway, you need to enter the public IP address of the
NAT Gateway in your ACL definition. Clients in the private subnet do not have
public IP addresses. See NAT Gateway for more information.
16
Work Requests and Managing Events and
Notifications
Oracle Cloud Infrastructure Events enable you to create automation based on state
changes for resources. Oracle Cloud Infrastructure work requests allow you to monitor
long-running operations such as cloning or backing up an Autonomous Data
Warehouse database.
Topics
About Work Requests
Click the link under Operation to see more details for a work request.
Part III
Appendixes
Part III contains the Appendixes.
Topics
Note:
To run DBMS_CLOUD subprograms with a user other than ADMIN you need to
grant EXECUTE privileges to that user. For example, run the following
command as ADMIN to grant privileges to adwc_user:
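A command of the following form accomplishes this (a sketch; adwc_user is the
example user from the note):
-- Run as ADMIN: allow adwc_user to run DBMS_CLOUD subprograms
GRANT EXECUTE ON DBMS_CLOUD TO adwc_user;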
Summary of DBMS_CLOUD Subprograms
Topics
• COPY_DATA Procedure
• COPY_DATA Procedure for Parquet or Avro Files
• CREATE_CREDENTIAL Procedure
• CREATE_EXTERNAL_TABLE Procedure
• CREATE_EXTERNAL_TABLE Procedure for Parquet or Avro Files
• DELETE_ALL_OPERATIONS Procedure
• DELETE_FILE Procedure
• DELETE_OBJECT Procedure
• DROP_CREDENTIAL Procedure
• GET_OBJECT Procedure
• LIST_FILES Function
• LIST_OBJECTS Function
• PUT_OBJECT Procedure
• VALIDATE_EXTERNAL_TABLE Procedure
COPY_DATA Procedure
This procedure loads data into existing Autonomous Data Warehouse tables from files
in the Cloud.
Syntax
DBMS_CLOUD.COPY_DATA (
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
file_uri_list IN CLOB,
schema_name IN VARCHAR2 DEFAULT,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter Description
table_name The name of the target table on the Autonomous Data Warehouse
database. The target table needs to be created before you run
COPY_DATA.
credential_name The name of the credential to access the Cloud Object Storage.
file_uri_list Comma-delimited list of source file URIs. You can use wildcards in
the file names in your URIs. The character "*" can be used as the
wildcard for multiple characters, the character "?" can be used as
the wildcard for a single character.
The format of the URIs depends on the Cloud Object Storage service
you are using; for details see DBMS_CLOUD Package File URI
Formats.
schema_name The name of the schema where the target table resides. The default
value is NULL meaning the target table is in the same schema as
the user running the procedure.
field_list Identifies the fields in the source files and their data types. The
default value is NULL meaning the fields and their data types are
determined by the target table definition. This argument's syntax is
the same as the field_list clause in regular Oracle external
tables. For more information about field_list see Oracle®
Database Utilities.
For an example using field_list, see
CREATE_EXTERNAL_TABLE Procedure.
format The options describing the format of the source files. For the list of
the options and how to specify the values see DBMS_CLOUD
Package Format Options.
For Parquet and Avro file format options, see DBMS_CLOUD
Package Format Options for Parquet and Avro.
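Example
The following is a minimal sketch; the credential OBJ_STORE_CRED, the target table
CHANNELS, and the object URI are illustrative:
BEGIN
 DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/channels.txt',
    -- comma-delimited source data
    format          => json_object('delimiter' value ','));
END;
/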
Syntax
DBMS_CLOUD.COPY_DATA (
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
file_uri_list IN CLOB,
schema_name IN VARCHAR2 DEFAULT,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter Description
table_name The name of the target table on the Autonomous Data Warehouse
database. The target table needs to be created before you run
COPY_DATA.
credential_name The name of the credential to access the Cloud Object Storage.
file_uri_list Comma-delimited list of source file URIs. You can use wildcards in
the file names in your URIs. The character "*" can be used as the
wildcard for multiple characters, the character "?" can be used as
the wildcard for a single character.
The format of the URIs depends on the Cloud Object Storage service
you are using; for details see DBMS_CLOUD Package File URI
Formats.
schema_name The name of the schema where the target table resides. The default
value is NULL meaning the target table is in the same schema as
the user running the procedure.
field_list Ignored for Parquet and Avro files.
The fields in the source match the external table columns by name.
Source data types are converted to the external table column data
type.
For Parquet files, see DBMS_CLOUD Package Parquet to Oracle
Data Type Mapping for details on mapping.
For Avro files, see DBMS_CLOUD Package Avro to Oracle Data
Type Mapping for details on mapping.
format The options describing the format of the source files. For Parquet
and Avro files, only two options are supported: see DBMS_CLOUD
Package Format Options for Parquet and Avro.
Usage Notes
As with other data files, Parquet and Avro data loads generate logs that are viewable
in the tables dba_load_operations and user_load_operations. Each load
operation adds a record to dba[user]_load_operations that indicates the table
containing the logs.
The log table provides summary information about the load.
Note:
For Parquet or Avro files, when the format parameter type is set to the value
parquet or avro, the BADFILE_TABLE table is always empty.
• For Parquet files, PRIMARY KEY constraint errors throw an ORA error.
• If data for a column encounters a conversion error, for example, the
target column is not large enough to hold the converted value, the value
for the column is set to NULL. This does not produce a rejected record.
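Example
The following is a minimal sketch for a Parquet load; the table SALES_EXTENDED
(which must already exist with column names matching the Parquet columns), the
credential, and the URI are illustrative:
BEGIN
 DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES_EXTENDED',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/sales_extended.parquet',
    -- the type option selects the Parquet driver
    format          => json_object('type' value 'parquet'));
END;
/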
CREATE_CREDENTIAL Procedure
This procedure stores Cloud Object Storage credentials in the Autonomous Data
Warehouse database.
Use stored credentials for data loading or for querying external data residing in the
Cloud, where you use DBMS_CLOUD procedures with a credential_name parameter.
This procedure is overloaded. Use the Oracle Cloud Infrastructure-related parameters
(user_ocid, tenancy_ocid, private_key, and fingerprint) only when you are using
Oracle Cloud Infrastructure native authentication.
Syntax
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name IN VARCHAR2,
username IN VARCHAR2,
password IN VARCHAR2 DEFAULT NULL);
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name IN VARCHAR2,
user_ocid IN VARCHAR2,
tenancy_ocid IN VARCHAR2,
private_key IN VARCHAR2,
fingerprint IN VARCHAR2);
Parameters
Parameter Description
credential_name The name of the credential to be stored.
username The username and password arguments together specify your
object storage credentials. See below for what to specify for the
username and password for different object stores.
password The username and password arguments together specify your
object storage credentials.
user_ocid Specifies the user's OCID. See Where to Get the Tenancy's OCID
and User's OCID for details on obtaining the User's OCID.
tenancy_ocid Specifies the tenancy's OCID. See Where to Get the Tenancy's
OCID and User's OCID for details on obtaining the Tenancy's
OCID.
private_key Specifies the generated private key. Private keys generated with a
passphrase are not supported. You need to generate the private
key without a passphrase. See How to Generate an API Signing
Key for details on generating a key pair in PEM format.
fingerprint Specifies a fingerprint. After a generated public key is uploaded to
the user's account the fingerprint is displayed in the console. Use
the displayed fingerprint for this argument. See How to Get the
Key's Fingerprint and How to Generate an API Signing Key for more
details.
Usage Notes
• This operation stores the credentials in the database in an encrypted format.
• You can see the credentials in your schema by querying the user_credentials
table.
• The ADMIN user can see all the credentials by querying the dba_credentials table.
• You only need to create credentials once unless your object store credentials
change. Once you store the credentials you can then use the same credential
name for DBMS_CLOUD procedures that require a credential_name parameter.
• This procedure is overloaded. If you provide one of the key based authentication
attributes, user_ocid, tenancy_ocid, private_key, or fingerprint, the call is
assumed to be an Oracle Cloud Infrastructure Native credential.
• Private keys generated with a passphrase are not supported. You need to
generate the private key without a passphrase. See How to Generate an API
Signing Key for more information.
Amazon S3 Credentials
If your source files reside in Amazon S3 the username is your AWS access key ID and
the password is your AWS secret access key. See AWS Identity and Access
Management.
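Example
The following is a minimal sketch for a username/password credential; the credential
name and the values shown are placeholders:
BEGIN
 DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'adwc_user',
    password        => 'password');
END;
/
You can then pass OBJ_STORE_CRED as the credential_name argument to the
DBMS_CLOUD subprograms that access the object store.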
CREATE_EXTERNAL_TABLE Procedure
This procedure creates an external table on files in the Cloud. This allows you to run
queries on external data from Autonomous Data Warehouse.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
file_uri_list IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter Description
table_name The name of the external table.
credential_name The name of the credential to access the Cloud Object Storage.
file_uri_list Comma-delimited list of source file URIs. You can use wildcards
in the file names in your URIs. The character "*" can be used as
the wildcard for multiple characters, the character "?" can be used
as the wildcard for a single character.
The format of the URIs depends on the Cloud Object Storage
service you are using; for details see DBMS_CLOUD Package
File URI Formats.
column_list Comma-delimited list of column names and data types for the
external table.
field_list Identifies the fields in the source files and their data types. The
default value is NULL meaning the fields and their data types are
determined by the column_list parameter. This argument's syntax
is the same as the field_list clause in regular Oracle external
tables. For more information about field_list see Oracle®
Database Utilities.
format The options describing the format of the source files. For the list of
the options and how to specify the values see DBMS_CLOUD
Package Format Options.
For Parquet or Avro format files, see
CREATE_EXTERNAL_TABLE Procedure for Parquet or Avro
Files.
Example
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
table_name =>'WEATHER_REPORT_DOUBLE_DATE',
credential_name =>'OBJ_STORE_CRED',
file_uri_list =>'&base_URL/Charlotte_NC_Weather_History_Double_Dates.csv',
format => json_object('type' value 'csv', 'skipheaders' value '1'),
field_list => 'REPORT_DATE DATE''mm/dd/yy'',
REPORT_DATE_COPY DATE ''yyyy-mm-dd'',
ACTUAL_MEAN_TEMP,
ACTUAL_MIN_TEMP,
ACTUAL_MAX_TEMP,
AVERAGE_MIN_TEMP,
AVERAGE_MAX_TEMP,
AVERAGE_PRECIPITATION',
column_list => 'REPORT_DATE DATE,
REPORT_DATE_COPY DATE,
ACTUAL_MEAN_TEMP NUMBER,
ACTUAL_MIN_TEMP NUMBER,
ACTUAL_MAX_TEMP NUMBER,
AVERAGE_MIN_TEMP NUMBER,
AVERAGE_MAX_TEMP NUMBER,
AVERAGE_PRECIPITATION NUMBER');
END;
/
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
file_uri_list IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter Description
table_name The name of the external table.
credential_name The name of the credential to access the Cloud Object Storage.
file_uri_list Comma-delimited list of source file URIs. You can use wildcards
in the file names in your URIs. The character "*" can be used as
the wildcard for multiple characters, the character "?" can be used
as the wildcard for a single character.
The format of the URIs depends on the Cloud Object Storage
service you are using; for details see DBMS_CLOUD Package
File URI Formats.
column_list (Optional) This field, when specified, overrides the
format->schema parameter, which specifies that the schema
(columns and data types) is derived automatically. See the format
parameter for details.
When the column_list is specified for a Parquet or Avro source,
the column names must match those columns found in the file.
Oracle data types must map appropriately to the Parquet or Avro
data types.
For Parquet files, see DBMS_CLOUD Package Parquet to Oracle
Data Type Mapping for details.
For Avro files, see DBMS_CLOUD Package Avro to Oracle Data
Type Mapping for details.
field_list Ignored for Parquet or Avro files.
The fields in the source match the external table columns by
name. Source data types are converted to the external table
column data type.
For Parquet files, see DBMS_CLOUD Package Parquet to Oracle
Data Type Mapping for details.
For Avro files, see DBMS_CLOUD Package Avro to Oracle Data
Type Mapping for details.
format For Parquet and Avro, there are only two supported parameters.
See DBMS_CLOUD Package Format Options for Parquet and
Avro for details.
Examples Avro
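The following is a minimal sketch for an Avro source; the table name, credential, and
URI are illustrative, and 'schema' value 'first' derives the column list from the file so
no column_list is needed:
BEGIN
 DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'MOVIE_INFO_EXT',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/movie_info.avro',
    -- derive columns and data types from the Avro schema of the file
    format          => json_object('type' value 'avro', 'schema' value 'first'));
END;
/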
Examples Parquet
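The following is a minimal sketch for a Parquet source; the table name, credential,
and URI are illustrative:
BEGIN
 DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'SALES_EXTENDED_EXT',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/sales_extended.parquet',
    -- derive columns and data types from the Parquet metadata of the file
    format          => json_object('type' value 'parquet', 'schema' value 'first'));
END;
/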
DELETE_ALL_OPERATIONS Procedure
This procedure clears either all data load operations logged in the
user_load_operations table in your schema or clears all the data load operations
of the specified type, as indicated with the type parameter.
Syntax
DBMS_CLOUD.DELETE_ALL_OPERATIONS (
type IN VARCHAR DEFAULT NULL);
Parameters
Parameter Description
type Specifies the type of operation to delete. Type values can be found in
the TYPE column in the user_load_operations table.
If no type is specified all rows are deleted.
Usage Note
• DBMS_CLOUD.DELETE_ALL_OPERATIONS does not delete currently running operations
(operations in a "Running" status).
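Example
The following is a minimal sketch; the type value 'COPY' is illustrative, and valid
values come from the TYPE column of the user_load_operations table:
BEGIN
 DBMS_CLOUD.DELETE_ALL_OPERATIONS(type => 'COPY');
END;
/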
DELETE_FILE Procedure
This procedure removes the specified file from the specified directory on Autonomous
Data Warehouse.
Syntax
DBMS_CLOUD.DELETE_FILE (
directory_name IN VARCHAR2,
file_name IN VARCHAR2);
Parameters
Parameter Description
directory_name The name of the directory on the Autonomous Data Warehouse
instance.
file_name The name of the file to be removed.
Note:
To run DBMS_CLOUD.DELETE_FILE with a user other than ADMIN you need to
grant write privileges on the directory that contains the file to that user. For
example, run the following command as ADMIN to grant write privileges
to adwc_user:
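A sketch of the grant, assuming the file resides in the DATA_PUMP_DIR directory,
followed by an example call with an illustrative file name:
-- Run as ADMIN: allow adwc_user to delete files in DATA_PUMP_DIR
GRANT WRITE ON DIRECTORY data_pump_dir TO adwc_user;

BEGIN
 DBMS_CLOUD.DELETE_FILE(
    directory_name => 'DATA_PUMP_DIR',
    file_name      => 'cwallet.sso');
END;
/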
DELETE_OBJECT Procedure
This procedure deletes the specified object on object store.
Syntax
DBMS_CLOUD.DELETE_OBJECT (
credential_name IN VARCHAR2,
location_uri IN VARCHAR2);
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
location_uri Object or file URI for the object to delete. The format of the URI
depends on the Cloud Object Storage service you are using, for
details see DBMS_CLOUD Package File URI Formats.
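Example
The following is a minimal sketch; the credential and the object URI are illustrative:
BEGIN
 DBMS_CLOUD.DELETE_OBJECT(
    credential_name => 'OBJ_STORE_CRED',
    location_uri    => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/channels.txt');
END;
/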
DROP_CREDENTIAL Procedure
This procedure removes an existing credential from Autonomous Data Warehouse.
Syntax
DBMS_CLOUD.DROP_CREDENTIAL (
credential_name IN VARCHAR2);
Parameters
Parameter Description
credential_name The name of the credential to be removed.
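Example
The following is a minimal sketch using the illustrative credential name from the
CREATE_CREDENTIAL example:
BEGIN
 DBMS_CLOUD.DROP_CREDENTIAL(credential_name => 'OBJ_STORE_CRED');
END;
/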
GET_OBJECT Procedure
This procedure reads an object from Cloud Object Storage and copies it to
Autonomous Data Warehouse. The maximum file size allowed in this procedure is 5
gigabytes (GB).
Syntax
DBMS_CLOUD.GET_OBJECT (
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
directory_name IN VARCHAR2,
file_name IN VARCHAR2 DEFAULT NULL,
startoffset IN NUMBER DEFAULT 0,
endoffset IN NUMBER DEFAULT 0,
compression IN VARCHAR2 DEFAULT NULL);
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
object_uri Object or file URI. The format of the URI depends on the Cloud
Object Storage service you are using, for details see
DBMS_CLOUD Package File URI Formats.
directory_name The name of the directory on the Autonomous Data Warehouse
database.
file_name Specifies the name of the file to create. If file name is not specified,
the file name is taken from after the last slash in the object_uri
parameter. For special cases, for example when the file name
contains slashes, use the file_name parameter.
startoffset The offset, in bytes, from where the procedure starts reading.
endoffset The offset, in bytes, until where the procedure stops reading.
compression Specifies the compression used to store the object. When
compression is set to ‘AUTO’ the file is uncompressed (the value
‘AUTO’ implies the object specified with object_uri is
compressed with Gzip).
Note:
To run DBMS_CLOUD.GET_OBJECT with a user other than ADMIN you need to grant WRITE
privileges on the directory to that user. For example, run the following command as ADMIN to
grant write privileges to adwc_user:
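A sketch of the grant, assuming the DATA_PUMP_DIR directory used in the example
below:
-- Run as ADMIN: allow adwc_user to write files into DATA_PUMP_DIR
GRANT WRITE ON DIRECTORY data_pump_dir TO adwc_user;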
Example
BEGIN
 DBMS_CLOUD.GET_OBJECT(
    credential_name => 'OBJ_STORE_CRED',
    object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/cwallet.sso',
    directory_name => 'DATA_PUMP_DIR');
END;
/
LIST_FILES Function
This function lists the files and their sizes in the specified directory on Autonomous
Data Warehouse.
Syntax
DBMS_CLOUD.LIST_FILES (
directory_name IN VARCHAR2)
RETURN TABLE;
Parameters
Parameter Description
directory_name The name of the directory on the Autonomous Data Warehouse
database.
This is a pipelined function that returns a row for each file. For example, use the
following query to use this function:
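A query of the following form lists the files (shown here for the DATA_PUMP_DIR
directory; substitute your own directory name):
SELECT * FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR');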
Note:
To run DBMS_CLOUD.LIST_FILES with a user other than ADMIN you need to
grant read privileges on the directory to that user. For example, run the
following command as ADMIN to grant read privileges to adwc_user:
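A sketch of the grant, assuming the DATA_PUMP_DIR directory:
-- Run as ADMIN: allow adwc_user to list files in DATA_PUMP_DIR
GRANT READ ON DIRECTORY data_pump_dir TO adwc_user;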
LIST_OBJECTS Function
This function lists objects and their sizes in the specified location on object store.
Syntax
DBMS_CLOUD.LIST_OBJECTS (
credential_name IN VARCHAR2,
location_uri IN VARCHAR2)
RETURN TABLE;
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
location_uri Object or file URI. The format of the URI depends on the Cloud
Object Storage service you are using, for details see
DBMS_CLOUD Package File URI Formats.
This is a pipelined function that returns a row for each object. For example, use the
following query to use this function:
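A query of the following form lists the objects; the credential name and the bucket
URI are illustrative:
SELECT * FROM DBMS_CLOUD.LIST_OBJECTS('OBJ_STORE_CRED',
    'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/');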
PUT_OBJECT Procedure
This procedure copies a file from Autonomous Data Warehouse to the Cloud Object
Storage. The maximum file size allowed in this procedure is 5 gigabytes (GB).
Syntax
DBMS_CLOUD.PUT_OBJECT (
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
directory_name IN VARCHAR2,
file_name IN VARCHAR2);
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
object_uri Object or file URI. The format of the URI depends on the Cloud
Object Storage service you are using, for details see
DBMS_CLOUD Package File URI Formats.
directory_name The name of the directory on the Autonomous Data Warehouse
database.
file_name The name of the file in the specified directory.
Note:
To run DBMS_CLOUD.PUT_OBJECT with a user other than ADMIN you need to grant read
privileges on the directory to that user. For example, run the following command as ADMIN to
grant read privileges to adwc_user:
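A sketch of the grant, assuming the source file resides in the DATA_PUMP_DIR
directory, followed by an example call with illustrative names:
-- Run as ADMIN: allow adwc_user to read files from DATA_PUMP_DIR
GRANT READ ON DIRECTORY data_pump_dir TO adwc_user;

BEGIN
 DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'OBJ_STORE_CRED',
    object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adwc/b/adwc_user/o/cwallet.sso',
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => 'cwallet.sso');
END;
/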
VALIDATE_EXTERNAL_TABLE Procedure
This procedure validates the source files for an external table, generates log
information, and stores the rows that do not match the format options specified for the
external table in a badfile table on Autonomous Data Warehouse.
Syntax
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (
table_name IN VARCHAR2,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
Parameters
Parameter Description
table_name The name of the external table.
schema_name The name of the schema where the external table resides. The default
value is NULL meaning the external table is in the same schema as
the user running the procedure.
rowcount Number of rows to be scanned. The default value is NULL meaning all
the rows in the source files are scanned.
stop_on_error Determines if the validate should stop when a row is rejected. The
default value is TRUE meaning the validate stops at the first rejected
row. Setting the value to FALSE specifies that the validate does not
stop at the first rejected row and validates all rows up to the value
specified for the rowcount parameter.
If the external table refers to Parquet or Avro file(s) the validate stops
at the first rejected row. When the external table specifies the format
parameter type set to the value parquet or avro, the parameter
stop_on_error is effectively always TRUE. Thus, the table badfile will
always be empty for an external table referring to Parquet or Avro
file(s).
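Example
The following is a minimal sketch validating the external table from the
CREATE_EXTERNAL_TABLE example, using the default settings (scan all rows, stop
at the first rejected row):
BEGIN
 DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE(
    table_name => 'WEATHER_REPORT_DOUBLE_DATE');
END;
/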
DBMS_CLOUD Package File URI Formats
The native Oracle Cloud Infrastructure Object Storage URI format is:
https://objectstorage.region.oraclecloud.com/n/object_storage_namespace/b/bucket/o/filename
The Swift URI format is:
https://swiftobjectstorage.region.oraclecloud.com/v1/object_storage_namespace/bucket/filename
For example, the native URI for the file channels.txt in the adwc_user bucket in the
idy4veans2xl object storage namespace in the Phoenix data center:
https://objectstorage.us-phoenix-1.oraclecloud.com/n/idy4veans2xl/b/adwc_user/o/channels.txt
For example, the Swift URI for the file channels.txt in the adwc_user bucket in the
idy4veans2xl object storage namespace in the Phoenix data center:
https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/idy4veans2xl/adwc_user/channels.txt
You can find the URI from the Oracle Cloud Infrastructure Object Storage "Object
Details" in the right-hand ellipsis menu in the Object Store:
1. From the Oracle Cloud Infrastructure left navigation list click Object Storage →
Object Storage.
4. On the Object Details page, the URL Path (URI) field shows the URI to access
the object.
Note:
The source files need to be stored in an Object Storage tier bucket.
Autonomous Data Warehouse does not support buckets in the Archive
Storage tier. See Overview of Object Storage for more information.
Note:
Carefully assess the business requirement for and the security ramifications
of pre‑authenticated access. When you create the pre-authenticated request
URL, note the Expiration and the Access Type to make sure they are
appropriate for your use.
A pre-authenticated request URL gives anyone who has the URL access to
the targets identified in the request for as long as the request is active. In
addition to considering the operational needs of pre-authenticated access, it
is equally important to manage its distribution.
For example, a sample pre-authenticated URI for the file channels.txt in the
adwc_user bucket in the adwc object storage name in the Phoenix data center:
https://objectstorage.us-phoenix-1.oraclecloud.com/p/2xN-uDtWJNsiD910UCYGue5/n/adwc/b/adwc_user/o/channels.txt
You can use a pre-authenticated URL in any DBMS_CLOUD procedure that takes a URL
to access files in Oracle Cloud Infrastructure object store, without the need to create a
credential. However, you need to either specify the credential_name parameter as
NULL or not supply a credential_name parameter.
For example:
BEGIN
DBMS_CLOUD.COPY_DATA(
table_name =>'CHANNELS',
file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/p/2xN-uDtWJNsiD910UCYGue5/n/adwc/b/adwc_user/o/channels.txt',
format => json_object('delimiter' value ',') );
END;
/
Note:
A list of mixed URLs is valid. If the URL list contains both pre-authenticated
URLs and URLs that require authentication, DBMS_CLOUD uses the specified
credential_name to access the URLs that require authentication and for the
pre-authenticated URLs the specified credential_name is ignored.
For example, the Azure Blob Storage URI for the file channels.txt in the adwc
container:
https://adwc_user.blob.core.windows.net/adwc/channels.txt
DBMS_CLOUD Package Format Options
The format argument in DBMS_CLOUD procedures is specified as:
format => json_object('format_option' value 'format_value')
Examples:
format => json_object('type' VALUE 'CSV')
For example:
format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value
'true', 'dateformat' value 'YYYY-MM-DD-HH24-MI-SS', 'blankasnull' value 'true')
Note:
For Parquet and Avro format options, see DBMS_CLOUD Package Format
Options for Parquet and Avro.
format => json_object('recorddelimiter' VALUE '''\r\n''')
DBMS_CLOUD Package Format Options for Parquet and Avro
DBMS_CLOUD Package Parquet to Oracle Data Type Mapping
Note:
The external table supports scalar data types only. Complex types, such as
maps, arrays, and structs, are not supported.
DBMS_CLOUD Package Avro to Oracle Data Type Mapping
Note:
For Avro, the external table supports scalar data types only, with the
following exceptions:
• DBMS_CLOUD supports UNION of types [null, SIMPLE_TYPE], where
SIMPLE_TYPE is: INT, LONG, FLOAT, DOUBLE, STRING, BYTES.
• DBMS_CLOUD does not support UNION of multiple types, for example [null,
INT, DOUBLE].
• Files that contain arrays and/or maps of simple types can be read. These
columns are skipped.
• Files that contain arrays and/or maps of complex types, for example an
array of records, cannot be read and report an error. For example:
querying an Avro file with unsupported types shows:
Summary of DBMS_CLOUD_ADMIN Subprograms
Topics
• CREATE_DATABASE_LINK Procedure
• DISABLE_APP_CONT Procedure
• DROP_DATABASE_LINK Procedure
• ENABLE_APP_CONT Procedure
• GRANT_TABLESPACE_QUOTA Procedure
CREATE_DATABASE_LINK Procedure
This procedure creates a database link to a target database in the schema calling the
API. You first need to upload the wallet (cwallet.sso) containing the certificates for
the target database using DBMS_CLOUD.GET_OBJECT and then create the database link
using the wallet.
Syntax
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name IN VARCHAR2,
hostname IN VARCHAR2,
port IN NUMBER,
service_name IN VARCHAR2,
ssl_server_cert_dn IN VARCHAR2,
credential_name IN VARCHAR2,
directory_name IN VARCHAR2 DEFAULT);
Parameters
Parameter Description
db_link_name The name of the database link to create.
hostname The hostname for the target database.
port The port for the target database. To ensure security, ports are
restricted to: 1521-1525.
service_name The service_name for the database to link to. For a target
Autonomous Data Warehouse the service name consists of three
parts:
• database_name: the name of your database.
• priority is one of: _high | _medium | _low
• adwc.oraclecloud.com
You can find the service names in the tnsnames.ora file in the
wallet.zip that you download from Autonomous Data
Warehouse for your connection.
ssl_server_cert_dn The DN value found in the server certificate.
credential_name The name of a stored credential created with
DBMS_CLOUD.CREATE_CREDENTIAL. These are the credentials
used to access the target database.
Supplying this argument is optional. If credential_name is not
supplied or is supplied with a NULL value, the database link uses
the connected user credentials (the link is created and uses the
credentials, username and password, of the user who is
accessing the link).
directory_name The directory for the stored cwallet.sso file. The default
value for this parameter is 'data_pump_dir'.
Usage Notes
To run DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK with a user other than ADMIN you
need to grant EXECUTE and CREATE DATABASE LINK privileges to that user. For
example, run the following command as ADMIN to grant privileges to adwc_user:
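A sketch of the grants:
-- Run as ADMIN: allow adwc_user to create database links
GRANT EXECUTE ON DBMS_CLOUD_ADMIN TO adwc_user;
GRANT CREATE DATABASE LINK TO adwc_user;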
Example
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(credential_name => 'DB_LINK_CRED',
username => 'adwc_user', password => 'password');
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'SALESLINK',
hostname => 'adb.eu-frankfurt-1.oraclecloud.com',
port => '1522',
service_name => 'example_medium.adwc.example.oraclecloud.com',
ssl_server_cert_dn => 'CN=adwc.example.oraclecloud.com,OU=Oracle
BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US',
credential_name => 'DB_LINK_CRED');
END;
/
DISABLE_APP_CONT Procedure
This procedure disables database application continuity for the session associated
with the specified service name in Autonomous Data Warehouse.
Syntax
DBMS_CLOUD_ADMIN.DISABLE_APP_CONT(
service_name IN VARCHAR2);
Parameters
Parameter Description
service_name The service_name for the Autonomous Data Warehouse service. The
service name consists of three parts:
• database_name: the name of your database.
• priority is one of: _high | _medium | _low
• adwc.oraclecloud.com
You can find the service names in the tnsnames.ora file in the
wallet.zip that you download from Autonomous Data Warehouse
for your connection.
Usage Notes
See Overview of Application Continuity for more information on Application Continuity.
Example
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_APP_CONT(
service_name => 'nv123abc1_adb1_high.adwc.oraclecloud.com' );
END;
/
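You can confirm the change with a query of the following form (a sketch, assuming
access to the V$SERVICES view), which produces output such as the listing below:
SELECT name, failover_restore, drain_timeout FROM v$services;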
NAME                                      FAILOVER_RESTORE DRAIN_TIMEOUT
----------------------------------------- ---------------- -------------
nv123abc1_adb1_high.adwc.oraclecloud.com  NONE                         0
DROP_DATABASE_LINK Procedure
This procedure drops a database link.
Syntax
DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK(
db_link_name IN VARCHAR2);
Parameters
Parameter Description
db_link_name The name of the database link to drop.
Example
BEGIN
DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK(
db_link_name => 'SALESLINK' );
END;
/
Usage Notes
After you are done using a database link and you run
DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK, remove any stored wallet files to
ensure the security of your Oracle database. For example:
• Remove the wallet file in Object Store.
• Use DBMS_CLOUD.DELETE_FILE to remove the wallet file from the data_pump_dir
directory or from the user defined directory where the wallet file was uploaded.
ENABLE_APP_CONT Procedure
This procedure enables database application continuity for the session associated with
the specified service name in Autonomous Data Warehouse.
Syntax
DBMS_CLOUD_ADMIN.ENABLE_APP_CONT(
service_name IN VARCHAR2);
Parameters
Parameter Description
service_name The service_name for the Autonomous Data Warehouse service.
The service name consists of three parts:
• database_name: the name of your database.
• priority is one of: _high | _medium | _low
• adwc.oraclecloud.com
You can find the service names in the tnsnames.ora file in the
wallet.zip that you download from Autonomous Data Warehouse
for your connection.
Usage Notes
See Overview of Application Continuity for more information on Application Continuity.
Example
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_APP_CONT(
service_name => 'nvthp2ht_adb1_high.adwc.oraclecloud.com'
);
END;
/
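You can confirm the change with a query of the following form (a sketch, assuming
access to the V$SERVICES view), which produces output such as the listing below:
SELECT name, failover_restore, drain_timeout FROM v$services;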
NAME                                      FAILOVER_RESTORE DRAIN_TIMEOUT
----------------------------------------- ---------------- -------------
nvthp2ht_adb1_high.adwc.oraclecloud.com   LEVEL1                     300
GRANT_TABLESPACE_QUOTA Procedure
This procedure grants a storage quota to a specified database user. When a
tablespace quota is granted to a user, Autonomous Data Warehouse limits the storage
space used by that user to the specified quota. The value UNLIMITED specifies
unlimited tablespace privilege.
Syntax
DBMS_CLOUD_ADMIN.GRANT_TABLESPACE_QUOTA(
username IN VARCHAR2,
tablespace_quota IN VARCHAR2);
Parameters
Parameter Description
username The database username to grant the tablespace quota to.
tablespace_quota The quota to assign to the specified user in bytes. For kilobytes,
megabytes, gigabytes, and terabytes you can specify K, M, G, or T
after the numeric value respectively.
Alternatively you can specify UNLIMITED, which is equivalent to
GRANT UNLIMITED TABLESPACE with SQL.
Usage Notes
See System Privileges (Organized by the Database Object Operated Upon -
TABLESPACE) for more information.
See Manage User Privileges with Autonomous Data Warehouse for information on
privileges granted with the role DWROLE.
Examples
BEGIN
DBMS_CLOUD_ADMIN.GRANT_TABLESPACE_QUOTA(
username => 'ADBUSER',
tablespace_quota => '10G'
);
END;
/
BEGIN
DBMS_CLOUD_ADMIN.GRANT_TABLESPACE_QUOTA(
username => 'ADBUSER',
tablespace_quota => 'UNLIMITED'
);
END;
/
B
Autonomous Data Warehouse for
Experienced Oracle Database Users
This appendix provides information on using Autonomous Data Warehouse for
experienced Oracle Database users.
Topics
Restrictions for Database Initialization Parameters
See VLDB and Partitioning Guide for more information on parallel DML operations.
APPROX_FOR_AGGREGATION
APPROX_FOR_COUNT_DISTINCT
APPROX_FOR_PERCENTILE
AWR_PDB_AUTOFLUSH_ENABLED
FIXED_DATE
MAX_IDLE_TIME
NLS_CALENDAR
NLS_COMP
NLS_CURRENCY
NLS_DATE_FORMAT
NLS_DATE_LANGUAGE
NLS_DUAL_CURRENCY
NLS_ISO_CURRENCY
NLS_LANGUAGE
NLS_LENGTH_SEMANTICS
NLS_NCHAR_CONV_EXCP
NLS_NUMERIC_CHARACTERS
NLS_SORT
NLS_TERRITORY
NLS_TIMESTAMP_FORMAT
NLS_TIMESTAMP_TZ_FORMAT
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES (Allowed only with ALTER SESSION)
OPTIMIZER_IGNORE_HINTS
OPTIMIZER_IGNORE_PARALLEL_HINTS
PLSCOPE_SETTINGS
PLSQL_CCFLAGS
PLSQL_DEBUG
PLSQL_OPTIMIZE_LEVEL
PLSQL_WARNINGS
STATISTICS_LEVEL (Allowed only with ALTER SESSION)
TIME_ZONE (Allowed only with ALTER SESSION)
For more information on initialization parameters see Oracle Database Reference. For
more information on TIME_ZONE, see Database SQL Language Reference.
Restrictions for SQL Commands
Note:
If you try to use a restricted SQL command the system reports:
ORA-01031: insufficient privileges
This error indicates that you are not allowed to run the SQL command in
Autonomous Data Warehouse.
The following SQL statements are not available in Autonomous Data Warehouse:
• ADMINISTER KEY MANAGEMENT
• ALTER PROFILE
• ALTER TABLESPACE
• CREATE DATABASE LINK
Note:
Use DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK to create database links
in Autonomous Data Warehouse. See Create Database Links for more
information.
• CREATE PROFILE
• CREATE TABLESPACE
• DROP TABLESPACE
The following CREATE TABLE clauses are ignored:
Clause Comment
physical_properties Ignored
logging_clause Ignored
inmemory_table_clause Ignored
ilm_clause Ignored
organization index Ignored
organization external Ignored
cluster Ignored
LOB_storage_clause Ignored
Note:
For more information on CREATE TABLE, see Database SQL Language
Reference.
The following ALTER TABLE clauses are ignored:
Clause Comment
physical_attributes_clause Ignored
logging_clause Ignored
inmemory_table_clause Ignored
ilm_clause Ignored
allocate_extent_clause Ignored
deallocate_unused_clause Ignored
shrink_clause Ignored
alter_iot_clauses Ignored
modify_LOB_storage_clause Ignored
Note:
For more information on ALTER TABLE, see Database SQL Language
Reference.
Topics:
• Restrictions for Oracle XML DB
• Restrictions for Oracle Text
• Restrictions for Oracle Spatial and Graph
• Restrictions for Oracle Application Express
• List of Restricted and Removed Oracle Features
Note:
If you migrate tables containing XMLType columns to Autonomous Data
Warehouse using Oracle Data Pump, you need to convert to Non-Schema
Binary XML prior to using Oracle Data Pump Export (expdp).
For details on Oracle XML DB, see Oracle XML DB Developer's Guide.
This is supported if you grant the privilege to create a trigger to the user (GRANT CREATE TRIGGER).
You must also disable parallel DML at the session level (ALTER SESSION DISABLE PARALLEL
DML).
For details on Oracle Text, see Oracle Text Application Developer's Guide.
Note:
Autonomous Database does not include Oracle Spatial and Graph 3-
Dimensional geometry types and related operators, functions, or utilities.
Table B-3 Oracle Spatial and Graph Supported Features with Autonomous
Database
For details on Oracle Spatial and Graph, see Oracle Spatial and Graph Developer's
Guide.
C
Migrating Amazon Redshift to Autonomous
Data Warehouse
The SQL Developer Amazon Redshift Migration Assistant, available with SQL
Developer 18.3 and later versions, provides a framework for easy migration of Amazon
Redshift environments on a per-schema basis.
This section describes the steps and the workflow for both an online migration of
Amazon Redshift and for the generation of scripts for a scheduled, manual migration
that you run at a later time.
Topics
• Autonomous Data Warehouse Redshift Migration Overview
• Connect to Amazon Redshift
• Connect to Autonomous Data Warehouse
• Start the Cloud Migration Wizard
• Review and Finish the Amazon Redshift Migration
• Use Generated Amazon Redshift Migration Scripts
• Perform Post Migration Tasks
Connect to Amazon Redshift
Download Amazon Redshift JDBC Driver and Add the Third Party Driver
1. Download an Amazon Redshift JDBC driver to access Amazon Redshift. Consult
the Amazon Redshift documentation for the location of the most recent JDBC
driver. For more information, see Configure a JDBC Connection.
Note:
Use the JDBC 4.2–compatible Amazon Redshift JDBC driver.
2. Store the Amazon Redshift JDBC driver in a local directory where SQL Developer
can access the Amazon Redshift JDBC driver.
3. Add the Amazon Redshift JDBC driver as a third-party driver in SQL Developer
before making a connection. Within SQL Developer, go to Tools > Preferences >
Database > Third Party JDBC Drivers (on a Mac, this is Oracle SQL Developer >
Preferences > Database > Third Party JDBC Drivers).
4. Click Add Entry and select the path to the Amazon Redshift JDBC driver that you
downloaded.
5. Click OK to add the Amazon Redshift JDBC driver.
• For more details on configuring a JDBC connection and obtaining the Amazon
Redshift JDBC URL, see AWS: Configure a JDBC Connection.
• For more details on configuring security options for the connection (in case of an
"Amazon [500150] connection error"), see AWS: Configure Security options for
Connection (in case of "Amazon [500150] connection error").
• If you deployed your Amazon Redshift environment within a Virtual Private Cloud
(VPC) you have to ensure that your cluster is accessible from the Internet. See
http://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-authorize-cluster-
access.html for details of how to enable public Internet access.
• If your Amazon Redshift client connection to the database appears to hang or
times out when running long queries, see http://docs.aws.amazon.com/redshift/
latest/mgmt/connecting-firewall-guidance.html for possible solutions to address
this issue.
Test the connection before you save it.
Connect to Autonomous Data Warehouse
For more information, see Connecting with Oracle SQL Developer (18.2 or later).
Test the connection before you save it.
Start the Cloud Migration Wizard
The Cloud Migration Wizard guides you through an easy set of steps to:
• Identify the schemas in your Amazon Redshift database that you want to migrate.
• Identify the target Autonomous Data Warehouse.
• Define whether you want to migrate the metadata (DDL), the data, or both.
• Choose to migrate your system online, to generate scripts, or both.
https://s3-us-west-2.amazonaws.com/adwc/folder_name
S3 Bucket Example 1
If you provide the following S3 Bucket URI:
https://s3-us-west-2.amazonaws.com/my_bucket
The wizard verifies the entire path, including my_bucket. An attempt is made to write a
test file; if the bucket is not accessible, you are prompted. If my_bucket does not exist,
an error is reported:
Validation Failed
Then the code generation creates the following path for the DBMS_CLOUD.COPY_DATA
function:
S3 Bucket Example 2
If you provide the following S3 Bucket URI :
https://s3-us-west-2.amazonaws.com/my_bucket/another_folder
The wizard verifies the entire path, including my_bucket. An attempt is made to write a
test file; if the bucket is not accessible, you are prompted. If my_bucket does not exist,
an error is reported:
Validation Failed
In this case, another_folder does not have to exist; the migration creates
another_folder inside my_bucket.
Then the code generation creates the following path for the DBMS_CLOUD.COPY_DATA
function:
Note:
Use the ADMIN user or a user with admin role.
The Amazon Redshift Migration Assistant allows you to do an online migration right
away, to generate all scripts necessary for a migration, or both. If you choose to store
the scripts in a local directory, you have to specify the local directory (the directory
must be writable by the user).
Note:
– If Include Data from Step 1 and Migrate Now are both unselected,
you are opting for just generation of all required SQL Scripts for
manual migration.
– If Include Data from Step 1 is unchecked and Migrate Now is
selected, then all selected schemas and their tables will be deployed
in Autonomous Data Warehouse but data will not be loaded into
tables.
– If Include Data from Step 1 and Migrate Now are both selected,
then all selected schemas and their tables will be deployed in
Autonomous Data Warehouse and data will be loaded into tables.
• Directory: Specify the directory to store the generated scripts necessary for the
migration; this saves the scripts in a local directory.
Output Directory: Enter the path or click Select Directory to select the directory or
folder for the migration.
Maximum Number of Threads: Enter the number of parallel threads to enable when
loading data to tables in Autonomous Data Warehouse.
Use Scheduler: Select this option to enable the scheduler for migration. You can
schedule jobs for data load migration operations from the AWS S3 bucket to
Autonomous Data Warehouse. You have the option to run the scheduled jobs
immediately or at a future date and time. To monitor the data load scheduled jobs, use
the Scheduler node in the Connections navigator.
Migration Execution Choice:
• Immediate runs the scheduler as soon as the Redshift migration is triggered.
• Once runs the scheduler on a future date. You specify the Start Date and Time
Zone. By default, the Start Date displays the current date and time of the local
system. To change the start date, use the calendar icon to double-click and select
the date or use the spinner to highlight the date and then click the field to set it.
Redshift Unload Options: Allow Overwrite: If this option is enabled, the unload
process overwrites existing files, including the manifest file (which lists the data files
created by the unload process). By default, unload fails if there are files that would
be overwritten.
ADWC format options: Reject Limit: Enter the number of rows to reject when
loading data to tables in Autonomous Data Warehouse. The migration operation will
error out after the specified number of rows are rejected. The default is 0.
Review and Finish the Amazon Redshift Migration
If you have chosen an immediate migration, then the dialog of the migration wizard
stays open until the migration is finished. If you select generate scripts, the migration
process generates the necessary scripts in the specified local directory, and does not
run the scripts.
To perform the migration, click Finish.
If the selected schema name in AWS Redshift already exists in Autonomous Data
Warehouse, the migration process excludes deploying these selected schemas and
displays a dialog:
Use Generated Amazon Redshift Migration Scripts
Load Your Amazon Redshift Data into Your Oracle Autonomous Data
Warehouse
The script adwc_dataload.sql contains all the load commands necessary to load your
unloaded Amazon Redshift data straight from S3 into your Autonomous Data
Warehouse.
______________________________________________________________
OPERATION ID : 8566
LOGFILE TABLE : COPY$8566_LOG
BADFILE TABLE : COPY$8566_BAD
SOURCE SCHEMA : sample
D
Sample Star Schema Benchmark (SSB)
Queries and Analytic Views
The SSB schema contains the tables: lineorder, customer, supplier, part, and dwdate.
The following is a list of sample queries and analytic views you can use against the
SSB schema. Note that you need to prefix the table names with the schema name
SSB in your queries.
Note:
Both SH and SSB are provided as schema-only users, so you cannot unlock
or drop those users or set a password. The storage of the sample data
sets does not count toward your database storage.
Topics
Star Schema Benchmark Analytic Views
SELECT
dwdate_hier.member_name as year,
part_hier.member_name as part,
customer_hier.c_region,
customer_hier.member_name as customer,
lo_quantity,
lo_revenue
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier,
customer_hier)
WHERE
dwdate_hier.d_year = '1998'
AND dwdate_hier.level_name = 'MONTH'
AND part_hier.level_name = 'MANUFACTURER'
AND customer_hier.c_region = 'AMERICA'
AND customer_hier.level_name = 'NATION'
ORDER BY
dwdate_hier.hier_order,
part_hier.hier_order,
customer_hier.hier_order;
SELECT
dwdate_hier.member_name as time,
part_hier.member_name as part,
customer_hier.member_name as customer,
supplier_hier.member_name as supplier,
lo_quantity,
lo_supplycost
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier,
customer_hier,
supplier_hier)
WHERE
dwdate_hier.d_year = '1998'
AND dwdate_hier.level_name = 'MONTH'
AND part_hier.level_name = 'MANUFACTURER'
AND customer_hier.c_region = 'AMERICA'
AND customer_hier.c_nation = 'CANADA'
AND customer_hier.level_name = 'CITY'
AND supplier_hier.s_region = 'ASIA'
AND supplier_hier.level_name = 'REGION'
ORDER BY
dwdate_hier.hier_order,
part_hier.hier_order,
customer_hier.hier_order,
supplier_hier.hier_order;
SELECT
dwdate_hier.member_name as year,
part_hier.member_name as part,
customer_hier.member_name as customer,
supplier_hier.member_name as supplier,
lo_quantity,
lo_revenue,
lo_supplycost
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier,
customer_hier,
supplier_hier)
WHERE
dwdate_hier.d_yearmonth = 'Apr1998'
AND dwdate_hier.level_name = 'DAY'
AND part_hier.level_name = 'MANUFACTURER'
AND customer_hier.c_region = 'AMERICA'
AND customer_hier.c_nation = 'CANADA'
AND customer_hier.level_name = 'CITY'
AND supplier_hier.level_name = 'REGION'
ORDER BY
dwdate_hier.hier_order,
part_hier.hier_order,
customer_hier.hier_order,
supplier_hier.hier_order;
SELECT
dwdate_hier.member_name as year,
part_hier.member_name as part,
supplier_hier.member_name as supplier,
lo_quantity,
lo_extendedprice,
lo_ordtotalprice,
lo_revenue,
lo_supplycost
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier,
supplier_hier)
WHERE
dwdate_hier.level_name = 'YEAR'
AND part_hier.level_name = 'MANUFACTURER'
AND supplier_hier.level_name = 'SUPPLIER'
AND supplier_hier.s_suppkey = '23997';
SELECT
dwdate_hier.member_name as time,
part_hier.p_container,
part_hier.member_name as part,
lo_quantity,
lo_extendedprice,
lo_ordtotalprice,
lo_revenue,
lo_supplycost
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier)
WHERE
dwdate_hier.member_name = 'June 10, 1998 '
AND dwdate_hier.level_name = 'DAY'
AND part_hier.level_name = 'PART'
AND part_hier.p_size = 32;
SELECT
dwdate_hier.member_name as time,
part_hier.member_name as part,
part_hier.p_name,
part_hier.p_color,
lo_quantity,
lo_revenue,
lo_supplycost,
lo_revenue - lo_supplycost as profit
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier)
WHERE
dwdate_hier.d_yearmonth = 'Aug1996'
AND dwdate_hier.d_dayofweek = 'Friday '
AND dwdate_hier.level_name = 'DAY'
AND part_hier.level_name = 'PART'
AND part_hier.p_color in ('ivory','coral')
ORDER BY
dwdate_hier.hier_order,
part_hier.hier_order;