SAP HANA EIM Installation and Configuration Guide
SAP HANA Smart Data Integration and SAP HANA Smart Data Quality 2.0 SP03
Document Version: 1.0 – 2020-05-30
1 Installation and Configuration Guide for SAP HANA Smart Data Integration and SAP HANA Smart Data Quality . . . 10
1.1 Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2 Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3 Components to Install, Deploy, and Configure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4 Deployment Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Deployment in High Availability Scenarios. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
Deployment with SAP HANA Cloud. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Deployment with the SAP HANA Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.5 Administration Tools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
7 Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
7.1 Authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
7.2 Configuring SSL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
Configure SSL for SAP HANA (CA). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
Configure SSL for SAP HANA (Self-Signed). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
Configure SSL for SAP HANA On-Premise [Command Line Batch]. . . . . . . . . . . . . . . . . . . . . . 509
Connect to SAP HANA On-Premise with SSL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .512
Configure the Adapter Truststore and Keystore Using the Data Provisioning Agent Configuration Tool . . . 513
Change the Agent Configuration Tool SSL Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
Reconfigure an Existing Agent for SSL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
Troubleshoot the SSL Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
7.3 Update JCE Policy Files for Stronger Encryption. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
7.4 Authorizations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Activating and Executing Task Flowgraphs and Replication Tasks. . . . . . . . . . . . . . . . . . . . . . . 520
7.5 Communication Channel Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
8 SQL and System Views Reference for Smart Data Integration and Smart Data Quality. . . . . . 523
8.1 SQL Statements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
ALTER ADAPTER Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .525
ALTER AGENT Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .527
ALTER REMOTE SOURCE Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . 528
ALTER REMOTE SUBSCRIPTION Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . 532
CANCEL TASK Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
CREATE ADAPTER Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
CREATE AGENT Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
CREATE AGENT GROUP Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . 539
CREATE AUDIT POLICY Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . 540
CREATE REMOTE SOURCE Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . 542
CREATE REMOTE SUBSCRIPTION Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . 543
CREATE VIRTUAL PROCEDURE Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . 548
DROP ADAPTER Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
DROP AGENT Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
DROP AGENT GROUP Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . 552
DROP REMOTE SUBSCRIPTION Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . 553
GRANT Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .554
PROCESS REMOTE SUBSCRIPTION EXCEPTION Statement [Smart Data Integration]. . . . . . . . 556
SESSION_CONTEXT Function [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .557
START TASK Statement [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
8.2 System Views. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
ADAPTER_CAPABILITIES System View [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . 564
ADAPTER_LOCATIONS System View [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . .565
ADAPTERS System View [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
AGENT_CONFIGURATION System View [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . 566
AGENT_GROUPS System View [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
AGENTS System View [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
M_AGENTS System View [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
M_REMOTE_SOURCES System View [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . 568
M_REMOTE_SUBSCRIPTION_COMPONENTS System View [Smart Data Integration]. . . . . . . . 568
M_REMOTE_SUBSCRIPTION_STATISTICS System View [Smart Data Integration]. . . . . . . . . . . 569
M_REMOTE_SUBSCRIPTIONS System View [Smart Data Integration]. . . . . . . . . . . . . . . . . . . 570
M_SESSION_CONTEXT System View [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . . . . . .571
REMOTE_SOURCE_OBJECT_COLUMNS System View [Smart Data Integration]. . . . . . . . . . . . 572
REMOTE_SOURCE_OBJECT_DESCRIPTIONS System View [Smart Data Integration]. . . . . . . . 572
REMOTE_SOURCE_OBJECTS System View [Smart Data Integration]. . . . . . . . . . . . . . . . . . . . 573
This guide describes the main tasks and concepts necessary for the initial installation and configuration of SAP
HANA smart data integration and SAP HANA smart data quality.
For information about the capabilities available for your license and installation scenario, refer to the Feature
Scope Description (FSD) for your specific SAP HANA version on the SAP HANA Platform page.
For information about the ongoing administration and operation of SAP HANA smart data integration and SAP
HANA smart data quality, refer to the Administration Guide for SAP HANA Smart Data Integration and SAP
HANA Smart Data Quality.
For information about administration of the overall SAP HANA system, refer to the SAP HANA Administration
Guide.
1.1 Overview
SAP HANA smart data integration and SAP HANA smart data quality provide tools to access source data and
provision, replicate, and transform that data in SAP HANA on-premise or in the cloud.
SAP HANA smart data integration and SAP HANA smart data quality let you enhance, cleanse, and transform
data to make it more accurate and useful. You can efficiently connect to any source to provision and cleanse
data for loading into SAP HANA.
Capabilities include:
● A simplified landscape; that is, one environment in which to provision and consume data
● Access to more data formats including an open framework for new data sources
● In-memory performance, which means increased speed and decreased latency
SAP HANA smart data integration: Real-time, high-speed data provisioning, bulk data movement, and federation. Provides built-in adapters plus an SDK so you can build your own adapters. Includes:
● Replication Editor in SAP Web IDE and SAP HANA Web-based Development Workbench, which lets you set up batch or real-time data replication scenarios in an easy-to-use web application
● Transformations presented as nodes in SAP Web IDE and SAP HANA Web-based Development Workbench, which let you set up batch or real-time data transformation scenarios
● Data Provisioning Agent, a lightweight component that hosts data provisioning adapters, enabling data federation, replication, and transformation scenarios for on-premise or in-cloud deployments
● Data Provisioning adapters for connectivity to remote sources
● An Adapter SDK to create custom adapters
● Monitors for Data Provisioning Agents, remote subscriptions, and data loads

SAP HANA smart data quality: Real-time, high-performance data cleansing, address cleansing, and geospatial data enrichment. Provides an intuitive interface to define data transformation flowgraphs in SAP Web IDE and SAP HANA Web-based Development Workbench.
This guide introduces you to features delivered in SAP HANA smart data integration and SAP HANA smart data
quality 2.0 Service Packs only. For information about features delivered in subsequent patches, as well as other
information, such as fixed and known issues, refer to the SAP Note for each of the relevant patches. The
Central SAP Note for SAP HANA smart data integration and SAP HANA smart data quality is the best place to
access this information, with links to the SAP Notes for every Service Pack and Patch.
Related Information
Central SAP Note for SAP HANA smart data integration and SAP HANA smart data quality
1.2 Architecture

These diagrams represent common deployment architectures for using smart data integration and smart data
quality with SAP HANA.
In all deployments, the basic components are the same. However, the connections between the components
may differ depending on whether SAP HANA is deployed on-premise, in the cloud, or behind a firewall.
Outbound Connections

Data Provisioning Agent: When SAP HANA is deployed on-premise, the Data Provisioning Server within SAP HANA connects to the agent using the TCP/IP protocol. Port: 5050.

Inbound Connections

Data Provisioning Agent: When SAP HANA is deployed in the cloud or behind a firewall, the Data Provisioning Agent connects to the SAP HANA XS engine using the HTTP/S protocol. Ports: 80xx (HTTP) and 43xx (HTTPS).

Note
When the agent connects to SAP HANA in the cloud over HTTP/S, data is automatically gzip compressed to minimize the required network bandwidth.
1.3 Components to Install, Deploy, and Configure

SAP HANA smart data integration and SAP HANA smart data quality include a number of components that you
must install, deploy, and configure.
Within this guide, the steps to install and deploy the components appear in the section Configure [SDI/SDQ].
Data Provisioning Server: The Data Provisioning Server is a native SAP HANA process. It is built as an index server variant, runs in the SAP HANA cluster, and is managed and monitored just like other SAP HANA services. It provides out-of-the-box native connectivity for many sources and connectivity to the Data Provisioning Agent. The Data Provisioning Server is installed with, but must be enabled in, the SAP HANA server.

Data Provisioning Agent: The Data Provisioning Agent is a container running outside the SAP HANA environment, and it is managed by the Data Provisioning Server. It provides connectivity for all those sources where the driver cannot run inside the Data Provisioning Server. Through the Data Provisioning Agent, the preinstalled Data Provisioning Adapters communicate with the Data Provisioning Server for connectivity, metadata browsing, and data access. The Data Provisioning Agent also hosts custom adapters created using the Adapter SDK. The Data Provisioning Agent is installed separately from the SAP HANA server or client.

HANA_IM_DP delivery unit: The HANA_IM_DP delivery unit bundles monitoring and administration capabilities and the Data Provisioning Proxy for connecting to SAP HANA in the cloud. The delivery unit includes the Data Provisioning administration application, the Data Provisioning Proxy, and the Data Provisioning monitor.

Data Provisioning administration application: The Data Provisioning administration application is an XS application that manages the administrative functions of the Data Provisioning Agent with SAP HANA in the cloud.

Data Provisioning Proxy: The Data Provisioning Proxy is an XS application that acts as a proxy to provide communication between the Data Provisioning Agent and the Data Provisioning Server when SAP HANA runs in the cloud. When SAP HANA is in the cloud, the agent uses HTTP(S) to connect to the Data Provisioning Proxy in the XS engine, which eliminates the need to open more ports in corporate IT firewalls.

Data Provisioning monitor: The Data Provisioning monitor is a browser-based interface that lets you monitor agents, tasks, and remote subscriptions created in the SAP HANA system. You can view the monitors by entering the URL of each monitor into a web browser or by accessing the smart data integration links in the SAP HANA cockpit, a web-based launchpad that is installed with the SAP HANA server. You enable Data Provisioning monitoring functionality for agents, data loads, and remote subscriptions by creating the statistics tables and deploying the HANA_IM_DP delivery unit.

SAP HANA Web-based Development Workbench Replication Editor: The SAP HANA Web-based Development Workbench, which includes the Replication Editor to set up replication tasks, is installed with the SAP HANA server.

SAP HANA Web-based Development Workbench Flowgraph Editor: The SAP HANA Web-based Development Workbench Flowgraph Editor provides an interface to create data provisioning and data quality transformation flowgraphs.

Application function modeler: The application function modeler provides an interface to create data provisioning and data quality transformation flowgraphs.
1.4 Deployment Options

Common deployment options for SAP HANA systems, Data Provisioning Agents, and source systems are
described.

SAP HANA on premise vs. SAP HANA in the SAP Cloud Platform Neo Environment

Using SAP HANA on premise or in the cloud is a choice of deployment. Here are some things to keep in mind
when deciding which deployment to use.

If your deployment includes SAP HANA in the cloud and a firewall between SAP HANA and the Data Provisioning Agent:
● The Data Provisioning Proxy must be deployed. To deploy the proxy, download and deploy the HANA_IM_DP delivery unit.
● The Data Provisioning Agent must be configured to communicate with SAP HANA using HTTP. Configure the agent by using the Data Provisioning Agent Configuration tool.

If your deployment includes the SAP Cloud Platform, SAP HANA service in the Cloud Foundry environment,
configure the Data Provisioning Agent to connect via JDBC WebSockets.

If your deployment includes SAP HANA Cloud, configure the Data Provisioning Agent to connect via JDBC.
For more information about connecting to SAP HANA Cloud, see Deployment with SAP HANA Cloud [page 18].

In all deployments:
● A single Data Provisioning Agent cannot be registered with multiple SAP HANA instances.
● You may have multiple instances of the Data Provisioning Agent installed on multiple machines. For example, a developer may want to have a Data Provisioning Agent installed on their computer to work on a custom adapter.
Deployment in High Availability Scenarios

In addition to installing SAP HANA in a multiple-host configuration, you can use agent grouping to provide
automatic failover and load balancing for SAP HANA smart data integration and SAP HANA smart data quality
functionality in your landscape.
In a multiple-host SAP HANA system, the Data Provisioning Server runs only in the active worker host. If the
active worker host fails, the Data Provisioning Server is automatically started in the standby host when it takes
over, and any active replication tasks are resumed.
For more information about installing SAP HANA in a multiple-host configuration, see the SAP HANA Server
Installation and Update Guide.
Agent grouping provides automatic failover for connectivity to data sources accessed through Data
Provisioning Adapters.
When an agent that is part of a group is unreachable for longer than the configured heartbeat time limit,
the Data Provisioning Server chooses a new active agent within the group and resumes replication for any
remote subscriptions active on the original agent.
Initial and batch load requests to a remote source configured on the agent group are routed to the first
available agent in the group.
Restriction
Failover is not supported for initial and batch load requests. Restart the initial load following a failure due to
agent unavailability.

Agent groups also support load balancing for initial and batch loads. For example, with multiple agents in the
group, you can choose to have the agent for the initial load selected randomly, or selected from the list of
agents in a round-robin fashion.

Restriction
Load balancing is not supported for change data capture (CDC) operations.
For complete information about configuring agent groups, see the Administration Guide for SAP HANA Smart
Data Integration and SAP HANA Smart Data Quality.
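For reference, agent groups are managed with SQL. The following is a minimal sketch of creating a group and registering two agents into it; the group, agent, and host names are hypothetical:

-- create the agent group, then register agents as members of the group
CREATE AGENT GROUP "FAILOVER_GROUP";
CREATE AGENT "AGENT_1" PROTOCOL 'TCP' HOST 'host1.example.com' PORT 5050 AGENT GROUP "FAILOVER_GROUP";
CREATE AGENT "AGENT_2" PROTOCOL 'TCP' HOST 'host2.example.com' PORT 5050 AGENT GROUP "FAILOVER_GROUP";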
Deployment with SAP HANA Cloud

Understand the landscape and deployment and configuration process when using SAP HANA smart data
integration with SAP HANA Cloud.
When you are using SAP HANA Cloud, you can use SAP HANA smart data integration with tools such as SAP
Web IDE on SAP Cloud Platform to transform and replicate data into the SAP HANA database.
Create an SAP HANA Cloud instance: To create an SAP HANA Cloud instance in the SAP Cloud Platform cockpit, you must be working in a global account and have added a quota to the SAP HANA Cloud. For more information, see SAP HANA Cloud Getting Started Guide: Creating SAP HANA Cloud Instances.
Tip
Add the IP addresses for your Data Provisioning Agent host systems to the list of whitelisted connections when creating your SAP HANA Cloud instance.

Enable flowgraph and replication task editors: To use the flowgraph and replication task editors in SAP Web IDE on SAP Cloud Platform, you must first enable the SAP EIM Smart Data Integration Editors extension. For more information, see Enable Additional Features (Extensions).
Note
SAP HANA smart data quality functionality, including the Cleanse, Geocode, and Match nodes, is not available in the SAP EIM Smart Data Integration Editors extension in SAP Web IDE on SAP Cloud Platform.

Connect the Data Provisioning Agent: Connect to SAP HANA using JDBC when you are using an SAP HANA Cloud instance. For more information, see Connect to SAP HANA Cloud [Command Line] [page 52].
Note
You must use Data Provisioning Agent version 2.4.2.4 or newer, and must create users for agent administration and agent messaging.

Configure adapters for your data sources: The Data Provisioning Agent includes adapters that allow SAP HANA smart data integration to connect to your data sources. You may need to perform configuration steps on your source system to prepare your source for use with a data provisioning adapter. For more information, see Register Data Provisioning Adapters [page 111] and Configure Data Provisioning Adapters [page 153].

Create remote sources for your data sources: Use the SAP HANA database explorer to create remote sources in SAP HANA Cloud. For more information, see Configure Data Provisioning Adapters [page 153].

Configure an HDI grantor service: Before SAP Web IDE users can create and execute flowgraphs and replication tasks, you must configure grantor privileges for the HDI container. For more information, see Configure a Grantor for the HDI Container [page 121].

Design flowgraphs and replication tasks: The Modeling Guide, SAP Web IDE addresses SAP HANA smart data integration features and tools to accomplish these tasks:
● The Replication Editor is for creating real-time or batch replication scenarios for moving data into SAP HANA.
● Transformation nodes can be used for pivoting tables, capturing changed data, comparing tables, and so on.
For more information, see the Modeling Guide for SAP HANA Smart Data Integration and SAP HANA Smart Data Quality.

Execute and monitor flowgraphs and replication tasks: Access the Data Provisioning Monitors from the Catalog folder in the SAP HANA database explorer. For more information, see Monitoring Data Provisioning in the SAP HANA Web-based Development Workbench.

Table 2: Data Provisioning Monitors
Remote Sources: Right-click Remote Sources > Show Remote Sources.
Remote Subscriptions: Right-click Remote Subscriptions > Show Remote Subscriptions.
Tasks: Right-click Tasks > Show Tasks.
Note
You can also access the tasks monitor from the flowgraph editor by opening the flowgraph and choosing Tools > Launch Tasks Overview.
Deployment with the SAP HANA Service

Understand the landscape and deployment and configuration process when using SAP HANA smart data
integration with SAP Cloud Platform, SAP HANA service.
Context
When you are using the SAP Cloud Platform, SAP HANA service, you can use SAP HANA smart data
integration with tools such as SAP Web IDE on SAP Cloud Platform to transform and replicate data into the
SAP HANA database.
Restriction
SAP HANA service does not support SAP HANA smart data quality functionality.
Procedure
1. Ensure that you are using an SAP HANA service instance that has the SAP HANA Data Provisioning Server
capability enabled.
○ To create an instance with the SAP HANA Data Provisioning Server capability, use the SAP Cloud
Platform cockpit.
For more information, see Create an SAP HANA Service Instance Using the Cloud Cockpit.
○ To enable the SAP HANA Data Provisioning Server capability on an existing instance, use the SAP
HANA Service Dashboard.
For more information, see Enable and Disable Capabilities.
2. Ensure that the flowgraph and replication task editors are available in SAP Web IDE on SAP Cloud Platform.
To use the flowgraph and replication task editors in SAP Web IDE on SAP Cloud Platform, you must first
enable the SAP EIM Smart Data Integration Editors extension. For more information, see Enable Additional
Features (Extensions).
Note
SAP HANA smart data quality functionality, including the Cleanse, Geocode, and Match nodes, is not
available in the SAP EIM Smart Data Integration Editors extension in SAP Web IDE on SAP Cloud
Platform.
3. Connect the Data Provisioning Agent to the SAP HANA service instance via JDBC WebSockets.
For information, see Connect to the SAP HANA Service via JDBC WebSockets [Command Line] [page 56].
4. Configure adapters for your data sources.
The Data Provisioning Agent includes adapters that allow SAP HANA smart data integration to connect to
your data sources. You may need to perform configuration steps on your source system to prepare your
source for use with a data provisioning adapter.
6. Configure grantor privileges for the HDI container.
Before SAP Web IDE users can create and execute flowgraphs and replication tasks, you must configure
grantor privileges for the HDI container.
For more information, see Configure a Grantor for the HDI Container [page 121].
7. Design flowgraphs and replication tasks to retrieve data from your remote data sources, transform it, and
persist it in SAP HANA database tables.
The Modeling Guide, SAP Web IDE addresses SAP HANA smart data integration features and tools to
accomplish these tasks:
○ The Replication Editor is for creating real-time or batch replication scenarios for moving data into SAP
HANA.
○ Transformation nodes can be used for pivoting tables, capturing changed data, comparing tables, and
so on.
For more information, see the Modeling Guide, SAP Web IDE for SAP HANA Smart Data Integration and
SAP HANA Smart Data Quality.
8. Execute and monitor your SAP HANA smart data integration flowgraphs and replication tasks.
Access the Data Provisioning Monitors from the Catalog folder in the SAP HANA database explorer:
Option Description
Agents Overview: Right-click Data Provisioning Agents > Show Data Provisioning Agents.
Remote Sources Detailed View: Right-click Remote Sources > Show Remote Sources. Click a remote source.
Remote Subscriptions Detailed View: Open a system node such as SYSTEM, or a container. Right-click Remote Subscriptions > Show Remote Subscriptions. Click a remote subscription.
Remote Subscriptions Overview: Open a subfolder of the system, or a container. Right-click Remote Subscriptions > Show Remote Subscriptions.
Task Detailed View: Open a system node such as SYSTEM, or a container. Right-click Tasks > Show Tasks. Click a task.
Task Overview: Open a system node such as SYSTEM, or a container. Right-click Tasks > Show Tasks.
Note
You can also access the tasks monitor from the flowgraph editor by opening the flowgraph and choosing Tools > Launch Tasks Overview.
For more details about the information provided in each monitor, see Monitoring Data Provisioning in the
SAP HANA Web-based Development Workbench.
1.5 Administration Tools

Several tools are available for the administration of SAP HANA smart data integration and SAP HANA smart
data quality.
SAP HANA studio: The SAP HANA Administration Console perspective of the SAP HANA studio is the main tool for general system administration and monitoring tasks.

Data Provisioning Agent Configuration tool: This tool manages Data Provisioning Agents and adapters, and connections to SAP HANA.

SAP HANA cockpit: The SAP HANA cockpit is an SAP Fiori Launchpad site that provides you with a single point of access to a range of web-based applications for the administration of SAP HANA. You access the SAP HANA cockpit through a web browser. Through the SAP HANA cockpit, you can monitor Data Provisioning Agents, tasks, and remote subscriptions.

SAP HANA Enterprise Semantic Services Administration tool: The SAP HANA Enterprise Semantic Services Administration user interface is a browser-based application that lets you manage artifacts for semantic services. To launch the SAP HANA Enterprise Semantic Services Administration tool, enter the following URL in a web browser: http://<your_HANA_instance:port>/sap/hana/im/ess/ui
A list of high-level tasks needed to set up SAP HANA smart data integration.
Assign Roles and Privileges

The following tables list common tasks and the roles or privileges that an administrator must assign to
complete those tasks.
Users may need specific roles and privileges to accomplish tasks when installing and configuring the Data
Provisioning Agent and Data Provisioning Adapters.
Note
Users may also require permissions for accessing a particular database through a data provisioning
adapter. See the “Data Provisioning Adapters” section for more information.
Create agents and adapters: System privileges AGENT ADMIN and ADAPTER ADMIN.

Configure DP Agent to use HTTP (cloud) protocol: Role sap.hana.im.dp.proxy::AgentMessaging. Whoever sets the Data Provisioning Agent to use HTTP (cloud) in the Data Provisioning Agent Configuration tool requires this role.

Create an agent or adapter when SAP HANA is in the cloud: Application privilege sap.hana.im.dp.admin::Administrator. Needed when an administrator wants to create adapters and agents from the Data Provisioning Agent Configuration tool when SAP HANA is in the cloud (or the agent uses the HTTP protocol).

Import a delivery unit using SAP HANA Application Lifecycle Management: Role sap.hana.xs.lm.roles::Administrator. This role is necessary if you are using SAP HANA Application Lifecycle Management to import the data provisioning delivery unit.
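For example, a minimal SQL sketch of assigning these authorizations; the user names are hypothetical, and repository roles are granted through the standard GRANT_ACTIVATED_ROLE procedure:

-- system privileges for creating and managing agents and adapters
GRANT AGENT ADMIN TO DPA_ADMIN;
GRANT ADAPTER ADMIN TO DPA_ADMIN;
-- repository role for a user whose agent connects over HTTP (cloud)
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.im.dp.proxy::AgentMessaging', 'DPA_MESSAGING');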
Monitoring Tasks

Users may need specific roles and privileges to access and perform various tasks through the Data
Provisioning monitors, which can be accessed from the SAP HANA cockpit.

Monitoring: Roles sap.hana.im.dp.monitor.roles::Monitoring and sap.hana.im.dp.monitor.roles::Operations. The Monitoring role includes the application privileges sap.hana.ide::LandingPage and sap.hana.im.dp.monitor::Monitoring. The Operations role includes the application privileges sap.hana.im.dp.monitor::ScheduleTask, sap.hana.im.dp.monitor::StartTask, and sap.hana.im.dp.monitor::StopTask.

Process remote subscription exceptions: Object privilege PROCESS REMOTE SUBSCRIPTION EXCEPTION. Must be explicitly granted for a remote source created by another user.
Users may need specific roles and privileges to create and manage remote sources and remote subscriptions.
Create a remote source: System privilege CREATE REMOTE SOURCE. When a user has the CREATE REMOTE SOURCE system privilege, that user automatically has the CREATE VIRTUAL TABLE, DROP, CREATE REMOTE SUBSCRIPTION, and PROCESS REMOTE SUBSCRIPTION EXCEPTION privileges on remote sources that they create; these privileges do not need to be assigned. If someone else creates a remote source, those privileges must be granted for each remote source in order to perform those tasks.

Alter a remote source: Object privilege ALTER. To alter a remote source, a user must have the ALTER object privilege on the remote source.

Drop a remote source: Object privilege DROP. This privilege must be explicitly granted for a remote source created by another user.

Search for an object in a remote source: Object privilege ALTER on the remote source to be searched. To search for remote objects such as tables in a remote source, a user must have the ALTER object privilege on the remote source so the system can create a dictionary.

Add a virtual table: Object privilege CREATE VIRTUAL TABLE. This privilege must be explicitly granted for a remote source created by another user.
Note
When you use SAP Web IDE for SAP HANA, the internal ObjectOwner of the HDI project must have privileges to create virtual tables on the remote source.

Create a remote subscription: Object privilege CREATE REMOTE SUBSCRIPTION. This privilege must be explicitly granted for a remote source created by another user.
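As an illustration, a sketch of granting these privileges to other users; the remote source and user names are hypothetical:

GRANT CREATE REMOTE SOURCE TO INTEGRATION_ADMIN;
-- object privileges on a remote source created by another user
GRANT CREATE VIRTUAL TABLE ON REMOTE SOURCE "MY_SOURCE" TO MODELER;
GRANT CREATE REMOTE SUBSCRIPTION ON REMOTE SOURCE "MY_SOURCE" TO MODELER;
GRANT PROCESS REMOTE SUBSCRIPTION EXCEPTION ON REMOTE SOURCE "MY_SOURCE" TO OPERATOR;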
Users may need specific roles and privileges to create and run flowgraphs and replication tasks from SAP Web
IDE for SAP HANA, SAP HANA Web-based Development Workbench, or the SAP HANA studio.
Create a flowgraph: For SAP HANA Web-based Development Workbench and SAP HANA studio, role sap.hana.xs.ide.roles::EditorDeveloper and object privilege EXECUTE on "_SYS_REPO"."TEXT_ACCESSOR" and "_SYS_REPO"."MULTI_TEXT_ACCESSOR". Allows creation of .hdbflowgraph.
Tip
When you use SAP Web IDE for SAP HANA, specific roles or privileges are not required to create flowgraphs.

Execute a stored procedure: Object privilege EXECUTE. Needed on the schema where the stored procedure is located.
Tip
When you use SAP Web IDE for SAP HANA, the ObjectOwner automatically has all necessary privileges for executing stored procedures. When using synonyms, the grantor service must manage the privileges.

Execute a task: Object privileges EXECUTE, INSERT, UPDATE, SELECT, and DELETE. Needed on the schema where the task is located.
Tip
When you use SAP Web IDE for SAP HANA, the ObjectOwner automatically has all necessary privileges for executing tasks.

Use the JIT (just-in-time) Data Preview option: Object privileges SELECT and EXECUTE with GRANT OPTION. Must be granted to _SYS_REPO. Needed on the schema where the task or stored procedure is located.
Restriction
The JIT (just-in-time) Data Preview option is not supported in SAP Web IDE for SAP HANA. If you want to use the JIT Data Preview option, consider using SAP HANA Web-based Development Workbench.

Use the AFL node or the Predictive Analysis node: For the AFL node in SAP HANA Web-based Development Workbench and the Predictive Analysis node in SAP Web IDE, privileges on the following objects:
● AFL_AREAS
● AFL_FUNCTION_PARAMETERS
● AFL_FUNCTION_PROPERTIES
● AFL_FUNCTIONS
● AFL_PACKAGES
● AFL_TEXTS
● AFL__SYS_AFL_AFLPAL_EXECUTE
Although specific authorizations are not required to use the flowgraph editor, you may need to configure users
if they do not already have access to SAP Web IDE in general. For example, they may need the following roles or
permissions:
For complete information about granting users access to SAP Web IDE, see the necessary configuration tasks
described in Post-Installation Administration Tasks.
Enable the Data Provisioning Server

Enable the Data Provisioning Server to use SAP HANA smart data integration.
By default, the Data Provisioning Server is disabled when you install SAP HANA.
Enable the Server in a Scale-out SAP HANA Database Scenario [page 32]
In a scale-out SAP HANA database scenario, you must enable the Data Provisioning Server only on the
host that runs the master index server.
Enable the Server for the SAP HANA Service [page 32]
To use SAP HANA smart data integration with the SAP Cloud Platform, SAP HANA service, you must
use an instance with the SAP HANA Data Provisioning Server capability enabled.
Next: Download and Deploy the Data Provisioning Delivery Unit [page 33]
To enable the Data Provisioning Server on tenants in a multi-database container environment, use the ALTER
DATABASE SQL command.
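For example, a minimal sketch, assuming a tenant database named MYTENANT (run from the system database):

ALTER DATABASE MYTENANT ADD 'dpserver';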
Enable the Server in a Scale-out SAP HANA Database Scenario [page 32]
Enable the Server for the SAP HANA Service [page 32]
ALTER DATABASE Statement (Tenant Database Management) (SAP HANA SQL and System Views Reference)
In a scale-out SAP HANA database scenario, you must enable the Data Provisioning Server only on the host
that runs the master index server. Do not enable Data Provisioning Server instances on slave nodes.
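In this scenario, the server is typically enabled on a specific host through the daemon.ini configuration layer; a sketch assuming a master host named hana-master:

-- enable a single dpserver instance on the master host only
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'host', 'hana-master')
SET ('dpserver', 'instances') = '1' WITH RECONFIGURE;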
To use SAP HANA smart data integration with the SAP Cloud Platform, SAP HANA service, you must use an
instance with the SAP HANA Data Provisioning Server capability enabled.
Procedure
● To create an instance with the SAP HANA Data Provisioning Server capability, use the SAP Cloud Platform
cockpit.
Next Steps
For more information about managing an SAP HANA Service instance, see the SAP Cloud Platform, SAP HANA
Service Getting Started Guide.
Download and Deploy the Data Provisioning Delivery Unit

Download the Data Provisioning delivery unit. Then, using SAP HANA studio or SAP HANA Application Lifecycle
Management tools, deploy the delivery unit to obtain the following functionality:
Monitoring: The Monitoring application provides a browser-based interface to monitor agents, tasks, and remote subscriptions created in the SAP HANA system. The monitoring application can be accessed from the SAP HANA cockpit.

Proxy: The Proxy application provides a way for the Data Provisioning Agent to communicate with the Data Provisioning Server. It is required when SAP HANA is running in the cloud or when the remote sources are behind a firewall. In this case, the Data Provisioning Agent stays behind the firewall (that is, close to the remote source) and communicates with SAP HANA (specifically, the dpserver) via the Proxy application running in the XS engine.

Admin: The Admin application provides a way for the Data Provisioning Agent Configuration tool to issue the SQL commands necessary to register the agent and the adapters in the SAP HANA system. This application is used when SAP HANA is in the cloud and the Data Provisioning Agent is behind a firewall.
Download the Data Provisioning delivery unit from the SAP Software Download Center.
Context
The data provisioning delivery unit is available in the same download area as the data provisioning agent.
Procedure
1. Go to the SAP Software Download Center, and navigate to the following location: SAP Software Download
Center > Software Downloads > Installations & Upgrades > By Alphabetical Index (A-Z) > H > SAP HANA
SDI > SAP HANA SDI 2.0.
2. Click COMPRISED SOFTWARE COMPONENT VERSIONS.
3. Click HANA DP 2.0.
4. Click the ZIP file that you need, and save it to your preferred location.
5. In the HANAIMDP<version number>.ZIP file, find and extract the HANA_IM_DP.tgz file.
This is the delivery unit file that needs to be imported into SAP HANA.
Related Information
SAP HANA smart data integration and all its patches
Product Availability Matrix (PAM) for SAP HANA SDI 2.0
You can deploy the Data Provisioning delivery unit from SAP HANA studio.
Note
When SAP HANA is deployed in a multi-tenant database container configuration, you must import the
delivery unit into the tenant database.
Prerequisites
Ensure that you have been granted the SYSTEM privilege REPO.IMPORT to be able to import the delivery unit.
Procedure
You can deploy the Data Provisioning delivery unit through SAP HANA Application Lifecycle Management
(ALM).
Note
When SAP HANA is deployed in a multi-tenant database container configuration, you must import the
delivery unit into the tenant database.
1. If not already granted, grant the role sap.hana.xs.lm.roles::Administrator to the user name you use to log in
to ALM.
a. In the SAP HANA studio Systems view, expand the name of your SAP HANA server and choose
Security > Users > SYSTEM.
b. On the Granted Roles tab, click the green “+” icon in the upper left corner.
c. On the Select Roles dialog, type lm in the search string box.
d. Select the role sap.hana.xs.lm.roles::Administrator and click OK.
2. Access ALM by typing the following URL in a web browser:
<host name>:80<2-digit instance number>/sap/hana/xs/lm
3. Log in to ALM as the user name you authorized in step 1.
The first time you log in, a pop-up window asks you to enter a name for this server.
4. On the ALM Home tab, click the Delivery Units tile.
5. Click the Import tab.
6. Click Browse and navigate to the location where you downloaded the delivery unit, then select
HANA_IM_DP.tgz and click Open.
7. Click Import.
After successful import, the name HANA_IM_DP (sap.com) appears in the list of delivery units on the left.
The Data Provisioning Agent provides secure connectivity between the SAP HANA database and your on-
premise, adapter-based sources.
Previous: Download and Deploy the Data Provisioning Delivery Unit [page 33]
Before you install the Data Provisioning Agent, plan your installation to ensure that it meets your system
landscape's needs.
Restriction
We do not recommend installing the Data Provisioning Agent directly on the SAP HANA system.
Note
For information about Data Provisioning Agent, operating system, and DBMS compatibility, refer to the
SAP HANA smart data integration Product Availability Matrix (PAM).
Install the Data Provisioning Agent on a supported platform that meets the minimum system requirements or
higher.
You can find a complete list of all SAP HANA components and the respective SAP HANA hardware and software
requirements in the Product Availability Matrix (PAM) on the Support Portal and in the SAP Community
Network.
For the Data Provisioning Agent host system, the following 64-bit platforms are supported:
Software Requirements
For more information about supported Java versions, see the SAP HANA Smart Data Integration Product
Availability Matrix (PAM).
● The system must have GCC 5.x to run the Data Provisioning Agent service, ASEAdapter, and
ASEECCAdapter.
For more information, see SAP Note 2338763 (Linux: Running SAP Applications compiled with GCC 5.x).
SAP HANA Smart Data Integration and all its patches
Product Availability Matrix (PAM) for SAP HANA SDI 2.0
The Data Provisioning Agent installation package is available in the component SAP HANA SDI (SAP HANA
smart data integration) on the SAP Software Download Center.
Note
Installation of the Data Provisioning Agent requires the correct version of SAP HANA. Subsequent support
packages or revisions of SAP HANA may require an equivalent update to the Data Provisioning Agent. For
details, see the SAP HANA smart data integration Product Availability Matrix (PAM).
On the SAP Software Download Center , you can find the installation packages in the following locations:
SAP JVM
The SAP JVM is the default Java runtime supported by the Data Provisioning Agent, and is bundled with the
Data Provisioning Agent installation package. However, to obtain any subsequent security patches, you can
independently download the latest releases of the SAP JVM from the same location and update your agent
installation.
SAP Software Download Center > Software Downloads > Installations & Upgrades > By Alphabetical Index
(A-Z) > H > SAP HANA SDI > SAP HANA SDI 2.0 > Comprised Software Versions
For more information about changing the Java runtime, see “Reconfigure the Java Runtime Environment”.
Download and Deploy the Data Provisioning Delivery Unit [page 33]
Reconfigure the Java Runtime Environment [page 92]
Before you can install the Data Provisioning Agent on Amazon Web Services (AWS), you must prepare the
environment.
Procedure
Note
3. Log in to the AWS host as ec2-user and start a sudo bash command line.
sudo bash
5. Change to the ec2-user user and extract the Data Provisioning Agent installation program.
su ec2-user
./SAPCAR -xvf IMDB_DPAGENT100_00_2-70000174.SAR
Results
The Java Development Kit is installed and the Data Provisioning Agent installation program is available on the
AWS host. You can continue to install the Data Provisioning Agent from the command line.
When SAP HANA is configured for HTTPS, you need a copy of the server certificate to configure the SAP HANA
Data Provisioning Agent.
Tip
To verify whether the SAP HANA server is configured for HTTPS, examine the port number being used. If
the port number is 80<xx>, the server is using standard HTTP. If the port number is 43<xx>, the server is
using HTTPS.
When SAP HANA is located in the cloud, it is always configured for HTTPS communication.
Context
You can download the SAP HANA server certificate using a web browser.
Tip
If the agent keystore does not have the server certificate required for HTTPS communication, the Data
Provisioning Agent Configuration tool allows you to import the server certificate into the agent keystore
directly. This procedure is required only if you do not want the configuration tool to import the certificate
directly, and you want to import it manually separately.
Procedure
The exact steps to open the certificate information depend on your browser.
○ For Internet Explorer, click the lock icon in the address bar, and click View Certificates.
○ For Chrome, click the lock icon in the address bar, and click Connection > Certificate Information.
3. In the Details tab, click Copy to file.
You can install the Data Provisioning Agent on a Windows or Linux host.
The default installation manager is a graphical installation tool. If you cannot or do not want to use a graphical
tool, see Install from the Command Line [page 45].
Prerequisites
To install the Data Provisioning Agent on Windows, you must use the Administrator user or a user in the
Administrators group.
To install the Data Provisioning Agent on Linux, there are extra prerequisites:
Note
The default installation location (/usr/sap/dataprovagent) requires the agent user to have write
access to the /usr/ directory.
Before installation, grant the agent user the appropriate permissions (use sudo to create
the /usr/sap/dataprovagent directory and grant permissions to the user) or choose a different
installation location.
Context
Caution
When you install the Data Provisioning Agent, the agent uses, by default, a non-secure channel when
communicating with the SAP HANA server.
To enable secure communication, you must configure SSL with the Data Provisioning Agent Configuration
tool after installation. For more information, see “Connect to SAP HANA on-premise with SSL” and
“Connect to SAP HANA in the cloud”.
The unique agent name is a string of up to 30 alphanumeric characters that identifies the agent instance
and must be different from any names already used by other agent instances on the same host system.
It is used as a suffix when creating the Windows service, uninstallation entry, and configuration tool
shortcut.
6. On Windows, specify the username (<domain>\<username>) and password to use for the agent service.
The user that runs the agent service must have read and write access to the installation directory so
configuration files can be updated.
Note
On Linux, the agent user who installs is the installation owner. Ideally, you should be logged in as this
user when starting the agent service.
7. To use a custom Java runtime environment instead of the bundled SAP JVM, specify the path to the JRE
installation.
Note
For example:
○ On Windows, C:\Program Files\Java\jre7
○ On Linux, /usr/java/jdk<version>/jre
Note
The Data Provisioning Agent supports only 64-bit Java runtime environments.
Results
● After installing the Data Provisioning Agent, we recommend that you review the installation log file for any
errors and take any necessary corrective actions.
● If you have installed the Data Provisioning Agent on Amazon Web Services (AWS), set the
cloud.deployment parameter.
Open <DPAgent_root>/dpagentconfig.ini in a text editor and set the value:
cloud.deployment=AWS_
Caution
On Linux, we recommend that you do not start the Data Provisioning Agent while logged in as the root
user. Instead, log in with the agent user, and then start the Data Provisioning Agent.
If you accidentally start the agent as the root user, see “Clean an Agent Started by the Root User” in
the Administration Guide.
Next task: Manage Agents from the Data Provisioning Agent Monitor [page 47]
If you cannot use or do not want to use the graphical installation manager, you can install the Data Provisioning
Agent using the command-line tool.
Prerequisites
To install the Data Provisioning Agent on Windows, you must use the Administrator user or a user in the
Administrators group.
To install the Data Provisioning Agent on Linux, you must create or use an existing non-root agent user that
has full read and write access to the intended installation location.
Note
The default installation location (/usr/sap/dataprovagent) requires the agent user to have write access
to the /usr/ directory.
Before installation, grant the agent user the appropriate permissions (use sudo to create the /usr/sap/
dataprovagent directory and grant permissions to the user) or choose a different installation location.
Context
Caution
When you install the Data Provisioning Agent, the agent uses, by default, a non-secure channel when
communicating with the SAP HANA server.
To enable secure communication, you must configure SSL with the Data Provisioning Agent Configuration
tool after installation. For more information, see “Connect to SAP HANA on-premise with SSL” and
“Connect to SAP HANA in the cloud”.
Procedure
Results
The Data Provisioning Agent is installed without displaying the graphical installation manager.
Next Steps
Caution
If you created a password XML file for the installation, be sure to delete it after the installation process has
completed. Leaving the password XML file on the server is a security risk.
If you have installed the Data Provisioning Agent on Amazon Web Services (AWS), set the cloud.deployment
parameter.
cloud.deployment=AWS_
Caution
On Linux, we recommend that you do not start the Data Provisioning Agent while logged in as the root
user. Instead, log in with the agent user, and then start the Data Provisioning Agent.
If you accidentally start the agent as the root user, see “Clean an Agent Started by the Root User” in the
Administration Guide.
Your system logs the Data Provisioning Agent installation. There are two files written during installation.
● On Windows, %TEMP%\hdb_dataprovagent_<timestamp>
● On Linux, /var/tmp/hdb_dataprovagent_<timestamp>
The default installation paths are specific to the operating system on which the Data Provisioning Agent is
installed:
● On Windows, C:\usr\sap\dataprovagent
● On Linux, /usr/sap/dataprovagent
In this documentation, the variable <DPAgent_root> represents these root installation paths.
Use the Data Provisioning Agent Monitor to perform basic administration tasks such as registering, altering, or
dropping Data Provisioning Agents.
Prerequisites
The user must have the following roles or privileges to manage agents:
Context
Use the following controls in the Agent Monitor table to perform an action.
Procedure
● Select Create Agent to register a new agent with the SAP HANA system.
a. Specify the name of the agent and relevant connection information.
b. If the agent uses a secure SSL connection, select Enable SSL.
c. If you want to assign the agent to an existing agent group, select the group under Agent Group.
d. Click Create Agent.
The agent is removed from the Agent Monitor table. If the agent was assigned to an agent group, it’s also
removed from the agent group.
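Registering an agent from the monitor corresponds to the CREATE AGENT SQL statement; a sketch with hypothetical agent and host names:

-- TCP connection from the Data Provisioning Server to the agent, with SSL enabled
CREATE AGENT "MY_AGENT" PROTOCOL 'TCP' HOST 'agenthost.example.com' PORT 5050 ENABLE SSL;
-- drop the agent when it is no longer needed
DROP AGENT "MY_AGENT";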
Configure the Data Provisioning Agent before you can use adapters to connect to data sources, create remote
sources, and so on.
Manage Agents from the Data Provisioning Agent Monitor [page 103]
Use the Data Provisioning Agent Monitor to perform basic administration tasks such as registering,
altering, or dropping Data Provisioning Agents.
Use the command-line configuration tool to connect to the SAP HANA server and configure the Data
Provisioning Agent and adapters. For example, you can use the configuration tool to view the agent and adapter
statuses and versions, manage custom and SAP-delivered adapters, and modify keystore paths.
At each menu in interactive mode, specify the number of the desired action or option and press Enter . At any
screen, you can press b to return to the previous menu or q to quit the configuration tool.
If the selected option requires input, the configuration tool displays any existing or default value in parentheses.
You can accept the existing or default value by pressing Enter to move to the next prompt.
Note
Passwords are hidden from display in command-line interactive mode. If an option has an existing password,
the password displays as “*****”. You do not need to reenter the password unless the password has changed.
Caution
When you are asked for input entry for an option, you cannot cancel or return to the previous menu. To
abort the operation without saving, you must press Ctrl + C to terminate the configuration tool.
Connect to the SAP HANA Service via JDBC WebSockets [Command Line] [page 56]
Connect to SAP HANA using JDBC WebSockets when you are using the SAP Cloud Platform, SAP
HANA service in the Cloud Foundry environment.
Connect to SAP HANA in the SAP Cloud Platform Neo Environment [Command Line] [page 62]
Specify connection information, user credentials, and SSL configuration information when SAP HANA
is deployed in the SAP Cloud Platform Neo environment.
Register the Agent with SAP HANA [Command Line] [page 65]
Before you can use adapters deployed on the Data Provisioning Agent, you must register the agent with
SAP HANA.
Store Source Database Credentials in Data Provisioning Agent [Command Line] [page 68]
Store source database access credentials in the Data Provisioning Agent secure storage using batch
mode.
Start the configuration tool in interactive mode to modify the agent configuration without a graphical
environment.
Prerequisites
The command-line agent configuration tool requires the DPA_INSTANCE environment variable to be set to the
installation root location (<DPAgent_root>).
On Windows:
set DPA_INSTANCE=C:\usr\sap\dataprovagent
On Linux:
export DPA_INSTANCE=/usr/sap/dataprovagent
Caution
Multiple instances of the Data Provisioning Agent may be installed on a single Linux host. Be sure that you
set DPA_INSTANCE to the instance that you want to modify before starting the configuration tool. If you do
not, you may modify the configuration of an unintended agent instance.
Procedure
Results
Connect to SAP HANA using JDBC when you are using an SAP HANA Cloud instance.
Prerequisites
● You have created an SAP HANA Cloud instance in the SAP Cloud Platform Cockpit.
● You have installed the Data Provisioning Agent to an on-premise or cloud-based host system.
Note
SAP HANA Cloud requires Data Provisioning Agent version 2.4.2.4 or newer.
● You have whitelisted the IP address of the agent host system in the SAP HANA Cloud instance.
● You have an Agent Admin HANA User for connecting the agent configuration tool to SAP HANA Cloud and
performing administrative actions such as registering agents and adapters.
This user must have the following roles or privileges to perform the actions noted in the table below:
Create the HANA User for Agent Messaging (Optional): Object privilege USERGROUP OPERATOR on the DEFAULT usergroup.
Note
This privilege is required only when you want to create the HANA User for Agent Messaging automatically as part of the configuration process within the agent configuration tool.
Note
On SAP HANA Cloud, the user DBADMIN already has the USERGROUP OPERATOR privilege and can be
used if you also assign the AGENT ADMIN and ADAPTER ADMIN system privileges.
For security reasons, you may wish to assign all three privileges to a different user. For more
information, see SAP HANA Cloud Administrator DBADMIN in the SAP HANA Cloud Administration
Guide.
● You have a HANA User for Agent Messaging for messaging between the agent and SAP HANA Cloud.
To create such a user manually, specify the DEFAULT usergroup and a non-expiring password when
creating the user (see the SQL sketch after this list).
Tip
If the Agent Admin HANA User has been granted the privileges indicated in the “Roles and Privileges”
table, it can create the HANA User for Agent Messaging during the configuration process. The
configuration tool creates the HANA User for Agent Messaging as a technical user with no expiration
period for the password.
Note
In general, the password for a new SAP HANA user expires according to the SAP HANA password
policy settings, the default for which is 182 days. To avoid agent disruptions in a production scenario,
treat a new HANA User for Agent Messaging as a technical user and ensure that its password does not
expire.
For more information about configuring the password policy for a technical user in SAP HANA, see the
SAP HANA Security Guide.
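For example, a minimal sketch of creating this user manually; the user name and password are hypothetical:

-- technical user in the DEFAULT usergroup, no forced password change at first logon
CREATE USER AGT_MESSAGING PASSWORD "Init1234$a" NO FORCE_FIRST_PASSWORD_CHANGE SET USERGROUP DEFAULT;
-- prevent password expiration for this technical user
ALTER USER AGT_MESSAGING DISABLE PASSWORD LIFETIME;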
Procedure
Tip
5. Specify the hostname and port for the SAP HANA Cloud instance.
For example:
○ Hostname: <instance_name>.hanacloud.ondemand.com
○ Port Number: 443
6. Specify the Agent Admin HANA User credentials for SAP HANA Cloud as prompted.
7. If HTTPS traffic from your agent host is routed through a proxy, specify any required proxy information as
prompted.
The agent uses the HTTPS protocol to communicate with SAP HANA Cloud.
8. Specify the credentials for the HANA User for Agent Messaging.
The HANA User for Agent Messaging is used only for messaging between the agent and SAP HANA Cloud,
and must be different from the Agent Admin HANA User used for agent administration tasks.
9. Specify whether to create a new HANA User for Agent Messaging.
Tip
Generally, you create this user only during the initial configuration of an agent instance. If you are
modifying the configuration of an existing agent instance, you usually do not need to create a user.
Results
The configuration tool creates the HANA User for Agent Messaging, if applicable, and connects to the SAP
HANA Cloud instance.
Specify connection information and administrator credentials when the SAP HANA system is located on-
premise.
Prerequisites
● The Agent Admin HANA User must have the following roles or privileges:
● If the SAP HANA server is configured for SSL, the agent host must already be prepared for SSL before
connecting the agent configuration tool to the SAP HANA server. If you want to use TCP with SSL, but the
agent is not yet prepared, see Configure SSL for SAP HANA On-Premise [Command Line Batch] [page
509].
Procedure
○ If you want to use SSL and the agent has already been prepared, choose true.
○ If you do not want to use SSL or the agent has not already been prepared, choose false.
For more information about preparing the agent for SSL, see Configuring SSL [page 498].
5. Specify the hostname, port, and Agent Admin HANA User credentials for the SAP HANA server as
prompted.
Tip
To determine the correct port number when SAP HANA is deployed in a multi-database configuration,
execute the following SQL statement:
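(A sketch, assuming the SYS_DATABASES.M_SERVICES monitoring view available in the system database; the exact columns may vary by revision.)

SELECT DATABASE_NAME, SERVICE_NAME, SQL_PORT
FROM SYS_DATABASES.M_SERVICES
WHERE SERVICE_NAME = 'indexserver';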
Related Information
Configure SSL for SAP HANA On-Premise [Command Line Batch] [page 509]
Start the Configuration Tool [Command Line] [page 51]
Assign Roles and Privileges [page 25]
Connect to SAP HANA using JDBC WebSockets when you are using the SAP Cloud Platform, SAP HANA
service in the Cloud Foundry environment.
Prerequisites
● You are using the SAP Cloud Platform, SAP HANA service with the SAP HANA Data Provisioning Server
capability.
● You have installed the Data Provisioning Agent to an on-premise or cloud-based host system.
● The Agent Admin HANA User must have the following roles or privileges to perform the actions noted in
the table below:
Create the HANA User for Agent Messaging (Optional)
○ System privilege: USER ADMIN
Note
These privileges are required only when you want to create the HANA
User for Agent Messaging as part of the configuration process within
the agent configuration tool.
● You have an SAP HANA User for Agent Messaging for messaging between the agent and SAP HANA. To create
such a user manually:
1. Create the agent user (for example, AGTUSR) with a non-expiring password.
2. GRANT AGENT MESSAGING ON AGENT "<your_agent_name>" TO AGTUSR;
Tip
If the Agent Admin HANA User has been granted the privileges indicated in the “Roles and Privileges” table,
it can create the HANA User for Agent Messaging during the configuration process. The configuration tool
creates the HANA User for Agent Messaging as a technical user with a non-expiring password.
Note
In general, the password for a new SAP HANA user expires according to the SAP HANA password policy
settings, the default for which is 182 days. To avoid agent disruptions in a production scenario, treat a new
HANA User for Agent Messaging as a technical user and ensure that its password does not expire.
For more information about configuring the password policy for a technical user in SAP HANA, see the SAP
HANA Security Guide.
Context
The Data Provisioning Agent connects to the SAP HANA service through JDBC WebSockets.
Procedure
Tip
For example:
○ WebSocket URL: /service/<service_instance_id>
○ WebSocket Host: <instance_name>.dbaas.ondemand.com
○ WebSocket Port: 80
7. Specify the Agent Admin HANA User credentials for SAP HANA as prompted.
8. If HTTPS traffic from your agent host is routed through a proxy, specify any required proxy information as
prompted.
The agent uses the HTTPS protocol to communicate with the SAP HANA service.
Tip
Generally, you create this user only during the initial configuration of an agent instance. If you are
modifying the configuration of an existing agent instance, you usually do not need to create a user.
Results
The configuration tool creates the SAP HANA User for Agent Messaging, if applicable, and connects to the SAP
HANA server.
Related Information
When connecting to an SAP HANA database or to a remote source using JDBC, there are several connection
properties that you can configure.
Default Properties
The following table lists the default JDBC connection properties available in the agentcli tool. Property names
are case insensitive and are written to the <DPAgent_root>\dpagentconfig.ini file.
Note
If you specify * as the host name, this property has no effect. Other wildcards aren’t permitted.
Note
There's no support for WebSocket (HTTP/HTTPS) connections with a SOCKS proxy. WebSocket connections
must either use no proxy or an HTTP proxy. Non-WebSocket connections (TCP/TLS, via Direct SQL, for
example) can use no proxy, a SOCKS proxy, or an HTTP proxy.
jdbc.timeUnit (DAYS, HOURS, MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS, SECONDS;
default: SECONDS): Specifies the JDBC time unit.
Entries in jdbc.additionalParameters must be specified with a comma delimiter when there are multiple
parameters. For example, jdbc.additionalParameters=reconnect=TRUE,ignoreTopology=FALSE.
Note
These additional properties aren’t prepended with “jdbc”.
There are other SAP HANA-specific properties available in addition to the properties listed here. See the SAP
HANA Client Interface Programming Reference for SAP HANA Platform for more information.
Specify connection information, user credentials, and SSL configuration information when SAP HANA is
deployed in the SAP Cloud Platform Neo environment.
Caution
This topic applies only to SAP HANA in the SAP Cloud Platform Neo environment. For information about
connecting to the SAP Cloud Platform, SAP HANA service in the Cloud Foundry environment, see Connect
to the SAP HANA Service via JDBC WebSockets [Command Line] [page 56].
When SAP HANA is in the cloud, the agent initiates all communication. The agent polls the server to see if there
are any messages for the agent to act upon.
Prerequisites
● The Data Provisioning delivery unit must be imported to the SAP HANA system.
● The Agent Admin HANA User must have the following roles or privileges:
Create HANA User for Agent Messaging (Optional)
○ System privilege: USER ADMIN
○ Object privilege: EXECUTE on GRANT_APPLICATION_PRIVILEGE
Note
These privileges are required only when you want to create the HANA
User for Agent Messaging as part of the configuration process within
the agent configuration tool.
● The HANA User for Agent Messaging must have the following roles or privileges:
Tip
If the Agent Admin HANA User has been granted the privileges indicated in the “Roles and Privileges”
table, it can create the HANA User for Agent Messaging during the configuration process. The
configuration tool creates the HANA User for Agent Messaging as a technical user with a non-expiring password.
Note
In general, the password for a new SAP HANA user expires according to the SAP HANA password
policy settings, the default for which is 182 days. To avoid agent disruptions in a production scenario,
treat a new HANA User for Agent Messaging as a technical user and ensure that its password does not
expire.
For more information about configuring the password policy for a technical user in SAP HANA, see the
SAP HANA Security Guide.
Procedure
Note
If the agent framework keystore does not already have the certificates for the SAP HANA server, the
configuration tool automatically downloads and imports them during configuration.
5. Specify the hostname, port, and Agent Admin HANA User credentials for SAP HANA as prompted.
Generally, you create this user only during the initial configuration of an agent instance. If you are
modifying the configuration of an existing agent instance, you usually do not need to create a user.
Results
The configuration tool creates the HANA User for Agent Messaging, if applicable, and connects to the SAP
HANA server.
Related Information
Use the command-line configuration tool to stop or start the Data Provisioning Agent service.
Procedure
Results
The configuration tool indicates whether the agent service is running and the listening port in use by the agent.
Next Steps
On Windows, you can also manage the agent service from the standard Windows Services tool. The name of
the service is SAP_HANA_SDI_Agent_Service_Daemon_<instance_name>.
On Linux, you can also manage the agent with a shell script. The shell script is located at
<DPAgent_root>/bin/dpagent_servicedaemon.sh and supports the following commands:
● ./dpagent_servicedaemon.sh start
● ./dpagent_servicedaemon.sh stop
Related Information
Before you can use adapters deployed on the Data Provisioning Agent, you must register the agent with SAP
HANA.
Prerequisites
● The Agent Admin HANA User must have the following roles or privileges:
● For SAP HANA on Cloud, the Agent XS HANA User must have the following roles or privileges:
Procedure
1. Start the command-line agent configuration tool and connect to SAP HANA.
2. Select 6 to enter the Agent Registration menu.
3. Select 1 to register the agent.
Caution
When you are prompted for input for an option, you cannot cancel or return to the previous menu. To
abort the operation without saving, press Ctrl + C to terminate the configuration tool.
○ If SAP HANA is not in the cloud, specify the agent name and hostname.
Ensure that the SAP HANA server can communicate with the agent host. Depending on the network
configuration, you may need to qualify the agent hostname fully.
Ensure that your firewall settings allow the connection from the SAP HANA server to the agent host on
the listener port. By default, the listener is port 5050.
○ If SAP HANA is in the cloud, specify the agent name.
When SAP HANA is in the cloud, the agent service restarts to complete the registration process.
5. Press Enter to continue.
Results
The agent is registered with SAP HANA. If SAP HANA is in the cloud, the agent service automatically restarts.
Next Steps
Caution
Unregistering the agent from the SAP HANA server performs a cascade drop of the agent. As a result, any
remote subscriptions that use the agent are also deleted, even if they are active.
Related Information
If the password for the HANA User for Agent Messaging has changed or expired, you must update the
credentials in the agent secure storage.
Context
To set the new credentials in the agent secure storage, use the agent configuration tool in command-line
interactive mode.
Procedure
Caution
When the agent restarts, any real-time subscriptions configured on the agent are terminated and must
be reconfigured.
Related Information
Store source database access credentials in the Data Provisioning Agent secure storage using batch mode.
Context
If you don't want to store credentials in SAP HANA, you can store them in the Data Provisioning Agent secure
storage.
Entering credentials in the Data Provisioning Agent requires three components: remote source name, user
name, and password. This method of storing credentials also gives you more management flexibility, allowing
you to edit and delete credentials whenever you want.
Procedure
Results
You can now access these credentials to connect to a remote source through the Use Agent Stored Credential
remote source configuration parameter for your adapter. You can also use this editor to view, delete, and edit
credentials.
Related Information
Use the command-line configuration tool to connect to the SAP HANA server and configure the Data
Provisioning Agent and adapters. For example, you can use the configuration tool to view the agent and adapter
statuses and versions, manage custom and SAP-delivered adapters, and modify keystore paths.
Tip
Combine sequences of individual batch commands into scripts for tasks such as silent configuration with
no user interaction or automated configuration.
Store Source Database Credentials in Data Provisioning Agent [Batch] [page 82]
Store source database access credentials in the Data Provisioning Agent secure storage using batch
mode.
Related Information
Execute single commands to perform individual configuration tasks, or automate agent configuration by
grouping multiple commands into a script.
Prerequisites
The command-line agent configuration tool requires the DPA_INSTANCE environment variable to be set to the
installation root location (<DPAgent_root>).
On Windows:
set DPA_INSTANCE=C:\usr\sap\dataprovagent
On Linux:
export DPA_INSTANCE=/usr/sap/dataprovagent
Caution
Multiple instances of the Data Provisioning Agent may be installed on a single Linux host. Be sure that you
set DPA_INSTANCE to the instance that you want to modify before starting the configuration tool. If you do
not set the environment variable correctly, you may unintentionally modify the configuration of a different
agent instance.
Procedure
Results
connectHanaViaHTTP: Connects to the SAP HANA server using HTTP or HTTPS. For parameter details, see
Connecting to SAP HANA [Batch] [page 73].
connectHanaViaTCP: Connects to an on-premise SAP HANA server using TCP. For parameter details, see
Connecting to SAP HANA [Batch] [page 73].
Restriction
Several functions require that the configuration tool be connected to SAP HANA before they are used.
The configuration tool provides help for each supported command and function, including required and
optional parameters and usage information. To view the help for a command or function, append --help to
the command.
For example, to view the help for the connectHanaViaHttp configuration function:
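Assuming the agentcli launcher in <DPAgent_root>/bin and a --function flag for selecting batch functions (both are assumptions here), the call might look like:
<DPAgent_root>/bin/agentcli.sh --configAgent --function connectHanaViaHttp --help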
Connect the Data Provisioning Agent to SAP HANA in batch mode by specifying parameters that depend on
your scenario.
To connect to the SAP HANA server in batch mode, use the connectHanaViaTcp or connectHanaViaHTTP
function and specify any additional parameters relevant to your system landscape.
Parameters related to the SAP HANA server and administrator user are required in all connection scenarios.
-Dhana.admin.username=<username>: Name of the Agent Admin HANA User that connects to the SAP
HANA server
-Dhana.admin.password=<password_path>: Path to the file that contains the Agent Admin HANA User
password
To determine the correct port number when SAP HANA is deployed in a multi-database configuration,
execute the following SQL statement:
Related Information
Connect to SAP HANA on-premise with the connectHanaViaTcp function of the command-line configuration
tool. In addition to the common parameters, additional connection parameters are required.
Prerequisites
● The Agent Admin HANA User must have the following roles or privileges:
● If the SAP HANA server is configured for SSL, the agent host must already be prepared for SSL.
Related Information
Configure SSL for SAP HANA On-Premise [Command Line Batch] [page 509]
Connect to SAP HANA via JDBC WebSockets or Direct SQL with the connectHanaViaJdbc function of the
command-line configuration tool. In addition to the required parameters, additional optional connection
parameters may be needed for your configuration.
● The Data Provisioning delivery unit must be imported to the SAP HANA system.
● The Agent Admin HANA User must have the following roles or privileges:
Create the HANA User for Agent Messaging (Optional)
○ System privilege: USER ADMIN
Note
These privileges are required only when you want to create the HANA
User for Agent Messaging as part of the configuration process within
the agent configuration tool.
● You have an SAP HANA User for Agent Messaging for messaging between the agent and SAP HANA. To
create such a user manually:
1. Create the agent user (for example, AGTUSR) with a non-expiring password.
2. GRANT AGENT MESSAGING ON AGENT "<your_agent_name>" TO AGTUSR;
Tip
If the Agent Admin HANA User has been granted the privileges indicated in the “Roles and Privileges”
table, it can create the HANA User for Agent Messaging during the configuration process. The
configuration tool creates the HANA User for Agent Messaging as a technical user with a non-expiring password.
Note
In general, the password for a new SAP HANA user expires according to the SAP HANA password
policy settings, the default for which is 182 days. To avoid agent disruptions in a production scenario,
treat a new HANA User for Agent Messaging as a technical user and ensure that its password does not
expire.
For more information about configuring the password policy for a technical user in SAP HANA, see the
SAP HANA Security Guide.
-Dhana.admin.username=<username>: Specifies the name of the Agent Admin HANA User that connects to
the SAP HANA server
-Dhana.admin.password=<password_path>: Specifies the path to the file that contains the password for the
Agent Admin HANA User
-Dhana.xs.username=<username>: Specifies the name of the HANA User for Agent Messaging
-Dhana.xs.password=<path_to_password>: Specifies the path to the file that contains the password for the
HANA User for Agent Messaging
-Djdbc.websocketURL=<value> Specifies the URL to use for the JDBC WebSockets connection
For Direct SQL, specifies the hostname of the SAP HANA server.
For Direct SQL, specifies the port used to connect to the SAP HANA
server.
-Djdbc.useProxy=<value> Specifies whether to use a proxy for the JDBC WebSockets or Direct
SQL connection
Default: false
Restriction
SOCKS proxies are not supported when using JDBC WebSockets to connect to SAP HANA.
-Djdbc.proxyPassword=<value> Specifies the path to the file that contains the password used for
proxy authentication
--hana.xs.createUser <value> Specifies whether or not the configuration program should create
an Agent XS HANA User
Example: Connect to SAP HANA via JDBC WebSockets and HTTP Proxy
without Authentication
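A minimal sketch, assuming the same launcher convention as above; because only jdbc.useProxy and jdbc.proxyPassword appear in the parameter table, the -Djdbc.proxyHost and -Djdbc.proxyPort names for the proxy address are hypothetical:
<DPAgent_root>/bin/agentcli.sh --configAgent --function connectHanaViaJdbc \
  -Dhana.admin.username=AGENTADMIN \
  -Dhana.admin.password=/secure/admin_password.txt \
  -Dhana.xs.username=AGTUSR \
  -Dhana.xs.password=/secure/agtusr_password.txt \
  -Djdbc.websocketURL=/service/<service_instance_id> \
  -Djdbc.useProxy=true \
  -Djdbc.proxyHost=proxy.example.com \
  -Djdbc.proxyPort=8080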
Example: Connect to SAP HANA via Direct SQL and SOCKS Proxy with
Authentication
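A sketch under the same assumptions; the Direct SQL host and port flag names and the proxy type and user flags are hypothetical, and the proxy password path reuses the documented -Djdbc.proxyPassword parameter:
<DPAgent_root>/bin/agentcli.sh --configAgent --function connectHanaViaJdbc \
  -Dhana.admin.username=AGENTADMIN \
  -Dhana.admin.password=/secure/admin_password.txt \
  -Dhana.xs.username=AGTUSR \
  -Dhana.xs.password=/secure/agtusr_password.txt \
  -Djdbc.host=hana-host.example.com \
  -Djdbc.port=30015 \
  -Djdbc.useProxy=true \
  -Djdbc.proxyType=socks \
  -Djdbc.proxyHost=socks-proxy.example.com \
  -Djdbc.proxyPort=1080 \
  -Djdbc.proxyUsername=proxyuser \
  -Djdbc.proxyPassword=/secure/proxy_password.txt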
Connect to SAP HANA on cloud with the connectHanaViaHttp function of the command-line configuration
tool. In addition to the common parameters, extra connection parameters are required.
Prerequisites
● The Data Provisioning delivery unit must be imported to the SAP HANA system.
● The Agent Admin HANA User must have the following roles or privileges:
● The Agent XS HANA User must have the following roles or privileges:
Tip
The configuration tool can create the Agent XS HANA User during the agent configuration process as
long as the Agent Admin HANA User has been granted the correct privileges.
-Dhana.xs.username=<username>: Name of the Agent XS HANA User for messaging between the Data
Provisioning Agent and the SAP HANA server
-Dhana.xs.password=<path_to_password>: Path to the file that contains the Agent XS HANA User password
--hana.xs.createUser <value> Specifies whether or not the configuration program should create
an Agent XS HANA User
Example: Connect to SAP HANA on Cloud with HTTP and Create Agent XS
HANA User
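A minimal sketch under the same launcher assumptions; the server hostname and port flags are omitted here because their names are not listed above:
<DPAgent_root>/bin/agentcli.sh --configAgent --function connectHanaViaHttp \
  -Dhana.admin.username=AGENTADMIN \
  -Dhana.admin.password=/secure/admin_password.txt \
  -Dhana.xs.username=AGTUSR \
  -Dhana.xs.password=/secure/agtusr_password.txt \
  --hana.xs.createUser true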
If the Agent XS HANA User password has changed or expired, you may need to update the credentials in the
agent's secure storage.
Context
Use the agent configuration tool in command-line interactive mode to set the new credentials in the agent's
secure storage.
Procedure
Caution
When the agent restarts, any real-time subscriptions configured on the agent are terminated and you
may need to configure the real-time subscriptions again.
Store source database access credentials in the Data Provisioning Agent secure storage using batch mode.
Context
If you don't want to store credentials in SAP HANA, you can store them in the Data Provisioning Agent secure
storage.
Entering credentials in the Data Provisioning Agent requires three components: remote source name, user
name, and password. This method of storing credentials also gives you more management flexibility, allowing
you to edit and delete credentials whenever you want.
Procedure
4. Delete a credential.
Results
You can now access these credentials to connect to a remote source through the Use Agent Stored Credential
remote source configuration parameter for your adapter.
Related Information
Connect to the SAP HANA server and configure the agent and adapters with the Data Provisioning Agent
Configuration tool.
Caution
The graphical interface for the Data Provisioning Agent Configuration tool is deprecated.
For the latest supported functionality, use the command-line interface in interactive or batch mode. For
more information, see:
The configuration tool allows you to perform the following administrative tasks:
Store Source Database Credentials in Data Provisioning Agent [Graphical Mode] [page 93]
Store source database access credentials in the Data Provisioning Agent secure storage using the DP
Agent Configuration Tool.
Related Information
Before you can use the configuration tool to register the agent or deploy and register adapters, you must
connect to the SAP HANA server.
The steps required to connect the Data Provisioning Agent to the SAP HANA server vary depending on whether
the SAP HANA server is installed on-premise or in the cloud, and whether it is configured for secure SSL
connections.
Related Information
Specify connection information and user credentials when the SAP HANA system is located on-premise and
does not require a secure SSL connection.
Prerequisites
The Agent Admin HANA User must have the following roles or privileges:
Procedure
Note
Start the configuration tool using the Data Provisioning Agent installation owner. The installation owner
is the same user that is used to start the agent service.
Tip
To determine the correct port number when SAP HANA is deployed in a multi-database
configuration, execute the following SQL statement:
Specify connection information, user credentials, and SSL configuration information when the SAP HANA
system is located in the cloud.
When SAP HANA is in the cloud, the agent initiates all communication. The agent polls the server to see if there
are any messages for the agent to act upon.
Prerequisites
● The Data Provisioning delivery unit must be imported to the SAP HANA system.
● The Agent Admin HANA User must have the following roles or privileges.
● The Agent XS HANA User must have the following roles or privileges.
Note
The password for a new SAP HANA user expires according to the SAP HANA system's password policy
settings, the default for which is 182 days. To avoid agent disruptions in a production scenario, we
recommend that the Agent XS HANA User is a technical user with a password that does not expire.
Tip
The configuration tool can create the Agent XS HANA User during the agent configuration process as
long as the Agent Admin HANA User has been granted the correct privileges. The configuration tool
creates the Agent XS HANA User as a technical user with the default maximum password lifetime for
the SAP HANA system.
Procedure
For complete information, see Download and Deploy the Data Provisioning Delivery Unit.
2. Create or grant privileges to the Agent Admin HANA User and Agent XS HANA User.
a. Configure the Agent Admin HANA User.
This user connects to the SAP HANA system via the configuration tool to perform administrative tasks,
such as registering agents and registering adapters.
b. Configure the Agent XS HANA User.
The Agent XS HANA User is used only for messaging between the Data Provisioning Agent and SAP
HANA on Cloud. The system saves the credentials for this user in the Data Provisioning Agent's secure
store for use at runtime.
Caution
We strongly recommend that this user have only the minimally required application privilege, and
no additional administrative privileges.
Tip
The Data Provisioning Agent Configuration tool can create the Agent XS HANA User during the
agent configuration process. If you want the configuration tool to create the user, ensure that the
Agent Admin HANA User has the correct roles and privileges.
For complete information about creating users and granting permissions, see the SAP HANA
Administration Guide.
3. Connect to the SAP HANA server.
a. Click Connect to HANA.
b. Select HANA On Cloud.
c. Select Use HTTPS.
When you attempt to connect to HANA on Cloud with HTTPS for the first time, the configuration tool
allows you to automatically download and import the SAP HANA server certificates into the Data
Provisioning Agent keystore.
Note
If you prefer not to import the server certificates by this method, you must manually download and
import the certificates. For more information, see Manually Configure SSL for HANA on Cloud.
d. Specify the hostname, HTTP(s) port, and Agent Admin HANA User credentials for the SAP HANA
server.
The Agent XS HANA User is used only for messaging between the Data Provisioning Agent and the
SAP HANA server, and must be different from the Agent Admin HANA User that you used to connect to
the SAP HANA server.
○ Choose Create User if you want the configuration tool to create a user.
Tip
To create a user from the configuration tool, the Agent Admin HANA User that you use to
connect to the SAP HANA system must have the correct roles and privileges.
○ Choose Update User Credentials if you already specified an Agent XS HANA User and want to
change the user's credentials.
4. Register the Data Provisioning Agent with SAP HANA by specifying the agent name and clicking Register.
If you do not want to automatically download the SAP HANA server certificates the first time you attempt to
connect to HANA on Cloud, you must manually download and import the certificates.
Procedure
For complete information, see Download the SAP HANA Server Certificate.
b. Import the SAP HANA server root certificate into the agent keystore.
Note
You need the password for the Java keytool program to generate a keystore and import the SAP
HANA server certificate. For the password, commands, and additional information, see the
keytool.txt file located at <DPAgent_root>\ssl\keytool.txt.
Tip
Change the default password for the keystore to safeguard your certificates.
If you explicitly changed the keystore password, specify the new password here; otherwise, leave the
default password as it is.
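As a sketch of the import in step b, assuming the default agent keystore at <DPAgent_root>/ssl/cacerts (the alias and certificate file name are illustrative):
keytool -importcert -alias hana_server_root \
  -file /tmp/hana_server_root.crt \
  -keystore <DPAgent_root>/ssl/cacerts \
  -storepass <keystore_password>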
Related Information
Procedure
Results
The configuration tool indicates whether the agent service is running and the listening port in use by the agent.
Next Steps
On Windows, you can also manage the agent service from the standard Windows Services tool. The name of
the service is SAP_HANA_SDI_Agent_Service_Daemon_<instance_name>.
On Linux, you can also manage the agent with a shell script. The shell script is located at
<DPAgent_root>/bin/dpagent_servicedaemon.sh and supports the following commands:
● ./dpagent_servicedaemon.sh start
● ./dpagent_servicedaemon.sh stop
Related Information
Before you can use adapters deployed on the Data Provisioning Agent, you must register the agent with SAP
HANA.
Prerequisites
● The Agent Admin HANA User must have the following roles or privileges.
● For SAP HANA on Cloud, the Agent XS HANA User must have the following roles or privileges.
Procedure
1. Start the agent configuration tool and connect to the SAP HANA server.
2. Click Register Agent.
3. Specify the agent connection information.
○ If SAP HANA is not in the cloud, specify the agent name and hostname.
Ensure that the SAP HANA server can communicate with the agent host. Depending on the network
configuration, you may need to qualify the agent hostname fully.
Ensure that your firewall settings allow the connection from the SAP HANA server to the agent host on
the listener port. By default, the listener port is 5050.
○ If SAP HANA is in the cloud, specify the agent name.
When SAP HANA is in the cloud, the agent service is restarted to complete the registration process.
4. Click Register.
Results
The agent is registered with SAP HANA. If SAP HANA is in the cloud, the agent service is automatically
restarted.
Caution
Unregistering the agent from the SAP HANA server performs a cascade drop of the agent. As a result, any
remote subscriptions that use the agent are also deleted, even if they are active.
Related Information
The SAP JVM is bundled with the Data Provisioning Agent and used as the default Java Runtime Environment.
You can choose to update the version of the SAP JVM used by an installed agent or replace it with a custom
Java Runtime Environment.
Prerequisites
If you want to update your version of the SAP JVM, download the version of the SAP JVM that matches the
operating system and processor architecture used by the Data Provisioning Agent host.
Procedure
The -vm setting must be specified before the -vmargs setting in the dpagent.ini file, and -vm
and its setting must be entered on different lines. Additionally, do not use quotes around the path
even if the path contains spaces.
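For illustration, a sketch of the relevant dpagent.ini lines, assuming an Eclipse-style launcher file; the JVM path and memory value are examples only:
-vm
C:\sapjvm_8\bin\server\jvm.dll
-vmargs
-Xmx4096m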
Related Information
Store source database access credentials in the Data Provisioning Agent secure storage using the DP Agent
Configuration Tool.
Context
If you don't want to store credentials in SAP HANA, you can store them in the Data Provisioning Agent secure
storage.
Entering credentials in the Data Provisioning Agent requires three components: remote source name, user
name, and password. This method of storing credentials also gives you more management flexibility,
allowing you to edit and delete credentials whenever you want.
Results
You can now access these credentials to connect to a remote source through the Use Agent Stored Credential
remote source configuration parameter for your adapter. You can also use this editor to view, delete, and edit
credentials.
Related Information
Agent grouping provides failover and load-balancing capabilities by combining individual Data Provisioning
Agents installed on separate host systems.
Restriction
Failover is not supported for initial and batch load requests. Restart the initial load following a failure due to
agent unavailability.
Restriction
Load balancing is supported only for initial loads. It is not supported for changed-data capture (CDC)
operations.
Planning considerations
Before configuring agents in a group, review the following considerations and limitations:
Related Information
When an agent node in an agent group is inaccessible for longer than the configured heartbeat interval, the
Data Provisioning Server chooses a new active agent within the group. It then resumes replication for any
remote subscriptions active on the original agent.
Initial and batch load requests to a remote source configured on the agent group are routed to the first
available agent in the group.
Failover is not supported for initial and batch load requests. Restart the initial load following a failure due to
agent unavailability.
Although no user action is required for automatic failover within an agent group, you may choose to monitor
the current agent node information.
● To query the current master agent node name for a remote source:
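A sketch, assuming a smart data integration monitoring view M_REMOTE_SOURCES that exposes the active agent name per remote source (the view and column names here are assumptions, not confirmed by this guide):
SELECT REMOTE_SOURCE_NAME, AGENT_NAME
FROM M_REMOTE_SOURCES
WHERE REMOTE_SOURCE_NAME = '<remote_source_name>';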
Caution
If all nodes in an agent group are down, replication cannot continue and must be recovered after one or
more agent nodes are available.
Restarting nodes in an agent group does not impact active replication tasks.
For the master agent node, stopping or restarting the agent triggers the agent group failover behavior and a
new active master node is selected.
With multiple agents in an agent group, you can choose to have the agent for the initial loads selected
randomly, selected from the list of agents in a round-robin fashion, or not load balanced.
Note
Agent grouping provides load balancing for initial loads only. Load balancing is not supported for changed-
data capture (CDC) operations.
You can create an agent group or remove an existing group in the Data Provisioning Agent Monitor.
Prerequisites
The user who creates or removes the agent group must have the following roles or privileges:
Context
Use the buttons in the Agent Group table to create or remove an agent group.
Procedure
Note
When you remove an agent group, any agent nodes for the group are removed from the group first.
Agents cannot be removed from the group if there are active remote subscriptions.
Any agent nodes are removed from the group, and the group is removed from the Agent Group table.
You can manage the agent nodes that belong to an agent group in the Data Provisioning Agent Monitor.
Prerequisites
The user must have the following roles or privileges to manage agent nodes:
Context
Use the buttons in the Agent Monitor and Agent Group tables to perform the action.
Tip
Select an agent group in the Agent Group table to display its nodes in the Agent Monitor table.
Procedure
● To register a new agent with the SAP HANA system and add it to an existing agent group, click Create
Agent.
○ Select the new agent group from the Agent Group list.
If you are assigning the agent to a different group, select the empty entry for Enable SSL to avoid
connection issues when the group is changed.
○ To remove the agent from an agent group, select the empty entry from the Agent Group list.
The group for the agent is displayed in the Agent Monitor table.
● To add multiple existing agents to an agent group, select the group in the Agent Group table and click Add
Agents.
a. Select the agents that you want to add to the group.
b. Click Add Agents.
The selected agents are assigned to the agent group and all associated entries in the Agent Monitor and
Agent Group tables are updated.
Related Information
Manage Agents from the Data Provisioning Agent Monitor [page 103]
CREATE AGENT Statement [Smart Data Integration] [page 537]
ALTER AGENT Statement [Smart Data Integration] [page 527]
Before you can create remote sources in an agent group, you must add adapters to the group in the SAP HANA
Web-based Development Workbench.
Prerequisites
The user who adds an adapter must have the following roles or privileges:
Procedure
1. Open the SQL console in the SAP HANA Web-based Development Workbench.
4. Add the adapter to each additional agent node in the agent group.
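As a sketch of the SQL involved (the adapter and node names are illustrative), the adapter is created at one node and then added to each remaining node:
CREATE ADAPTER "MssqlLogReaderAdapter" AT LOCATION AGENT "agent_node_1";
ALTER ADAPTER "MssqlLogReaderAdapter" ADD LOCATION AGENT "agent_node_2";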
To receive the benefits of failover from an agent group, you must configure your remote sources in the agent
group.
Procedure
a. In the Catalog editor, right-click the Provisioning Remote Sources folder, and choose New
Remote Source.
b. Enter the required configuration information for the remote source, including the adapter name.
c. In the Location dropdown, choose agent group, and select the agent group name.
d. Click Save.
● To add an existing remote source to an agent group:
a. In the Catalog editor, select the remote source in the Provisioning Remote Sources folder.
b. In the Location dropdown, choose agent group, and select the agent group name.
c. Click Save.
Related Information
Procedure
1. Open the SQL console in the SAP HANA studio or Web-based Development Workbench.
2. Execute the CREATE or ALTER REMOTE SOURCE statement in the SQL console.
Note
If you are changing only the location for the remote source, you can omit the ADAPTER and
CONFIGURATION clauses:
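For example, a sketch with a hypothetical remote source and agent group name:
ALTER REMOTE SOURCE "MyRemoteSource" AT LOCATION AGENT GROUP "my_agent_group";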
Related Information
When you use ALTER REMOTE SOURCE to modify a remote source, you must specify the configuration and
credential details as XML strings.
You cannot change user names while the remote source is suspended.
Use the Data Provisioning Agent Monitor to perform basic administration tasks such as registering, altering, or
dropping Data Provisioning Agents.
Prerequisites
The user must have the following roles or privileges to manage agents:
Context
Use the following controls in the Agent Monitor table to perform an action.
Procedure
● Select Create Agent to register a new agent with the SAP HANA system.
a. Specify the name of the agent and relevant connection information.
b. If the agent uses a secure SSL connection, select Enable SSL.
c. If you want to assign the agent to an existing agent group, select the group under Agent Group.
The agent is removed from the Agent Monitor table. If the agent was assigned to an agent group, it’s also
removed from the agent group.
Related Information
The agent preferences provide advanced configuration options for the Data Provisioning Agent. The method
for accessing and modifying agent preferences depends on the configuration mode that you use.
Choose Config Preferences in the Data Provisioning Agent Configuration tool, and then select Adapter
Framework.
Use the Set Agent Preferences action in the Agent Preferences menu in interactive mode.
By default, the agent is configured to start in TCP mode and monitor port 5050 for requests from SAP HANA.
Framework listener port (framework.listenerPort): Port the agent monitors for requests from the SAP HANA
server. Default: 5050.
Note
The Framework listener port should be SSL-enabled for security.
Admin port (framework.adminport): Local port used for internal communication between the agent and the
agent configuration tool. Default: 5051.
Note
Do not enable the admin port within a firewall; the port should be blocked from outside access to prevent
unauthorized changes on the agent.
framework.threadPoolSize
framework.pollingTimeout
framework.timeUnit
framework.maxDataSize
Row fetch size (max) (framework.fetchSize): Maximum number of browse nodes or rows to fetch from an
adapter. Default: 1000.
Row fetch size (min) (framework.min.fetchSize): Minimum number of rows to fetch from an adapter.
Default: 10.
Max number of retries (framework.retry.maxTries): Maximum number of times the agent tries to connect
after a registration or ping failure. Default: 10.
Time to wait before retry (framework.retry.waitTime): The amount of time to wait before retrying. Default: 30.
Shared Directory for Agent Group (framework.clusterSharedDir): Shared directory for the agent group to
which this agent instance belongs, if any. Default: None.
framework.log.level: TRACE, DEBUG, ERROR, ALL
framework.log.maxBackupIndex
Log file max file size (framework.log.maxFileSize): Maximum file size in MB or KB that the log file should use.
Default: 10 MB.
Trace message max size (framework.trace.length): When tracing is enabled, the number of characters in a
trace message after which the message is truncated. Default: 1024.
Trace ping message (framework.trace.pingMessage): When tracing is enabled, specifies printing of the ping
message. Default: false.
Trace all data (framework.trace.data): Enables printing the content of the data rows sent to the server.
Default: false.
Max HTTP Connection per route (cloud.defaultMaxPerRoute): Maximum number of connections the internal
HTTP client can create. Default: 20.
cloud.maxTotal
proxyType
proxyHost
proxyPort
Non-Proxy Hosts (nonProxyHosts)
framework.readOnlyAdapters
framework.so.maxOpenConnection
Related Information
Use the command-line configuration tool to manage advanced runtime options stored in the dpagent.ini
configuration file safely.
Agent runtime options are typically used when troubleshooting an agent issue or optimizing agent
performance.
The method for accessing and modifying the agent runtime options depends on the configuration mode that
you use.
Start the configuration tool with the --configAgentIniFile parameter and select the option that you want
to modify.
The configuration tool prompts you for any information required for the runtime option that you are modifying.
Use the --configAgentIniFile parameter and specify the function for the agent runtime option that you
want to modify, as well as any additional parameters required by the function.
For example, to change the maximum amount of memory available to the agent to 16 GB on Windows:
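A sketch, assuming the agentcli launcher and a --function flag for batch execution; the 16g value format is also an assumption:
<DPAgent_root>\bin\agentcli.bat --configAgentIniFile --function setDPAgentMemory -Ddpagent.vm.vmx=16g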
Clear DPAgent Cache on Next Start: When enabled, the next time the agent is restarted, any cached agent,
OSGi, and Eclipse runtime data is removed and the caches are reinitialized.
Caution
Do not enable this option unless instructed to do so by SAP Support.
Switch Java Virtual Machine (changeDefaultJVM): Updates the version of the SAP JVM used by an installed
agent, or replaces the SAP JVM with a custom Java Runtime Environment. The SAP JVM is bundled with the
Data Provisioning Agent and used as the default Java Runtime Environment.
-Ddpagent.vm.directory=<jvm_path>
Switch DPAgent Log Directory (changeLogDirectory): Modifies the location of the root directory where all
agent-related log files are generated. The default root log path is <DPAgent_root>/log.
-Ddpagent.log.directory=<log_root_path>
Change DPAgent Max Available Memory (setDPAgentMemory): Modifies the maximum amount of memory
that the agent can use.
-Ddpagent.vm.vmx=<amount>
enableRemoteDebugging:
-Ddpagent.remoteDebugging.port=<port_number>
-Ddpagent.remoteDebugging.suspend=<value>
Caution
Do not enable this option unless instructed to do so by SAP Support.
injectSystemProperty:
-Ddpagent.system.key=<value>
-Ddpagent.system.value=<value>
Caution
Do not enable this option unless instructed to do so by SAP Support.
Revert dpagent.ini to original state (setCleanParameter): Removes any changes to the agent runtime options
and reverts the dpagent.ini file to its original state.
There are several items to consider when moving your configured agent to a different host.
If you are migrating to a different host, keep the following rules in mind:
● The agent install path must be the same. You cannot migrate to a different path because the path is
hardcoded in many places.
● The host operating system should be the same. For example, you cannot migrate a configuration from
Linux to Windows.
● If you are migrating an agent that was configured to talk to SAP HANA on cloud, you cannot have both
agents running afterwards. SAP HANA does not support communication with two agents using the same
configuration.
If the agent is the same version as the one on the old machine, then you can migrate the following objects:
If the agents are different versions, then you can migrate the following objects:
Note
After the migration, be sure to update the dpagentconfig.ini file by editing the agent.hostname
parameter to match the host the agent is now on.
Related Information
Prerequisites
Procedure
Next Steps
Caution
The OData adapter isn’t part of the Data Provisioning Agent installation. The OData adapter is installed with
the SAP HANA server and requires configuration that can’t be done using the Data Provisioning Agent
Configuration tool.
Restriction
When the target table is created as a column store table and the option CS_DATA_TYPENAME is set to
ST_MEMORY_LOB, the in-memory size is limited to less than 1 GB. To avoid this limitation, set the
option to LOB. This solution applies to all adapters.
Related Information
Before you can connect to remote sources using an adapter, you must register the adapter with SAP HANA.
Prerequisites
The HANA administrator user must have the following roles or privileges:
Note
This application privilege is required only for SAP HANA in the cloud.
Note
Before registering the adapter with the SAP HANA system, ensure you have downloaded and installed the
correct JDBC libraries. See the SAP HANA smart data integration Product Availability Matrix (PAM) for
details. Place the files in the <DPAgent_root>/lib folder.
Procedure
1. Start the Data Provisioning Agent Configuration tool and connect to SAP HANA.
2. For custom adapters, click Deploy Adapter and point to the adapter JAR files.
Note
Data provisioning adapters delivered by SAP are automatically deployed on the agent during agent
installation.
For example, log reader adapters require source configuration to enable real-time replication.
For complete information about source system configuration, see the relevant section for each adapter in
“Configure Data Provisioning Adapters”.
Results
The selected adapter is registered with SAP HANA and can be selected when creating a remote source.
Next Steps
Note
For SAP HANA in the cloud, you must restart the agent service to complete the registration of adapters. If
the registration succeeds and the restart of the service fails, or the registration of all adapters fails, then the
registration is rolled back.
Related Information
Before you can connect to remote sources using an adapter, you must register the adapter with SAP HANA.
Prerequisites
The HANA administrator user must have the following roles or privileges:
Note
This application privilege is required only for SAP HANA in the cloud.
Note
Before registering the adapter with the SAP HANA system, ensure you have downloaded and installed the
correct JDBC libraries. See the SAP HANA smart data integration Product Availability Matrix (PAM) for
details. Place the files in the <DPAgent_root>/lib folder.
Procedure
1. Start the command-line agent configuration tool and connect to SAP HANA.
2. Select 8 to enter the Custom Adapters menu.
Note
Data provisioning adapters delivered by SAP are automatically deployed on the agent during agent
installation.
Note
The adapter name must match the name displayed by the Display Adapters option.
For example, log reader adapters require source configuration to enable real-time replication.
For complete information about source system configuration, see the relevant section for each adapter in
“Configure Data Provisioning Adapters”.
Results
The selected adapter is registered with SAP HANA and can be selected when creating a remote source.
Next Steps
Note
For SAP HANA in the cloud, you must restart the agent service to complete the registration of adapters. If
the registration succeeds and the restart of the service fails, or the registration of all adapters fails, then the
registration is rolled back.
Related Information
Use the Data Provisioning Agent Monitor to perform basic administration tasks, such as adding adapters to or
removing adapters from a Data Provisioning Agent instance.
Prerequisites
The user must have the following roles or privileges to manage adapters:
Context
Use the buttons in the Agent Monitor and Agent Adapter Mapping tables to perform an action.
Procedure
● To add adapters to an agent instance, select the agent and click Add Adapters in the Agent Monitor table.
a. Select the desired adapters from the list of adapters deployed on the agent instance.
b. Click Add Adapters.
Related Information
Using SAP HANA smart data integration, you set up an adapter that can connect to your source database, then
create a remote source to establish the connection.
Prerequisites
● The user who creates the remote source must have the following roles or privileges:
Context
In SAP HANA smart data integration, you can create a remote source with the Web-based Development
Workbench user interface.
Prerequisites
The user who creates the remote source must have the following roles or privileges:
Procedure
1. In the Web-based Development Workbench Catalog editor, expand the Provisioning node.
2. Right-click the Remote Sources folder and choose New Remote Source.
3. Enter the required information including the adapter and Data Provisioning Agent names.
Regarding user credentials, observe the following requirements:
○ A remote source created with a secondary user can be used only for querying virtual tables.
○ If the remote source is used for designing a .hdbreptask or .hdbflowgraph enabled for real time,
use a technical user.
○ If you create a remote subscription using the CREATE REMOTE SUBSCRIPTION SQL statement, use
a technical user.
4. Select Save.
Related Information
In SAP HANA smart data integration, you can create a remote source using the SQL console.
Prerequisites
The user who creates the remote source must have the following roles or privileges:
Context
To create a remote source using the SQL console, you must know the connection information for your source.
For an existing remote source, the connection information is in an XML string in the CONFIGURATION
statement.
For your adapter, refer to the remote source configuration topic for that adapter in this guide to see its sample
SQL code. Change the variables to the correct values for your remote source.
The example at the end of this topic illustrates the basic CONFIGURATION connection information XML string
for a Microsoft SQL Server adapter.
● If you’ve recently updated the Data Provisioning Agent, the connection information XML string could also
have been updated for your adapter. Therefore, refresh the adapter to get up-to-date connection
information.
● To view the connection information for an existing remote source, execute SELECT * FROM
"PUBLIC"."REMOTE_SOURCES". In the resulting view, look in the CONNECTION_INFO column.
Tip
To ensure you can view the entire XML string in the CONNECTION_INFO column, in your SAP HANA
preferences enable the setting Enable zoom of LOB columns.
● To view all of the configuration parameters for a given adapter type, execute SELECT * FROM
"PUBLIC"."ADAPTERS". In the resulting view, look in the CONFIGURATION column. This information can
be useful if you want to, for example, determine the PropertyEntry name for a given parameter in the user
interface, shown as displayName. For example:
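For illustration, the following sketch shows the general shape of a CREATE REMOTE SOURCE statement for a Microsoft SQL Server log reader adapter. The agent name, property names, and values are illustrative and vary by adapter version; copy the sample SQL from your adapter's remote source configuration topic rather than from this sketch:
CREATE REMOTE SOURCE "MyMssqlSource" ADAPTER "MssqlLogReaderAdapter"
AT LOCATION AGENT "agent_local"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="pds_server_name" displayName="Host">mssql-host.example.com</PropertyEntry>
  <PropertyEntry name="pds_port_number" displayName="Port Number">1433</PropertyEntry>
  <PropertyEntry name="pds_database_name" displayName="Database Name">mydb</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>my_user</user>
  <password>my_password</password>
</CredentialEntry>';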
Related Information
The syntax for creating secondary user credentials for SAP HANA smart data integration adapters is different
from the syntax for SAP HANA system adapters.
The syntax for creating secondary user credentials for SAP HANA smart data integration adapters is as follows.
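As a hedged sketch, the credential for a smart data integration adapter is supplied as a CredentialEntry XML string rather than the user=...;password=... string used for SAP HANA system adapters; the component name and clause order below are assumptions:
CREATE CREDENTIAL FOR USER <user_name> COMPONENT 'SAPHANAFEDERATION'
PURPOSE '<remote_source_name>' TYPE 'PASSWORD'
USING '<CredentialEntry name="credential">
  <user>remote_user</user>
  <password>remote_password</password>
</CredentialEntry>';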
To build and execute flowgraphs and replication tasks in SAP Web IDE, first configure the grantor privilege.
Prerequisites
To configure the grantor privilege, you must have one or more remote sources already configured.
Procedure
1. In SAP HANA, create a grantor database user with the rights to grant privileges to others.
If necessary, create the grantor user and role, and grant the role to the user.
Note
For example:
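A sketch using the GEN_GRANTOR_USER and GEN_GRANTOR_ROLE names that appear later in this procedure; the password and user options are illustrative:
CREATE USER GEN_GRANTOR_USER PASSWORD "Welcome1" NO FORCE_FIRST_PASSWORD_CHANGE;
CREATE ROLE GEN_GRANTOR_ROLE;
GRANT GEN_GRANTOR_ROLE TO GEN_GRANTOR_USER WITH ADMIN OPTION;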
2. Grant the following privileges on the remote sources to the grantor role.
○ CREATE VIRTUAL TABLE
○ CREATE VIRTUAL FUNCTION
○ CREATE REMOTE SUBSCRIPTION
○ LINKED DATABASE
○ PROCESS REMOTE SUBSCRIPTION EXCEPTION
○ ALTER
○ DROP
3. For the SAP HANA Service, use the SAP Cloud Platform cockpit to create a user-provided grantor service.
a. In the SAP Cloud Platform cockpit, navigate to your space and create a user-provided service instance.
b. Specify the name of the service credentials.
{
  "host": "<hostname>",
  "port": "<port_number>",
  "certificate": "<host_certificate>",
  "user": "<user_name>", "password": "<password>",
  "driver": "com.sap.db.jdbc.Driver", "tags": ["hana"]
}
For more information, see Create User-Provided Service Instances Using the Cockpit .
4. For SAP HANA on-premise, log into SAP HANA as XSA_ADMIN and create a user-provided grantor service.
a. Log in using the XS command-line interface.
For example:
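A sketch assuming the standard xs login flags; the API endpoint format is illustrative:
xs login -a https://<host>:30030 -u XSA_ADMIN -p <password>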
b. If you use a different space for your SAP Web IDE project, change the space. Otherwise, the default is
“SAP”.
xs t -s PROD
c. Create a service for the grantor database user with a service name of your choice.
xs cups remote_system_grant_service -p
"{\"host\":\"hostname\",\"port\":\"port_number\",\"user\":\"GEN_GRANTOR_USER\",\"password\":\"Welcome1\",\"driver\":\"com.sap.db.jdbc.Driver\",\"tags\":[\"hana\"]}"
5. Add the grantor service to the database module in your SAP Web IDE project.
In the MTA development descriptor (mta.yaml), add the grantor service as a resource and build the project.
modules:
  - name: hdb1
    type: hdb
    path: hdb1
    requires:
      - name: hdi_hdb1
        properties:
          TARGET_CONTAINER: ~{hdi-container-name}
      - name: grant_service
resources:
  - name: hdi_hdb1
    properties:
      hdi-container-name: ${service-name}
    type: com.sap.xs.hdi-container
  - name: grant_service
    type: org.cloudfoundry.existing-service
    parameters:
      service-name: remote_system_grant_service
Note
First create a database module in your project if one doesn’t already exist.
Note
If you’re creating the dictionary for object search, the ALTER privilege is required.
b. In the grantor file, specify the grantor service name and any remote sources that you want to use.
{
"remote_system_grant_service": {
"object_owner": {
"roles": [
"GEN_GRANTOR_ROLE"
],
"global_object_privileges": [{
"name":"HanaRemoteSource",
"type":"REMOTE SOURCE",
"privileges":[
"CREATE VIRTUAL TABLE", "CREATE REMOTE SUBSCRIPTION"
]
}]
},
"application_user": {
"roles": [
"GEN_GRANTOR_ROLE"
]
}
}
}
Tip
To avoid errors, build starting from the database module instead of right-clicking the hdbgrants file and
choosing Build Selected File.
Next Steps
After you have successfully built the module, you can create virtual table (hdbvirtualtable), flowgraph
(hdbflowgraph), and replication task (hdbreptask) objects in the SAP Web IDE space.
Related Information
GRANT Statement (Access Control) (SAP HANA SQL and System Views Reference)
After you install SAP HANA smart data integration, you must take several actions to enable and access the
monitoring user interfaces for Data Provisioning Agents, remote subscriptions, and tasks.
These actions allow you to access the Data Provisioning monitors by either typing the URL directly in your
browser or through links in SAP HANA cockpit.
Related Information
Download and Deploy the Data Provisioning Delivery Unit [page 33]
Monitoring Data Provisioning in the SAP HANA Web-based Development Workbench
Grant the appropriate roles to users who perform the various tasks in the Data Provisioning monitors.
Prerequisites
Ensure that you’ve been granted the SYSTEM privilege USER ADMIN to be able to create, alter, or delete users.
Procedure
1. Log in to SAP HANA studio with a user name that has been granted the USER ADMIN system privilege.
2. Grant the role sap.hana.im.dp.monitor.roles::Monitoring to those users who perform monitoring tasks.
a. In the Systems view, expand your SAP HANA server name and expand Security.
b. Double-click the user name.
c. On the Granted Roles tab, click the + icon in the upper left corner.
Next Steps
Users can access the monitors from SAP HANA cockpit or view the monitors directly by entering the following
URLs in a web browser:
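As an illustration, the monitor URLs generally follow this pattern on an XS classic system; the view parameter values shown here are assumptions:
http://<host>:80<instance>/sap/hana/im/dp/monitor/?view=DPAgentMonitor
http://<host>:80<instance>/sap/hana/im/dp/monitor/?view=DPSubscriptionMonitor
http://<host>:80<instance>/sap/hana/im/dp/monitor/?view=IMTaskMonitor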
Enterprise Semantic Services provides an API to enable searching for publication artifacts or run-time objects
based on their metadata and contents. It is optional for SAP HANA smart data integration.
To enable Enterprise Semantic Services, an administrator does the following high-level tasks:
● Downloads the SAP HANA Enterprise Semantic Services delivery unit and installs it on the SAP HANA
platform
● Grants roles and privileges to users
● Publishes datasets to the Enterprise Semantic Services knowledge graph, or in the case of an application
that has already been configured to call the Enterprise Semantic Services REST API, the application
populates the knowledge graph
Setting Up the SAP HANA Instance for Enterprise Semantic Services [page 126]
The Enterprise Semantic Services component supports both on-premise multitenant and SAP HANA
cloud platform deployments.
Grant Enterprise Semantic Services Roles and Privileges to Users [page 132]
Next: Enable SAP HANA Smart Data Integration REST API [page 133]
Related Information
The Enterprise Semantic Services component supports both on-premise multitenant and SAP HANA cloud
platform deployments.
For details on supported versions, see the applicable Product Availability Matrix.
Related Information
For a multitenant deployment, Enterprise Semantic Services requires the SAP HANA script server and access
to multitenant database containers.
Prerequisites
Procedure
For example, in the Web-based Development Workbench or SAP HANA studio, enter the following SQL
statement:
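As a sketch, the script server is typically added to a tenant database with an ALTER DATABASE statement of this shape, run from the system database (the tenant name is illustrative):
ALTER DATABASE MYTENANT ADD 'scriptserver';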
Note
If the SAP HANA smart data quality component is already installed, then the scriptserver service has
already been added.
Related Information
Configure HTTP(S) Access to Tenant Databases via SAP HANA XS Classic (SAP HANA Administration Guide)
Port Assignment in Tenant Databases (SAP HANA Administration Guide)
Download the Enterprise Semantic Services delivery unit and deploy it to enable semantic searches of data
sources.
Procedure
Related Information
To install Enterprise Semantic Services (ESS), import the downloaded ESS delivery unit.
Related Information
Import the ESS Delivery Unit with SAP HANA Studio [page 129]
Import the ESS Delivery Unit with SAP HANA Application Lifecycle Management [page 129]
How to import the Enterprise Semantic Services (ESS) delivery unit using SAP HANA studio.
Prerequisites
Procedure
How to import the Enterprise Semantic Services (ESS) delivery unit using SAP HANA Application Lifecycle
Management.
Prerequisites
Context
For multitenant database deployment, import the delivery unit on a tenant database, not on the system
database.
1. If not already granted, grant the role sap.hana.xs.lm.roles::Administrator to the user name you will use to
log in to SAP HANA Application Lifecycle Management.
a. In the SAP HANA studio Systems view, expand the name of your SAP HANA server and choose
Security > Users > SYSTEM.
b. On the Granted Roles tab, click the green + icon in the upper left corner.
c. On the Select Roles dialog, type lm in the search string box.
d. Select role sap.hana.xs.lm.roles::Administrator and click OK.
2. Open SAP HANA Application Lifecycle Management by entering the following URL in a web browser:
<host name>:80<2-digit instance number>/sap/hana/xs/lm
3. Log in with the user name you authorized in step 1.
The first time you log in, a pop-up window appears asking you to enter a name for this server.
4. On the Home tab, click the Delivery Units tile.
5. Click Import.
6. Click Browse, navigate to where you downloaded the delivery unit, select the .tgz file, and click Open.
7. Click Import.
Results
After successful import, the name of the delivery unit displays in the list on the left.
After downloading and importing the Enterprise Semantic Services (ESS) delivery unit, install this component
to enable semantic searches of data sources.
Prerequisites
● Upgrade your SAP HANA instance if you need to upgrade to a new SPS revision.
● If installed, uninstall the DEMO delivery unit.
● If you are upgrading from a version earlier than 1.0 SP03 Rev0 (1.3.0), first uninstall Enterprise Semantic
Services.
● If you have ESS version SPS01 Patch 1, also known as 1.0 SP00 Rev1, or earlier, follow the procedure that
requires the installation script install.xsjs.
● If you have ESS version SPS01 Patch 2, also known as 1.0 SP01 Rev2, or later, follow this procedure, which
requires the installation script install.html.
Procedure
Results
Successful installation is indicated by the message Setup completed, along with a status table that lists each
setting.
At any point in the installation you can monitor its status by accessing the install.html URL. Any errors display
with messages for corrective actions.
Related Information
After installing Enterprise Semantic Services, grant the necessary roles to the SAP HANA users who will
interact directly or indirectly with Enterprise Semantic Services.
Procedure
1. Log in to SAP HANA with a user name that has the EXECUTE object privilege on the
GRANT_ACTIVATED_ROLE procedure.
2. In the Systems view, expand Security in one of the following database names:
○ If you are installing on SAP HANA Version 1.0, select <your SYSTEM database name> and expand
Security.
○ If you are installing on SAP HANA Version 2.0, select <your TENANT database name> and expand Security.
3. Grant the appropriate role to each user by following these steps:
a. Double-click the user name.
b. On the Granted Roles tab, click the “+” icon in the upper left corner.
c. On the Select Roles dialog, type ess in the search string box.
d. Select the appropriate role for this user and click OK.
sap.hana.im.ess.roles::Administrator: For users who will access the Enterprise Semantic Services Administration user interface.
sap.hana.im.ess.roles::Publisher: For users who will access the Enterprise Semantic Services publication API to define content to be published in the knowledge graph.
sap.hana.im.ess.roles::User: For users who will access the Enterprise Semantic Services consumption (read-only) APIs such as Search, Autocomplete, and content-type identification (CTID).
4. Alternatively, you can open the SQL console of the SAP HANA studio and execute the following statement:
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.im.ess.roles::Administrator', '<USER_NAME>');
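The same call pattern applies to the other roles listed above. For example (the user name ESS_USER1 is a placeholder):
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.im.ess.roles::Publisher', 'ESS_USER1');
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.im.ess.roles::User', 'ESS_USER1');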
You can permanently uninstall Enterprise Semantic Services, for example in the case of an upgrade.
Prerequisites
Procedure
http://<your_HANA_instance:port>/sap/hana/xs/lm
b. Choose Products > Delivery Units.
c. Select HANA_IM_ESS.
d. Click Delete.
e. Select the checkbox to include objects and packages.
f. Confirm the deletion.
2. Remove the ESS users.
In the Web-based Development Workbench or SAP HANA studio, drop the Enterprise Semantic Services
users. For example, in SAP HANA studio, enter the following SQL statements:
In the Web-based Development Workbench or SAP HANA studio, drop the HANA_IM_ESS schema. For
example, in SAP HANA studio, enter the following SQL statement:
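The exact statements depend on the ESS users created in your system. A minimal sketch, assuming a hypothetical user name ESS_USER (replace it with your actual ESS user names):
DROP USER ESS_USER CASCADE;
DROP SCHEMA "HANA_IM_ESS" CASCADE;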
Use the SAP HANA smart data integration REST API to programmatically execute and monitor flowgraphs and
to process data for interactive data transformation within your application.
For more information, see the SAP HANA Smart Data Integration REST API Developer Guide.
To take advantage of SAP HANA smart data quality functionality, you must perform a few tasks.
Procedure
Related Information
3.1 Directories
Download and deploy directories to take advantage of smart data quality functionality.
Context
The Cleanse and Geocode nodes rely on reference data found in directories that you download and deploy to
the SAP HANA server.
If reference data isn’t provided, the Cleanse node performs parsing, but doesn’t perform assignment.
Additionally, you’re able to create and activate flowgraphs that include the Geocode node, but their execution
fails.
You may need to download multiple directories, depending on your license agreement.
Before you install your directories, stop the Script Server, and then restart it once the installation is
complete. Make sure that you don’t have any running flowgraph tasks.
Related Information
Install or Update Directories on the SAP HANA Host Using Lifecycle Manager [page 136]
Install or Update Directories from a Web Browser Using Lifecycle Manager [page 138]
Integrate Existing Directories into Lifecycle Manager [page 141]
Uninstall Directories [page 144]
Address Directories & Reference Data
Follow these steps the first time you install the directories on an SAP HANA host or update the directories after
you have a release-dated folder.
Procedure
Note
The default location is where SAP HANA is installed. Choose a different directory path so that the directories and reference data are separate from the SAP HANA installation. If you uninstall SAP HANA with the directories in the default path, the directories are also uninstalled.
Note
In this document, when referring to the “sidadm”, the “sid” is lowercase. When referring to the “SID”,
the “SID” is uppercase.
4. Open the SAP HANA Platform Lifecycle Management Web site https://<hostname>:1129/lmsl/HDBLCM/<SID>/index.html. Log in with your <sidadm> user name and password.
5. Click Download Components and specify the download mode as Download Archives on the SAP HANA
Host, and then click Next.
6. Enter your user ID and password, and click Next.
Note
18. Right-click the system name and select Configuration and Monitoring > Open Administration.
19. On the Configuration tab, enter dq in the filter and press Enter.
The dq_reference_data_path is set with the reference data path you defined under the Default column. This path automatically has a dated folder at the end. For example, <filepath>/Directories/2017-07/.
20.(Optional) To delete old directory data, open a file browser and navigate to the directory location. Select
the old release-dated folder and press Delete.
Caution
Some directory data is updated monthly and other directory data is updated quarterly. Therefore, the
monthly release folders can contain a link to the directories in an older release folder rather than to the
actual directory data. Before deleting, be sure that those directories aren’t in use.
21. Configure the operations cache to improve performance. See "Configuring the Operation Cache" in the
SAP HANA Smart Data Integration and Smart Data Quality Administration Guide.
22. Set alerts to be notified of when the directories expire. Configure monitoring alerts from SAP HANA
cockpit. For more information, see “Monitoring Alerts” in the SAP HANA Administration Guide.
Follow these steps the first time you install the directories using a web browser or update the directories using
a web browser after you have a release-dated folder.
Procedure
Note
The default location is where SAP HANA is installed. Choose a different directory path so that the directories and reference data are separate from the SAP HANA installation. If you uninstall SAP HANA with the directories in the default path, the directories are also uninstalled.
Note
In this document, when referring to the “sidadm”, the “sid” is lowercase. When referring to the “SID”,
the “SID” is uppercase.
4. Open the SAP HANA Platform Lifecycle Management Web site https://<hostname>:1129/lmsl/HDBLCM/<SID>/index.html. Log in with your <sidadm> user name and password.
5. Click Download Components and specify the download mode as Download Archives via the Web Browser,
and then click Next.
6. Enter your user ID and password, and click Next.
Your user ID begins with an “S” followed by 10 digits and is tied to an account with active directories. For
example, “S0123456789”.
○ The Archives are Accessible from the SAP HANA Host. Copy the Address Directories to a location that
can be accessed by the <sidadm>. We recommend that the <sidadm> creates this directory, so the
permissions are correct.
○ Upload Archives to the SAP HANA Host. See the steps for uploading archives in the topic Upload
Archives to the SAP HANA Host [page 140].
12. Enter the shared location.
Note
If the location box turns red, either there’s an invalid path or the location doesn’t have the required
permission for the <sidadm>.
14. Review and confirm the information, and then click Next.
15. Click Close after the Upload/Extract Components is finished.
16. Click Install or Update Additional Components.
17. Click Add Software Locations.
18. Enter the path to the extracted folder from the previous step, and then click Search also in Subfolder. Click
Next.
19. Select the Address Directories that you want to install or click Select All.
Note
If a warning about “too loose permissions” is shown, you may ignore the message.
20.Enter the system administrator password, database user name, and database user password in the Specify
Authorization window.
21. Set the installation path for address directories and reference data in the Define Reference Data Properties
window. For example, <filepath>/Directories/installed. The location you specify here is the
reference data path. Click Next.
22. Verify that the information is correct on the Review & Confirm window, and then click Update.
23. Click Close to return to the Lifecycle Manager window.
Next Steps
Sample Code
Sample Code
Related Information
If you installed or updated directories from a web browser using Lifecycle Manager and want to upload the
archives to the SAP HANA Host, follow these steps.
Procedure
1. Click Select Archives for Upload. Enter the path to the location where the Address Directories were
downloaded and select all ZIP files.
2. Click Upload.
3. Confirm the Temporary Extract Path, and then click Next.
Note
Make a note of this path, because you’ll use it when adding software locations.
Note
If a warning about “too loose permissions” is shown, you can ignore the message.
10. Enter the system administrator password, database user name, and database user password in the Specify Authorization window.
11. Set the installation path for address directories and reference data in the Define Reference Data Properties
window. For example, <filepath>/Directories/installed. The location you specify here is the
reference data path. Click Next.
12. Verify that the information is correct on the Review & Confirm window, and then click Update.
13. Click Close to return to the Lifecycle Manager window.
Next Steps
You can verify that the directories are installed by following the instructions at the end of the Install or Update
Directories from a Web Browser Using Lifecycle Manager [page 138] topic.
If you downloaded your directories from the Support Portal using Download Manager, use this procedure to
update your directories.
Prerequisites
Context
Lifecycle Manager organizes your reference data to make installing and updating your directories easier by
creating release-dated folders.
○ If a path is empty for the System and Databases columns, then continue to the next step.
○ If the paths for the System and Databases columns are different from each other, then update the
System column to be the master reference data path by double-clicking the value in the System
column. In the System section, enter the path in the New Value option. Delete any paths in the New
Value option under the Databases section.
Note
If you have multiple tenants that require different reference data paths, you can manage them
outside of SAP HANA Lifecycle Manager.
Note
Choose a different directory path from the location where SAP HANA is installed. Separate locations
ensure that the directories and reference data are separate from the SAP HANA installation. If you
decide to uninstall SAP HANA with the directories in the same path as SAP HANA, then the directories
are uninstalled also.
Note
In this document, when referring to the “sidadm”, the “sid” is lowercase. When referring to the “SID”,
the “SID” is uppercase.
6. Open the SAP HANA Platform Lifecycle Management Web site https://<hostname>:1129/lmsl/HDBLCM/<SID>/index.html. Log in with your <sidadm> user name and password.
7. Click Download Components and specify the download mode.
8. Enter your user ID and password, and click Next.
Your user ID begins with an “S”, followed by 10 digits, and it’s tied to an account with active directories. For
example, “S0123456789”.
Note
Note
If you have set the path at the beginning of this procedure, you may not see this window.
17. On the Review & Confirm window, verify that the information is correct and then click Update.
18. Click Close to return to the Lifecycle Manager window.
19. (Optional) To verify that the directory is installed, open SAP HANA Studio and connect to the SYSTEM
database.
20.(Optional) On the Configuration tab, enter dq in the filter and press Enter.
The dq_reference_data_path is set with the reference data path you defined under the Default column. This path automatically has a dated folder at the end. For example, <filepath>/Directories/2017-07/.
21. (Optional) To delete old directory data, open a file browser, and navigate to the directory location. Select
the old release-dated folder and press Delete.
Caution
Some directory data is updated monthly and other directory data is updated quarterly. Therefore, the
monthly release folders may contain a link to the directories in an older release folder, rather than to
the actual directory data. Before deleting, be sure that those directories aren’t in use.
22. Configure the operations cache to improve performance. See "Configuring the Operation Cache" in the
SAP HANA Smart Data Integration and Smart Data Quality Administration Guide.
23. Set alerts to be notified of when the directories expire. Configure monitoring alerts from the SAP HANA
cockpit. For more information, see “Monitoring Alerts” in the SAP HANA Administration Guide.
Related Information
When you’re finished using the directory information, you can uninstall the directories from the system.
Procedure
1. Open SAP HANA Studio and connect to the SAP HANA Server as the SYSTEM user.
2. Right-click the system name and select Configuration and Monitoring Open Administration .
3. Click the Configuration tab and type dq in the Filter option.
The dq_reference_data_path in the scriptserver.ini file appears in the list. Make a note of the
system path. For example, /<filepath>/Directories/installed/2017-05/.
4. Right-click dq_reference_data_path, and then click Delete.
5. Select the system, and then click Delete.
6. Open a file browser and delete the folder that was in the dq_reference_data_path.
7. (Optional) To verify that the directories were deleted, click View > System Information > Installed Components.
When updating your SAP HANA Smart Data Integration landscape to a new version, consider the steps that
must be taken for each component.
Related Information
Update the Data Provisioning Agent by running the installation program in update mode.
Prerequisites
Before you update the Data Provisioning Agent, ensure that your SAP HANA server has already been updated
to the same revision.
If your agent has remote source subscriptions for real-time data capture, you must also suspend capture
before upgrading the agent. To suspend active remote source subscriptions, use the SQL console in the SAP
HANA Studio or Web-based Development Workbench:
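For example, to suspend capture for a remote source (the remote source name is a placeholder):
ALTER REMOTE SOURCE "MyRemoteSource" SUSPEND CAPTURE;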
Note
To verify the success of the upgrade for log reader adapters, set the adapter framework logging level to INFO. To change the adapter framework logging level, choose Preferences > Adapter Framework > Logging Level in the SAP HANA Data Provisioning Agent Configuration tool.
Note
Stop the service using the Data Provisioning Agent installation owner. The installation owner is the
same user that is used to start the agent service.
Tip
To upgrade the agent in command-line mode, use hdbinst.exe on Windows or ./hdbinst on Linux.
5. Choose Update SAP HANA Data Provisioning Agent and select the path of the existing agent that you want
to update.
In command-line mode, enter the number of the existing agent as listed by the installation program.
6. On Windows, specify the unique agent name and the agent service username and password.
You can specify a new unique name or use the same one as before, but it must be different than any names
already used by other agent instances on the same host system.
7. On Linux, start the agent service.
Tip
Note
Start the service using the Data Provisioning Agent installation owner. The installation owner is the
same user that is normally used to start the agent service.
To allow SAP HANA to detect any new adapter capabilities, use the SQL console.
a. Retrieve a list of the adapters configured in your environment.
Note
If you have multiple agents in your SAP HANA environment, refresh each adapter only a single time
with an upgraded agent. Refreshing the adapters with each agent isn’t necessary.
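As a sketch, you can list the registered adapters from the SYS.ADAPTERS system view and then refresh each one at an upgraded agent (the adapter and agent names are placeholders):
SELECT ADAPTER_NAME FROM "SYS"."ADAPTERS";
ALTER ADAPTER "OracleLogReaderAdapter" REFRESH AT LOCATION AGENT "MyAgent";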
Results
Next Steps
Tip
After updating the Data Provisioning Agent, we recommend that you review the update log file for any
errors, and take any necessary corrective actions.
If you suspended capture for remote source subscriptions on the agent, you can now resume capture.
Caution
Before resuming capture, you must first upgrade all agents in your SAP HANA environment. If you haven’t
upgraded all agents, do that first and then return to this section.
After all agents have been upgraded, use the SQL console to resume capture:
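For example (the remote source name is a placeholder):
ALTER REMOTE SOURCE "MyRemoteSource" RESUME CAPTURE;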
Repeat the command for each remote source subscription in your environment.
Tip
After you resume a remote source subscription, additional automatic upgrade steps take approximately 10
minutes to complete. To verify that the process has completed successfully, view the Data Provisioning
Agent framework log:
● When the adapter upgrade has been completed, you should see the message
<remote_source_name> has been upgraded successfully.
● When the real-time replication has been resumed, you should see the message
<remote_source_name> is resumed successfully.
After downloading and importing the Enterprise Semantic Services (ESS) delivery unit, install this component
to enable semantic searches of data sources.
Prerequisites
● Upgrade your SAP HANA instance if you need to upgrade to a new SPS revision.
● If installed, uninstall the DEMO delivery unit.
● If you are upgrading from a version earlier than 1.0 SP03 Rev0 (1.3.0), first uninstall Enterprise Semantic Services.
Context
● If you have ESS version SPS01 Patch 1, also known as 1.0 SP00 Rev1, or earlier, follow the procedure that
requires the installation script install.xsjs.
● If you have ESS version SPS01 Patch 2, also known as 1.0 SP01 Rev2, or later, follow this procedure, which
requires the installation script install.html.
Procedure
Successful installation is indicated by the message Setup completed, along with a status table that lists each setting.
At any point in the installation you can monitor its status by accessing the install.html URL. Any errors display
with messages for corrective actions.
Related Information
To uninstall SAP HANA Smart Data Integration, perform the required uninstallation tasks for each component
in the landscape.
Related Information
Uninstall the Data Provisioning Agent from a host system using the uninstallation manager.
Context
The uninstallation manager supports graphical and command-line modes on Windows and Linux platforms.
Procedure
Tip
To ensure that all installation entries are removed correctly, use the same user and privileges as the
original installation owner.
For example, if sudo was used during the original installation, log in as the installation owner and
run sudo ./hdbuninst <...>.
Results
Next Steps
After uninstalling the agent, several files and directories generated by the agent during runtime are left in place.
If you choose, you can safely remove these remaining files and directories manually.
● configTool/
● configuration/
● install/
● log/
● LogReader/
● workspace/
You can permanently uninstall Enterprise Semantic Services, for example in the case of an upgrade.
Prerequisites
Procedure
http://<your_HANA_instance:port>/sap/hana/xs/lm
b. Choose Products > Delivery Units.
c. Select HANA_IM_ESS.
d. Click Delete.
e. Select the checkbox to include objects and packages.
f. Confirm the deletion.
2. Remove the ESS users.
In the Web-based Development Workbench or SAP HANA studio, drop the Enterprise Semantic Services
users. For example, in SAP HANA studio, enter the following SQL statements:
In the Web-based Development Workbench or SAP HANA studio, drop the HANA_IM_ESS schema. For
example, in SAP HANA studio, enter the following SQL statement:
Data provisioning adapters can connect to a variety of sources to move data into SAP HANA, as well as support other use cases.
The adapters in the following table are delivered with the Data Provisioning Agent. For information about
configuring your adapter, see the documentation about each adapter in this guide.
Note
Before configuring your adapters and remote sources, make sure that the Data Provisioning Agent is
configured and that the necessary JDBC libraries are installed.
For information about using adapters for replication or transformation, see the Modeling Guide for SAP HANA
Smart Data Integration and SAP HANA Smart Data Quality.
If the source you’re using isn’t covered by the adapters listed, use the Adapter SDK to create custom adapters
to suit your needs. See the Adapter SDK Guide for SAP HANA Smart Data Integration for more information.
See the Product Availability Matrix for information about supported versions.
ABAPAdapter: Retrieves data from virtual tables through RFC for ABAP tables and ODP extractors. It also provides change data capture for ODP extractors.
ASEAdapter: Retrieves data from SAP ASE. It can also receive changes that occur to tables in real time. You can also write back to a virtual table.
AseECCAdapter: Retrieves data from an SAP ERP system running on SAP ASE. It can also receive changes that occur to tables in real time.
BWAdapter: This adapter is available for use only with SAP Agile Data Preparation.
CamelAccessAdapter: Retrieves data from a Microsoft Access source. The Camel Access adapter is a predelivered component that is based on the Apache Camel adapter.
CamelFacebookAdapter: The Camel Facebook adapter is a predelivered component that is based on the Apache Camel adapter. Use the Camel Facebook component to connect to and retrieve data from Facebook.
CamelInformixAdapter: Retrieves data from an Informix source. It can also write back to an Informix virtual table. The Camel Informix adapter is a predelivered component that is based on the Camel adapter.
CamelJdbcAdapter: The Camel JDBC adapter is a predelivered component that is based on the Apache Camel adapter. Use the Camel JDBC adapter to connect to most databases for which SAP HANA smart data integration doesn't already provide a predelivered adapter. In general, the Camel JDBC adapter supports any database that has SQL-based data types and functions, and a JDBC driver.
CassandraAdapter: Retrieves data from an Apache Cassandra remote source. You can also write to an Apache Cassandra target.
CloudDataIntegrationAdapter: This adapter is available for use only with SAP Data Warehouse Cloud.
DB2ECCAdapter: Retrieves data from an SAP ERP system running on IBM DB2. It can also receive changes that occur to tables in real time. The only difference between this adapter and the DB2LogReaderAdapter is that this adapter uses the data dictionary in the SAP ERP system when browsing metadata.
DB2LogReaderAdapter: Retrieves data from IBM DB2. It can also receive changes that occur to tables in real time. You can also write back to a virtual table.
DB2MainframeAdapter: Retrieves data from IBM DB2 for z/OS. IBM DB2 iSeries, formerly known as AS/400, is also supported.
FileAdapter: Retrieves data from formatted text files. You can also write back to a virtual table. You can also access SharePoint source data, as well as write to an HDFS target file.
FileAdapterDatastore, SFTPAdapterDatastore: File datastore adapters use the SAP Data Services engine as the underlying technology to read from a wide variety of sources.
HanaAdapter: Provides real-time changed-data capture capability in order to replicate data from a remote SAP HANA database to a target SAP HANA database. You can also write back to a virtual table. Use this adapter to extract data from an ECC on SAP HANA source.
ImpalaAdapter: Retrieves data from an Apache Impala source. The Apache Impala adapter is a predelivered component that is based on the Apache Camel adapter.
MssqlECCAdapter: Retrieves data from an SAP ERP system running on Microsoft SQL Server. It can also receive changes that occur to tables in real time. The only difference between this adapter and the MssqlLogReaderAdapter is that this adapter uses the data dictionary in the SAP ERP system when browsing metadata.
MssqlLogReaderAdapter: Retrieves data from Microsoft SQL Server. It can also receive changes that occur to tables in real time, be it through log reading or triggers. You can also write back to a virtual table.
ODataAdapter: Retrieves data from an OData service. You can also write to an OData target.
OpenConnectorAdapter: This adapter is available for use only with SAP Data Warehouse Cloud.
OracleECCAdapter: Retrieves data from an SAP ERP system running on Oracle. It can also receive changes that occur to tables in real time. The only difference between this adapter and the OracleLogReaderAdapter is that this adapter uses the data dictionary in the SAP ERP system when browsing metadata.
OracleLogReaderAdapter: Retrieves data from Oracle. It can also receive changes that occur to tables in real time, be it through log reading or triggers. You can also write back to a virtual table.
PostgreSQLLogReaderAdapter: Use this adapter to batch-load or replicate change data in real time from a PostgreSQL database to SAP HANA.
SDIDB2MainframeLogReaderAdapter: Provides real-time replication functionality for DB2 for z/OS sources.
Note
This adapter is referred to in the documentation as the “SDI DB2 Mainframe” adapter. It is similar to the “DB2 Mainframe” adapter in name and functionality; the main difference between this adapter and the “DB2 Mainframe” adapter is that the “SDI DB2 Mainframe” adapter supports real-time replication.
SoapAdapter: This adapter is a SOAP web services client that can talk to a web service using the HTTP protocol to download the data. The SOAP adapter uses virtual functions instead of virtual tables to expose server-side operations, as this closely relates to how the operation is invoked.
TeradataAdapter: Retrieves data from Teradata. It can also receive changes that occur to tables in real time. You can also write back to a virtual table.
TwitterAdapter: Retrieves data from Twitter. It can also receive new data from Twitter in real time.
Note
Data provisioning adapters allow specifying virtual IP addresses for source systems as a parameter for the
remote source, and they allow changing the virtual IP addresses when the remote source is suspended.
If you’re using either TLS/SSL or Kerberos, and you require stronger encryption than 128-bit key length, you must update the existing JCE policy files.
Related Information
Use the configAdapters function of the Data Provisioning Agent configuration tool to adjust adapter settings
specific to your sources.
Procedure
Note
Select 1 if you want to display the current preferences instead of changing the configured values.
4. Select the entry for the adapter that you want to configure.
5. Specify values for the preferences for your adapter.
Use the configAdapters function of the Data Provisioning Agent configuration tool to adjust adapter settings
specific to your sources.
Configure one or more adapter preferences by specifying the adapter configuration function and the adapter
preferences as additional function parameters.
Note
For security, you must use a password file to set passwords in discrete command mode. Save the password
in a file and specify the location of the file as the value for the password preference.
● To display a list of available adapter configuration functions, use the help function.
The configuration tool displays the configuration function for each available adapter.
● To display a list of available preferences for an adapter, specify the adapter function name with the help
parameter.
The configuration tool displays the available configuration preferences, prerequisites, and a configuration
example for the specified adapter.
The adapters described in this document that come installed with the Data Provisioning Agent were created
using the Adapter SDK.
Related Information
Overview of this SAP HANA Smart Data Integration adapter's features and functionality.
The Apache Camel Facebook adapter is based on the Camel adapter. It uses the Facebook component (http://camel.apache.org/facebook.html) to access Facebook APIs. Facebook data of the configured Facebook user, such as friends, families, and movies, is exposed to the SAP HANA server through virtual tables by the Camel Facebook adapter.
Related Information
By default, the Camel Facebook adapter is not available in the Data Provisioning Agent. To use it, you must perform setup tasks.
Procedure
2. Download the Facebook Component JAR file, which is located in the Apache Camel ZIP file, and put it in the
<DPAgent_root>/camel/lib directory.
3. Download Facebook4J, and put it in the <DPAgent_root>/camel/lib directory.
Note
See the SAP HANA Smart Data Integration Product Availability Matrix for information about version
compatibility with these downloads.
Related Information
Configuration settings for accessing a Camel Facebook source. Also included is sample code for creating a
remote source using the SQL console.
The Camel Facebook adapter has the following remote source configuration parameters. You use all of these parameters to configure Facebook component options; see <DPAgent_root>/camel/facebook.xml. If you need to specify non-default values for Facebook component options, you can add more remote source parameters in the adapter configuration in adapters.xml, and update the Facebook component bean in <DPAgent_root>/camel/facebook.xml accordingly. See http://camel.apache.org/facebook.html for a complete list of these options.
Facebook requires the use of OAuth for all client application authentication. To use the Camel Facebook adapter with your account, you need to create a new application within Facebook at https://developers.facebook.com/apps and grant the application access to your account. The Facebook application's ID and secret allow access to Facebook APIs that do not require a current user. A user access token is required for APIs that act on behalf of a specific user.
Example
Sample Code
Related Information
Store Source Database Credentials in Data Provisioning Agent [Graphical Mode] [page 93]
Use the Camel Informix adapter to connect to an IBM Informix remote source.
Note
Before registering the adapter with the SAP HANA system, ensure you have downloaded and installed the
correct JDBC libraries. See the SAP HANA smart data integration Product Availability Matrix (PAM) for
details. Place the files in the <DPAgent_root>/lib folder.
By default, the Camel Informix adapter is not available in the Data Provisioning Agent. To use it, you must perform setup tasks.
Procedure
Configure the following options in SAP HANA smart data access to set up your connection to an Informix remote source. Also included is sample code for creating a remote source using the SQL console.
Delimident: If set to True, the Informix database object name is enclosed in double quotation marks.
Configuration > Security > Use Agent Stored Credential: Set to True to use credentials that are stored in the DP Agent secure storage.
Example
Sample Code
Related Information
Store Source Database Credentials in Data Provisioning Agent [Graphical Mode] [page 93]
Use the Camel JDBC adapter to connect to most databases for which SAP HANA smart data integration does
not already provide a pre-delivered adapter.
In general, the Camel JDBC adapter supports any database that has SQL-based data types and functions, and
a JDBC driver.
If you are using MS Access or IBM Informix, use the Camel adapters specific to those databases.
Adapter Functionality
Related Information
By default, the Camel JDBC adapter is not available in the Data Provisioning Agent. To use it, you must perform setup tasks.
Procedure
2. Download the appropriate JDBC file, and copy it to the <DPAgent_root>/camel/lib directory.
Configure the following options to create a connection to a remote database. Also included is sample code for
creating a remote source using the SQL console.
Configuration > Database Type: Specifies the type of database to which you will connect.
Configuration > JDBC Driver Class: Specifies the JDBC driver class for the database you are using.
Configuration > Security > Use Agent Stored Credential: Set to True to use credentials that are stored in the Data Provisioning Agent secure storage.
Credentials > Credentials Mode: Remote sources support two types of credential modes to access a remote source: technical user and secondary credentials.
User Credentials > Database user name: Specifies the database user name.
User Credentials > Database user password: Specifies the password for the database user.
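A hedged sketch of creating a Camel JDBC remote source in the SQL console follows; the XML property name dbtype is an illustrative assumption (check the adapter's entry in adapters.xml for the exact property names), and the agent, user, and password values are placeholders.
Sample Code
CREATE REMOTE SOURCE "MyJdbcSource" ADAPTER "CamelJdbcAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="dbtype">mysql</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>myuser</user>
  <password>mypassword</password>
</CredentialEntry>';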
Related Information
Store Source Database Credentials in Data Provisioning Agent [Graphical Mode] [page 93]
Procedure
1. To allow MySQL to treat double quotes (") as an identifier quote character, append the property string ?
sessionVariables=sql_mode=ANSI_QUOTES to the JDBC URL parameter in the remote source
configuration.
2. Download and place the mysql-connector-java-<version>-bin.jar file under <DPAgent_root>/
camel/lib folder. If there is no /lib folder, create one.
3. Restart the Data Provisioning Agent.
Related Information
SAP HANA smart data integration and all its patches Product Availability Matrix (PAM) for SAP HANA SDI 2.0
For critical scenarios determined by your business requirements, you can use the agent configuration tool to
disable write-back functionality on supported adapters and run the adapters in read-only mode.
Disabling write-back functionality may help to prevent unexpected modifications to source tables accessed by
an adapter.
Caution
Setting an adapter to read-only mode affects all remote sources that use the adapter.
Procedure
○ In graphical mode, choose Config > Preferences, and then select Adapter Framework.
○ In command-line interactive mode, choose Set Agent Preferences in the Agent Preferences menu.
3. For the Read-only Adapters property, specify the list of adapters for which you want to disable write-back
functionality, separating each adapter with a comma.
For example, to disable write-back on the Microsoft SQL Server Log Reader, Oracle Log Reader, and SAP HANA adapters:
MssqlLogReaderAdapter,OracleLogReaderAdapter,HanaAdapter
Results
The specified adapters are switched to read-only mode and write-back functionality is disabled.
Tip
On adapters that are operating in read-only mode, attempted SQL statements other than SELECT result in
adapter exceptions that are logged in the Data Provisioning Agent framework trace file.
For example:
Related Information
The Apache Camel Microsoft Access adapter is based on the Camel adapter. Using the adapter, you can access Microsoft Access database data via virtual tables.
Note
The Camel Access adapter can be used only when the Data Provisioning Agent is installed on Microsoft
Windows.
Context
By default, the Camel Microsoft Access adapter is not available in the Data Provisioning Agent. To use it, you must do the following:
Procedure
1. In Microsoft Access, in the Tables window, right-click MSysObjects, and select Navigation Options to show
system objects.
2. In the Info window of Microsoft Access, right-click the Users and Permissions button, and select User and
Group Permissions to give an admin user all permissions on MSysObjects.
3. Enable macros in the Microsoft Access Trust Center.
4. Run the following command:
Sub currentuser()
    ' Grant the Admin user SELECT permission on the MSysObjects system table
    Dim strDdl As String
    strDdl = "GRANT SELECT ON MSysObjects TO Admin;"
    CurrentProject.Connection.Execute strDdl
End Sub
Configure the following options in smart data access to set up your connection to a Microsoft Access remote source. Also included is sample code for creating a remote source using the SQL console.
Configuration > Access File Path: Specifies the path to the Microsoft Access database file.
Configuration > Access File Name: Specifies the name of the Microsoft Access database file.
Configuration > Security > Use Agent Stored Credential: Set to True to use credentials that are stored in the DP Agent secure storage.
Example
Sample Code
Related Information
Store Source Database Credentials in Data Provisioning Agent [Graphical Mode] [page 93]
Apache Cassandra is a free and open-source distributed NoSQL database management system designed to
handle large amounts of data across many commodity servers, providing high availability with no single point of
failure.
The Cassandra adapter is specially designed for accessing and manipulating data from a Cassandra database.
Note
Adapter Functionality
Related Information
To enable SSL on Cassandra, you must import the server certificates and enable client authentication.
You must enable SSL on your Cassandra remote source before you can connect to it.
Procedure
client_encryption_options:
enabled: true
optional: false
keystore: .keystore
keystore_password: Sybase123
require_client_auth: false
truststore: .truststore
truststore_password: Sybase123
# More advanced defaults below:
protocol: ssl
store_type: JKS
cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_256_CBC_SHA]
3. Export the certificates from all Cassandra nodes and copy them to the Data Provisioning Agent host.
5. Restart Cassandra.
6. In the SAP HANA smart data integration Cassandra remote source configuration options, set Use SSL to
True, and set Use Client Authentication as desired.
Context
To enable client authentication, you must edit a Cassandra file and properly configure your remote source.
Procedure
client_encryption_options:
enabled: true
optional: false
keystore: .keystore
keystore_password: Sybase123
require_client_auth: true
truststore: .truststore
truststore_password: Sybase123
# More advanced defaults below:
protocol: ssl
store_type: JKS
cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_256_CBC_SHA]
2. Generate the private and public key pair for the Data Provisioning Agent node.
Leave the key password the same as the keystore password, and export the public key.
3. Copy the dpagent.cer file to each Cassandra node, and import it into the Cassandra truststore.
Related Information
The Cassandra adapter by default allows user name and password authentication. However, you can improve
security by enabling Kerberos authentication.
Procedure
$ hostname
node1.example.com
$ ntpq -p
remote refid st t when poll reach delay offset
jitter
==============================================================================
*li506-17.member 209.51.161.238 2 u 331 1024 377 80.289 1.384 1.842
-tock.eoni.com 216.228.192.69 2 u 410 1024 377 53.812 1.706 34.692
+time01.muskegon 64.113.32.5 2 u 402 1024 377 59.378 -1.635 1.840
-time-a.nist.gov .ACTS. 1 u 746 1024 151 132.832 26.931 55.018
+golem.canonical 131.188.3.220 2 u 994 1024 377 144.080 -1.732 20.072
RHEL-based systems
Debian-based systems
SUSE systems
7. Copy keytabs to the related Cassandra nodes, and edit the dse.yaml file.
kerberos_options:
keytab: resources/dse/conf/dse.keytab
service_principal: cassandra/_HOST@EXAMPLE.COM
http_principal: HTTP/_HOST@EXAMPLE.COM
qop: auth
authenticator: com.datastax.bdp.cassandra.auth.KerberosAuthenticator
9. Restart Cassandra.
10. Prepare the keytab for the DP Agent. The principal must have the value of the user created previously in
this procedure.
11. Copy the dpagent.keytab file to the DP Agent host, and create your remote source.
Related Information
Configure the following options for a connection to an Apache Cassandra remote source. Also included is
sample code for creating a remote source using the SQL console.
Connection > Hosts: The list of host names and ports used to connect to the Cassandra cluster, in the form host1[:port1][,host2[:port2],host3[:port3]...]. If the port number is not provided for a host, the default port 9042 is used.
Authentication Mechanism: User Name and Password (Default) uses a user name and password to perform the authentication.
Paging Status: Specify whether to enable paging when getting the results of a query from Cassandra. The default value is On.
Read Timeout (milliseconds): Specify the number of milliseconds the driver waits for a response from a given Cassandra node before considering it unresponsive.
Connect Timeout (milliseconds): Specify the number of milliseconds the driver waits to establish a new connection to a Cassandra node before giving up.
Data Type Conversion > Map TEXT/VARCHAR to NVARCHAR(5000): The Cassandra data types TEXT and VARCHAR are mapped to NCLOB in HANA, which makes it impossible to use these columns as query conditions in a WHERE clause. Set the value to True to map to NVARCHAR instead. The default value is False.
Data Type Conversion > Map ASCII to VARCHAR(5000): The Cassandra data type ASCII is mapped to CLOB in HANA, which makes it impossible to use these columns as query conditions in a WHERE clause. Set the value to True to map to VARCHAR instead. The default value is False.
Load Balancing Policy > Use Round Robin Policy: Specify whether to use the Round Robin Policy. The default value is False. You can use either Round Robin Policy or DC Aware Round Robin Policy, but not both.
Load Balancing Policy > Use DC Aware Round Robin Policy: Specify whether to use the DC Aware Round Robin Policy. The default value is False.
Load Balancing Policy > Use Token Aware Policy: Specify whether to use the Token Aware Policy. The default value is False.
Load Balancing Policy > Use Latency Aware Policy: Specify whether to use the Latency Aware Policy. The default value is False.
Security > Use SSL: Specify whether to connect to Cassandra using SSL. The default value is False.
Security > Use Client Authentication: Specify whether to connect to Cassandra using client authentication. The default value is False.
Security > Use Agent Stored Credential: Set to True to use credentials that are stored in the DP Agent secure storage.
Service Name: The SASL protocol name to use, which should match the user name of the Kerberos service principal used by the DSE server.
Use Keytab: Set this to True if you want the module to get the technical user's key from the keytab. The default value is False. If Keytab is not set, then the module locates the keytab from the Kerberos configuration file. If it is not specified in the Kerberos configuration file, then the module looks for the file <user.home><file.separator>krb5.keytab.
Keytab: Set this to the file name of the keytab to get the technical user's secret key.
Use Ticket Cache: Set this to True if you want the ticket-granting ticket (TGT) to be obtained from the ticket cache. Set this option to False if you do not want this module to use the ticket cache. The default value is False. This module searches for the ticket cache in the following locations:
Ticket Cache: Set this to the name of the ticket cache that contains the user's TGT. If this is set, Use Ticket Cache must also be set to True; otherwise, a configuration error is returned.
Credentials > Credentials Mode: Select Technical user or Secondary user depending on the purpose of the remote source you want to create.
SQL
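A minimal sketch of creating the remote source in the SQL console follows; the configuration property name hosts is an illustrative assumption to be verified against the adapter configuration, and the host, agent, user, and password values are placeholders:
CREATE REMOTE SOURCE "MyCassandraSource" ADAPTER "CassandraAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="hosts">node1.example.com:9042</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>cassandra_user</user>
  <password>cassandra_password</password>
</CredentialEntry>';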
Related Information
Configure the Adapter Truststore and Keystore Using the Data Provisioning Agent Configuration Tool [page
513]
Enable SSL on Cassandra [page 173]
Enable Client Authentication [page 174]
Store Source Database Credentials in Data Provisioning Agent [Graphical Mode] [page 93]
For critical scenarios determined by your business requirements, you can use the agent configuration tool to
disable write-back functionality on supported adapters and run the adapters in read-only mode.
Disabling write-back functionality may help to prevent unexpected modifications to source tables accessed by
an adapter.
Caution
Setting an adapter to read-only mode affects all remote sources that use the adapter.
○ In graphical mode, choose Config > Preferences, and then select Adapter Framework.
○ In command-line interactive mode, choose Set Agent Preferences in the Agent Preferences menu.
3. For the Read-only Adapters property, specify the list of adapters for which you want to disable write-back
functionality, separating each adapter with a comma.
For example, to disable write-back on the Microsoft SQL Server Log Reader, Oracle Log Reader, and SAP
HANA adapters:
MssqlLogReaderAdapter,OracleLogReaderAdapter,HanaAdapter
Results
The specified adapters are switched to read-only mode and write-back functionality is disabled.
Tip
On adapters that are operating in read-only mode, attempted SQL statements other than SELECT result in
adapter exceptions that are logged in the Data Provisioning Agent framework trace file.
For example:
Related Information
The Apache Impala adapter is a data provisioning adapter that is used to access Apache Impala tables.
An Impala table can be an internal table, an external table, or a partitioned table. Impala tables can be stored as data files with various file formats, or as Kudu tables stored by Apache Kudu. The table type is transparent to the Impala adapter: the adapter supports all of these types of tables and cares about column metadata only. The Impala adapter supports operations that are legal for the back-end Impala table.
The Impala adapter requires you to install the Impala JDBC connector in the DP Agent. Follow the steps below to install the Impala JDBC connector:
Adapter Functionality
Related Information
Configure the following options for a connection to an Apache Impala remote source. Also included is sample
code for creating a remote source using the SQL console.
Authentication Mechanism: Choose one of the following:
● No Authentication (0 in SQL)
● Kerberos (1 in SQL)
● User Name (the LDAP bind name) (2 in SQL)
● User Name and Password (Default; used for LDAP authentication) (3 in SQL)
Security > Enable SSL Encryption: Specify whether to connect to the Impala Server using SSL.
Note
The CA certificate for the remote source must be imported into the adapter truststore on the Data Provisioning Agent host.
Security > Use Agent Stored Credential: Set to True to use credentials that are stored in the Data Provisioning Agent secure storage.
Security > Allow Self-Signed Server SSL Certificate: Specify whether to allow the server to use a self-signed SSL certificate. This property is meaningful only if SSL is enabled.
Security > Require Certificate Name Match Server Host Name: Specify whether to require that a CA-issued SSL certificate name match the Impala Server host name. This property is meaningful only if SSL is enabled.
Kerberos Realm: Optional. If specified, you can omit the realm part (for example, @EXAMPLE.COM) of the Impala Service Principal and User Principal properties.
Impala Service Principal: Specify the Kerberos principal of the Impala service.
Use Keytab: Set this to True if you want the module to get the technical user's key from the keytab. The default value is False. If Keytab is not set, then the module locates the keytab from the Kerberos configuration file. If it is not specified in the Kerberos configuration file, then the module looks for the file <user.home><file.separator>krb5.keytab.
Keytab: Set this to the file name of the keytab to get the technical user's secret key.
Use Ticket Cache: Set this to True if you want the ticket-granting ticket (TGT) to be obtained from the ticket cache. Set this option to False if you do not want this module to use the ticket cache. The default value is False. This module searches for the ticket cache in the following locations:
Ticket Cache: Set this to the name of the ticket cache that contains the user's TGT. If this is set, Use Ticket Cache must also be set to True; otherwise, a configuration error is returned.
Data Type Mapping > Map Impala STRING to: Specify to which SAP HANA type the Impala STRING is mapped. Choose from the following values:
● CLOB
● VARCHAR(5000)
Data Type Mapping > Map Impala VARCHAR(length > 5000) to: Specify to which SAP HANA type the Impala VARCHAR(length > 5000) is mapped. Choose from the following values:
● NCLOB
● NVARCHAR(5000)
Schema Alias Replacements > Schema Alias: Schema name to be replaced with the schema given in Schema Alias Replacement.
Schema Alias Replacements > Schema Alias Replacement: Schema name to be used to replace the schema given in Schema Alias.
Credentials > Credentials Mode: Remote sources support two types of credential modes to access a remote source: technical user and secondary credentials.
● Technical User: A valid user and password in the remote database. This valid user is used by anyone using the remote source.
● Secondary User: A unique access credential on the remote source assigned to a specific user.
Note
Depending on the value you chose for the Authentication Mechanism parameter, the credentials that appear will be different.
Credential (User Name and Password) > User Name: User name. Required only if the Authentication Mechanism parameter is set to User Name and Password.
Credential (User Name and Password) > Password: Password. Required only if the Authentication Mechanism parameter is set to User Name and Password.
Credential (User Name) > User Name: User name. Required only if the Authentication Mechanism parameter is set to User Name.
Credential (Kerberos) > Password: Kerberos password. Required only if the Authentication Mechanism parameter is set to Kerberos.
The Impala adapter's User Name and Password remote source parameters are used when the Impala server
requires LDAP authentication. The User Name parameter is the LDAP user name. The Password parameter is
the LDAP bind password. Depending on the LDAP bind name pattern configuration in Impala Server, you may
need to provide values for either of these parameters:
Example
Sample Code
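A hedged sketch, assuming illustrative property names (host, port, authmech) that should be verified against the adapter configuration; port 21050 is the common Impala JDBC port, and authentication mechanism 3 corresponds to User Name and Password in the table above:
CREATE REMOTE SOURCE "MyImpalaSource" ADAPTER "ImpalaAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="host">impala.example.com</PropertyEntry>
  <PropertyEntry name="port">21050</PropertyEntry>
  <PropertyEntry name="authmech">3</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>ldap_user</user>
  <password>ldap_password</password>
</CredentialEntry>';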
Related Information
You must configure the Kerberos realm's Key Distribution Center (KDC) host name or address through either
the krb5.conf file or adapter remote source parameters.
[libdefaults]
default_realm = EXAMPLE.COM
[realms]
EXAMPLE.COM = {
kdc = kdc.example.com
}
[domain_realm]
Note
Do not remove any existing configurations in this file. Changes to this file take effect immediately without
the need to restart the Data Provisioning Agent.
You can configure the realm and KDC through the adapter remote source parameters Realm and KDC; you need to specify both. This is a shortcut to editing <DPAgent_root>/krb5/krb5.conf. The adapter writes the configuration to krb5.conf, if it is absent, when the adapter connects to the KDC.
6.9 File
Use the File adapter to read formatted and free-form text files.
The File adapter enables SAP HANA users to read formatted and free-form text files. In contrast to the File Datastore adapters, use the File adapter for the following scenarios:
● SharePoint access
● SharePoint on Office365
● Pattern-based reading; reading multiple files in a directory that match a user-defined pattern
● Five system columns are included, such as row number and file location
● Real-time file replication
To specify a file format such as a delimiter character, you must create a configuration file with the
extension .cfg to contain this information. Then each file can be read and parsed through this format,
returning the data in columns of a virtual table.
For free-form, unstructured text files, you do not need to designate a file format definition, and you can use the
FILECONTENTROWS virtual table to view the data.
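For example, a hedged sketch of exposing the FILECONTENTROWS table as a virtual table and querying it; the remote source and virtual table names are placeholders, and the <NULL> path segments depend on how your remote source presents its hierarchy:
CREATE VIRTUAL TABLE "SYSTEM"."VT_FILECONTENT" AT "MyFileSource"."<NULL>"."<NULL>"."FILECONTENTROWS";
SELECT * FROM "SYSTEM"."VT_FILECONTENT";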
● Ensure that the user account under which the Data Provisioning Agent is running has access to the files on
the local host, a shared directory, or a SharePoint site.
● If the files are located on the same host as the Data Provisioning Agent, the files must be located in the
same directory, or a subdirectory, of the Data Provisioning Agent root directory.
Adapter Functionality
Note
Note
Only rows appended to a file initiate the capture. Only APPEND is supported. Using any other
command, such as DELETE, UPDATE, and so on, may shut down replication altogether.
Also, the addition of a file to the virtual table's directory initiates the capture. This functionality is not
supported for HDFS source files.
● SELECT, INSERT
Related Information
The File adapter is already deployed with the Data Provisioning Agent that you installed. However, you must
configure and register the adapter.
Procedure
Next Steps
Now, you can register your adapter with the Data Provisioning Agent.
Related Information
Root directory: The root directory for your data files. No remote source can reach beyond this directory for data files.
File format root directory: The root directory for your file format definitions. No remote source can reach beyond this directory for format files.
Access Token: A password. An access token protects the files from access by different agents. Use this password when creating a remote source.
Determine the configuration parameters for your File adapter remote source. You can use the code samples
below for creating a remote source using the SQL console.
The available parameters might change depending on which options you choose.
Note
To use a Data Provisioning Agent installed on Linux to connect to the SharePoint site, enable basic
authentication on the SharePoint server.
Note
Whichever you select (Local File System, SharePoint Server, or SharePoint on Office365), the File adapter remote source only displays the file format definitions (.cfg files) under the Data Provisioning Agent local folder that you specify in the remote source. It never displays files or directories on the SharePoint server directly, which is different than the Excel Adapter with SharePoint scenario. You need to provide the path to the source file on the SharePoint site in the CFG file, then create the virtual table in the file format definitions.
Note
Do not use a linked directory or directory shortcut for a value in this parameter.
\\<host_name>\<directory>
Directory of the file format definitions: Location where you store your file format definition files. This directory must exist before you can create a remote source. Include the full path and file name.
Note
Do not use a linked directory or directory shortcut for a value in this parameter.
SharePoint on Office365 Configuration > Authentication Mode: Choose the type of credentials needed to access SharePoint on Office365.
SharePoint on Office365 Configuration > Local Folder Path: The path to the folder that you want to access on the local file system where the Data Provisioning Agent is deployed.
ConnectionInfo > HDFS Configuration > Host Name: The remote URL to connect to the remote HDFS, usually defined in core-site.xml.
ConnectionInfo > HDFS Configuration > Use Ticket Cache: Set this to True if you want the ticket-granting ticket (TGT) to be obtained from the ticket cache. Set this option to False if you do not want this module to use the ticket cache. The default value is False.
ConnectionInfo > SharePoint Server Configuration > Server URL: Enter the URL for the server where the SharePoint source is located.
Credentials > SharePoint Login > SharePoint User (Domain\Username): The domain and user name for the SharePoint account.
Credentials > Credential (Kerberos) > Password: The password for the Kerberos-protected HDFS.
SharePoint on Office365 Credential > Client Credential: Enter the client secret you created on the Microsoft Azure Portal.
● If you use Keytab and not Ticket Cache, specify the keytab file path in the Keytab parameter.
● If you are using Ticket Cache and not Keytab, specify the ticket cache file path in the Ticket Cache parameter. Then, you can use any password (such as “abc” or “123”) for the Credential (Kerberos) parameter, because a password is required in Web IDE.
● Also, if you are using Ticket Cache and not Keytab, you can leave the ticket cache file path blank, but then you must use the correct password for the Password of Credential (Kerberos) option.
The following code samples illustrate how to create a remote source using the SQL console.
Sample Code
Sample Code
Sample Code
Sample Code
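A hedged sketch of a Local File System remote source; the property names (rootdir, fileformatdir) and the credential entry name are illustrative assumptions, and the paths, agent name, and token are placeholders:
CREATE REMOTE SOURCE "MyFileSource" ADAPTER "FileAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="ConnectionInfo">
  <PropertyEntry name="rootdir">/usr/sap/FileAdapter/FileServer</PropertyEntry>
  <PropertyEntry name="fileformatdir">/usr/sap/FileAdapter/FileFormats</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="AccessTokenEntry">
  <password>MyAccessToken</password>
</CredentialEntry>';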
Configuration files enable the application to read the file format accurately. You must have a configuration file
when using the File adapter.
You can create file format configuration files either within SAP HANA Web IDE using the command line or by
creating a text file.
Using Web IDE can speed up the process of creating your file format configuration files. Using the command
line or a text file to create the configuration file requires that you use a separate text editor to enter the options
and values manually into an empty file.
Related Information
Create a file format configuration file to work with the data file used by the File adapter.
Each configuration file is a text file and must match the following format:
● The first line must contain a comment or a description. This line is ignored during processing.
● A set of key-value pairs that specify the various parsing options.
● A set of COLUMN=<name>;<SAP HANA datatype>;<optional description> entries.
Fixed-Format Files
Fixed file formats are also supported (FORMAT=fixed). The formatting can be specified using the
COLUMSTARTENDPOSITION parameter or the ROW_DELIMITER and ROWLENGTH parameters.
Example
If your file exists in a subfolder, be sure to include that in the path for the FORCE_DIRECTORY_PATTERN
parameter.
FORCE_FILENAME_PATTERN=<file_name>.txt
FORCE_DIRECTORY_PATTERN=<root_directory><local_folder_path>/<folder_name>
Note
The FORCE_DIRECTORY_PATTERN should be an absolute path that includes the root directory, local folder path, and folder path on the SharePoint server.
Related Information
Global
However, if the virtual table maps to files in a particular directory or directory tree, or only to particular file names, you can specify this information in the virtual table directly. For example:
FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata
FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata/%
FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plan%
FORCE_FILENAME_PATTERN=plan%.txt
● Reading files inside the directory and matching the provided name pattern:
FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata
FORCE_FILENAME_PATTERN=plan%.txt
Note
The path you use in the FORCE_DIRECTORY_PATTERN parameter is case sensitive. For example, if you set the Root Directory remote source parameter to “C:\”, you must match that case in the FORCE_DIRECTORY_PATTERN parameter.
FORMAT The format of the data file. Valid values:
● CSV (default)
● FIXED
● XML
● JSON
CODEPAGE The character encoding with which the file is read. By default, the operating system default code page is used. If the file has a Byte Order Mark, that code page is always used. Valid values of the Java installation can be found by creating a virtual table based on the CODEPAGES table and querying its contents.
ROW_DELIMITER A character sequence indicating the end of a row in the file. If the characters are nonprintable, they can be provided encoded as \d65536 or \xFFFF or as Unicode notation \u+FFFF. Alternatively, the typical \r and \n are supported as well. Examples:
● \n Unix standard
● \r\n Windows standard
● \d13\d10 Windows standard, but characters provided as a decimal number
● \x0D\x0A Windows standard, but characters provided as a hex number
SKIP_HEADER_LINES If the file has a header, enter the number of lines to be skipped.
When SKIP_HEADER_LINES=0, the writer does not add a header to a newly created data file.
When SKIP_HEADER_LINES>0, the writer adds a header to the newly created data file.
ERROR_ON_COLUMNCOUNT By default, a row with fewer columns than defined is considered acceptable. Setting this parameter to true indicates that all rows of the file must have the defined number of columns.
LOCALE The decimal and date conversion from the strings in the file into native numbers or dates might be locale-specific. For example, the date “14. Oktober 2000” can be parsed with a German locale, but not with the locale of any other language.
Valid values for the locale can be found by querying a virtual table based on the
LOCALE table of the adapter.
WRITEBACK_NUMBER_FORMAT Instructs the File adapter to respect the LOCALE parameter when writing decimal
values.
Default: True
If set to True, the decimal number follows the column definition of the CFG file. If WRITEBACK_NUMBER_FORMAT=false, the decimal number is plain and keeps the original fraction digits; it does not follow the decimal definition in the CFG file.
DATEFORMAT
TIMEFORMAT
SECONDDATEFORMAT
TIMESTAMPFORMAT
The file format can use these data types for date/time-related values. Each can have a different format string. The syntax of the format string is the Java SimpleDateFormat syntax.
LENIENT Controls automatic date and time format conversion when the value exceeds the range. For example, when this parameter is set to True, the 14th month of 2016 is converted to the second month of 2017.
EXPONENTIAL The character that identifies an exponent in numeric values. Default: E
COLUMN Multiple entries, each consisting of <column name>;<data type>, where the data type is any normal SAP HANA data type.
CSV only
COLUMN_DELIMITER The character sequence indicating the next column. If nonprintable characters are used, then either the \d65536, \xFFFF, or \u+FFFF encoding works. The default value is “;”.
TEXT_QUOTES Sometimes text data is enclosed in quotes so that a column delimiter inside the text does not break the format. The line 2000;IT Costs; software related only;435.55 would appear as 4 columns because the text contains a semicolon as well. If the file was created with quotes like 2000;"IT Costs; software related only";435.55, then there is no such issue, but the file parser must act more carefully and not just search for the next column delimiter: it must check whether the text is inside or outside the text quote character.
ESCAPE_CHAR Another way to deal with inline special characters is to escape them; for example, 2000;IT Costs\; software related only;435.55. Here the \ is an escape character and indicates that the subsequent character is to be taken literally, not as a column delimiter.
TEXT_QUOTES_ESCAPE_CHAR How to make quotes appear inside the text; for example, IT Costs; "software related" only. One option the file creator might have used is to use the global escape character: 2000;"IT Costs; \"software related\" only";435.55. Another popular method is to have the quotes escaped by another quote like in 2000;"IT Costs; ""software related"" only";435.55. In that case, both the TEXT_QUOTE=" and the TEXT_QUOTE_ESCAPE_CHAR=" are set to the " character.
QUOTED_TEXT_CONTAIN_ROW_DELIMITER The default value is False, which tells the parser that, regardless of any quotes or escape characters, the text inside a row never contains the row delimiter character sequence. In this case, the parser can break the file into rows faster, searching only for the character sequence, and only the column parsing has to consider the escape and quote characters. If set to true, parsing is slower.
Fixed-Width only
COLUMNSSTARTENDPOSITION In a fixed-width file, the column positions must be specified for each column. Examples:
● 0-3;4-11;12-37;38-53 defines that the first 4 characters are the first column, the next 8 contain the data for the second column, and so on. Columns must be in the proper order.
● 0;4;12;38 is equivalent to the previous example, but the last column ends with the line end.
● 0;4;12;38-53 can be used as well. In fact, for every column you can either specify the start and end position or just the start.
ROWLENGTH In fixed-width files, there does not need to be a row delimiter. Often the file has one, and then it must be stripped away. The following examples assume that the last data character is at index 53, as specified in the COLUMNSSTARTENDPOSITION example.
● ROWLENGTH=56 ROW_DELIMITER= would work for a file that has a row delimiter. The payload text ranges from 0..53; therefore, it is 54 characters long plus two characters for \r\n. The last column does not contain the \r\n because it is instructed to end at index 53.
● ROWLENGTH=54 ROW_DELIMITER=\r\n is equivalent to the previous example. Each row is expected to be 54 characters plus 2 characters long. The main advantage of this notation is that COLUMNSSTARTENDPOSITION=0;4;12;38 would work and the trailing \r\n is stripped away. In the previous example, the last column would start at 38 and end at index 55 due to ROWLENGTH=56 and therefore contain the \r\n characters in the last column.
XML only
SCHEMA_LOCATION The absolute path to the XML schema file <filename>.xsd. For example:
SCHEMA_LOCATION=Z:\FileAdapter\XMLSchema\example.xsd
TARGET_NAMESPACE The target namespace defined in the XML schema. For example:
TARGET_NAMESPACE=NISTSchema-negativeInteger-NS
ROOT_NAME The XML node to use to display the hierarchy structure. For example:
ROOT_NAME=root
CIRCULAR_LEVEL The maximum recursive level in the XML schema to display. For example:
CIRCULAR_LEVEL=3
Use SAP Web IDE to create file format configuration files so that your CSV and fixed-format files can be used in
processing your data in the Flowgraph Editor.
Prerequisites
Ensure the SAP HANA database (HDB) module exists. For more information, see the SAP HANA Developer
Guide for XS Advanced Model.
Context
Procedure
1. Create a virtual table by right-clicking the src folder and choosing New > File, then enter a file name with the .hdbvirtualtable extension. For example, myVT.hdbvirtualtable.
2. Right-click the .hdbvirtualtable file, and then choose Open With > File Format Editor.
6. Set the required options based on the file format type selected in the previous step.
○ If you chose the CSV format, set the column delimiter that is used to separate the values in your file. Frequently, this delimiter is a comma (,), but you can also use other values such as a semicolon (;), colon (:), pipe (|), and so on.
○ If you chose the Fixed format, enter the row length and the column start and end numbers. The row length should be the length of the entire row, including spaces and delimiters. The column start and end numbers define the span of each column. You can enter the span of the column, or only the starting number of the column. For example, 0-3;4-14;15-27 and 0;4;15 each return three columns with a total row length of 28.
7. In the Virtual Table Columns section, add one or more columns of your virtual table or enter column
heading names by clicking the + icon and setting the name, description, and type of each new column.
Depending on the type you selected, you may also need to enter length, precision, and scale options.
If you choose to test your settings, you can see the columns you have added in the Simulation pane.
You can run the simulation on your file and copy the detected header information as columns in the Virtual
Table Columns section.
8. To test your settings, copy several lines from your data file and paste them into the top-right pane. Click the
Run Simulation icon above this pane to see the simulated results in the bottom-right pane.
9. When you have completed the settings, click Save.
You can either create a configuration file that serves as the file format definition, or you can use the virtual
table as the file format definition. The advantage of a virtual table is that you can use the table as an object
in your flowgraph. For example, you can reference it as a HANA Object in the Data Source and Data Target,
and in any nodes where you can call a procedure in the flowgraph. If you created a virtual table, the
configuration file is automatically created and placed in the file adapter with the name
<remote_object_identifier>.cfg. If you are creating a configuration file, continue with the following
steps.
10. (Optional) If you want to create a configuration file:
1. Right-click the .hdbvirtualtable file and choose Configuration (.cfg.txt).
2. Export the configuration file by right-clicking the .cfg.txt file, and choose Export to place the file in
your file adapter.
Related Information
Lists the options available for generating the file format definition in SAP Web IDE.
Remote Source
Option Description
Remote Source Name Enter the name of the remote source, which helps locate the remote source.
Remote Object Identifier Enter the ID of the object on the remote source, which helps identify the configuration file that it references. If the configuration file does not exist under that identifier, a configuration file is created when the virtual table is deployed.
File Location
Option Description
Filename Pattern Enter a file name pattern that indicates the files that are used automatically with this file format definition. For example, if you enter 123%, all files that begin with 123 automatically use this file format definition.
Directory Pattern Enter a directory pattern that indicates the location of the files used with this file format definition. You can use the directory pattern alone, or with the filename pattern setting, to narrow the virtual tables that use this file format definition. You might want to execute a simple select * from <virtualtable> without a WHERE clause on a directory and name of the file. In that case, every single file in the root directory and subdirectories is read and parsed according to this virtual table format definition. Processing might take a while and produce many errors. However, if the virtual table maps to files in a particular directory, directory tree, or to particular file names only, you can specify this information in the virtual table directly. For example:
Directory Pattern=/usr/sap/FileAdapter/FileServer/plandata
Filename Pattern=plan%.txt
Number of Partitions Enter the number of partitions for parallel processing, which can improve performance.
Entering 0 means that the data is run serially.
Option Description
Format Choose the data source file type: CSV (comma-separated values) or Fixed (fixed-width files). The option you select here displays format-specific options.
Code Page Select the character encoding for the file. By default, the operating system default is used. When the file has a Byte Order Mark, this code page is always used. Valid values of the Java installation can be found by creating a virtual table for the code page and querying its contents. If you chose JSON or XML as the format, then set the code page to UTF-8.
Locale The locale option sets the decimal and date conversion from the strings in the file into native numbers or dates. For example, the date “14. Oktober 2017” is valid in German but not in other languages. Valid values for the locale can be found by querying a virtual table based on the LOCALE table of the adapter.
Skipped Leader Lines If the file contains header information such as metadata that is not used in the actual
data columns, enter the number of lines to be skipped.
Row Delimiter Enter the character sequence indicating the end of a row in the file. When the delimiters are nonprintable characters, they can be provided encoded as \d65536 or \xFFFF or as Unicode notation \u+FFFF. Alternatively, the typical \r and \n are supported as well. Examples:
● \n Unix standard
● \r\n Windows standard
● \d13\d10 Windows standard, but characters provided as a decimal number
● \x0D\x0A Windows standard, but characters provided as a hex number
Option Description
Column Delimiter Enter the character sequence indicating the next column. If nonprintable characters
are used, then encoding \d65536, \xFFFF, or \u+FFFF works.
● ; The semicolon is the column separator, so a line looks like 2000;IT Costs;
435.55
● | Use the pipe character as the delimiter
● \d09 Use an ASCII tab character as the delimiter
Escape Character If you have special characters in your data such as quotation marks or semicolons, enter an escape character to use the character literally. For example, 2000;IT Costs\; software related only;435.55. In this example, the \ character is an escape character and indicates that the subsequent character, the semicolon (;), is to be taken literally, not as a column delimiter.
Text Quotes Character If you have text data enclosed in quotes, you can specify a character to indicate that
the quotes are part of the data, and not a delimiter. The line 2000;IT Costs;
software related only;435.55 would appear as 4 columns because the
text contains a semicolon as well. If the file was created with quotes like 2000;"IT
Costs; software related only";435.55, then the semicolon within the
quotes is not interpreted as a column delimiter.
Text Quotes Escape Character Specify how to make quotes appear inside the text, like in IT Costs; "software related" only. One option the file creator might have used is to use the global escape character: 2000;"IT Costs; \"software related\" only";435.55. Another popular method is to have the quotes escaped by another quote like in 2000;"IT Costs; ""software related"" only";435.55. In that case both the Character for Text Quotes=" and the Text Quotes Escape Character=" are set to the " character.
Quoted Text Contains Row Delimiter When disabled (off), indicates that text inside a row, regardless of any quotes or escape characters, never contains the row delimiter character. In this case, processing is faster. When enabled (on), processing is slower because the system looks for quotes or escape characters in the rows.
Option Description
Row Length In fixed-width files, you do not need to set a row delimiter. Often the file has some delimiters, and then they must be stripped away. The following examples show that the last data character is at index 53:
Columns Start and End Position Enter the column positions for each column. Example:
● 0-3;4-11;12-37;38-53 defines that the first 4 characters are the first column, the
next 8 contain the data for the second column, and so on. Columns must be in the
proper order.
● 0;4;12;38 is equivalent to the example in the previous bullet. The last column ends
with the line end.
● 0;4;12;38-53 can be used as well. In fact, every single column can either specify
the start and end position or just the start.
Option Description
Exponential Character Enter a character that identifies any exponents in the data.
Date Format
Seconddate Format
Time Format
Timestamp Format
Choose from the available date and time formats or enter a custom date or time format. Each option can have a different format string. The syntax of the format string is the Java SimpleDateFormat syntax.
Convert Invalid Date or Time Enable the option to automatically correct invalid date or time values. For example, the 27th hour changes to 3 a.m.
Option Description
Type Select the column type based on the data in the column.
Length Enter the number of characters of the longest value in the column.
Precision Enter the total number of digits in the value. Used for Decimal data types.
Scale Enter the number of digits to the right of the decimal. Used for Decimal data types.
Procedure
1. Navigate to <DPAgent_root>\agentutils.
2. Run the following from the command line:
For Windows: createfileformat.bat -file <data_file_or_directory> -cfgdir <output_directory> -format CSV
For UNIX: createfileformat.sh -file <data_file_or_directory> -cfgdir <output_directory> -format CSV
Only the -file, -cfgdir, and -format (when using a CSV file) parameters are required.
The value for the -file parameter is the path to the directory containing one or more data files or the path
to a single file name for which the configuration files must be generated. The value for -cfgdir is the path
to the output directory where the generated configuration files are stored.
A number of options and value pairs can be provided as extra parameters. The following are supported:
Parameter Description
-firstRowAsColumnName Specifies whether to use the first row in a data file as the column names when generating a CFG file with the createfileformat.sh/bat tool.
If set to TRUE, createfileformat.sh/bat uses the row above the real data as the column names. Otherwise, createfileformat.sh/bat sets the column names to COL1, COL2, ... by default. The default value is FALSE.
Note
To use this parameter together with -skipHeaderLine, the row containing the column names should be included in the -skipHeaderLine count. If you set -firstRowAsColumnName to true and did not configure -skipHeaderLine, -skipHeaderLine is set automatically to 1.
Note
○ FIXED format files do not support -firstRowAsColumnName.
○ The number of column names in the column name line must be correct.
○ If there are two column names but three columns in the file data, the last column is ignored.
○ The column delimiter also applies to the column name line.
-columnStartEndPosition FIXED file column start end position. For example, -columnStartEndPosition
0-10;11-20.
Handle this parameter differently for Windows and Linux systems. The following is
an example for Windows:
Delimiter Character
Column delimiter ,
Row delimiter \r\n (Windows)
Escape character \
Note
Only one format of each type (date, time, second date) is allowed per file. If you have two columns containing differently formatted dates, only the first one is recognized. The second is treated as VARCHAR.
Run this tool to generate a configuration file for a data file named call_center.dat that has ';' as a column delimiter and '\n' as a row delimiter.
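As a sketch, a minimal invocation using only the documented required parameters might look like the following; the file and output paths are illustrative, and whether the ';' column delimiter and '\n' row delimiter are detected automatically or must be passed as additional options depends on the tool version:
createfileformat.sh -file /data/call_center.dat -cfgdir /usr/sap/dataprovagent/fileformats -format CSV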
You can generate a CFG file when you create a virtual table using SQL.
A convenient way to generate the necessary configuration file is to do so when creating a virtual table using
SQL. By including the appropriate parameters in the SQL, a CFG file is generated and inserted into the
appropriate directory that you specified when creating the File adapter remote source.
For example, the following sample code generates a file named v_plan_2.cfg that is created in the file format
directory.
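A sketch of such a statement follows. The remote source name and file patterns are illustrative, and the parameter names inside dataprovisioning_parameters (such as format and force_directory_pattern) are assumptions that may vary by adapter version.
Sample Code
CREATE VIRTUAL TABLE "SYSTEM"."v_plan_2" AT "MyFileSource"."<NULL>"."<NULL>"."v_plan_2"
REMOTE PROPERTY 'dataprovisioning_parameters' =
'<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Parameters>
<Parameter name="format">csv</Parameter>
<Parameter name="force_directory_pattern">/usr/sap/FileAdapter/FileServer/plandata</Parameter>
<Parameter name="force_filename_pattern">plan%.txt</Parameter>
<Parameter name="column_delimiter">;</Parameter>
</Parameters>';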
Related Information
The remote source provides tables that reflect the content of the CFG files, and they can be imported as virtual
tables.
After a remote source is created, you can browse the remote source. Each of the configured CFG files is shown
as a remote table under the remote source and can be imported as a virtual table. The following tables are
always included:
Table Description
CODEPAGES Use this table to retrieve all supported code pages of the Java installation and optionally specify one in the various file format configuration files. The code page controls the character encodings of the source files.
FILECONTENT This virtual table has one row per file and the entire file content is inside a BLOB column.
Use this table for unstructured data files.
FILECONTENTROWS Similar to FILECONTENT, this table returns the data as is, without any conversion, but
splits the file into rows at every <newline> character.
FILECONTENTROWSTEXT Similar to FILECONTENTROWS, this table also uses a character buffer for improved performance when handling lines with a length less than or equal to MAX_CLOB_INLINE_LOB_LENGTH (43690). For lines with a greater length, this table behaves the same as FILECONTENTROWS.
FILECONTENTTEXT This virtual table has one row per file, and the entire file content is inside an NCLOB column. Use this table for unstructured data files. In case the file has no Byte Order Mark (BOM) header to identify the code page, or the operating system default code page is not the proper one, you can supply the reader option CODEPAGE.
FILEDIRECTORY Returns a list of all files in the remote source configured root directory and its subdirectories.
LOCALES This table returns all supported Java locales, and the values can be used to control the locale of the file read, which impacts the decimal format, the month names of a date format, and so on.
STATISTICS_CHAR Calculates the number of occurrences of each character in the files. Characters that occur frequently are usually column delimiters, optional text quotes, or row delimiter characters.
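For example, assuming a remote source named MyFileSource, you can import FILEDIRECTORY as a virtual table and query it to list all files under the configured root directory; the schema and virtual table names are illustrative:
CREATE VIRTUAL TABLE "SYSTEM"."V_FILEDIRECTORY" AT "MyFileSource"."<NULL>"."<NULL>"."FILEDIRECTORY";
SELECT * FROM "SYSTEM"."V_FILEDIRECTORY";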
Procedure
1. Configure the remote source, making sure that you set the appropriate parameters.
At a minimum, you need to configure the following remote source parameters:
Source Options
Target Options
Root Directory
Local Folder Path The download folder path. This folder is created under the <DPAgent_root>/ directory automatically.
The local folder path is the location on the Data Provisioning Agent machine local drive where you want to place your source data files that the Data Provisioning Agent will download from the SharePoint server.
See the remote source configuration topic for more information about these parameters.
2. Replicate data from, for example, the following SharePoint server URL: http://<host name>/
sharepoint_site1/SharedDocuments/myCSVfile.txt.
3. In the local CFG file, add the following:
FORCE_FILENAME_PATTERN=myCSVfile.txt
FORCE_DIRECTORY_PATTERN=<root directory>\download\sharepoint\SharedDocuments
Results
Data Provisioning Agent will download myCSVfile.txt from the SharePoint Server URL SharedDocuments
folder and store the file locally in <root directory>\download\sharepoint\SharedDocuments.
Note
When you execute a query, the Data Provisioning Agent downloads the file and places it in the download
folder. If you execute the same query to obtain the same file, the system downloads the file again and
replaces the existing file in the download folder.
Related Information
Context
You can access the SharePoint server using HTTPS or SSL. You first must download the SharePoint certificate
(CER) and configure your system.
Procedure
Note
Keytool is in the jre/bin folder; for example, C:\Program Files\Java\jre7\bin\keytool.exe. Add it to the PATH environment variable.
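As a sketch, importing the downloaded SharePoint certificate into the adapter truststore might look like the following; the alias, certificate file name, truststore path, and password are illustrative and depend on your agent's SSL configuration:
keytool -importcert -alias sharepoint -file sharepoint.cer -keystore <DPAgent_root>/ssl/cacerts -storepass <truststore_password>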
Information about how to use a shared network directory for data files with the File adapter.
You can access data and format files in a shared directory if you follow these guidelines:
Windows
When using Windows, make sure that you manually access the network folder first using a user name and
password before you try to connect by creating a remote source.
Linux
To access a Linux network folder, mount the folder under the Data Provisioning Agent root installation
directory.
Observe the instructions for the Root Directory and Directory of the file format definitions parameters when
creating your File remote source.
Related Information
For critical scenarios determined by your business requirements, you can use the agent configuration tool to
disable write-back functionality on supported adapters and run the adapters in read-only mode.
Disabling write-back functionality may help to prevent unexpected modifications to source tables accessed by
an adapter.
Caution
Setting an adapter to read-only mode affects all remote sources that use the adapter.
Procedure
○ In graphical mode, choose Config > Preferences, and then select Adapter Framework.
○ In command-line interactive mode, choose Set Agent Preferences in the Agent Preferences menu.
3. For the Read-only Adapters property, specify the list of adapters for which you want to disable write-back
functionality, separating each adapter with a comma.
For example, to disable write-back on the Microsoft SQL Server Log Reader, Oracle Log Reader, and SAP
HANA adapters:
MssqlLogReaderAdapter,OracleLogReaderAdapter,HanaAdapter
Results
The specified adapters are switched to read-only mode and write-back functionality is disabled.
Tip
On adapters that are operating in read-only mode, attempted SQL statements other than SELECT result in
adapter exceptions that are logged in the Data Provisioning Agent framework trace file.
You must register an application on Microsoft Azure Portal as a precondition for using the Microsoft Graph API,
which is what allows you to access SharePoint on Office365 using the File or Microsoft Excel adapters.
Procedure
Next Steps
After registration, note the application (client) ID and Directory (tenant) ID, which you use during remote
source configuration.
Related Information
Configure your Azure application to set up your credentials and Microsoft Graph API.
Context
Further configuration of your Azure application is necessary to create credentials for when you create a remote
source. You must also grant permissions for using the Microsoft Graph API.
Procedure
If you want to authenticate using the username and password mode, set the Default client type to Yes.
If you want to authenticate using the Client Credential mode, set the Default client type to No.
a. If you chose to authenticate using Client Credential mode, create a credential by clicking Certificates & secrets. If you chose to authenticate using the username and password mode, skip to step 2.
b. Click New client secret, give it a name and an expiration time, and click Add.
c. Note the secret password for use during remote source creation.
2. Add permissions by clicking API permissions on the application page, and then clicking Add a permission.
3. Click Microsoft Graph.
4. If you use the Client Credential mode, click Application Permission. If you use the Username and password
mode, click Delegated permission.
5. Add the following permissions:
○ Directory.Read.All
○ Files.Read.All
○ Group.Read.All
○ Sites.Read.All
○ User.Read.All
6. Grant consent for these permissions.
If you have an administrator role, click Grant consent.
If you do not have an administrator role, ask your administrator to grant permission for you.
Note
If permissions are not granted, you will not be able to access SharePoint on Office365.
Every time a permission is changed, you should redo the grant operations.
You can now create your remote source, using the information created while setting up your Microsoft Azure
Portal application.
Related Information
File Datastore adapters leverage the SAP Data Services engine as the underlying technology to read from a wide variety of file sources. SAP Data Services uses the concept of a datastore as a connection to a source.
The File Datastore adapters are:
● FileAdapterDatastore
● SFTPAdapterDatastore
Adapter Functionality
Related Information
6.10.1 Authorizations
Authorization requirements for accessing remote sources with a File Datastore Adapter.
● Ensure the user account under which the Data Provisioning Agent is running has access to the files on the
local host.
● If the files are located on the same host as the Data Provisioning Agent, the files must be located in the
same directory or in a subdirectory of the Data Provisioning Agent root directory.
How the File Datastore Adapter locates data files and configuration files.
By default, the adapter has access to the <DPAgent_root>/workspace directory. To enable browsing your
files, you can either put your data and configuration files in the default workspace directory, or you can
configure the location of the files in dpagentconfig.ini. For example, add the following to the
<DPAgent_root>/dpagentconfig.ini file:
#Number of paths that could be browsed in SAP HANA via this adapter
dsadapter.fileadapter.dirCount=2
dsadapter.fileadapter.dir1=C\:\\TEST1\\UAT
dsadapter.fileadapter.dir2=C\:\\TEST2\\QA
Then, browse to your data file and right-click to add it as a virtual table. The adapter creates a corresponding
configuration file in the same folder. For example, when you add employee.csv to SAP HANA, the adapter
creates an employee.cfg file in the same folder.
Alternatively, you can create your own configuration files through virtual procedures or create them manually:
● To use the Create Configuration File virtual procedure, in the SAP HANA Web Development Workbench: Catalog, navigate to Provisioning > Remote Sources > <remote_source> > File Operations > CFG Utilities. Right-click Create Configuration File and choose Add as Virtual Procedure.
● To create the configuration files manually, execute CREATE VIRTUAL PROCEDURE SQL.
You can then use the virtual table in a flowgraph or SQL execution.
Create a file format configuration file to work with your data file (File Datastore adapters).
Each configuration file is a text file and must match the following format:
● The first line must contain a comment or a description. This line is ignored during processing.
● A set of key-value pairs to specify the various parsing options
● A set of COLUMN=<name>;<SAP HANA datatype>;<optional description>;<optional date
type format>;<optional column width>
The date type format is necessary only if the SAP HANA data type of this column is date-related. Column
width is necessary only if the data file is a fixed-format file.
Example
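A minimal sketch of such a configuration file, with invented column names and values; note the date type format in the fourth field of the CREATED column:
Call center extract file format
FORMAT=CSV
COLUMN_DELIMITER=;
COLUMN=ID;INTEGER;record ID
COLUMN=CREATED;DATE;creation date;yyyy-MM-dd
COLUMN=NOTE;VARCHAR(50);free-form text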
Fixed-Format Files
Fixed file formats where FORMAT=fixed are supported. You can specify the formatting using the <optional
column width> parameter. For example, in COLUMN=COL3;VARCHAR(8);Description;;8, the last column
value of 8 indicates the width.
Global
FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata
FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata/%
FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plan%
FORCE_FILENAME_PATTERN=plan%.txt
● To read files inside the directory and matching the provided name pattern:
FORCE_DIRECTORY_PATTERN=/usr/sap/FileAdapter/FileServer/plandata
FORCE_FILENAME_PATTERN=plan%.txt
CODEPAGE The character encoding with which to read the file. By default, the operating system default is used. If the file has a Byte Order Mark, this code page is used. Valid values of the Java installation can be found by creating a virtual table for CODEPAGE and querying its contents.
ROW_DELIMITER A character sequence indicating the end of a row in the file. If these sequences are nonprintable characters, they can be provided encoded as \d65536 or \xFFFF or as Unicode notation \u+FFFF. Alternatively, the typical \r and \n are supported as well. Examples:
● \n UNIX standard
● \r\n Windows standard
● \d13\d10 Windows standard, but characters are provided as a decimal number
● \x0D\x0A Windows standard, but characters are provided as a hex number
SKIP_HEADER_LINES If the file has a header, enter the number of lines to skip.
ERROR_ON_COLUMNCOUNT By default, a row with fewer columns than defined is considered acceptable. Setting this parameter to true indicates that all rows of the file have as many columns as defined.
LOCALE The decimal and date conversion from the strings in the file into native numbers or dates might be locale-specific. For example, the date “14. Oktober 2000” can be parsed with a German locale, but not with the locale of any other language.
Valid values for the locale can be found by querying a virtual table based on the
LOCALE table of the adapter.
NULL_INDICATOR Specifies a character sequence to indicate to the software that the data is NULL.
Note
The software ignores NULL indicators specified in the file format for blob columns.
IGNORE_ROW_MARKERS Specifies a character sequence that appears at the beginning of specific rows.
When the software reads the file, or when it automatically creates metadata, and it
encounters the row markers, it ignores the row and moves to the next row.
DATEFORMAT Specifies the date format for reading and writing date values to and from the file.
TIMEFORMAT Specifies the time format for reading and writing time values to and from the file.
SECONDDATEFORMAT Specifies the date/time format for reading or writing date/time values to and from
the file.
BOF Specifies a string that marks the start of data in the file.
EOF Specifies the string that marks the end of data in the file.
EXPONENTIAL The character that identifies an exponent in numeric values. Default: E
LENIENT Controls automatic date and time format conversion when the value exceeds the
range.
For example, when this parameter is set to True, the 14th month of 2016 is converted to the second month of 2017.
CSV only
COLUMN_DELIMITER The character sequence indicating the next column. If nonprintable characters are used, then any of the following encodings works: \d65536, \xFFFF, or \u+FFFF.
TEXT_QUOTES Sometimes text data is enclosed in quotes so a column delimiter inside the text does not break the format. The line 2000;IT Costs; software related only;435.55 would appear as four columns because the text contains a semicolon as well. If the file was created with quotes such as 2000;"IT Costs; software related only";435.55, then there is no such issue. However, the file parser must not just search for the next column delimiter; it also must check if the text is inside or outside of the text quote character.
ESCAPE_CHAR Another way to deal with inline special characters is to escape them. For example, 2000;IT Costs\; software related only;435.55. Here the \ character is an escape character and indicates that the subsequent character is to be taken literally and not as a column delimiter.
TEXT_QUOTES_ESCAPE_CHAR You can make quotes appear inside the text. For example, IT Costs; "software related" only. One option the file creator might have used is to use the global escape character: 2000;"IT Costs; \"software related\" only";435.55. Another popular method is to have the quotes escaped by another quote; for example, 2000;"IT Costs; ""software related"" only";435.55. In that case, both the TEXT_QUOTE=" and the TEXT_QUOTE_ESCAPE_CHAR=" are set to the " character.
QUOTED_TEXT_CONTAIN_ROW_DELIMITER The default value is false, which tells the parser that, regardless of any quotes or escape characters, the text inside a row never contains the row delimiter character sequence. In this case, the parser can break the file into rows much faster; it must search for the character sequence only, and only the column parsing has to consider the escape and quote characters. If set to true, parsing becomes slower.
Fixed-Width only
COLUMNSSTARTENDPOSITION In a fixed-width file, the column positions must be specified for each column. Examples:
● 0-3;4-11;12-37;38-53 defines that the first 4 characters are the first column, the next 8 contain the data for the second column, and so on. Columns must be in the proper order.
● 0;4;12;38 is equivalent to the previous example, but the last column ends with the line end.
● 0;4;12;38-53 can be used as well. In fact, for every column you can either specify the start and end position or just the start.
ROWLENGTH In fixed-width files, there does not need to be a row delimiter. Often the file has one, and then it must be stripped away. The following examples assume that the last data character is at index 53, as specified in the COLUMNSSTARTENDPOSITION example.
● ROWLENGTH=56 ROW_DELIMITER= would work for a file that has a row delimiter. The payload text ranges from 0..53; therefore, it is 54 characters long plus two characters for \r\n. The last column does not contain the \r\n because it is instructed to end at index 53.
● ROWLENGTH=54 ROW_DELIMITER=\r\n is equivalent to the previous example. Each row is expected to be 54 characters plus 2 characters long. The main advantage of this notation is that COLUMNSSTARTENDPOSITION=0;4;12;38 would work and the trailing \r\n is stripped away. In the previous example, the last column would start at 38 and end at index 55 due to ROWLENGTH=56 and therefore contain the \r\n characters in the last column.
Use the provided utility virtual procedures to alter, create, delete, or view configuration files.
First use auto-detect to identify the delimiters and metadata, then use these virtual procedures to alter a file
source to your requirements.
To create a virtual procedure, in the SAP HANA Web Development Workbench: Catalog, navigate to Provisioning > Remote Sources > <remote_source> > File Operations > CFG Utilities. Right-click the utility and select Add as Virtual Procedure.
Related Information
Use the Alter Configuration File Property virtual procedure to change values in a CFG file property.
Example
Item_ID,Item_Price,Item_Description,Date_Added
101,99.99,Item1,2016-10-11
The Alter File Field virtual procedure allows you to change the column metadata for a given file field. You can
alter data types, length, format, and so on.
Item_ID,Item_Price,Item_Description,Date_Added
101,99.99,Item1,2016-10-11
Use the virtual procedure to update the second column Item_Price to be Price and DECIMAL(10,5).
Create Configuration File allows you to create a CFG file under available root folders using SAP HANA tables.
Example
Item_ID,Item_Price,Item_Description,Date_Added
101,99.99,Item1,2016-10-11
Delete a given configuration file using the Delete Configuration File virtual procedure.
Usage
The View Configuration File virtual procedure allows you to view an existing CFG file under available root
folders.
Example
6.10.6 FileAdapterDatastore
To access file sources, use the FileAdapterDatastore, one of the File Datastore adapters.
Remote source configuration parameters for the FileAdapterDatastore. Also included is a code sample for
creating a remote source using the SQL console.
File Format Configuration Format Specifies that the file has a delimiter
character between columns. Flat Files
and Fixed Width Files are supported.
Error handling Log data conversion warnings Specifies whether to log data
conversion warnings.
Error file root directory Only available when Write error rows to
file is enabled (true). Full path to the
directory in which to store the error file.
Note
Auto-detect might not match your
data types and delimiters exactly.
If this is the case, you can use the
CFG file utility virtual procedures to
modify the files to your
specifications.
Data type mapping match percent Used for auto-detection of data types. The range is 50 to 100.
The following code sample illustrates how to create a remote source using the SQL console:
Example
Sample Code
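A minimal sketch follows; the remote source and agent names are illustrative, and the property entry names are assumptions, because the table above lists the options by display name only:
CREATE REMOTE SOURCE "MyFileDatastore" ADAPTER "FileAdapterDatastore" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="ConnectionInfo">
<PropertyEntry name="format">Flat Files</PropertyEntry>
<PropertyEntry name="rootdir">/usr/sap/FileAdapter/FileServer</PropertyEntry>
</ConnectionProperties>';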
Related Information
6.10.7 SFTPAdapterDatastore
To access SFTP CFG file sources, use the SFTPAdapterDatastore, one of the File Datastore adapters.
Related Information
Remote source configuration parameters for the SFTPAdapterDatastore. Also included is a code sample for
creating a remote source using the SQL console.
File Format Configuration Format Specifies that the file has a delimiter
character between columns. Flat Files
and Fixed Width Files are supported.
Error handling Log data conversion warnings Specifies whether to log data
conversion warnings.
Error file root directory Only available when Write error rows to
file is enabled (true). Full path to the
directory in which to store the error file.
Remote File Format Directory The directory of the folder on the SFTP server to browse.
The following code sample illustrates how to create a remote source using the SQL console.
Example
Sample Code
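A minimal sketch follows; the host, directory, and credential values are illustrative, and the property entry names are assumptions that depend on your adapter version:
CREATE REMOTE SOURCE "MySftpSource" ADAPTER "SFTPAdapterDatastore" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="ConnectionInfo">
<PropertyEntry name="host">sftp.example.com</PropertyEntry>
<PropertyEntry name="port">22</PropertyEntry>
<PropertyEntry name="remoteFileFormatDir">/formats</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
<user>sftpuser</user>
<password>sftppassword</password>
</CredentialEntry>';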
Hive is the data warehouse that sits on top of Hadoop and includes a SQL interface. While Hive SQL does not fully support all SQL operations, most SELECT features are available. The Hive adapter service provider is created as a remote source, and requires the support of artifacts like virtual tables and remote subscriptions for each source table to perform replication.
Note
Before registering the adapter with the SAP HANA system, ensure that you have downloaded and installed
the correct JDBC libraries. See the SAP HANA smart data integration Product Availability Matrix (PAM) for
details. Place the files in the <DPAgent_root>/lib/hive folder.
Adapter Functionality
Restriction
Write-back operations including INSERT, UPDATE, and DELETE are supported only on Hive 2.3.3 and
newer, and the source table must support ACID transactions.
When writing to a Hive virtual table, the following data type size limitations apply:
Functionality Supported?
Real-time No
ORDER BY Yes
GROUP BY Yes
Related Information
Find out which Hive, Hadoop, and other product JAR files are compatible with which versions of SAP HANA smart data integration.
Depending on the version of SAP HANA smart data integration and Hive you are using, there are steps you
need to take to ensure you have the proper JAR files loaded.
Beginning with SDI 2.2.2, we no longer bundle the hadoop-core-0.20.2.jar file. When you upgrade from a
previous version, you must manually download this file and place it in the <DPAgent_root>/lib/hive folder.
After you restart the Data Provisioning Agent and refresh the Hive adapter in SAP HANA, your remote source
will work.
To avoid class name conflicts when installing JAR files in the <DPAgent_root>/lib/hive directory, observe these two constraints:
● The directory cannot contain multiple versions of the Hive JAR files.
● The directory cannot contain both a hadoop-core-0.20.2.jar and a hadoop-common-<version>.jar file at the same time.
If you are not using more advanced features, the following JAR files are sufficient:
● For Hive 1.2.2: hive-jdbc-1.2.2-standalone.jar and hadoop-core-0.20.2.jar
● For Hive 2.3.0: hive-jdbc-2.3.0-standalone.jar and hadoop-core-0.20.2.jar
Alternatively, you can use the JAR files associated with advanced Hive features listed below.
If you are using some advanced Hive features, for example, Zookeeper and HTTP Transport Mode, a standalone Hive JAR (hive-jdbc-<version>-standalone.jar) is not sufficient. If you are using any of these features, use the following JAR files instead.
For Hive 1.2.2:
$HIVE_HOME/lib/
● hive-jdbc-1.2.2-standalone.jar
● commons-codec-1.4.jar
● httpclient-4.4.jar
$HADOOP_HOME/share/hadoop/
● common/hadoop-common-2.7.3.jar
● common/lib/commons-configuration-1.6.jar
● common/lib/hadoop-auth-2.7.3.jar
● common/lib/jaxb-impl-2.2.3.jar
● common/lib/slf4j-log4j12-1.7.10.jar
● hdfs/lib/xercesImpl-2.9.1.jar
For Hive 2.3.0:
$HIVE_HOME/lib/
● commons-codec-1.4.jar
● curator-client-2.7.1.jar
● curator-framework-2.7.1.jar
● guava-14.0.1.jar
● hive-common-2.3.0.jar
● hive-jdbc-2.3.0.jar
● hive-serde-2.3.0.jar
● hive-service-2.3.0.jar
● hive-service-rpc-2.3.0.jar
● hive-shims-0.23-2.3.0.jar
● hive-shims-2.3.0.jar
● hive-shims-common-2.3.0.jar
● httpclient-4.4.jar
● libthrift-0.9.3.jar
● zookeeper-3.4.6.jar
$HADOOP_HOME/share/hadoop/
● common/hadoop-common-2.8.1.jar
● common/lib/commons-collections-3.2.2.jar
● common/lib/commons-configuration-1.6.jar
● common/lib/commons-lang-2.6.jar
● common/lib/hadoop-auth-2.8.1.jar
● common/lib/httpcore-4.4.4.jar
● common/lib/jaxb-impl-2.2.3-1.jar
● common/lib/slf4j-log4j12-1.7.10.jar
● hdfs/lib/xercesImpl-2.9.1.jar
Related Information
SAP HANA smart data integration and all its patches Product Availability Matrix (PAM) for SAP HANA SDI 2.0
Configure the following options in smart data access to set up your connection to a Hive remote source. Also
included is sample code for creating a remote source using the SQL console.
Note
If you change the remote source parameters after you have made a connection with Hive, you need to
restart the Data Provisioning Agent.
Data Type Mapping Map Hive STRING to If the length of a string in Hive is
greater than 5000 characters, set this
parameter to CLOB. The default value
is VARCHAR(5000).
Note
The CA certificate for the remote
source must be imported into the
adapter truststore on the Data
Provisioning Agent host.
● Default
● Kerberos: If set to Kerberos, the
Realm, KDC, and Hive Service
Principal settings are used when
making the Hive connection.
Use Agent Stored Credential Set to True to use credentials that are
stored in the DP Agent secure storage.
Note
A value for this parameter is
necessary only when Use
Connection URL is set to False.
Use Ticket Cache Set this parameter to True if you want the ticket-granting ticket (TGT) to be obtained from the ticket cache. Set it to False if you do not want this module to use the ticket cache. The default value is False.
The default ticket cache location is <user.home><file.separator>krb5cc_<user.name>. You can override the ticket cache location by using the Ticket Cache parameter.
● For Windows, if a ticket cannot be retrieved from the file ticket cache, the module uses the Local Security Authority (LSA) API to get the TGT.
Use Key Tab Set this to True if you want the module
to get the technical user's key from the
keytab. The default value is False.
The following code samples illustrate how to create a remote source using the SQL console:
Basic
Example
Sample Code
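A minimal sketch of the basic case; the host, port, and credentials are illustrative, and the property entry names are assumptions that may vary by adapter version:
CREATE REMOTE SOURCE "MyHiveSource" ADAPTER "HiveAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="ConnectionInfo">
<PropertyEntry name="host">hive-server.example.com</PropertyEntry>
<PropertyEntry name="port">10000</PropertyEntry>
<PropertyEntry name="dbname">default</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
<user>hiveuser</user>
<password>hivepassword</password>
</CredentialEntry>';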
Kerberos
Example
Sample Code
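A sketch of the Kerberos variant; it differs from the basic case only in the authentication-related properties, whose entry names here are assumptions derived from the display names (Realm, KDC, and Hive Service Principal) in the parameter table above:
CREATE REMOTE SOURCE "MyHiveKrbSource" ADAPTER "HiveAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="ConnectionInfo">
<PropertyEntry name="host">hive-server.example.com</PropertyEntry>
<PropertyEntry name="port">10000</PropertyEntry>
<PropertyEntry name="authentication">Kerberos</PropertyEntry>
<PropertyEntry name="realm">EXAMPLE.COM</PropertyEntry>
<PropertyEntry name="kdc">kdc.example.com</PropertyEntry>
<PropertyEntry name="hive_service_principal">hive/_HOST@EXAMPLE.COM</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
<user>hiveuser</user>
<password>hivepassword</password>
</CredentialEntry>';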
Related Information
Even though you may have enabled Kerberos debug messages in the remote source configuration by setting
Kerberos Debug to True or by setting sun.security.krb5.debug=true in the dpagentconfig.ini file, the
debug messages will not appear in the Data Provisioning Agent log framework.trc.
To make the Kerberos debug messages visible, start the Data Provisioning Agent directly by executing <DPAgent_root>/dpagent (or, on Windows, <DPAgent_root>/dpagent.exe). Then, all logs and Kerberos debug messages are written to the console.
Related Information
The IBM DB2 Log Reader adapter provides real-time changed-data capture capability to replicate changed data from a database to SAP HANA. You can also use this adapter for batch loading.
The Log Reader service provider is created as a remote source and requires the support of artifacts like virtual
tables and remote subscriptions for each source table to perform replication.
Note
Before registering the adapter with the SAP HANA system, ensure you have downloaded and installed the
correct JDBC libraries. See the SAP HANA smart data integration Product Availability Matrix (PAM) for
details. Place the files in the <DPAgent_root>/lib folder.
Adapter Functionality
● Connect multiple remote sources in SAP HANA to the same source database
Note
Use different DB2 users to set up different replications on the same DB2 database.
Restriction
DB2 range partitioned tables are not supported for real-time CDC.
Functionality Supported?
Real-time Yes
ORDER BY Yes
GROUP BY Yes
Related Information
Information about setting up your source system and adapter for real-time replication.
Related Information
The remote database must be set up properly for Log Reader adapters to function correctly when using real-
time replication.
● If you have a UDB client instance and a UDB server instance on different machines, the client and server
must be of the same UDB version.
● On a Windows system, the DB2 connectivity autocommit parameter must be enabled (autocommit=1).
The autocommit parameter is specified in the DB2 call level interface (CLI) configuration file for the
primary database.
If the autocommit parameter is not enabled, a deadlock problem can occur. The path to the CLI configuration file is <%DB2DIR%>\sqllib\db2cli.ini, where <%DB2DIR%> is the path to the DB2 client installation. Alternatively, to enable autocommit, open the DB2 administrative command-line console and run: db2set DB2OPTIONS=-c
● To initialize LogReader without error, the database must have a tablespace created with these
characteristics:
Before upgrading your source DB2 database, keep the following in mind:
● Suspend any remote subscriptions before upgrading. Before suspending your subscriptions, ensure that
all data has been synced to the SAP HANA target table.
● After suspending your subscriptions, ensure that there is a change to the DB2 source table.
● After your upgrade, resume your subscriptions. If you receive an error such as code: <-2,657> after
resuming your subscriptions, reset and then resume all of your subscriptions.
Related Information
The method for setting DB2 UDB environment variables depends on the operating system.
Note
The DB2 UDB environment variables should be set up regardless of whether your Data Provisioning Agent
is installed on the same server as the DB2 database or not. Prior to setting up the variables, be sure that
you have installed the IBM Data Server Runtime Client.
For Linux, the DB2 UDB installation provides two scripts for setting up the DB2 UDB environment variables:
db2cshrc for C shell and db2profile for Bourne or Korn shell. These scripts set the library path environment
variable based on the bit size of the installed server or client.
For Linux platforms, the 32-bit and 64-bit versions of the driver and API libraries are located in <$HOME>/sqllib/lib32 and <$HOME>/sqllib/lib64, respectively, where <$HOME> is the home directory of the DB2 UDB instance owner.
Note
If the Data Provisioning Agent is installed on Linux, the library path environment variable must point to the
64-bit libraries. For Windows, the library path environment variable must point to the 32-bit libraries.
Note
We recommend that you add a line to the <DPAgent_root>/bin/dpagent_env.sh file to set the db2profile environment variables. This ensures that when you use dpagent_service.sh to start and stop the DPAgent service, the DB2 UDB environment variables are sourced automatically. For example, you could add a line such as . /home/db2inst1/sqllib/db2profile.
Additional steps are necessary when installing the Data Provisioning Agent and DB2 Server on different
servers.
If the Data Provisioning Agent and the DB2 Server are on different machines, the IBM Data Server Runtime
Client must be installed on the Data Provisioning Agent machine.
Related Information
On a Windows system, you must configure a DB2 Universal Database JDBC data source in the DB2
Administration Client, then use the database name and database alias specified for that DB2 Universal
Database JDBC data source when you configure DB2 LogReader Adapter connectivity.
On a Linux system, catalog the node and the primary database in DB2. Set the DB2 LogReader Adapter
pds_datasource_name parameter to the database alias. Also set the pds_host_name and
pds_host_number.
Follow these steps to catalog the remote TCP/IP node from the DB2 client.
Procedure
Follow these steps to catalog the primary database from the DB2 client.
Procedure
1. Catalog the primary database using this command at the DB2 prompt:
db2 => catalog database <database_name> as <database_alias> at node <node_name>
where <database_name> is the name of the primary database, <database_alias> is the alias you assign to it, and <node_name> is the node you cataloged from the DB2 client.
Configure your DB2 UDB database to work with the Log Reader adapter and replication.
● If you have a UDB client instance and a UDB server instance on different machines, the client and server
must be of the same UDB version.
Related Information
These steps show how to add a temporary tablespace to the primary database.
Procedure
%>bash
%>source /db2home/db2inst1/sqllib/db2profile
%>db2
2. Connect to the primary database: db2 => connect to <dbalias> user <db2_admin_user> using <db2_admin_password>, where <dbalias> is the cataloged alias of the primary database, and <db2_admin_user> and <db2_admin_password> are the administrative user ID and password for the primary database.
3. Create a buffer pool; for example: db2 => create bufferpool tom_servo pagesize 16K
Note
Determine the DB2 UDB page size using the LIST TABLESPACES SHOW DETAIL command. For
example, to create a temporary tablespace named deep13 with a 16-KB page size and buffer pool
named tom_servo, enter:
db2 => create user temporary tablespace deep13 pagesize 16K managed by
automatic storage bufferpool tom_servo
Procedure
1. Determine the current LOGARCHMETH1 setting:
a. Connect to the primary database: db2 => CONNECT TO dbalias USER db2_user USING db2_user_ps, where <dbalias> is the cataloged alias of the primary database, <db2_user> is the primary database user, and <db2_user_ps> is the password.
b. Determine the LOGARCHMETH1 setting: db2 => GET DB CFG FOR dbalias
2. If the results do not show that LOGARCHMETH1 is set to LOGRETAIN or to the path name of the directory to which logs are archived or TSM server, set it to one of the following values:
○ LOGARCHMETH1=LOGRETAIN
○ LOGARCHMETH1=DISK:path
For example: db2 => UPDATE DB CFG FOR dbalias USING LOGARCHMETH1 DISK:path, where <path> is the full path name of the directory where the archive logs are to be stored. If you change the setting of the DB2 UDB logarchmeth1 parameter, DB2 UDB requires you to back up the database. Use your normal backup procedure or see the IBM documentation for information on the BACKUP DATABASE command.
3. Reactivate and back up the DB2 UDB database to make the configuration change take effect:
a. Deactivate the database: db2 => DEACTIVATE DATABASE dbalias USER db2_user USING db2_user_ps, where <dbalias> is the cataloged alias of the primary database, <db2_user> is the primary database user, and <db2_user_ps> is the password.
b. Back up the database:
db2 => BACKUP DATABASE dbalias TO path USER db2_user USING db2_user_ps
where <dbalias> is the cataloged alias of the primary database, <path> is the log archive path
you specified, <db2_user> is the primary database user, and <db2_user_ps> is the password.
c. Activate the database again db2 => ACTIVATE DATABASE dbalias USER db2_user USING
db2_user_ps, where <dbalias> is the cataloged alias of the primary database, <db2_user> is the
primary database user, and <db2_user_ps> is the password.
4. Verify the configuration change by connecting to the primary database again and querying the database configuration with a SELECT statement, where <dbalias> is the cataloged alias of the primary database, <db2_user> is the primary database user, and <db2_user_ps> is the password.
The SELECT statement returns two rows: one for the on-disk (DBCONFIG_TYPE=0) value and another for the in-memory (DBCONFIG_TYPE=1) value. Make sure that both of the values are changed to LOGRETAIN or DISK.
These steps show how to create a DB2 UDB user and grant permissions.
Context
DB2 LogReader Adapter requires a DB2 UDB login that has permission to access data and create new objects
in the primary database. The DB2 UDB login must have SYSADM or DBADM authority to access the primary
database transaction log.
Procedure
1. Create a new operating system user named ra_user using commands appropriate for your operating system. For example, to create a user named ra_user on a Linux operating system, use: %>useradd -gusers -Gmgmt -s/bin/shell -psybase -d/home/ra_user -m ra_user, where <psybase> is the password corresponding to the ra_user user name.
2. Start the DB2 UDB command-line processor:
%>bash
%>source /db2home/db2inst1/sqllib/db2profile
%>db2
3. Connect to the primary DB2 UDB database: db2=>connect to pdb user db2_admin_user using
db2_admin_password, where <db2_admin_user> and <db2_admin_password> are the administrative
user ID and password for the primary database.
4. Grant all necessary authorities to ra_user:
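As a sketch, granting DBADM authority, which satisfies the requirement described in the context above, looks like this at the DB2 prompt:
db2 => grant dbadm on database to user ra_user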
Library Notes
DB2 UDB JDBC driver Include the DB2 JDBC driver library in the Data Provisioning Agent CLASSPATH environment variable. Use the corresponding version of the JDBC driver listed in the IBM documentation.
For information about required JDBC libraries, see the SAP HANA smart data integration Product Availability Matrix (PAM). This JAR file (db2jcc4.jar) must be copied to the following directory:
<DPAgent_root>/lib
Log Reader native interface The DB2 Log Reader Adapter calls a C-based native interface to access the DB2 Log Reader API to read its log record. Include the native interface library in the DPAgent PATH (for Windows) or LD_LIBRARY_PATH (for Linux) environment variable, or in the JVM -Djava.library.path variable if you start up the Data Provisioning Agent from Eclipse.
Platform Notes
Note
The native interface libraries are packaged into the Data Provisioning Agent installer.
These libraries are installed into the DB2 database during replication initialization for specific procedure calls. Include them in the Data Provisioning Agent CLASSPATH environment variable.
Related Information
Run SQL scripts to clean objects manually from the DB2 source database.
Clean-up scripts are used to drop database-level objects. Usually you do not need to execute a clean-up script
after an adapter is dropped, because the adapter drops database-level objects automatically. However, in some
cases, if any errors occur before or while automatically dropping these objects, the objects may not be
dropped. At that point, you may want to execute the clean-up script to drop the objects.
You can use the Data Provisioning Agent command-line configuration tool to validate the configuration of the
IBM DB2 log reader environment before creating remote sources that use the IBM DB2 Log Reader adapter.
Prerequisites
Before validating the log reader environment, be sure that you have downloaded and installed the correct JDBC
libraries. For information about the proper JDBC library for your source, see the SAP HANA smart data
integration Product Availability Matrix (PAM).
Also, before starting these steps, manually create the <DPAgent_root>/lib folder if it does not exist, and
place your JDBC library files in it.
Procedure
Next Steps
After you have validated the configuration of the IBM DB2 log reader environment, you can create remote
sources with the IBM DB2 Log Reader adapter. You can manually create remote sources or generate a creation
script with the command-line configuration tool.
Related Information
Use the Data Provisioning Agent command-line configuration tool to validate parameters and generate a
usable script to create a remote source for log reader adapters.
Prerequisites
Before generating a remote source creation script for your source, be sure that you have downloaded and
installed the correct JDBC libraries. For information about the proper JDBC library for your source, see the SAP
HANA smart data integration Product Availability Matrix (PAM).
Before performing these steps, manually create the <DPAgent_root>/lib folder if it does not exist, and
place your JDBC library files in it.
Specify the name of the agent to use and the name of the remote source to create, as well as any
connection and configuration information specific to your remote source.
For more information about each configuration parameter, refer to the remote source configuration section
for your source type.
Results
The configuration tool validates the configuration details for your remote source and generates a script that
can be used to create the remote source. You can view the validation results in the Data Provisioning Agent log.
By default, the configuration tool generates the remote source creation script in the user temporary directory.
For example, on Windows: C:\Users\<username>\AppData\Local\Temp\remoteSource-
<remote_source_name>.txt.
Related Information
Note
Log Reader adapter preferences are no longer set in the Data Provisioning Agent Configuration Tool with
the exception of Number of wrapped log files, Enable verbose trace, and Maximum log file size. They are now
in the remote source configuration options in SAP HANA. If you have upgraded from a previous version, the
settings you find in the Agent Configuration Tool are the previous settings and are displayed for your
reference.
Maximum scan queue size: The maximum number of log records permitted in the log reader log scan queue
during replication. Default: 1000
Ignore log record processing errors: Determines whether to ignore log record processing errors. Default:
false
Timeout in seconds to retry connecting: The number of seconds the agent waits between retry attempts to
connect to the primary database. Default: 10
Number of wrapped log files: The maximum size, in 1-K blocks, of the LogReader system log file before
wrapping. Default: 3
Maximum log file size: Limits the size of the message log to conserve disk space. Default: 1000
Turn on asynchronous logging mode: Specifies whether Log Reader should turn on asynchronous logging
mode (TRUE, FALSE). Default: TRUE
Maximum size of work queue for asynchronous logging: The maximum size of the work queue for the
asynchronous logging file handler to collect the log records. The range is 1 to 2147483647. Default: 1000
Discard policy for asynchronous logging file handler: Specifies the discard policy for the asynchronous
logging file handler when the work queue is saturated. Default: BLOCKING
When you set up a remote source with a remote source name longer than 30 characters, the generated
log reader folder name under <DPAgent_root>/LogReader/ is converted to AGENT<xxxx>.
The log file is located at <DPAgent_root>/log/Framework.trc and reads: The instance name
<original_name> exceeds 30 characters and it is converted to <converted_name>.
Load and Replicate LOB columns: When this parameter is set to False, the LOB columns are filtered out
during the initial load and real-time replication.
Note
This option is not available for an ECC adapter.
Whitelist Table in Remote Database: Enter the name of the table that contains the whitelist in the remote
database.
...GET_REMOTE_SOURCE_TABLE_DEFINITIONS is called.
Note
The value of this parameter can be changed when the remote source is suspended.
key1=value1[;key2=value2]...
For example, securityMechanism=9;encryptionAlgorithm=2
Note
The IBM DB2 log reader adapter does not support the following LDAP scenarios:
Schema Alias Replacements > Schema Alias: Schema name to be replaced with the schema given in
Schema Alias Replacement. If given, accessing tables under it is considered to be accessing tables under
the schema given in Schema Alias Replacement.
Note
The value of this parameter can be changed when the remote source is suspended.
Use Agent Stored Credential: Set to True to use credentials that are stored in the Data Provisioning Agent
secure storage.
Note
When you use credentials stored in the agent's secure storage, you must still specify the user name in
Credentials > User Name. Additionally, the Credential Mode must not be none or empty.
CDC Properties > Log Reader:
Maximum operation queue size: The maximum number of operations permitted in the log reader operation
queue during replication. The value range is 25 to 2147483647.
Ignore log record processing errors: Specifies whether to ignore log record processing errors. If set to True,
the replication does not stop if log record processing errors occur. The default value is False.
Timeout in seconds to retry connecting: The number of seconds the agent waits between retry attempts to
connect to the primary database. The value range is 0 to 3600.
LogReader read buffer size: Allows you to adjust the size of the DB2 log read. If the size is too small, you may
encounter an error: sqlcode -2650 reason 8.
Related Information
Make sure that the DB2 port number is within a valid range.
If the DB2 port number is set to 0 or larger than 65535, DB2 converts it to a port number that is less than
65535. The following translation rules apply:
● If the port number is 0 or 65536, DB2 sets a random port number each time you restart DB2.
● If the port number is larger than 65536, the real port number that DB2 sets is the configured port number
minus 65536. For example, 70000 - 65536 = 4464; in this case, 4464 is the real port number that DB2 sets.
To verify the port that DB2 is actually using:
● On Windows, open Task Manager. Find the PID of the DB2_XX service, then open a cmd prompt and type
netstat -aon | findstr <PID>.
● On Linux, use the ps -aux | grep db2sysc command.
Using a schema alias can help you manage multiple schema, remote sources, and tables more easily.
The Schema Alias and Schema Alias Replacement options, available in the remote source configuration
parameters for some Data Provisioning adapters, allow you to switch easily between schema, remote sources,
and tables. The Schema Alias is the name of the schema in the original system. The Schema Alias Replacement
is the name of the schema in the current system that replaces the Schema Alias name.
A common use case is to create a remote source pointing to a development database (for example, DB_dev),
and then create virtual tables under that remote source. Afterward, you may switch to the production database
(for example, DB_prod) without needing to create new virtual tables; the same tables exist in both DB_dev and
DB_prod, but under different schema and databases.
During the development phase, you may create a virtual table for a source table OWNER1.MYTABLE in DB_dev,
for example. Note that OWNER1.MYTABLE is the unique name of the source table, and it is a property of the
virtual table. With it, the adapter knows which table in the source database it is expected to access. However,
when you switch to the production database (DB_prod), there is no OWNER1.MYTABLE, only
OWNER2.MYTABLE. The unique name information of the virtual table cannot be changed once created.
You can resolve this problem using the Schema Alias options. In this case, we want to tell the adapter to replace
OWNER1 with OWNER2. For example, when we access OWNER1.MYTABLE, the adapter should access
OWNER2.MYTABLE. So here, OWNER1 is Schema Alias from the perspective of DB_prod, while OWNER2 is
Schema Alias Replacement.
Related Information
You can review processing information in the Log Reader log files.
Note
By default, the adapter instance name is the same as the remote source name when the remote source is
created from the SAP HANA Web-based Development Workbench.
Set up secure SSL communication between DB2 and the Data Provisioning Agent.
Context
If you want to use SSL communication between your DB2 source and the Data Provisioning Agent, you must
prepare and import certificates and configure the source database.
Procedure
1. Prepare the DB2 database server for SSL connections.
a. Use the GSKit tool to create the key database and the server certificate.
On Windows:
cd C:\SSL
"C:\Program Files\ibm\gsk8\bin\gsk8capicmd_64.exe" -keydb -create -db
"key.kdb" -pw "ibm123456" -stash
"C:\Program Files\ibm\gsk8\bin\gsk8capicmd_64.exe" -cert -create -db
"key.kdb" -pw "ibm123456" -label "SSLLabel" -dn
"CN=XXXX.XX.XX.XXXX,O=IBM,OU=IDL,L=Bangalore,ST=KA,C=INDIA"
On Linux:
cd /home/db2inst1/SSL
/home/db2inst1/sqllib/gskit/bin/gsk8capicmd_64 -keydb -create -db
"key.kdb" -pw "ibm123456" -stash
/home/db2inst1/sqllib/gskit/bin/gsk8capicmd_64 -cert -create -db "key.kdb"
-pw "ibm123456" -label "SSLLabel" -dn
"CN=XXXX.XX.XX.XXXX,O=IBM,OU=IDL,L=Bangalore,ST=KA,C=INDIA"
/home/db2inst1/sqllib/gskit/bin/gsk8capicmd_64 -cert -extract -db
"key.kdb" -pw "ibm123456" -label "SSLLabel" -target "key.arm" -format
ascii -fips
b. Connect to the DB2 database using the instance user, and use the command-line interface to update
SSL-relevant configuration parameters.
Specify the server SSL key location, label, and port, and set the communication protocol to include
SSL.
For example, to use a key stored in H:\cert\SSL with the label “SSLLabel” and port 56110:
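The update commands are not shown above; the following sketch uses the standard DB2 server SSL
configuration parameters (SSL_SVR_KEYDB, SSL_SVR_STASH, SSL_SVR_LABEL, SSL_SVCENAME) and the
DB2COMM registry variable; verify the values against your environment:
db2 update dbm cfg using SSL_SVR_KEYDB H:\cert\SSL\key.kdb
db2 update dbm cfg using SSL_SVR_STASH H:\cert\SSL\key.sth
db2 update dbm cfg using SSL_SVR_LABEL SSLLabel
db2 update dbm cfg using SSL_SVCENAME 56110
db2set DB2COMM=SSL,TCPIP
db2stop
db2start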
In the DB2 diagnostic log db2diag.log, check for the following message:
Additionally, verify that the /etc/services file contains the specified SSL port.
2. Prepare the DB2 client for SSL connections.
a. Copy the SSL key from the DB2 database server to the DB2 client location.
Create an SSL directory on the DB2 client, and copy key.arm from the DB2 server into this directory.
b. Add the DB2 server SSL key to the DB2 client.
From the SSL directory on the DB2 client, use the gskit tool to import the server SSL key.
For example:
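(The original command is not shown; this sketch uses the GSKit tool to add the extracted server
certificate, key.arm, to the client key database. Paths and passwords are examples.)
gsk8capicmd_64 -cert -add -db "key.kdb" -pw "ibm123456" -label "SSLLabel" -file
key.arm -format ascii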
Specify the SSL keydb and stash, and restart the instance.
For example:
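(The original commands are not shown; this sketch uses the standard DB2 client parameters
SSL_CLNT_KEYDB and SSL_CLNT_STASH. Paths are examples.)
db2 update dbm cfg using SSL_CLNT_KEYDB /home/db2inst1/SSL/key.kdb
db2 update dbm cfg using SSL_CLNT_STASH /home/db2inst1/SSL/key.sth
db2stop
db2start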
Catalog the SSL node and database on the DB2 client. For example:
db2 catalog tcpip node SSLNODE remote <hostname> server 56110 security ssl
db2 catalog database mydb as sslmydb at node SSLNODE
Use the Java keytool to import the SSL key. By default, keytool is located in $JAVA_HOME/bin.
For example:
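(The original command is not shown; this sketch imports key.arm into the agent truststore. The
truststore path, alias, and password are examples to verify against your installation.)
keytool -importcert -alias db2ssl -file key.arm -keystore <DPAgent_root>/ssl/cacerts
-storepass changeit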
c. Configure the SSL password with the Data Provisioning Agent configuration tool.
Specify the same password used when importing the SSL key, and then restart the Data Provisioning
Agent.
Next Steps
When you create a DB2 remote source, ensure that the following parameters are set appropriately:
Related Information
There are times when you may want to limit which tables in a source database can be accessed. For data
provisioning log reader adapters, as well as SAP HANA and SAP ECC adapters, an efficient way to limit access
is to create a whitelist.
Restricting access to only those tables that are to be replicated is done by creating a whitelist of source
database objects in a separate table.
Note
The whitelist impacts only the virtual table created and the replications created after the whitelist was
created.
Note
● The whitelist table, which can have any name, must have two columns named
REMOTE_SOURCE_NAME and WHITELIST.
● The whitelist items are separated by a comma.
● You can use an asterisk (*) to represent any character or empty string. However, the asterisk must be
placed at the end of a whitelist item. Otherwise, it is treated as a normal character.
● You can add multiple rows of whitelisted tables for a single remote source.
To add a whitelist for the remote source called “localmssqldb”, insert a row into the whitelist table:
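For example (a sketch; the whitelist table name MY_WHITELIST is hypothetical, and the two required
column names come from the note above):
INSERT INTO MY_WHITELIST (REMOTE_SOURCE_NAME, WHITELIST) VALUES ('localmssqldb',
'object.A,object.B*');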
A WHITELIST value of object.A,object.B*, and so on, means that the table (or procedure) object.A and any
table (or procedure) whose name starts with object.B are whitelisted for the remote source "localmssqldb".
For critical scenarios determined by your business requirements, you can use the agent configuration tool to
disable write-back functionality on supported adapters and run the adapters in read-only mode.
Disabling write-back functionality may help to prevent unexpected modifications to source tables accessed by
an adapter.
Caution
Setting an adapter to read-only mode affects all remote sources that use the adapter.
Procedure
○ In graphical mode, choose Config > Preferences, and then select Adapter Framework.
○ In command-line interactive mode, choose Set Agent Preferences in the Agent Preferences menu.
3. For the Read-only Adapters property, specify the list of adapters for which you want to disable write-back
functionality, separating each adapter with a comma.
For example, to disable write-back on the Microsoft SQL Server Log Reader, Oracle Log Reader, and SAP
HANA adapters:
MssqlLogReaderAdapter,OracleLogReaderAdapter,HanaAdapter
Results
The specified adapters are switched to read-only mode and write-back functionality is disabled.
Tip
On adapters that are operating in read-only mode, attempted SQL statements other than SELECT result in
adapter exceptions that are logged in the Data Provisioning Agent framework trace file.
For example:
Related Information
The DB2 Mainframe adapter supports IBM DB2 for z/OS and IBM DB2 for iSeries, formerly known as AS/400.
The DB2 Mainframe adapter is a data provisioning adapter that provides DB2 client access to the database
deployed on IBM DB2 for z/OS and iSeries systems. DB2 database resources are exposed as remote objects of
the remote source. These remote objects can be added as data provisioning virtual tables. The collection of
DB2 data entries is represented as rows of the virtual table.
Adapter Functionality
Related Information
The method for setting DB2 UDB environment variables depends on the operating system.
Note
The DB2 UDB environment variables should be set up regardless of whether your Data Provisioning Agent
is installed on the same server as the DB2 database or not. Prior to setting up the variables, be sure that
you have installed the IBM Data Server Runtime Client.
For Linux, the DB2 UDB installation provides two scripts for setting up the DB2 UDB environment variables:
db2cshrc for C shell and db2profile for Bourne or Korn shell. These scripts set the library path environment
variable based on the bit size of the installed server or client.
For Linux platforms, the 32-bit and 64-bit versions of the driver and API libraries are located in
<$HOME>/sqllib/lib32 and <$HOME>/sqllib/lib64, respectively, where <$HOME> is the home directory of
the DB2 UDB instance owner.
Note
If the Data Provisioning Agent is installed on Linux, the library path environment variable must point to the
64-bit libraries. For Windows, the library path environment variable must point to the 32-bit libraries.
Note
We recommend that you add a line to the <DPAgent_root>/bin/dpagent_env.sh file to set the
db2profile environment variables. This ensures that when you use dpagent_service.sh to start and
stop the DPAgent service, the DB2 UDB environment variables are sourced automatically. For example, you
could add a line such as . /home/db2inst1/sqllib/db2profile.
DB2 mainframe database users must have certain permissions granted to them.
The IBM DB2 Mainframe adapter requires a user with read privileges to the SYSIBM.SYSCOLUMNS system
table.
Context
If you receive the following error from the adapter, follow these steps to bind the DB2 SYSHL package:
Procedure
Prepare the IBM DB2 JDBC JAR files to use one of the DB2 Mainframe adapters.
To use one of the DB2 Mainframe adapters, you are required to copy the following IBM DB2 JDBC JAR files to
the /lib folder of the Data Provisioning Agent installation directory (<DPAgent_root>\lib).
● db2jcc4.jar (Required)
You can download this file here: http://www-01.ibm.com/support/docview.wss?uid=swg21363866 .
Download the JDBC JAR file according to your DB2 database version.
● db2jcc_license_cisuz.jar (Required)
You can find information about this file here: http://www-01.ibm.com/support/docview.wss?
uid=swg21191319
You can obtain these JAR files in one of the following ways:
● The JAR files are available in the installation directory after you install the IBM DB2 client. For
example, on a Windows system, the JAR files are located in C:\Program Files\IBM\SQLLIB\java.
● Download them from the IBM Support and Download Center.
Note
If the source z/OS DB2 system contains a non-English CCSID table space, you are required to update the
JVM to an internationalized version. At a minimum, the charsets.jar file within the current JVM should
be an internationalized version.
Options for connecting to the remote mainframe data server. Also included is sample code for creating a
remote source using the SQL console.
Database Host: Host name or IP address on which the remote DB2 data server is running.
Use Agent Stored Credential: Set to True to use credentials that are stored in the DP Agent secure storage.
z/OS DB2 Additional Info > Bind Packages: When this option is set to Yes, the DB2 mainframe adapter
automatically checks and binds all of the required missing packages.
Note
If any necessary packages are missing, an error occurs.
Source System is AS/400 (IBM i): Set this parameter to Yes if your source system is AS/400.
Credential Properties > Credentials Mode: Remote sources support two types of credential modes to
access a remote source: technical user and secondary credentials.
User Name: The DB2 user with access to the tables that are added as virtual tables in SAP HANA.
Example
Sample Code
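(The sample code is not reproduced here. The following sketch shows the general CREATE REMOTE SOURCE
syntax; the adapter name DB2MainframeAdapter and the property entry names host, port, and dbname are
assumptions that you should verify against your installation.)
CREATE REMOTE SOURCE "MyDB2MainframeSource" ADAPTER "DB2MainframeAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="host">myhost.example.com</PropertyEntry>
  <PropertyEntry name="port">446</PropertyEntry>
  <PropertyEntry name="dbname">MYDB</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>myuser</user>
  <password>mypassword</password>
</CredentialEntry>';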
Related Information
Store Source Database Credentials in Data Provisioning Agent [Batch] [page 82]
Store Source Database Credentials in Data Provisioning Agent [Command Line] [page 68]
Store Source Database Credentials in Data Provisioning Agent [Graphical Mode] [page 93]
Configure the Adapter Truststore and Keystore Using the Data Provisioning Agent Configuration Tool [page
513]
This adapter lets SAP HANA users access Microsoft Excel files.
Adapter Functionality
Related Information
Option Description
Access Token: A password. An access token protects the Microsoft Excel files from being accessed from a
different remote source. Use this same password when creating a remote source.
Options for connecting to the remote Microsoft Excel data. Also included is sample code for creating a remote
source using the SQL console.
Note
If you want to use a Data Provisioning Agent installed on Linux to connect to the SharePoint site, enable
Basic Authentication on the SharePoint server.
File Source > File Source Type:
● File System: Specifies that the Microsoft Excel source is located in a file system.
● SharePoint: Specifies that the Microsoft Excel source is located on a SharePoint server.
● SharePoint on Office365: Specifies that the Microsoft Excel source is on an Office365 server.
File System > Folder: The directory of Microsoft Excel files. Use a relative path. For a shared network
directory, use the form \\<host_name>\<directory>.
Note
Password-protected Microsoft Excel files are not supported.
HANA Port: The port used to connect to the SAP HANA server.
SharePoint > Server URL: Enter the URL for the server where the SharePoint source is located.
Local Folder Path: The path to the folder that you want to access on the local file system where the Data
Provisioning Agent is deployed.
Table > First Row as Header: Determines whether the first row of the sheet is considered the header. If set
to True, each column's content is used as the column name of the virtual table in SAP HANA. Values: True,
False.
Start Row of Data: Determines which row of the sheet is the first data row the Microsoft Excel adapter loads
into the virtual table.
End Row of Data: Determines which row of the sheet is the last data row the adapter loads into the virtual
table.
Show Hidden Column and Rows: Determines whether to process the columns and rows that are hidden in
the sheet. Values: True, False.
Column Filter: The list of columns that are processed. Any column that does not exist in the list is ignored.
Note
If the Column Filter option is empty, all columns are processed. If the Column Filter option is not empty,
only the listed columns are processed.
SharePoint on Office365 > Authentication Mode: Choose the type of credentials needed to access
SharePoint on Office365.
Local Folder Path: The path to the folder that you want to access on the local file system where the Data
Provisioning Agent is deployed.
User Token > User Token for Excel Folder Access: The same password as the adapter Access Token
preference. If this parameter is left blank or is different from the Access Token, the remote source is not
allowed to read the Microsoft Excel files. See Microsoft Excel Adapter Preferences [page 289].
SharePoint Credential > SharePoint logon (Domain\User Name): The domain and user name for the
SharePoint account.
SharePoint on Office365 Credential > Client Credential: Enter the client secret you created on the
Microsoft Azure Portal.
SharePoint on Office365 Username Credential > Username: Enter the username for the Microsoft account.
Password: The password for the Microsoft account.
The following code samples illustrate how to create a remote source using the SQL console.
Example
Sample Code
Example
Sample Code
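(The sample code is not reproduced here. The following sketch shows the general syntax for a file system
source; the adapter name ExcelAdapter, the property entry names filesourcetype and folder, and the
credential entry name are assumptions to verify against your installation.)
CREATE REMOTE SOURCE "MyExcelSource" ADAPTER "ExcelAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="filesourcetype">File System</PropertyEntry>
  <PropertyEntry name="folder">excel_data</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="AccessTokenEntry">
  <password>myaccesstoken</password>
</CredentialEntry>';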
Related Information
Accessing Microsoft Excel Data Files in a Shared Network Directory [page 296]
Context
You can access the SharePoint server using HTTPS or SSL. You first must download the SharePoint certificate
(CER) and configure your system.
Procedure
Note
Keytool is in the jre/bin folder. Add it to the $PATH environment variable. For example, C:\Program
Files\Java\jre7\bin\keytool.exe.
Information about how to use a shared network directory for data files with the Microsoft Excel adapter.
You can access Microsoft Excel data files in a shared directory; however, you must follow a few rules:
Windows
When using Windows, make sure that you first manually access the network folder with a user name and
password before trying to connect by creating a remote source.
Linux
To access a Linux network folder, mount the folder under the Data Provisioning Agent root installation
directory.
Observe the instructions for the Folder parameter when creating your Microsoft Excel remote source.
Related Information
You must register an application on Microsoft Azure Portal as a precondition for using the Microsoft Graph API,
which is what allows you to access SharePoint on Office365 using the File or Microsoft Excel adapters.
Procedure
Next Steps
After registration, note the application (client) ID and Directory (tenant) ID, which you use during remote
source configuration.
Configure your Azure application to set up your credentials and Microsoft Graph API.
Context
Further configuration of your Azure application is necessary to create credentials for when you create a remote
source. You must also grant permissions for using the Microsoft Graph API.
Procedure
If you want to authenticate using the username and password mode, set the Default client type to Yes.
If you want to authenticate using the Client Credential mode, set the Default client type to No.
a. If you chose to authenticate using Client Credential mode, create a credential by clicking Certificates &
secrets. If you chose to authenticate using the username and password mode, skip to step 2.
b. Click New client secret, give it a name and an expiration time, and click Add.
c. Note the secret password for use during remote source creation.
2. Add permissions by clicking API permissions on the application page, and then clicking Add a permission.
3. Click Microsoft Graph.
4. If you use the Client Credential mode, click Application Permission. If you use the Username and password
mode, click Delegated permission.
5. Add the following permissions:
○ Directory.Read.All
○ Files.Read.All
○ Group.Read.All
○ Sites.Read.All
○ User.Read.All
6. Grant consent for these permissions.
If you have an administrator role, click Grant consent.
If you do not have an administrator role, ask your administrator to grant permission for you.
If permissions are not granted, you will not be able to access SharePoint Office365.
Every time a permission is changed, you should redo the grant operations.
Next Steps
You can now create your remote source, using the information created while setting up your Microsoft Azure
Portal application.
Related Information
You can access Microsoft Outlook data stored in a PST file using the Outlook adapter.
Related Information
You can adjust Microsoft Outlook adapter settings in the Data Provisioning Agent Configuration Tool by running
<DPAgent_root>/configTool/dpagentconfigtool.exe.
Access Token: A password to access Microsoft Outlook PST files. This exact value must be used when
setting up a Microsoft Outlook remote source.
Configuration settings for accessing a Microsoft Outlook source. Also included is sample code for creating a
remote source using the SQL console.
Configure the following options in smart data access to set up your connection to a Microsoft Outlook PST
file.
Option Description
PST File Location Specifies the path and file name to the PST file from which the adapter will read. The
user of the Data Provisioning Agent must have permission to access this PST file.
Ignore Extra Folder Select True to not show any irrelevant folders when browsing metadata.
Credentials Mode Remote sources support two types of credential modes to access a remote source:
technical user and secondary credentials.
● Technical User: A valid user and password in the remote database. This valid user
is used by anyone using the remote source.
● Secondary User: A unique access credential on the remote source assigned to a
specific user.
PST File Access Token Specifies the access token. This value must be the same as the Access Token value in
the Outlook adapter preferences set in the Data Provisioning agent configuration tool.
Example
Sample Code
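(The sample code is not reproduced here. The following sketch shows the general syntax; the adapter name
OutlookAdapter, the property entry name pstfilelocation, and the credential entry name are assumptions to
verify against your installation.)
CREATE REMOTE SOURCE "MyOutlookSource" ADAPTER "OutlookAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="pstfilelocation">C:\data\mail.pst</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="AccessTokenEntry">
  <password>myaccesstoken</password>
</CredentialEntry>';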
Use the Microsoft SQL Server Log Reader adapter to batch load to SAP HANA or to replicate changed data in
real time from a database to SAP HANA.
Note
Before registering the adapter with the SAP HANA system, ensure you have downloaded and installed the
correct JDBC libraries. See the SAP HANA smart data integration Product Availability Matrix (PAM) for
details. Place the files in the <DPAgent_root>/lib folder.
Note
The user configured during the installation of the Data Provisioning Agent must have read access to the
transaction log, which is the .ldf file.
Note
If you are using Microsoft SQL Server on Amazon RDS or Microsoft Azure, observe the following limitations:
● To avoid remote access issues in Amazon RDS, ensure the database instance setting Publicly
Accessible has been enabled.
● Real-time replication is not supported.
Adapter Functionality
Realtime change data capture (CDC): Yes. Both log reader and trigger-based realtime replication are
supported.
Note
● Log Reader adapters do not support the truncate table operation.
● The Microsoft SQL Server Log Reader adapter does not support WRITETEXT and UPDATETEXT.
● For CDC replication, data imported into Microsoft SQL Server using the bcp tool is not supported,
because the tool bypasses writing to the Microsoft SQL Server transaction logs.
For log reader CDC, the supported schema changes are:
● ADD COLUMN
● DROP COLUMN
● RENAME TABLE
● RENAME COLUMN
● ALTER COLUMN DATATYPE
Note
For trigger-based CDC, the supported schema changes
are:
● ADD COLUMN
● DROP COLUMN
● ALTER COLUMN DATA TYPE
Related Information
Information about setting up your source system and adapter for real-time replication.
Note
Real-time replication is not supported with Microsoft SQL Server on Amazon RDS or Microsoft Azure.
Related Information
Remote Database Setup for Microsoft SQL Server Real-time Replication [page 304]
Installing Data Provisioning Agent and Microsoft SQL Server on Different Servers [page 314]
Connecting Multiple Remote Sources to the Same SQL Server Source Database [page 316]
Remote Database Cleanup for Microsoft SQL Server Real-time Replication [page 316]
Validate the Microsoft SQL Server Log Reader Environment [page 317]
Generate a Log Reader Remote Source Creation Script [page 318]
Microsoft SQL Server Trigger-Based Replication [page 319]
The remote database must be set up properly for Log Reader adapters to function correctly when using real-
time replication.
Note
If the non-persisted computed columns are not part of primary keys, values for these columns are
propagated to the target table during the initial load. However, because changes in values for these
columns are not recorded in transaction logs, the changed values for these columns cannot be
replicated during real-time (CDC) replication, and they are set to NULL for the changed rows.
Note
The Microsoft SQL Server Log Reader relies on database logs to perform data movement. Logs must be
available until the data is successfully read and replicated to the target SAP HANA database. To ensure that
the data is replicated to SAP HANA, configure Microsoft SQL Server in Full Recovery Mode.
Restriction
Microsoft SQL Server with Transparent Data Encryption (TDE) is not supported.
Real-time replication for Microsoft SQL Server relies on the ability of the adapter to read transactions from
physical log files. Because these files are encrypted by Microsoft SQL Server, the adapter cannot read the
transactions, so consequently real-time capability is not available.
Procedure
1. Create a Microsoft SQL Server user, for example DP_USER, for the remote source.
2. Grant the required privileges as follows:
use master
go
create login DP_USER with password = 'MyPW'
go
use <primary database>
go
create user DP_USER for login DP_USER
go
EXEC sp_addsrvrolemember 'DP_USER', 'sysadmin'
go
Procedure
1. Log on to Microsoft SQL Server using the newly created user, and change the Microsoft SQL Server
remote admin connections configuration option to enable DAC to allow remote connections:
sp_configure 'remote admin connections', 1
go
reconfigure
go
Related Information
Install and set up the sybfilter driver so that the Log Reader can read the primary transaction log files.
Prerequisites
On Windows Server 2008 R2, Windows Security Update KB3033929 must already be installed on the host
system.
Procedure
1. In Windows Explorer, navigate to the sybfilter driver installation directory. This directory is located at
<DPAgent_root>\LogReader\sybfilter\system\<platform>, where <DPAgent_root> is the root
directory of the Data Provisioning Agent installation and <platform> is either winx86 or winx64.
○ winx86 is for 32-bit Windows Server 2008, Windows Server 2008 R2, and Windows 7
○ winx64 is for 64-bit Windows Server 2008, Windows Server 2008 R2, and Windows 7
2. Right-click sybfilter.inf, and click Install to install the sybfilter driver.
3. Under any directory, create a configuration file to save all log file paths for primary databases. The
configuration file must have a .cfg suffix. For example, under <DPAgent_root>\LogReader
\sybfilter\system\<platform>, create a file named LogPath.cfg.
4. Add a system environment variable named RACFGFilePath, and set its value to the path and file name
of the configuration file.
○ User manager: Use the add command in the management console. The syntax for this command is
add serverName dbName logFilePath. For example, to add the log file named pdb1_log.ldf for
the database pdb1 on the data server PVGD50857069A\MSSQLSERVER, use the following: add
PVGD50857069A\MSSQLSERVER pdb1 C:\Mssql2012\MSSQL11.MSSQLSERVER\MSSQL\DATA
\pdb1_log.ldf
○ Add the following into the LogPath.cfg file directly:
[PVGD50857069A\MSSQLSERVER, pdb1]
log_file_path=C:\Mssql2012\MSSQL11.MSSQLSERVER\MSSQL\DATA\pdb1_log.ldf
Related Information
Microsoft SQL Server Log Reader Remote Source Configuration [page 325]
Follow these steps to enable TCP/IP for the Microsoft SQL Server adapter.
Procedure
1. Go to the SQL Server Configuration Tool, and choose SQL Server Configuration Manager > SQL Server
Network Configuration > Protocols for <SQLInstanceName>, where <SQLInstanceName> is your
Microsoft SQL Server instance.
2. Right-click TCP/IP, and choose Enable.
Context
Before you can begin using the SQL Server Log Reader adapter, you must configure the primary data server.
Note
If the Database Data Capture Mode parameter is set to MSSQL CDC Mode when you create a remote
source, this step is not necessary.
Note
If you are using Microsoft SQL Server installed on Windows 2012 and later, you must restart Microsoft SQL
Server in single-user mode from the command line opened with the Run as Administrator parameter
enabled.
Procedure
a. In Control Panel > Administrative Tools > Services, find the service named Microsoft SQL Server
(<SERVER>), where <SERVER> is the name of your Microsoft SQL Server data server. For example,
Microsoft SQL Server (TEAMSTER).
b. Right-click your Microsoft SQL Server instance and choose Properties.
c. In the General tab, click Stop.
Tip
You can also stop Microsoft SQL Server in single-user mode from the command line using
Administrator privileges.
For example, if you started the instance using a command prompt, enter Ctrl+C in the window and
enter Y to stop it.
Tip
Restart Microsoft SQL Server in single-user mode from the command line using Administrator
privileges.
The connection is made. If the DAC is already in use, the connection fails with an error indicating it
cannot connect.
4. To initialize the server, execute script <DPAgent_root>\LogReader\scripts
\mssql_server_init.sql. Script <DPAgent_root>\LogReader\scripts
\mssql_server_deinit.sql can be used to de-initialize the server if necessary.
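Because the server is running in single-user mode, the script is typically executed over the dedicated
administrator connection (DAC); a sketch using sqlcmd (the -A flag opens a DAC; the server name and
credentials are examples):
sqlcmd -S <server_name> -U sa -P <password> -A -i "<DPAgent_root>\LogReader\scripts\mssql_server_init.sql"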
5. Open the Microsoft SQL Server service properties window by clicking Start > Control Panel >
Administrative Tools > Services, then right-click your Microsoft SQL Server instance, and choose
Properties. Then, in the Log On tab, restore the user account to its previous value.
6. Stop and restart the Microsoft SQL Server service back to normal mode.
Related Information
Microsoft SQL Server Log Reader Remote Source Configuration [page 325]
Learn how to set up your environment to run your Microsoft SQL Server database on one computer and run the
Data Provisioning Agent on a separate Linux computer.
Context
The following is an example of how to set up an environment with a SQL Server database named “mypdb”
installed on Windows computer A and a Data Provisioning Agent installed on Linux computer B.
For example, in step 2, the Microsoft SQL log files are stored in two directories:
○ C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQL2K12SP1\MSSQL\DATA
○ C:\MSSQL_LOG\mypdb
Share the two directories on computer A, and then mount the two directories on computer B.
a. Create a directory on computer B where you want to mount the MSSQL log file folder. For example,
create directories on computer B named /MSSQL_share_folder1 and /MSSQL_share_folder2.
b. On computer B, execute the following command:
c. Assuming that the username/password for computer A is ABC/123456, the logon domain is localhost,
and the share names of the MSSQL log file folders on computer A are //10.172.162.145/DATA and
//10.172.162.145/mypdb, you can execute the following command to mount the MSSQL log file folders
on computer B:
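(The mount commands themselves are not shown; a sketch using CIFS mounts with the values above
might look like this.)
mount -t cifs //10.172.162.145/DATA /MSSQL_share_folder1 -o username=ABC,password=123456,domain=localhost
mount -t cifs //10.172.162.145/mypdb /MSSQL_share_folder2 -o username=ABC,password=123456,domain=localhost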
Related Information
More steps are necessary when configuring a Microsoft SQL Server Log Reader Adapter to connect to a
Microsoft SQL Server host that uses Always On Availability Groups.
Prerequisites
Before you can configure the log reader adapter with Always On support, Microsoft SQL Server must be
configured with an Availability Group Listener. For more information, see the Microsoft SQL Server
documentation.
Context
The following is an example that shows you how to set up an environment with a Microsoft SQL Server
database named “mypdb”. The database is also configured with an Always On Availability Group with a
secondary database, in addition to the primary database.
1. Install and configure Sybfilter on each host in the Microsoft SQL Server Always On Availability Group,
including the primary and secondary databases.
You can copy Sybfilter from the agent installation directory on the Data Provisioning Agent host. For
example, C:\usr\sap\dataprovagent\LogReader\sybfilter.
2. Run a SQL query to get the exact location of the log files.
3. Share the folders that contain "mypdb" database log files on each host computer in the Always On
Availability Group.
Note
Grant READ permissions for the shared folders to the DPAGENT user on each host in the Always On
Availability Group. If you haven't done so already, make sure that your log files are readable by following
the instructions in Make Log Files Readable [page 306].
4. Map the original log file paths to the UNC paths of the shared folders in the log path mapping
configuration file:
○ Because the mapping is based on a parent directory and not on the log file itself, only one entry is
sufficient for both mypdb_log.ldf and mypdb_log_2.ldf.
○ Put the original path on the left side of the equal sign and the UNC pathname of each share folder on
the right side, separated by semicolons.
For example, suppose that you are connecting to the database “mypdb”, with the primary database on
computer A and one secondary database on computer B.
[myrs:mypdb]
C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQL2K12SP1\MSSQL\DATA=\
\<host_name_A>\mssql_data;\\<host_name_B>\mssql_data
5. When you create the remote source, set the value of the SQL Server Always On parameter to True and
specify the Availability Group Listener Host and Availability Group Listener Port.
Tip
We recommend that you also set Database Data Capture Mode to MSSQL CDC Mode. If you do not use
the Microsoft CDC data capture mode, you need to execute server-level initialization scripts on each
host in the Always On Availability Group.
Context
The following procedure is an example that shows how to set up an environment with a Microsoft SQL
Server database named "mypdb" that is configured as part of a failover cluster.
Procedure
1. Install and configure Sybfilter on each host in the failover cluster, including the primary and secondary
databases.
You can copy Sybfilter from the agent installation directory on the Data Provisioning Agent host. For
example, C:\usr\sap\dataprovagent\LogReader\sybfilter.
2. Run a SQL query to get the exact location of the log files.
3. Share the folders that contain mypdb database log files on the active node of the failover cluster.
Note
Grant READ permissions for the shared folder to the DPAGENT user. If you haven't done so already,
make sure that your log files are readable by following the instructions in Make Log Files Readable
[page 306].
[myrs:mypdb]
C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQL2K12SP1\MSSQL\DATA=\
\<host_name>\mssql_data
Results
By default, during a failover event, the agent tries to reopen an inaccessible log file three times at intervals of 5
seconds. If the agent is unable to open the log file after these attempts, the task fails.
You can modify the number of attempts and the retry interval by changing the lr_reopen_device_times
and lr_reopen_device_interval parameters in <DPAgent_root>\LogReader\config\mssql.cfg.
More steps are necessary when installing the Data Provisioning Agent and Microsoft SQL Server on different
computers.
Context
The following example shows you how to set up an environment with a Microsoft SQL Server database named
"mypdb" on computer A and a Data Provisioning Agent installed on another computer B.
Procedure
1. Install and configure Sybfilter on computer A, where Microsoft SQL Server is installed.
Sybfilter can be copied from the Data Provisioning Agent installation directory on computer B. For
example, C:\usr\sap\dataprovagent\LogReader\sybfilter.
2. Run a SQL query to get the exact location of the log files.
Grant READ permissions for the shared folders to the DPAGENT user on computer B. If you haven't
done so already, make sure that your log files are readable by following the instructions in Make Log
Files Readable [page 306].
Map the original log file paths to the UNC paths of the shared folders in the log path mapping
configuration file:
○ Because the mapping is based on a parent directory and not on the log file itself, only one entry is
sufficient for both mypdb_log.ldf and mypdb_log_2.ldf.
○ Put the original path on the left side of the equal symbol and the UNC path name of the share folder on
the right side.
[DB1]
D:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA=\
\computer1\mssql_data
[DB2]
D:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA=\
\computer2\mssql_data
If DB1 and DB2 have the same name, add a remote source name to differentiate:
[RS1:DB1]
D:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA=\
\computer1\mssql_data
[RS2:DB1]
D:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA=\
\computer2\mssql_data
6. When you create the remote source, set the value of the Use Remote Database parameter to True.
Related Information
Microsoft SQL Server Log Reader Remote Source Configuration [page 325]
You can connect multiple remote sources to the same remote database, provided that you meet the following
conditions:
● Each remote source uses a unique schema, specified in the LogReader Objects Schema in Remote
Database remote source configuration parameter.
● Each remote source uses a unique name, in case the remote sources are created on different SAP HANA
instances.
Run SQL scripts to disable replication and clean up objects manually from the Microsoft SQL Server source
database.
Cleanup scripts disable replication of a source database and drop database-level objects. Usually, you do not
need to execute a cleanup script after an adapter is dropped, because replication is disabled and the adapter
automatically drops database-level objects. However, in some cases, if any errors occur during or before
automatically disabling replication and dropping these objects, the replication may still be enabled and objects
may not be dropped. At that point, you may need to execute the cleanup script to drop the objects.
You can find the Microsoft SQL Server cleanup script files at <DPAgent_root>\LogReader\scripts.
The script to be executed depends on which Database Data Capture Mode you select in your remote source
configuration. If you select MSSQL CDC Mode, execute mssql_logreader_mscdc_cleanup.sql. If you
select Native Mode, execute mssql_logreader_native_cleanup.sql.
You can use the Data Provisioning Agent command-line configuration tool to validate the configuration of the
Microsoft SQL Server log reader environment, before creating remote sources that use the SQL Server Log
Reader adapter.
Prerequisites
Before validating the log reader environment, be sure that you have downloaded and installed the correct JDBC
libraries. For information about the proper JDBC library for your source, see the SAP HANA Smart Data
Integration Product Availability Matrix (PAM).
Before performing these steps, manually create the <DPAgent_root>/lib folder if it does not exist, and place
your JDBC library files in it.
Procedure
○ To test whether the Microsoft SQL Server environment is ready for replication, choose Mssql
Replication Precheck.
○ To create a new Microsoft SQL Server user with the permissions required for replication, choose Create
A Mssql User With All Permissions Granted.
For each task, provide any additional parameters required by the task. For example, to test whether the
Microsoft SQL Server environment is ready for replication, you must specify the name of the Microsoft SQL
Server user and whether CDC is being used.
Next Steps
After you have validated the configuration of the Microsoft SQL Server log reader environment, you can create
remote sources with the Microsoft SQL Server Log Reader adapter. You can manually create remote sources or
generate a creation script with the command-line configuration tool.
Related Information
Use the Data Provisioning Agent command-line configuration tool to validate parameters and generate a
usable script to create a remote source for log reader adapters.
Prerequisites
Before generating a remote source creation script for your source, be sure that you have downloaded and
installed the correct JDBC libraries. For information about the proper JDBC library for your source, see the SAP
HANA smart data integration Product Availability Matrix (PAM).
Before performing these steps, manually create the <DPAgent_root>/lib folder if it does not exist, and place
your JDBC library files in it.
Procedure
Specify the name of the agent to use and the name of the remote source to create, as well as any
connection and configuration information specific to your remote source.
Results
The configuration tool validates the configuration details for your remote source and generates a script that
can be used to create the remote source. You can view the validation results in the Data Provisioning Agent log.
By default, the configuration tool generates the remote source creation script in the user temporary directory.
For example, on Windows: C:\Users\<username>\AppData\Local\Temp\remoteSource-
<remote_source_name>.txt.
Related Information
Unlike log reader functionality, which reads a remote database log to get changed data, trigger-based
replication is based on triggers capturing changed data; the adapter then continuously queries the source
database to get the changed data. When a table is subscribed for replication, the adapter creates three
triggers (INSERT, UPDATE, and DELETE) on the table for capturing data. The supported operations are:
● Add a column
● Delete a column
● Alter a column datatype
● Rename a column
The adapter also creates a shadow table for the subscribed table. Except for a few extra columns for
supporting replication, the shadow table has the same columns as its replicated table. Triggers record changed
data in shadow tables. For each adapter instance (remote source), the adapter creates a Trigger Queue table to
mimic a queue. Each row in shadow tables has a corresponding element (or placeholder) in the queue. The
adapter continuously scans the queue elements and corresponding shadow table rows to get changed data
and replicate them to the target SAP HANA database.
Creating a DML trigger requires ALTER permission on the table or view on which the trigger is being created.
Creating a DDL trigger with database scopes (ON DATABASE) requires ALTER ANY DATABASE DDL TRIGGER
permission in the current database.
GRANT SELECT, INSERT, UPDATE, DELETE, ALTER, EXECUTE, VIEW DEFINITION ON SCHEMA::[<schema of
the target subscribed table>] TO [<pds_user>]
Grant the VIEW SERVER STATE permission to view the data processing state, such as the transaction ID.
This permission must be granted on the master database.
Note
“GRANT VIEW SERVER STATE TO SDI_USER” isn’t permitted for SQL Server on Azure. On SQL Database
Premium Tiers, the VIEW DATABASE STATE permission is required in the database. On SQL Database Standard
and Basic Tiers, the Server admin or an Azure Active Directory admin account is required.
Related Information
sys.dm_tran_current_transaction (Transact-SQL)
When you create a remote source to use trigger-based replication, a few system objects—such as tables,
triggers, or procedures—are created on the SQL Server source.
All of the system objects are created under <pds_user>. The "SDI_" prefix is an example; the prefix is set in
the remote source configuration parameters.
Table that maps each table name to its corresponding shadow table (table): "<pds_user>"."SDI_SRC_TO_ST"
Related Information
Microsoft SQL Server Log Reader Remote Source Configuration [page 325]
Configuration parameters for the Microsoft SQL Server Log Reader adapter.
Note
Log Reader adapter preferences are no longer set in the Data Provisioning Agent Configuration Tool with
the exception of Number of wrapped log files, Enable verbose trace, and Maximum log file size. They are now
in the remote source configuration options in SAP HANA. If you have upgraded from a previous version, the
settings you find in the Agent Configuration Tool are the previous settings and are displayed for your
reference.
You can adjust Microsoft SQL Server Log Reader adapter preferences in the Data Provisioning Agent
Configuration Tool (<DPAgent_root>/configTool/dpagentconfigtool.exe).
Maximum scan queue size: The maximum number of log records permitted in the log reader log scan queue
during replication. Default: 1000
Maximum wait interval between log scans: The maximum wait interval between Log Reader transaction log
scans. Default: 2
Note
● The value of the parameter is the maximum number of seconds that can elapse before the Log Reader
component scans the transaction log for a transaction to be replicated, after a previous scan yields no
such transaction.
● For reduced replication latency in an infrequently updated database, we recommend lower number
settings for the parameter.
● If the primary database is continuously updated, the value of the parameter is not significant to
performance.
Seconds to add to each log scan wait interval: The number of seconds to add to each wait interval before
scanning the transaction log, after a previous scan yields no transaction to be replicated. Default: 0
Note
● The value of the parameter is the number of seconds added to each wait interval before the Log Reader
component scans the log for a transaction to be replicated, after a previous scan yields no such
transaction.
● The number of seconds specified by the parameter is added to each wait interval, until the wait interval
reaches the value specified by the Maximum wait interval between log scans parameter.
● For optimal performance, the value of the parameter should be balanced with the average number of
operations in the primary database over a period of time. In general, better performance results from
reading more operations from the transaction log during each Log Reader scan.
● With a primary database that is less frequently updated, increasing the value of the parameter may
improve overall performance.
● If the database is continuously updated, the value of the parameter may not be significant to
performance.
Timeout in seconds to retry connecting: The number of seconds the agent waits between retry attempts to
connect to the primary database. Default: 10
Number of wrapped log files: The maximum size, in 1-K blocks, of the agent system log file before wrapping.
Default: 3
Maximum log file size: Limits the size of the message log to conserve disk space. Default: 1000
Turn on asynchronous logging mode: Specifies whether Log Reader should turn on asynchronous logging
mode (True, False). Default: True
Maximum size of work queue for asynchronous logging: The maximum size of the work queue for the
asynchronous logging file handler to collect the log records. The range is 1 to 2147483647. Default: 1000
Discard policy for asynchronous logging file handler: Specifies the discard policy for the asynchronous
logging file handler when the work queue is saturated. Default: BLOCKING
Note
When you set up a remote source with a remote source name longer than 30 characters, the generated
log reader folder name under <DPAgent_root>/LogReader/ is converted to AGENT<xxxx>.
The log file is located at <DPAgent_root>/log/framework.trc and reads: The instance name
<original_name> exceeds 30 characters and it is converted to <converted_name>.
Data Type Conversion > Always Map Character Types to Unicode: Determines whether a CHAR/VARCHAR/
TEXT column in the source database is mapped to a Unicode column type in SAP HANA when the source
database character set is non-ASCII. The default value is False.
Map SQL Server Data Type Time to Timestamp: The value is False by default, which means TIME is mapped
to TIME. However, leaving this parameter set to False can lead to a loss of precision. When the value is set to
True, TIME maps to TIMESTAMP.
Generic > Load and Replicate LOB columns: When this parameter is set to False, the LOB columns are
filtered out during the initial load and real-time replication. The value of this parameter can be changed when
the remote source is suspended.
Note
This option isn't available for an ECC adapter.
Database > Data Server (serverName\instanceName): The Microsoft SQL Data Server name. If your
Microsoft SQL Server instance is enabled with dynamic ports, you must provide the instance name of the
Microsoft SQL Server instance instead of the port number. Provide the data server name and the instance
name in the format <serverName>\<instanceName>.
Note
The value of this parameter can be changed when the remote source is suspended.
Whitelist Table in Remote Database: Enter the name of the table that contains the whitelist in the remote
database.
Availability Group Listener Host: The host name of the listener for the Always On availability group.
Availability Group Listener Port: The port used by the listener for the Always On availability group.
Schema Alias Replacements > Schema Alias: Schema name to be replaced with the schema given in
Schema Alias Replacement. If given, accessing tables under this alias is considered to be accessing tables
under the schema given in Schema Alias Replacement.
Note
The value of this parameter can be changed when the remote source is suspended.
Host Name in Certificate: Enter the host name that is in the SSL certificate.
Note
The value of this parameter can be
changed when the remote source
is suspended.
Note
The value of this parameter may
not be changed when the remote
source is suspended.
Use Agent Stored Credential: Set to True to use credentials that are stored in the Data Provisioning Agent
secure storage.
Note
If set to True, the Credentials Mode parameter must be set to Technical user.
Note
You don’t need to enable Microsoft
SQL Server CDC.
Tip
Both of these database data
capture modes require the
SYSADMIN role to execute.
Note
When replicating using multiple
remote sources and a single
source database, each schema
name and each remote source
name must be unique.
Note
If the S-ID of this user is changed
using ALTER USER DDL, the
Maintenance User Filter doesn’t
work.
Ignore log record processing errors: Specifies whether the Log Reader ignores the errors that occur during
log record processing. If set to True, the replication doesn't stop if log record processing errors occur. The
default value is False.
Note
The value of this parameter can be changed when the remote source is suspended.
Maximum wait interval between log scans: The default value is 2 seconds. The value range is 1–3600.
Note
The value of this parameter can be changed when the remote source is suspended.
Seconds to add to each log scan wait interval: The default value is 0. The value range is 0–3600.
Note
The value of this parameter can be changed when the remote source is suspended.
Timeout in seconds to retry connecting: The number of seconds the agent waits between retry attempts to
connect to the primary database.
Note
The value of this parameter can be changed when the remote source is suspended.
Trigger-based: Batch queue size: The internal batch queue size. The batch queue size determines the
maximum number of batches of change data that are queued in memory. The default value is 64.
Note
The value of this parameter can be changed when the remote source is suspended.
Trigger-based: Triggers record PK only: Set to True to have the triggers record only the primary keys of
delta data during CDC processing. This action may improve DML performance in the source database.
Note
When this parameter is set to True, only DML (INSERT and DELETE) is supported; DDL is not supported.
Note
The value of this parameter can be changed when the remote source is suspended.
The following examples illustrate how to create a remote source using the SQL console.
Basic Example
Sample Code
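(The sample code is not reproduced here. The following sketch shows the general syntax using the
MssqlLogReaderAdapter named earlier in this section; the property entry names pds_server_name,
pds_port_number, and pds_database_name are assumptions to verify against your installation.)
CREATE REMOTE SOURCE "MyMssqlSource" ADAPTER "MssqlLogReaderAdapter" AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="pds_server_name">myserver</PropertyEntry>
  <PropertyEntry name="pds_port_number">1433</PropertyEntry>
  <PropertyEntry name="pds_database_name">mypdb</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>DP_USER</user>
  <password>MyPW</password>
</CredentialEntry>';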
Using a schema alias can help you manage multiple schema, remote sources, and tables more easily.
The Schema Alias and Schema Alias Replacement options, available in the remote source configuration
parameters for some Data Provisioning adapters, allow you to switch easily between schema, remote sources,
and tables. The Schema Alias is the name of the schema in the original system. The Schema Alias Replacement
is the name of the schema in the current system that replaces the Schema Alias name.
A common use case is to create a remote source pointing to a development database (for example, DB_dev),
and then create virtual tables under that remote source. Afterward, you may switch to the production database
(for example, DB_prod) without needing to create new virtual tables; the same tables exist in both DB_dev and
DB_prod, but under different schema and databases.
During the development phase, you may create a virtual table for a source table OWNER1.MYTABLE in DB_dev,
for example. Note that OWNER1.MYTABLE is the unique name of the source table, and it is a property of the
virtual table. With it, the adapter knows which table in the source database it is expected to access. However,
when you switch to the production database (DB_prod), there is no OWNER1.MYTABLE, only
OWNER2.MYTABLE. The unique name information of the virtual table cannot be changed once created.
You can resolve this problem using the Schema Alias options. In this case, we want to tell the adapter to replace
OWNER1 with OWNER2. For example, when we access OWNER1.MYTABLE, the adapter should access
OWNER2.MYTABLE. So here, OWNER1 is Schema Alias from the perspective of DB_prod, while OWNER2 is
Schema Alias Replacement.
Related Information
You can review processing information in the Log Reader log files.
Note
By default, the adapter instance name is the same as the remote source name when the remote source is
created from the SAP HANA Web-based Development Workbench.
Set up secure SSL communication between Microsoft SQL Server and the Data Provisioning Agent.
Context
If you want to use SSL communication between your Microsoft SQL Server source and the Data Provisioning
Agent, you must create and import certificates and configure the source database.
Procedure
1. On the Microsoft SQL Server host, create a certificate authority (CA) certificate using the sha1 algorithm. You can also create the certificate with the makecert.exe utility included in the Windows SDK.
2. Import the CA certificate and grant the Microsoft SQL Server service account access to it.
a. In the Microsoft Management Console, choose File > Add/Remove Snap-in and add the Certificates snap-in to the MMC.
In the wizard, specify the account and local computer.
b. In Certificates (Local Computer), right-click on the CA certificate that you created and choose All Tasks > Manage Private Keys.
Note
If the CA certificate does not appear, first choose All Tasks > Import to import the certificate.
c. In Group or user names, click Add and specify the name of the account used by the Microsoft SQL Server service.
d. Copy the certificate and paste it under Certificates (Local Computer) > Trusted Root Certification Authorities > Certificates.
3. Specify the certificate for the Microsoft SQL Server instance.
Use the SQL Server Configuration Manager (SSCM) to specify the certificate.
a. Expand SQL Server Network Configuration, and choose Protocols for <SQL Server instance> > Properties.
b. In the Certificate tab, select the certificate that you imported and click OK.
Tip
If the certificate does not appear, verify that the hostname in the certificate is correct, and that the
Microsoft SQL Server service user has been added to the certificate.
4. Restart Microsoft SQL Server to ensure that the new certificate is picked up.
In the SQL Server error log, a message should appear indicating that the certificate was successfully loaded for encryption.
5. Export the certificate from the Microsoft SQL Server host.
a. In the Certificates snap-in for the Microsoft Management Console, navigate to Personal > Certificates.
b. Right-click on the certificate and choose All Tasks > Export.
Export the certificate in the DER encoded binary X.509 (.CER) format. You do not need to export the private key with the certificate.
6. Prepare the Data Provisioning Agent for SSL connections.
a. Copy the certificate from the Microsoft SQL Server host to the Data Provisioning Agent installation.
b. Import the certificate into the Data Provisioning Agent keystore.
Use the Java keytool to import the certificate. By default, keytool is located in <DPAgent_root>/
sapjvm/bin.
c. Configure the SSL password with the Data Provisioning Agent configuration tool.
Specify the same password used when importing the certificate, and then restart the Data Provisioning
Agent.
Next Steps
When you create a Microsoft SQL Server remote source, ensure that the SSL-related parameters, such as Use SSL and Host Name in Certificate, are set appropriately.
Related Information
Microsoft SQL Server Log Reader Remote Source Configuration [page 325]
Configure the Adapter Truststore and Keystore Using the Data Provisioning Agent Configuration Tool [page
513]
To configure Windows authentication, copy an installed DLL file to your Windows system.
To use integrated authentication, the sqljdbc_auth.dll file must be copied to a directory in the Windows
system path on the computer where the JAVA installation is located.
Note
When you run a 32-bit Java Virtual Machine (JVM), use the sqljdbc_auth.dll in the x86 folder, even if the operating system is x64. When you run a 64-bit JVM on an x64 processor, use the sqljdbc_auth.dll file in the x64 folder.
Microsoft SQL Server Log Reader Remote Source Configuration [page 325]
SAP HANA Smart Data Integration: Potential issues when connecting to SQL Server using the MssqlLogReader
Adapter and Windows Authentication
There are times when you may want to limit access to all of the tables in a source database. For data
provisioning log reader adapters, as well as SAP HANA and SAP ECC adapters, an efficient way to limit access
is to create a whitelist.
Restricting access to only those tables that are to be replicated is done by creating a whitelist of source
database objects in a separate table.
Note
The whitelist impacts only the virtual table created and the replications created after the whitelist was
created.
Note
● The whitelist table, which can have any name, must have two columns named
REMOTE_SOURCE_NAME and WHITELIST.
● The whitelist items are separated by a comma.
● You can use an asterisk (*) to represent any character or empty string. However, the asterisk must be
placed at the end of a whitelist item. Otherwise, it is treated as a normal character.
● You can add multiple rows of whitelisted tables for a single remote source.
To add a whitelist for the remote source called “localmssqldb”, insert a row into the whitelist table:
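The following is a hypothetical sketch; the whitelist table can have any name, so MY_WHITELIST is an assumption:

Sample Code

CREATE TABLE MY_WHITELIST (
  REMOTE_SOURCE_NAME VARCHAR(256),  -- name of the remote source the row applies to
  WHITELIST          VARCHAR(4000)  -- comma-separated list of allowed objects
);

INSERT INTO MY_WHITELIST VALUES ('localmssqldb', 'object.A,object.B*');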
object.A, object.B*, and so on, means that the table (or procedure) object.A and the table (or procedure)
starting with object.B are filtered for the remote source “localmssqldb”.
To add a whitelist for the remote source called “localhadp”, insert a row into the whitelist table:
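Continuing the hypothetical sketch above (the whitelist items shown are illustrative):

Sample Code

INSERT INTO MY_WHITELIST VALUES ('localhadp', 'object.C,object.D*');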
For critical scenarios determined by your business requirements, you can use the agent configuration tool to
disable write-back functionality on supported adapters and run the adapters in read-only mode.
Disabling write-back functionality may help to prevent unexpected modifications to source tables accessed by
an adapter.
Caution
Setting an adapter to read-only mode affects all remote sources that use the adapter.
Procedure
1. Start the Data Provisioning Agent configuration tool.
2. Navigate to the agent preferences:
○ In graphical mode, choose Config > Preferences, and then select Adapter Framework.
○ In command-line interactive mode, choose Set Agent Preferences in the Agent Preferences menu.
3. For the Read-only Adapters property, specify the list of adapters for which you want to disable write-back
functionality, separating each adapter with a comma.
For example, to disable write-back on the Microsoft SQL Server Log Reader, Oracle Log Reader, and SAP
HANA adapters:
MssqlLogReaderAdapter,OracleLogReaderAdapter,HanaAdapter
Results
The specified adapters are switched to read-only mode and write-back functionality is disabled.
On adapters that are operating in read-only mode, attempted SQL statements other than SELECT result in adapter exceptions that are logged in the Data Provisioning Agent framework trace file.
Related Information
Install a JDBC driver and configure your remote source to enable Windows authentication.
Procedure
1. Download and install the Windows JDBC driver to a location of your choice.
You will find the sqljdbc_auth.dll file in the /x64 directory at the installation location. For example:
<JDBC installation directory>\sqljdbc_<version>\<language>\auth\x64\sqljdbc_auth.dll
2. Set the system environment Path variable to the location of the sqljdbc_auth.dll file.
3. Start the Data Provisioning Agent.
Related Information
Microsoft SQL Server Log Reader Remote Source Configuration [page 325]
6.17 OData
Set up access to the OData service provider and its data and metadata.
Open Data Protocol (OData) is a standardized protocol for exposing and accessing information from various
sources, based on core protocols including HTTP, AtomPub (Atom Publishing Protocol), XML, and JSON (JavaScript Object Notation). OData provides a standard API on service data and metadata presentation, and data
operations.
The SAP OData adapter provides OData client access to the OData service provider and its data and metadata.
The OData service provider is created as a remote source. OData resources are exposed as metadata tables of
the remote source. These metadata tables can be added as virtual tables. An SAP HANA SQL query can then
access the OData data. Collections of OData data entries are represented as rows of the virtual table.
The data of the main navigation entities can be accessed via SQL with the following restrictions:
● Without a join, selected projection columns appear in the OData system query “$select”.
● With a join, columns of the joined table, which is the associated OData entity, can occur in the projection.
Selected projection columns appear in the OData system query “$select”. All joined tables appear in the
OData system query “$expand”.
● Due to a restriction of the OData system queries “$select” and “$orderby”, no expressions can occur
in the Projection and the Order By clause.
● The Where clause supports logical, arithmetic, and ISNULL operators, string functions, and date functions.
The expression is translated into the OData system query “$filter”.
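For illustration, the following hypothetical query against a virtual table created from an OData remote source shows how these restrictions map to OData system queries; the virtual table and column names are assumptions:

Sample Code

-- The projection maps to $select and the predicate maps to $filter, roughly:
--   .../Products?$select=Name,Price&$filter=Price gt 100
SELECT "Name", "Price"
FROM "ODATA_PRODUCTS"
WHERE "Price" > 100;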
Related Information
You must configure the SAP HANA server and provide the appropriate settings when you create a remote
source to connect to the service provider.
Unlike other adapters, the OData adapter is not installed with the Data Provisioning Agent.
Related Information
Follow these steps to set up the SAP HANA server before using the OData adapter.
Procedure
Related Information
Configuration settings for accessing an OData source. Also included is sample code for creating a remote
source using the SQL console.
Option Description
Trust Store The trust store that contains the OData client public
certificate, either a file in SECUDIR or a database trust
store.
Is File Trust Store Select True if the trust store is a file in SECUDIR, or False if
the trust store resides in the SAP HANA database. The
default value is True.
Require CSRF Header Enter True if OData Service requires CSRF Header. The
default value is True.
CSRF Header Name Enter the name used for CSRF Header. The default value is
X-CSRF-Token.
CSRF Header Fetch Value Enter the value used for CSRF Header Fetch. The default
value is Fetch.
Support Date Functions Select False if the OData service site does not support the
date functions hour, minute, month, or year. The default
value is True.
Show Navigation Properties Select True or False for the OData Service to return
Navigation Properties. The default value is False.
Note
Due to an HTTP request maximum length restriction,
avoid using the select * query if the total lengths for
all Property and Navigation Property names exceed the
restriction.
Follow Redirects Select True for the OData adapter to follow redirected URLs.
The default value is False.
Verify Server Certificate Select True to have the OData adapter verify the server
certificate. The default value is False.
Convert to Local Timezone Select True or False for the OData adapter to convert the
timestamp value to a local timezone. The default value is
True.
Example
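The following is a minimal, hypothetical sketch of creating an OData remote source in the SQL console; the location (DPSERVER) and the CONFIGURATION property keys are assumptions to verify against your installed adapter version.

Sample Code

CREATE REMOTE SOURCE "MyODataSource" ADAPTER "ODataAdapter" AT LOCATION DPSERVER
  CONFIGURATION 'URL=https://services.example.com/odata/MyService;trust_store=sapsrv.pse'
  WITH CREDENTIAL TYPE 'PASSWORD'
  USING 'user=ODATA_USER;password=MyPassword';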
Context
If you want to consume HTTPS-based OData Services, as opposed to non-secured HTTP-based OData
Services, you must import the SSL certificate from the OData Services provider into the trust store on your
SAP HANA platform.
1. Export the SSL certificate from the OData Services provider. You can use your browser to navigate to the OData URL and export the certificate from the browser.
2. Import the SSL certificate using the SAP HANA XS Admin Trust Manager.
○ For file trust stores, import the certificate to the Trust Manager SAML trust store. This action imports
the certificate to the sapsrv.pse file in SECUDIR.
○ For database trust stores, create a database trust store and import the certificate to that new trust
store.
See the SAP HANA Administration Guide for more information about the Trust Manager and trust
relationships.
3. Create the remote source:
○ For file trust stores, set Trust Store to the sapsrv.pse file.
○ For database trust stores, set Trust Store to the new database trust store and Is File Trust Store to
False.
For file trust stores, add the certificate to the file trust store using the following steps:
a. In the browser for the source site, click the padlock icon, choose Details > Copy to File, select DER encoded binary, click Next through the wizard, click Finish, and select a location to download the certificate.
b. Copy the certificate to the HANA machine, and use sapgenpse to import the server certificate, logged in as the <SID>adm user (for example, a71adm).
Related Information
The Oracle Log Reader adapter provides real-time changed-data capture capability to replicate changed data
from a database to SAP HANA. You can also use it for batch loading.
The Log Reader service provider is created as a remote source, and it requires the support of artifacts like
virtual tables and remote subscriptions for each source table to perform replication.
With this adapter, you can add multiple remote sources using the same Data Provisioning Agent.
Before registering the adapter with the SAP HANA system, ensure you have downloaded and installed the
correct JDBC libraries. See the SAP HANA smart data integration Product Availability Matrix (PAM) for
details. Place the files in the <DPAgent_root>/lib folder.
Adapter Functionality
Functionality: Supported?
Real-time: Yes
ORDER BY: Yes
GROUP BY: Yes
Related Information
Oracle database users must have certain permissions granted to them in order to carry out real-time change
data capture or batch or initial load transactions.
You can run a script to assign all necessary permissions, or choose which ones suit users best. The following
scripts can be found in the oracle_init_example.sql file, which is located in the Scripts folder of the
Data Provisioning Agent installation at <DPAgent_root>\LogReader\scripts.
Note
Be aware that the oracle_init_example.sql file is a template script. You may need to alter the
following:
● Change LR_USER to the configured database user name in the remote source options, if it is not
LR_USER.
● Change <_replace_with_password> to the password of the database user.
For on-premise deployment, grant select access to the log reader user by issuing the following statement. In
these examples – taken from the oracle_init_example.sql file – the user is named LR_USER. Change this
user name to whatever you need.
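For example, as a hedged illustration of the pattern in that file (the complete set of GRANT statements is in the script):

GRANT SELECT ON SYS.INDCOMPART$ TO LR_USER;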
However, for cloud deployment such as when accessing a database instance on Amazon Web Services (AWS)
as a Relational Database Service (RDS), some privileges require granting using the AWS rdsadmin package.
The following example shows how to GRANT SELECT on SYS.INDCOMPART$ to the LR_USER using the
rdsadmin package. The privileges that require this method are noted in the oracle_init_example.sql file.
begin
rdsadmin.rdsadmin_util.grant_sys_object(
p_obj_name => 'INDCOMPART$',
p_grantee => 'LR_USER',
p_privilege => 'SELECT',
p_grant_option => true);
end;
/
Note
● As of 2.0 SP 03 Patch 53, the list of Oracle users can be accessed through the RA_ALL_USERS_VIEW
instead of directly accessing SYS.USERS$.
GRANT CREATE TABLE TO LR_USER; Required to create tables in the primary CDC database that SDI needs.
Note
Required only if your RA_ALL_USERS_VIEW is based on SYS.USER$.
Granting the EXECUTE CATALOG ROLE or the SELECT CATALOG ROLE is not necessary. Instead, you can grant the specific permissions that are part of those roles; the complete list is in the oracle_init_example.sql file.
To set permissions for a multitenant database, run the scripts in the following files, which are also located in <DPAgent_root>\LogReader\Scripts. The same rules concerning <LR_USER> apply to these scripts.
Note
The <C##LR_USER> in the container database must be the “common user” and the <LR_USER> user in the
pluggable database is the “local user”.
● oracle_multitenant_init_example_for_container_database.sql
● oracle_multitenant_init_example_for_pluggable_database.sql
If you want to process pool or cluster tables, you must uncomment the corresponding lines in the oracle_init_example.sql file.
For cluster and pool tables, multiple ECC tables are stored in a single physical Oracle table. Grant permission to access the physical Oracle table associated with the ECC table that you want to access. For example, to access the ECC BSEG table, you must grant access to the RFBLG Oracle physical table. The oracle_init_example.sql file contains many examples of these tables.
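As a hedged illustration (the ECC schema name SAPSR3 is an assumption; use the schema that owns the physical table in your system):

GRANT SELECT ON SAPSR3.RFBLG TO LR_USER;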
Related Information
Information about setting up your source system and adapter for real-time replication.
Note
We have found that the Oracle LogMiner maximum throughput is approximately 1 TB/day. With anything more than that, LogMiner begins to lag behind, so if the replication volume is greater than 1 TB/day, there will be a delay in replication, no matter the amount of overage.
Related Information
The remote Oracle database must be set up properly for this adapter to function correctly when using real-time
replication.
Multitenant databases are supported for Oracle 12c. Be aware that some of the setup procedures are different
for multitenant. For example, in remote sources, the configuration, permissions, and cleanup procedures are
different.
LOB replication
When attempting LOB replication, be sure to set the db_securefile parameter to “PERMITTED” in the
Oracle system. Depending on the Oracle version, the parameter may be set to a different value by default.
Note
During real-time (CDC) replication for Oracle to SAP HANA, if the table in Oracle has a BLOB column as the first
column, the replication fails due to NullPointerException, which LogMiner returns as an invalid SQL statement.
This exception occurs on Oracle 11.2.0.3 and 11.2.0.4.
Related Information
Decide which logging level is best for you and set it up.
Set your logging level in the Adapter Preferences window of the Data Provisioning Agent configuration tool for
the Oracle Log Reader adapter. Then, run the necessary scripts found in the oracle_init_example.sql file,
located in <DPAgent_root>\LogReader\Scripts.
Note
Be aware that the oracle_init_example.sql file is a template script. Execute only the DDL statements
for your logging level by commenting or uncommenting lines as necessary.
Table-level Logging
We recommend table-level logging, which turns on supplemental logging for subscribed tables and some
required system tables.
To configure table-level logging, execute the following DDL statements from oracle_init_example.sql on
your Oracle client and set the Oracle supplemental logging level Adapter Preferences option to Table.
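A hedged illustration of the shape of these statements follows; the authoritative list, including the required system tables, is in oracle_init_example.sql:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- Repeated for each subscribed table and required system table:
ALTER TABLE LR_USER.MYTABLE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;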
When using Amazon Web Services in the cloud, you can configure only database-level supplemental
logging for Oracle.
Database-level Logging
Database-level logging turns on supplemental logging for all tables, including system tables.
To configure database-level logging, execute the following DDL statements from oracle_init_example.sql
on your Oracle client and set the Oracle supplemental logging level Adapter Preferences option to Database.
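Based on the AWS equivalent shown below, the database-level statements have this shape; confirm them against oracle_init_example.sql:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;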
When using Amazon Web Services in the cloud, instead of using the ALTER DATABASE ADD SUPPLEMENTAL
commands, enable database-level supplemental logging as shown in the following example:
begin
RDSADMIN.RDSADMIN_UTIL.ALTER_SUPPLEMENTAL_LOGGING(
P_ACTION => 'ADD');
END;
/
begin
RDSADMIN.RDSADMIN_UTIL.ALTER_SUPPLEMENTAL_LOGGING(
P_ACTION => 'ADD',
P_TYPE => 'PRIMARY KEY');
RDSADMIN.RDSADMIN_UTIL.ALTER_SUPPLEMENTAL_LOGGING(
P_ACTION => 'ADD',
P_TYPE => 'UNIQUE');
END;
/
Related Information
Run SQL scripts to clean objects manually from the source database.
Cleanup scripts are used to drop database-level objects. Usually, you do not need to execute a cleanup script
after an adapter is dropped, because the adapter drops these database-level objects automatically. However, in
some cases, if any errors occur during or before automatically dropping these objects, the objects may not be
dropped. At that point, you may need to execute the cleanup script to drop the objects.
You can use the Data Provisioning Agent command-line configuration tool to configure and validate the Oracle
log reader environment before creating remote sources that use the Oracle Log Reader adapter.
Prerequisites
Before validating the log reader environment, be sure that you have downloaded and installed the correct JDBC
libraries. For information about the proper JDBC library for your source, see the SAP HANA Smart Data
Integration Product Availability Matrix (PAM).
Place your files in <DPAgent_root>/lib. Note that you must manually create the /lib folder.
Procedure
○ To test whether the Oracle environment is ready for replication, choose Oracle Replication Precheck.
○ To retrieve a list of all open transactions, choose List Open Transactions.
○ To create an Oracle user with the permissions required for replication, choose Create An Oracle User
With All Permissions Granted.
For each task, provide any additional parameters required by the task. For example, to test whether the
Oracle environment is ready for replication, you must specify the supplemental logging level when
prompted.
Next Steps
After you have validated the configuration of the Oracle log reader environment, you can create remote sources
with the Oracle Log Reader adapter. You can manually create remote sources or generate a creation script with
the command-line configuration tool.
Related Information
Use the Data Provisioning Agent command-line configuration tool to validate parameters and generate a
usable script to create a remote source for log reader adapters.
Prerequisites
Before generating a remote source creation script for your source, be sure that you have downloaded and
installed the correct JDBC libraries. For information about the proper JDBC library for your source, see the SAP
HANA smart data integration Product Availability Matrix (PAM).
Before performing these steps, place your files in <DPAgent_root>/lib. Note that you must manually create
the /lib folder.
Specify the name of the agent to use and the name of the remote source to create, as well as any
connection and configuration information specific to your remote source.
For more information about each configuration parameter, refer to the remote source configuration section for your source type.
Results
The configuration tool validates the configuration details for your remote source and generates a script that
can be used to create the remote source. You can view the validation results in the Data Provisioning Agent log.
By default, the configuration tool generates the remote source creation script in the user temporary directory.
For example, on Windows: C:\Users\<username>\AppData\Local\Temp\remoteSource-
<remote_source_name>.txt.
Related Information
You can connect multiple remote sources to the same remote database when the prerequisites are met.
To connect multiple remote sources to the same remote database, the following conditions must be met:
● Each remote source uses a unique Oracle database user to connect to the source database.
● A different source table is marked for replication; the same table cannot be marked for replication by
different remote sources.
Unlike log reader functionality, which reads a remote database log to get changed data, trigger-based replication is based on triggers capturing changed data; the adapter then continuously queries the source database to get the changed data. When a table is subscribed for replication, the adapter creates three triggers (INSERT, UPDATE, and DELETE) on the table for capturing data. The supported DDL operations are:
● Add a column
● Delete a column
● Alter a column datatype
The adapter also creates a shadow table for the subscribed table. Except for a few extra columns for
supporting replication, the shadow table has the same columns as its replicated table. Triggers record changed
data in shadow tables. For each adapter instance (remote source), the adapter creates a Trigger Queue table to
mimic a queue. Each row in shadow tables has a corresponding element (or placeholder) in the queue. The
adapter continuously scans the queue elements and corresponding shadow table rows to get changed data
and replicate them to the target SAP HANA database.
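The following purely illustrative sketch shows the capture pattern just described; the adapter generates and manages its own objects, whose names and structure differ from this example:

CREATE TABLE MYTABLE_SHADOW (      -- shadow table for the replicated table MYTABLE
  ID   INTEGER,                    -- same columns as the replicated table...
  VAL  VARCHAR2(100),
  OP   CHAR(1),                    -- ...plus extra columns supporting replication
  SEQ  INTEGER                     -- ordering element mirrored in the trigger queue
);

CREATE SEQUENCE CDC_SEQ;

CREATE OR REPLACE TRIGGER MYTABLE_INS  -- one of three triggers (INSERT/UPDATE/DELETE)
AFTER INSERT ON MYTABLE
FOR EACH ROW
BEGIN
  INSERT INTO MYTABLE_SHADOW VALUES (:NEW.ID, :NEW.VAL, 'I', CDC_SEQ.NEXTVAL);
END;
/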
Architecture
DDL Propagation
Related Information
When you create a remote source that uses trigger-based replication, a few system objects, such as tables, triggers, or procedures, are created on the Oracle source.
Note
Log Reader adapter preferences - with the exception of Number of wrapped log files, Enable
verbose trace, and Maximum log file size - are no longer set in the Data Provisioning Agent
Configuration Tool. They are part of the remote source configuration options in SAP HANA. If you have
upgraded from a previous version, then the settings you find in the Agent Configuration Tool are your
previous settings, displayed for your reference.
You can adjust Oracle Log Reader adapter settings in the Data Provisioning Agent Configuration Tool.
(<DPAgent_root>/configTool/dpagentconfigtool.exe)
Distinguished Name (DN) in Certificate: The distinguished name (DN) of the primary data server certificate.
Oracle supplemental logging level: Specifies the level of supplemental logging. Default: table.
Maximum scan queue size: The maximum number of log records permitted in the log reader log scan queue during replication. Default: 1000.
Maximum session cache size: The maximum number of Oracle sessions to be cached in memory during replication. Default: 1000.
Queue size of parallel scan tasks: Specifies the number of tasks in the queue. Default: 0.
Number of log record rows fetched by the scanner at a time: Specifies the number of log record rows fetched by the scanner. Default: 1.
Ignore log record processing errors: Determines whether to ignore log record processing errors. Default: false.
Replicate LOB columns: Oracle logs all LOB data in the Oracle redo log, except for BFILE datatypes. This allows the agent to apply each LOB change. However, for BFILE data, the same technique is used. Default: true.
Ignore data of unsupported types stored in ANYDATA: Specifies whether you want to ignore data with unsupported types housed in the ANYDATA wrapper. Default: false.
Timeout in seconds to retry connecting: The number of seconds the agent waits between retry attempts to connect to the primary database. Default: 10.
Maximum log file size: Limits the size of the message log to conserve disk space. Default: 1000.
Maximum size of work queue for asynchronous logging: The maximum size of the work queue for the asynchronous logging file handler to collect the log records (1 to 2147483647). Default: 1000.
Discard policy for asynchronous logging file handler: Specifies the discard policy for the asynchronous logging file handler when the work queue is saturated. Default: BLOCKING.
Related Information
Note
When setting up a remote source and you use a remote source name longer than 30 characters, the
generated log reader folder name under <DPAgent_root>/LogReader/ is converted to AGENT<xxxx>.
The log file is located at <DPAgent_root>/log/framework.trc, and reads: “The instance name
<original_name> exceeds 30 characters and it is converted to <converted_name>.”
Note
Set this parameter to True only when the remote database uses a multibyte character set, such as UTF-8, GBK, or JA16SJIS.
Load and Replicate LOB columns: When this parameter is set to False, the LOB columns are filtered out during both initial load and real-time replication.
Note
This option isn’t available for an
ECC adapter.
Note
The tnsnames.ora file must be
local to the Data Provisioning
Agent machine or available to the
Data Provisioning Agent. Copy the
file from the Oracle location to the
Agent machine.
Whitelist Table in Remote Database: Enter the name of the table that contains the whitelist in the remote database.
Note
The Oracle log reader adapter
doesn’t support the following
LDAP scenarios:
● Oracle multi-tenant
architecture
● LDAP + SSL authentication
● LDAP + Kerberos
authentication
● LDAP failover mode
Schema Alias Replacements > Schema Alias: Schema name to be replaced with the schema provided in the Schema Alias Replacement parameter. If given, accessing tables under this alias is considered to be accessing tables under the schema given in Schema Alias Replacement.
Note
● This parameter is valid only if
Use SSL is set to True.
● If this parameter is set, the DN
field in the server certificate is
verified to match this
parameter. If it doesn’t match,
the connection to the primary
data server fails.
Note
The value of this parameter can be
changed when the remote source
is suspended.
Use Agent Stored Credential Set to True to use credentials that are
stored in the Data Provisioning Agent
secure storage.
Note
When you use credentials stored in
the agent secure storage, you must
still specify the user name in
Credentials. Additionally, the
Credential Mode must not be none
or empty.
JDBC Driver Configuration > Include Table/Columns Remarks:
● True: Returns a description of the table/column. If you have many tables, setting this parameter to True can impede performance.
● False (Default): Turns off the return of descriptions.
CDC Properties > Database Configuration > Oracle supplemental logging level: Specifies the level of supplemental logging.
● Table: Enables supplemental logging for subscribed tables and some required system tables.
● Database: Enables supplemental logging for all tables, including system tables.
CDC Properties > Parallel Scan > Enable parallel scanning: Specifies whether to enable parallel scanning.
Queue size of parallel scan tasks: Specifies the number of tasks in the queue. The value range is 0–2147483647.
Note
This parameter isn’t supported for
an Oracle RAC remote source.
Note
Use this option only after
consultation with SAP support.
Note
Don’t use the same name as the
database user name.
Ignore log record processing errors: Specifies whether the Log Reader should ignore the errors that occur during log record processing. If set to True, the replication doesn't stop if log record processing errors occur. The default value is False.
Number of log record rows fetched by the scanner at a time: Specifies the number of log record rows fetched by the scanner. The value range is 1–1000.
Use database link to query pluggable database: Indicates whether the LogReader uses a database link instead of the CONTAINERS clause to query the pluggable database. The default value is True.
Note
Ensure that the user has been granted the CREATE DATABASE LINK permission; see the GRANT CREATE DATABASE LINK TO C##LR_USER; statement in the oracle_multitenant_init_example_for_pluggable_database.sql file.
Caution
Before using this feature, SAP Support must analyze and identify the root cause of performance issues and determine whether it's appropriate to enable deferrable rescan mode. Use the two deferrable rescan options only after consultation with SAP support.
● True: If it encounters
UNSUPPORTED LogMiner
operations, it performs a deferred
rescan on the current transaction.
● False: Disables the deferred rescan
and uses the default transaction
processing logic. (Default value)
Caution
Before using this feature, SAP Support must analyze and identify the root cause of performance issues and determine whether it's appropriate to enable deferrable rescan mode. Use the two deferrable rescan options only after consultation with SAP support.
Note
Use this option only after
consultation with SAP support.
Allow to read Oracle SYS.USER$: Specifies whether the log reader can access SYS.USER$ for Oracle user identification.
● True: Indicates that the log reader can access SYS.USER$ directly.
● False: Indicates that the log reader must use a different view for user identification. The name of the view must be specified in the View to obtain users information parameter.
View to obtain users information (Upper Case): Specifies the name of the view to use for user identification (upper case). This option takes effect when Allow to read Oracle SYS.USER$ is set to False.
Note
This view must be created in the
current database user schema and
named in all upper-case
characters.
Note
The default view is based on ALL_USERS. Although this view requires fewer privileges, it may not include all Oracle users and may affect the replication process.
Trigger-based: Batch queue size: The internal batch queue size. The batch queue size determines the maximum number of batches of change data that are queued in memory. The default value is 64.
Trigger-based: Triggers record PK only: Set to True to have the triggers record only primary keys of delta data during CDC processing. This action may improve the DML performance in the source database.
Note
If this parameter is set to False,
during the time period between
when DDL changes occur on the
source database and when they
are replicated to the target HANA
database, there must be no DML
changes on the subscribed source
tables. Replicating DDL changes
would trigger the Oracle trigger-
based adapter to update (drop and
then re-create) triggers and
shadow tables on the changed
source tables. Errors may result if
any data is inserted, updated, or
deleted on the source tables
during this time period.
Trigger-based: Capture before and after images: This option is only valid when Triggers Record PK Only is set to True.
Credentials > Oracle Connection Credential > User Name: Oracle user name (case-sensitive)
Credentials > Oracle Multitenant Credential > Common User Name: The common user name in the container database (case-sensitive)
SQL Example
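The following is a minimal, hypothetical sketch; the XML property keys and credential format are assumptions, so generate the exact creation script with the agent configuration tool described earlier.

Sample Code

CREATE REMOTE SOURCE "MyOracleSource" ADAPTER "OracleLogReaderAdapter"
  AT LOCATION AGENT "MyAgent"
  CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <ConnectionProperties name="configurations">
      <PropertyGroup name="database">
        <PropertyEntry name="pds_host_name">oracle-host.example.com</PropertyEntry>
        <PropertyEntry name="pds_port_number">1521</PropertyEntry>
        <PropertyEntry name="pds_database_name">ORCL</PropertyEntry>
      </PropertyGroup>
    </ConnectionProperties>'
  WITH CREDENTIAL TYPE 'PASSWORD'
  USING '<CredentialEntry name="credential">
    <user>LR_USER</user>
    <password>LR_PASSWORD</password>
  </CredentialEntry>';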
Related Information
Using a schema alias can help you manage multiple schemas, remote sources, and tables more easily.
The Schema Alias and Schema Alias Replacement options, available in the remote source configuration parameters for some Data Provisioning adapters, allow you to switch easily between schema, remote sources, and tables. The Schema Alias is the name of the schema in the original system. The Schema Alias Replacement is the name of the schema in the current system that replaces the Schema Alias name.
A common use case is to create a remote source pointing to a development database (for example, DB_dev), and then create virtual tables under that remote source. Afterward, you may switch to the production database (for example, DB_prod) without needing to create new virtual tables; the same tables exist in both DB_dev and DB_prod, but under different schema and databases.
During the development phase, you may create a virtual table for a source table OWNER1.MYTABLE in DB_dev,
for example. Note that OWNER1.MYTABLE is the unique name of the source table, and it is a property of the
virtual table. With it, the adapter knows which table in the source database it is expected to access. However,
when you switch to the production database (DB_prod), there is no OWNER1.MYTABLE, only
OWNER2.MYTABLE. The unique name information of the virtual table cannot be changed once created.
You can resolve this problem using the Schema Alias options. In this case, we want to tell the adapter to replace
OWNER1 with OWNER2. For example, when we access OWNER1.MYTABLE, the adapter should access
OWNER2.MYTABLE. So here, OWNER1 is Schema Alias from the perspective of DB_prod, while OWNER2 is
Schema Alias Replacement.
Related Information
Configure an Oracle Real Application Cluster (RAC) source by, among other requirements, setting up the
tnsnames.ora file.
When a Data Provisioning Adapter for an Oracle instance initializes, the Oracle database is queried to
determine how many nodes are supported by the cluster. Based on this information, the Data Provisioning
Adapter automatically configures itself to process the redo log information from all nodes.
You configure the Data Provisioning Adapter to connect to a single Oracle instance by supplying the required
Host, Port Number, and Database Name remote source configuration parameters. However, in an Oracle RAC
environment, the Data Provisioning Adapter must be able to connect to any node in the cluster in the event that
a node fails or otherwise becomes unavailable. To support the configuration of multiple node locations, the
Data Provisioning Adapter supports connectivity to all possible RAC nodes by obtaining necessary information
from an Oracle tnsnames.ora file for one specified entry. As a result, instead of configuring individual host, port,
and instance names for all nodes, the Data Provisioning Adapter requires only the location of a tnsnames.ora
file and the name of the TNS connection to use. Therefore, it's recommended that you point the Data
Provisioning Adapter to a tnsnames.ora entry that contains the address for all nodes in the cluster.
Refer to the following procedure for details on the correct configuration for an Oracle RAC source.
Configure the remote source for Oracle Real Application Cluster (RAC) as follows.
Procedure
1. Use the tnsnames.ora file to connect to Oracle, instead of providing individual host names and SIDs, by
setting the remote source property Database Use TNSNAMES file to true.
2. Ensure the tnsnames.ora file includes details for all nodes.
RAC11G =
(DESCRIPTION =
(ADDRESS_LIST =
(LOAD_BALANCE = yes)
(FAILOVER = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = www.xxx.yyy.zz1)
(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = www.xxx.yyy.zz2)
(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = www.xxx.yyy.zz3)
(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = rac11g)
)
)
3. Configure tnsnames.ora with the entry of the global SID to the remote source.
<net_service_name> =
(DESCRIPTION =
(ADDRESS = (<protocol_address_information>))
(CONNECT_DATA =
(SERVICE_NAME = <service_name>)))
For example:
ABC =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostname.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ABC) ) )
4. If the Data Provisioning Agent and Oracle source are on different computers, for all versions up to and
including HANA DP AGENT 1.0 SP03 Patch 2 (1.3.2), set the parameter CDC Properties Database
6. Set the location of the Database Oracle TNSNAMES File to tnsnames.ora. This location must be
available to the Data Provisioning Agent computer.
Extra configuration steps and tips for Oracle on Amazon Relational Database Service (RDS).
Procedure
1. To avoid remote access issues, in Amazon RDS ensure the database instance setting Publicly Accessible has been enabled.
2. To avoid remote access issues, in Amazon RDS configure the security group as follows.
a. Open the EC2 console.
b. Select Security Group in the left pane.
c. Choose the Security Group ID.
d. Click the Inbound tab and click Edit.
e. Click Add Rule and configure the following options:
○ Type = Oracle-RDS
○ Source = Anywhere
f. Click Save.
3. Grant access rights as described in the oracle_init_example.sql template file, which is located in the
Data Provisioning Agent installation folder <DPAgent_root>\LogReader\scripts.
4. Enable database-level supplemental logging as described in the oracle_init_example.sql template
file in <DPAgent_root>\LogReader\scripts.
Related Information
You can review processing information in the Log Reader log files.
Note
By default, the adapter instance name is the same as the remote source name when the remote source is
created from the SAP HANA Web-based Development Workbench.
Context
If there is timestamp with a local time zone column in an Oracle table, the Data Provisioning Agent must have
the same time zone. To change the timezone, use the following procedure before starting the Data Provisioning
Agent.
Procedure
1. Find the Oracle server time zone. For example, use “date -R” in Linux. Example: -04:00.
2. Open the dpagent.ini file in the Data Provisioning Agent installation root directory.
3. Add “-Duser.timezone=GMT-4” to the dpagent.ini file.
4. Start the Data Provisioning Agent.
Set up secure SSL communication between Oracle and the Data Provisioning Agent.
Context
If you want to use SSL communication between your Oracle source and the Data Provisioning Agent, you must
create and import certificates and configure the source database.
Note
The SSLv3 protocol is disabled by default in JDK 8 Update 31 and newer. If SSLv3 is absolutely required for
your environment, you can reactivate the protocol by removing SSLv3 from the
jdk.tls.disabledAlgorithms property in the java.security file.
Procedure
1. On the Oracle source database host, create directories for the root certificate authority (CA) and server
certificates.
For example:
○ c:\ssl\oracle\root
○ c:\ssl\oracle\server
2. Create and export a self-signed CA certificate.
Use the orapki tool on the Oracle host system.
a. Create an empty wallet.
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA = (DIRECTORY = C:\ssl\oracle\server)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = PVGN50869480A.SAP.COM)(PORT =
1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
(ADDRESS = (PROTOCOL = TCPS)(HOST = PVGN50869480A.SAP.COM)(PORT =
2484))
)
)
SSL_CLIENT_AUTHENTICATION = FALSE
SSL_CIPHER_SUITES = (SSL_RSA_WITH_RC4_128_SHA)
ssl =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCPS)(HOST = PVGN50869480A.SAP.COM)(PORT =
2484))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL)
)
(SSL_SERVER_CERT_DN ="CN=PVGN50869480A.SAP.COM,C=US")
)
oracle.net.ssl_cipher_suites=SSL_RSA_WITH_RC4_128_SHA
For TLS cipher protocols, add the additional jdk.tls.client.protocols parameter to the
dpagentconfig.ini file. For example:
jdk.tls.client.protocols=TLSv1.2
Next Steps
When you create an Oracle remote source, ensure that the SSL-related parameters, such as Use SSL and Distinguished Name (DN) in Certificate, are set appropriately.
There are times when you may want to limit access to all of the tables in a source database. For data
provisioning log reader adapters, as well as SAP HANA and SAP ECC adapters, an efficient way to limit access
is to create a whitelist.
Restricting access to only those tables that are to be replicated is done by creating a whitelist of source
database objects in a separate table.
Note
The whitelist impacts only the virtual table created and the replications created after the whitelist was
created.
Note
● The whitelist table, which can have any name, must have two columns named
REMOTE_SOURCE_NAME and WHITELIST.
● The whitelist items are separated by a comma.
● You can use an asterisk (*) to represent any character or empty string. However, the asterisk must be
placed at the end of a whitelist item. Otherwise, it is treated as a normal character.
● You can add multiple rows of whitelisted tables for a single remote source.
To add a whitelist for the remote source called “localmssqldb”, insert a row into the whitelist table, as in the sketch shown in the earlier whitelist topic.
object.A, object.B*, and so on, means that the table (or procedure) object.A and the table (or procedure)
starting with object.B are filtered for the remote source “localmssqldb”.
For critical scenarios determined by your business requirements, you can use the agent configuration tool to
disable write-back functionality on supported adapters and run the adapters in read-only mode.
Disabling write-back functionality may help to prevent unexpected modifications to source tables accessed by
an adapter.
Caution
Setting an adapter to read-only mode affects all remote sources that use the adapter.
Procedure
1. Start the Data Provisioning Agent configuration tool.
2. Navigate to the agent preferences:
○ In graphical mode, choose Config > Preferences, and then select Adapter Framework.
○ In command-line interactive mode, choose Set Agent Preferences in the Agent Preferences menu.
3. For the Read-only Adapters property, specify the list of adapters for which you want to disable write-back
functionality, separating each adapter with a comma.
For example, to disable write-back on the Microsoft SQL Server Log Reader, Oracle Log Reader, and SAP
HANA adapters:
MssqlLogReaderAdapter,OracleLogReaderAdapter,HanaAdapter
Results
The specified adapters are switched to read-only mode and write-back functionality is disabled.
Tip
On adapters that are operating in read-only mode, attempted SQL statements other than SELECT result in adapter exceptions that are logged in the Data Provisioning Agent framework trace file.
Related Information
Use the PostgreSQL Log Reader adapter to batch load or replicate changed data in real time from a PostgreSQL
database to SAP HANA.
The PostgreSQL adapter is designed for accessing and manipulating data from a PostgreSQL database.
Make sure that all users are assigned the SUPERUSER and REPLICATION roles.
Adapter Functionality
● Supports the following SQL statements: SELECT, INSERT, UPDATE, and DELETE
● Virtual table as a source, using a Data Source node in a flowgraph
● Real-time change data capture (CDC)
● Virtual table as a target using a Data Sink node in a flowgraph
● Batch loads (only) are supported for Greenplum databases.
● Replication monitoring and statistics
● DDL replication
Related Information
Information about how to configure your source system for real-time replication.
Context
If you plan on performing real-time replication from a PostgreSQL source, you must prepare that source by
configuring a few system parameters.
Procedure
Set the following parameters for the source database server:
Parameter: Value
wal_level: logical
archive_mode: true
max_replication_slots: 2
Note
If you want to replicate multiple databases in one server, set this
value to: “max_replication_slots = database_need_to_replicate * 2”.
You need 2 slots per instance.
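One way to set these values, as a sketch (they can also be set directly in postgresql.conf; a server restart is required for wal_level to take effect):

Sample Code

ALTER SYSTEM SET wal_level = 'logical';
ALTER SYSTEM SET archive_mode = 'on';
ALTER SYSTEM SET max_replication_slots = 2;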
Schema Alias Replacements > Schema Alias: Schema name to be replaced with the schema given in the Schema Alias Replacement parameter. If given, accessing tables under this alias is considered to be accessing tables under the schema given in the Schema Alias Replacement parameter.
Schema Alias Replacement: Schema name to use to replace the schema given in the Schema Alias parameter.
LogReader > Ignore log record processing errors: Specifies whether the Log Reader ignores the errors that occur during log record processing. If set to True, the replication does not stop if log record processing errors occur.
Maximum operation queue size: The maximum number of log records permitted in the log reader log operation queue during replication.
Maximum scan queue size: The maximum number of operations permitted in the log reader scan queue during replication.
Number of rows to fetch for each scan: Specifies the batch size for fetching the log record. The default value is 1000.
Maximum wait interval for polling a message: Specifies the amount of time in seconds to wait for an element to become available for processing.
Number of parallel formatter threads: Specifies the number of threads used to format the raw records into the change data row set in the Data Provisioning Agent.
Maximum number of rows sent to server in a batch: Specifies the number of rows per batch to send to the Data Provisioning Server after the Data Provisioning Agent processes them.
Amount of time to allow for a batch to fill before flushing the batch: Specifies the amount of elapsed time in seconds before flushing the batch of rows. Depending on the number of rows you specified, it may take a long time to fill a batch; to avoid a long latency period, you can adjust this time.
Ignore formatting errors: Specifies whether to ignore any errors from the formatter component and allow records to continue processing.
Interval of transaction log truncation: The interval to truncate the PostgreSQL transaction log, in minutes. Set to 0 to disable the truncation.
Security > Use Agent Stored Credential: Set to True to use credentials that are stored in the Data Provisioning Agent secure storage.
Credentials > Credentials Mode: Remote sources support two types of credential modes to access a remote source: technical user and secondary credentials.
Related Information
More configuration steps and tips for PostgreSQL on Amazon Relational Database Service (RDS).
Procedure
1. To avoid remote access issues, in Amazon RDS ensure the database instance setting Publicly Accessible has been enabled.
2. Configure the PostgreSQL database for real-time replication by adding a parameter group in Amazon RDS
as follows.
Related Information
Using a schema alias can help you manage multiple schemas, remote sources, and tables more easily.
The Schema Alias and Schema Alias Replacement options, available in the remote source configuration
parameters for some Data Provisioning adapters, allow you to switch easily between schema, remote sources,
and tables. The Schema Alias is the name of the schema in the original system. The Schema Alias Replacement
is the name of the schema in the current system that replaces the Schema Alias name.
A common use case is to create a remote source pointing to a development database (for example, DB_dev),
and then create virtual tables under that remote source. Afterward, you may switch to the production database
(for example, DB_prod) without needing to create new virtual tables; the same tables exist in both DB_dev and
DB_prod, but under different schema and databases.
During the development phase, you may create a virtual table for a source table OWNER1.MYTABLE in DB_dev,
for example. Note that OWNER1.MYTABLE is the unique name of the source table, and it is a property of the
virtual table. With it, the adapter knows which table in the source database it is expected to access. However,
when you switch to the production database (DB_prod), there is no OWNER1.MYTABLE, only
OWNER2.MYTABLE. The unique name information of the virtual table cannot be changed once created.
You can resolve this problem using the Schema Alias options. In this case, we want to tell the adapter to replace
OWNER1 with OWNER2. For example, when we access OWNER1.MYTABLE, the adapter should access
OWNER2.MYTABLE. So here, OWNER1 is Schema Alias from the perspective of DB_prod, while OWNER2 is
Schema Alias Replacement.
Related Information
For critical scenarios determined by your business requirements, you can use the agent configuration tool to
disable write-back functionality on supported adapters and run the adapters in read-only mode.
Disabling write-back functionality may help to prevent unexpected modifications to source tables accessed by
an adapter.
Caution
Setting an adapter to read-only mode affects all remote sources that use the adapter.
Procedure
1. Start the Data Provisioning Agent configuration tool.
2. Navigate to the agent preferences:
○ In graphical mode, choose Config > Preferences, and then select Adapter Framework.
○ In command-line interactive mode, choose Set Agent Preferences in the Agent Preferences menu.
3. For the Read-only Adapters property, specify the list of adapters for which you want to disable write-back
functionality, separating each adapter with a comma.
For example, to disable write-back on the Microsoft SQL Server Log Reader, Oracle Log Reader, and SAP
HANA adapters:
MssqlLogReaderAdapter,OracleLogReaderAdapter,HanaAdapter
Results
The specified adapters are switched to read-only mode and write-back functionality is disabled.
Tip
On adapters that are operating in read-only mode, attempted SQL statements other than SELECT result in adapter exceptions that are logged in the Data Provisioning Agent framework trace file.
The PostgreSQL adapter uses event triggers to capture changes to tables so that you can apply those same
changes to a target.
Only ALTER TABLE events are supported, and only the following changes are captured:
● Column added
● Column deleted
● Column modified
● Column renamed
You enable DDL replication by setting the PostgreSQL Enable DDL Replication remote source parameter to
True. No other configuration is necessary. The default value is True.
For informational purposes, the following repository objects are created in your PostgreSQL database:
● ddl_constraint_keys_shadow (table)
● ddl_tables_shadow (table)
● ddl_columns_shadow (table)
● dpagent_${<pds_database_name>}_replication_ddl_trigger (function)
● dpagent_${<pds_database_name>}_replication_ddl_trigger (event trigger)
Limitations
Currently, replicating DDL and DML in the same transaction is not supported.
Event Triggers
PostgreSQL Remote Source Configuration [page 398]
The ABAP adapter retrieves data from virtual tables through RFC for ABAP tables and ODP extractors. You can
find more information about setting up your environment and adapter by reading the topics in the Related
Information section of this topic.
The SAP ABAP Adapter is a client to function modules delivered via PI_BASIS. Extra coding was required in order for these functions to support RAW and/or STRING data types.
The valid PI_BASIS releases are listed in the Support Packages and Patches section of SAP Note 2166986 (https://launchpad.support.sap.com/#/notes/2166986).
Note that these functions were originally developed for SAP Data Services. All references in this SAP Note relevant to PI_BASIS are also relevant for SAP HANA Smart Data Integration. Ignore references to the SAP Data Services version. This SAP Note applies to all SAP HANA smart data integration versions.
Prerequisites
You may need to perform extra tasks to access the data you need. For example:
● To access the M_MTVMA, M_MVERA, KONV, and NWECMD_PRPTDVS tables via /SAPDS/RFC_READ_TABLES, you must apply SAP Note 2166986.
Adapter Functionality
Related Information
6.20.1 Authorizations
This section describes the authorizations that support SAP ABAP adapter operations. For improved security,
avoid using wildcards, generic values, or blank values for authorization fields, especially in a production
environment. Enter more specific values that are appropriate to your business applications.
Note
Even though some of the listed authorizations are described as being necessary for SAP Data Services,
they are also necessary for the ABAP adapter.
Related Information
6.20.1.1 G_800S_GSE
Field Values
Activity 03
6.20.1.2 S_BTCH_ADM
Class: Basis
Field Values
Background administrator ID Y
6.20.1.3 S_BTCH_JOB
Class: Basis
6.20.1.4 S_DEVELOP
Field Values
Activity 03
Purpose: This authorization allows Data Services to run generated programs on the SAP server.
Use: DEV
Field Values
Package $TMP
Object name List of temporary program names that are allowed to be generated
Activity 01 and 02
Purpose: This implementation allows Data Services to import a table or to search for a table.
Object name List of tables and views that a user is allowed to access
Activity 03
6.20.1.5 S_RFC
Field Values
Activity 16
Name of RFC to be protected BAPI, CADR, RFC1, SCAT, SDIF, SLST, SUNI, SUTL, SDTX, SYST, /SAPDS/SAPDS, RSAB, SDIFRUNTIME, and any other required function group
6.20.1.6 S_RFC_ADM
Class: Cross-application
Field Values
Activity 03
Field Values
Activity SHOW
6.20.1.8 S_SDSPGMCK
Use: PROD
Text (Description): SBOP Data Services Authorization Object for program names
Field Values
PROGRAM: ABAP program name Program names that are allowed to be executed in a production environment
Note
In previous SAP Data Services versions, this authorization was named ZPGMCHK in version 3.x and
S_DSPGMCHK in version 4.1 SP3 Patch 2, 4.2 SP1 Patch 5, 4.2 SP2, and some later versions.
6.20.1.9 S_SDSDEV
SAP Data Services general authorization object that is equivalent to the SAP S_DEVELOP authorization.
Activity 03
Note
In previous SAP Data Services versions, this authorization was named ZDSDEV in version 3.x and S_DSDEV
in version 4.1 SP3 Patch 2, 4.2 SP1 Patch 5, 4.2 SP2, and some later versions.
6.20.1.10 S_SDSAUTH
Field Values
Note
In previous SAP Data Services versions, this authorization was named ZDSAUTH in version 3.x and
S_DSAUTH in version 4.1 SP3 Patch 2, 4.2 SP1 Patch 5, 4.2 SP2, and some later versions.
6.20.1.11 S_TABU_DIS
Class: Basis
Field Value
Activity 03
Field Value
Purpose: This authorization allows Data Services to execute functions in the Data Warehousing Workbench.
Field Values
6.20.1.13 S_USER_GRP
Authorization for SAP Data Services to establish a connection to the SAP server.
Field Values
User group in user master maintenance User group for Data Services user
There are advantages and disadvantages to using either RFC or non-RFC streaming.
Non-RFC streaming extracts the whole target recordset as one batch. That process is anywhere between 0.1 second and 10 seconds faster (depending on the SAP ECC response) than the small-batch RFC streaming, so non-RFC streaming is noticeably faster on very small queries, especially with a slow SAP ECC system. Extracting a whole recordset at once comes with the obvious requirement of having enough memory for the whole recordset. A general rule (depending on the record length) is 1 GB of RAM on the Data Provisioning Agent machine per 1 million records; several concurrent sessions require further calculations. Because the non-RFC streaming mode runs in the ECC “dialog mode,” it is also subject to various limitations on the ECC side, such as the dialog mode timeout.
To activate RFC streaming, you must configure the following ABAP adapter remote source parameters:
● Streaming Read: This parameter must be set to True to expose the following parameters.
The following parameters must be set to have RFC streaming work:
○ Gateway Server
○ Gateway Host
○ RFC Destination
The following parameters are optional when RFC streaming is enabled:
○ Batch size
○ RFC Trace
○ Batch receive timeout
● Successful registration on an SAP Gateway requires that suitable security privileges are configured. For
example:
○ Set up an access control list (ACL) that controls which hosts can connect to the gateway. That file
should contain lines with the following syntax: <permit> <ip-address[/mask]> [tracelevel]
[# comment]. A hedged sketch of such a file follows this list.
○ You may also want to configure a reginfo file to control permissions to register external programs.
● The host where the Data Provisioning agent is running must have a service configured with the name
matching the remote SAP gateway name.
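For example, a minimal sketch of a gateway ACL file, assuming your gateway profile points to such a
file; the addresses and trace level below are placeholders:
permit 10.10.1.0/24 # allow the subnet where the Data Provisioning Agent runs
permit 192.168.0.42 2 # allow a single agent host, with trace level 2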
Related Information
Parameter Description
Context Whitelist The Context Whitelist parameter lets you restrict which
objects (tables, BAPI functions, ODP extractors) are available to a user; that is,
which objects can be shown, imported, executed, selected from, or subscribed to.
If the property is empty, there are no restrictions; all objects that the ABAP adapter
reads from an ECC system are exposed to the target SAP HANA system.
To allow all BAPI functions and ABAP tables and exclude all extractors:
BAPI*,ABAPTABLES*
To allow all BAPI functions starting with either RODPS_* or BAPI_BANK*, and only
one ABAP table, KNB1:
BAPI.RODPS_*,BAPI.BAPI_BANK*,ABAPTABLES.KNB1
Note
The asterisk (*) is used only as the last character to distinguish a prefix from
an exact name.
Remote source configuration options for the SAP ABAP adapter. Also included is sample code for creating a
remote source using the SQL console.
Note
Depending on the values you choose for the remote source configuration parameters, different
parameters appear; therefore, some of the following parameters may not be visible.
Application Server The name of the host to which you want to connect.
Message Server Enter the name of the message server or its IP address.
Message Server Port (Optional) If the message server isn't on the default port, enter the port to use
to connect to the message server.
System ID Specifies the system ID of the SAP system to which you want to connect.
Connections Pool Size Maximum number of idle connections kept open for the remote source. The
default value of 0 means that there is no connection pooling; that is, connections are closed after
each request.
Batch Size, MB (Optional) The size (in MB) of the data packet sent by ECC in one callback. On the
Data Provisioning Agent, upon receiving, the batch is copied into a queue to be sent to the Data
Provisioning Server, and thus the memory requirement for that process is 2 x batch size. The default
value is 1 MB.
RFC Trace (Optional) Set to On to turn RFC tracing on. By default, this parameter is set to Off.
Batch Receive Timeout (Optional) The maximum time period in seconds that the adapter waits for the
next batch to arrive or to push the batch to the Data Provisioning Server. It wouldn't make sense
for this value to be larger than the value of the framework.messageTimeout parameter of the Data
Provisioning Server; thus, the default value is the same as the default value of that Data
Provisioning Server property (600 seconds).
SNC Library Specifies the path and file name of the external library.
SNC Name of Client Specifies the SNC name of SAP NetWeaver AS for Java.
SNC Name of SAP Server Specifies the SNC name of SAP NetWeaver Application
Server for ABAP.
You can find the application server SNC name in the profile
parameter snc/identity/as on SAP NetWeaver Applica
tion Server for ABAP.
SNC SSO Turn on/off the SSO mechanism of SNC. If you set this param
eter to OFF, you must provide alternative credentials.
SNC Quality of Protection Specifies the level of protection to use for the connection.
Possible values:
1: Authentication only
2: Integrity protection
3: Privacy protection
Default value: 3
Credentials Credentials Mode Remote sources support two types of credential modes to access a
remote source: technical user and secondary credentials.
User Name The user name that is used to connect to the SAP ECC system.
After you have created the remote source, the directory structure you can browse depends on the
structure of the source system.
Sample Code
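A hedged sketch of creating an ABAP adapter remote source in the SQL console. The CREATE REMOTE
SOURCE ... ADAPTER ... AT LOCATION AGENT syntax is standard, but the agent name, connection values,
and the property names inside the CONFIGURATION document are illustrative placeholders; take the
exact property names from your remote source editor.

-- Hedged sketch: property names inside CONFIGURATION are illustrative placeholders.
CREATE REMOTE SOURCE "MyECCSource" ADAPTER "ABAPAdapter"
AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="applicationServer">myecchost</PropertyEntry>
  <PropertyEntry name="systemNumber">00</PropertyEntry>
  <PropertyEntry name="client">100</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>myuser</user>
  <password>mypassword</password>
</CredentialEntry>';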
Related Information
The SAP ASE adapter provides real-time replication and change data capture functionality for moving
data to SAP HANA, as well as writing back to virtual tables.
The SAP ASE Adapter receives the data stream from an SAP ASE database, reformats the data, and then sends
the data change to the downstream Data Provisioning Server to replicate from SAP ASE to SAP HANA.
The SAP ASE adapter service provider is created as a remote source, and requires the support of
artifacts like virtual tables and remote subscriptions for each source table to perform replication.
Restriction
For real-time replication, you can initialize each source database by only one instance of the adapter. You
cannot configure two adapter instances for real-time replication of the same source database, even when
using a different Data Provisioning Agent or schema in the source database.
Depending on the ASE Server and Data Provisioning Agent platforms you are using, there are restrictions on
what data transfer protocol you can use. Data transfer protocol is set in the remote source configuration
parameters.
Adapter Functionality
Related Information
The remote database must be set up properly when using the SAP ASE adapter.
Procedure
1. Connect to an SAP ASE data server using ISQL or another utility, and create a database to replicate if one
does not already exist.
2. Create the primary user and grant permissions.
Sample Code
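A minimal sketch of the isql commands, assuming a database named mydb, a login ra_user, and a
password Sybase123 (all placeholders); narrow the granted permissions to your security policy.

-- Create the login and add it as a user to the database to be replicated.
use master
go
sp_addlogin ra_user, Sybase123
go
use mydb
go
sp_adduser ra_user
go
-- Grant the permissions the adapter needs (placeholder; restrict as appropriate).
grant all to ra_user
go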
4. Add an entry for the SAP ASE adapter in the interface file of the SAP ASE data server. For example:
Sample Code
<entry name>
master tcp ether <host name or IP> <port>
query tcp ether <host name or IP> <port>
Note
○ The entry name must be the same as the Adapter Instance Name specified when creating the
remote source.
○ The host name or IP must be the host name or IP address of the computer where the SAP ASE adapter is running.
○ The port must be the same as the SAP ASE Adapter Server port that you set up in the SAP ASE
adapter interface file located in <DPAgent_root>/Sybase/interfaces.
Related Information
Parameter Description
Adapter Server Name The name of the SAP ASE adapter server that receives data changes from the SAP
ASE data server.
Adapter Server Port The port number for the SAP ASE adapter server.
Enable SSL for Adapter Server Specifies whether to use SSL for the adapter server.
SSL Certificate File Password The password for accessing the SSL certificate file.
Options for connecting to the remote SAP ASE data server. Also included is sample code for creating a remote
source using the SQL console.
Data Server Information Data Server Name The SAP ASE data server name.
Data Server Host Host name or IP address on which the remote SAP ASE data server is running.
Data Server Port Number The SAP ASE data server port number.
Security Properties Enable SSL Encryption Specifies whether to use SSL encryption between the
source SAP ASE data server and the SAP ASE adapter.
Note
The CA certificate for the remote source must
be imported into the adapter truststore on the
Data Provisioning Agent host.
Common Name in Server Certificate The common name in the SAP ASE adapter certificate file.
Adapter Properties Adapter Instance Name The SAP ASE adapter instance name, which must be specified
when creating the remote source. You can name the adapter instance anything you want, but it should
be unique within the same SAP HANA server.
Data Transfer Protocol The protocol that the SAP ASE data server and SAP ASE adapter use to
transfer data.
Maintenance User The maintenance user that is used by the SAP ASE LogReader thread to filter out
transactions applied by this user.
Read-only Remote Source Specifies that the remote source should be a read-only resource and
disables write-back functionality, including INSERT, UPDATE, and DELETE queries.
Credential Properties Credentials Mode Remote sources support two types of credential modes to
access a remote source: technical user and secondary credentials.
User Name The SAP ASE database user, which the adapter needs to log on to the SAP ASE database to
configure the ASE log reader, query data for the initial load, and write back data into SAP ASE.
Certain permissions may be required. See Configure Your SAP ASE Database [page 420] for more
information.
Example
Sample Code
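A hedged sketch of creating an SAP ASE adapter remote source in the SQL console; the adapter name
AseAdapter and the property names inside CONFIGURATION are assumptions, and all connection values
are placeholders.

-- Hedged sketch: adapter and property names are illustrative placeholders.
CREATE REMOTE SOURCE "MyAseSource" ADAPTER "AseAdapter"
AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="dataServerName">MYASE</PropertyEntry>
  <PropertyEntry name="dataServerHost">asehost.example.com</PropertyEntry>
  <PropertyEntry name="dataServerPort">5000</PropertyEntry>
  <PropertyEntry name="adapterInstanceName">my_ase_instance</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>ra_user</user>
  <password>Sybase123</password>
</CredentialEntry>';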
Related Information
Configure SSL for SAP HANA On-Premise [Command Line Batch] [page 509]
Configure the Adapter Truststore and Keystore Using the Data Provisioning Agent Configuration Tool [page
513]
Configure Your SAP ASE Database [page 420]
SAP ERP Central Component (ECC) adapters are a set of data provisioning adapters to provide access to and
interaction with SAP ECC data and metadata.
All adapters designed to work with SAP ECC are built on top of Data Provisioning log reader adapters for the
same database. Currently supported are the following:
● IBM DB2
● Oracle
The Oracle Log Miner maximum throughput is approximately 1 TB per day; therefore, for replication
volumes greater than 1 TB per day, expect delays in replication.
These adapters provide extra ECC-specific functionality: ECC metadata browsing and support for
cluster tables and pooled tables in SAP ECC. See the description of Log Reader adapters for the
common functionality.
Note
For IBM DB2, Oracle, and Microsoft SQL Server (does not apply to SAP ASE), before registering the
adapter with the SAP HANA system, download and install the correct JDBC libraries. See the SAP HANA
smart data integration Product Availability Matrix (PAM). In the <DPAgent_root> folder, create a
/lib folder, and place the JDBC library files there.
Adapter Functionality
Restriction
For real-time replication, you can initialize each source database by only one remote source. You cannot
configure two remote sources for real-time replication of the same source database, even when using a
different Data Provisioning Agent or schema in the source database.
● DDL propagation (transparent tables only, not supported for SAP ASE ECC)
● Search for tables
● Agent-stored credentials (not supported for SAP ASE ECC)
● SELECT, WHERE, JOIN, GROUP BY, DISTINCT, TOP, LIMIT
Limitations
Related Information
6.22.1 Terminology
Setting up ECC adapters requires an understanding of certain SAP ERP and ECC concepts.
Here are some key terms and concepts that help you understand how to set up your ECC adapters.
Term Description
SAP ERP Enterprise Resource Planning software that allows you to leverage role-based access to
critical data, applications, and analytical tools, and streamline your processes across procurement,
manufacturing, service, sales, finance, and HR.
SAP ECC (SAP ERP Central Component) The central technical component of the SAP ERP system.
Cluster table A logical table type, where the data of several such tables is stored together as a
table cluster in the database. The intersection of the key fields of the cluster tables forms the
primary key of the table cluster. Therefore, a cluster table is known in the ABAP Dictionary, but
not in the database.
Pooled table A logical table type, where the data of several such tables is stored together as a
table pool in the database. Therefore, a pooled table is known in the ABAP Dictionary, but not in
the database.
Related Information
Refer to Log Reader and SAP ASE adapters for installation and setup information.
Because the SAP ECC adapters are built on top of existing Data Provisioning adapters, you must use the
procedures of those adapters to build your SAP ECC adapters.
You can adjust adapter settings specific to your source in the Data Provisioning Agent Configuration Tool by
running <DPAgent_root>/configTool/dpagentconfigtool.exe.
Related Information
Note
Log Reader and ECC adapter preferences, except for Number of wrapped log files, Enable verbose trace, and
Maximum log file size, are no longer set in the Data Provisioning Agent Configuration Tool. They are now
moved to the remote source configuration options in SAP HANA. If you have upgraded from a previous
version, then the settings you find in the Agent Configuration Tool are your previous settings, displayed for
your reference.
Maximum scan queue size The maximum number of log records permitted in the log reader log scan
queue during replication. Default: 1000
Ignore log record processing errors Determines whether to ignore log record processing errors.
Default: false
Timeout in seconds to retry connecting The number of seconds the agent waits between retry attempts
to connect to the primary database. Default: 10
Number of wrapped log files The maximum size, in 1-K blocks, of the LogReader system log file before
wrapping. Default: 3
Maximum log file size Limit the size of the message log to conserve disk space. Default: 1000
Maximum size of work queue for asynchronous logging The maximum size of the work queue for the
asynchronous logging file handler to collect log records (1 to 2147483647). Default: 1000
Discard policy for asynchronous logging file handler Specifies the discard policy for the
asynchronous logging file handler when the work queue is saturated. Default: BLOCKING
Note
Log Reader and ECC adapter preferences, except for Number of wrapped log files, Enable verbose trace, and
Maximum log file size, are no longer set in the Data Provisioning Agent Configuration Tool. They are now
moved to the remote source configuration options in SAP HANA. If you have upgraded from a previous
version, then the settings you find in the Agent Configuration Tool are your previous settings, displayed for
your reference.
Maximum scan queue size The maximum number of log records permitted in the log reader log scan
queue during replication. Default: 1000
Maximum wait interval between log scans The maximum wait interval between Log Reader transaction
log scans. Default: 60
Note
● The value of the parameter is the maximum number of seconds that can elapse before the Log Reader
component scans the transaction log for a transaction to be replicated, after a previous scan yields
no such transaction.
● For reduced replication latency in an infrequently updated database, we recommend lower settings
for the parameter.
● If the primary database is continuously updated, the value of the parameter is not significant to
performance.
Seconds to add to each log scan wait interval The number of seconds to add to each wait interval
before scanning the transaction log, after a previous scan yields no transaction to be replicated.
Default: 5
Note
● The value of the parameter is the number of seconds added to each wait interval before the
LogReader component scans the log for a transaction to be replicated, after a previous scan yields
no such transaction.
● The number of seconds specified by the parameter is added to each wait interval, until the wait
interval reaches the value specified by the “Maximum wait interval between log scans” parameter.
● For optimum performance, the value of the parameter should be balanced with the average number of
operations in the primary database over a period of time. In general, better performance results
from reading more operations from the transaction log during each LogReader scan.
● With a primary database that is less frequently updated, increasing the value of the parameter may
improve overall performance.
● If the database is continuously updated, the value of the parameter may not be significant to
performance.
Timeout in seconds to retry connecting The number of seconds the agent waits between retry attempts
to connect to the primary database. Default: 10
Maximum log file size Limit the size of the message log to conserve disk space. Default: 1000
Maximum size of work queue for asynchronous logging The maximum size of the work queue for the
asynchronous logging file handler to collect log records (1 to 2147483647). Default: 1000
Discard policy for asynchronous logging file handler Specifies the discard policy for the
asynchronous logging file handler when the work queue is saturated. Default: BLOCKING
Note
Log Reader and ECC adapter preferences, except for Number of wrapped log files, Enable verbose trace, and
Maximum log file size, are no longer set in the Data Provisioning Agent Configuration Tool. They are now
moved to the remote source configuration options in SAP HANA. If you have upgraded from a previous
version, then the settings you find in the Agent Configuration Tool are your previous settings, displayed for
your reference.
Distinguished Name (DN) in Certificate The distinguished name (DN) of the primary data server
certificate.
Oracle supplemental logging level Specifies the level of supplemental logging. Default: table
Maximum operation queue size Specifies the maximum number of operations permitted in the log reader
operation queue during replication. Default: 1000
Maximum scan queue size The maximum number of log records permitted in the log reader log scan
queue during replication. Default: 1000
Maximum session cache size The maximum number of Oracle sessions to be cached in memory during
replication. Default: 1000
Queue size of parallel scan tasks Specifies the number of tasks in the queue. Default: 0
Number of log record rows fetched by the scanner at a time Specifies the number of log record rows
fetched by the scanner. Default: 1
Ignore log record processing errors Determines whether to ignore log record processing errors.
Default: false
Replicate LOB columns Oracle logs all LOB data (except for BFILE datatypes) in the Oracle redo log.
This action allows the agent to apply each individual LOB change. However, for BFILE data, the same
technique is used. Default: true
Ignore data of unsupported types stored in ANYDATA Specifies whether you want to ignore data with
unsupported types housed in the ANYDATA wrapper. Default: false
Timeout in seconds to retry connecting The number of seconds the agent waits between retry attempts
to connect to the primary database. Default: 10
Number of wrapped log files The maximum size, in 1-K blocks, of the agent system log file before
wrapping. Default: 3
Maximum log file size Limit the size of the message log to conserve disk space. Default: 1000
Maximum size of work queue for asynchronous logging The maximum size of the work queue for the
asynchronous logging file handler to collect log records (1 to 2147483647). Default: 1000
Discard policy for asynchronous logging file handler Specifies the discard policy for the
asynchronous logging file handler when the work queue is saturated. Default: BLOCKING
Related Information
Parameter Description
Adapter Server Name The name of the SAP ASE adapter server that receives data changes from the SAP
ASE data server.
Adapter Server Port The port number for the SAP ASE adapter server.
Enable SSL for Adapter Server Specifies whether to use SSL for the adapter server.
SSL Certificate File Password The password for accessing the SSL certificate file.
To replicate SAP ECC dictionary tables, you need specific permissions, depending on the database you are
using.
Table 49:
Oracle Permissions are granted when setting up your adapter by running the script found in the
oracle_init_example.sql file, which is located in the Scripts folder of the Data
Provisioning Agent installation (<DPAgent_root>\LogReader\Scripts).
Related Information
Context
The following is an example of creating an ECC Adapter remote source in SAP HANA studio.
Procedure
Configuration settings for accessing ECC remote sources. Also included are sample codes for creating remote
sources using the SQL console.
The following ECC-specific parameters are for creating a remote source. You can find
database-specific parameter information in the remote source parameter topics for Log Reader
adapters.
Dictionary Schema If you want to use pooled or cluster tables, you have to replicate a set of DD*
tables into SAP HANA. The Dictionary Schema must be the schema name where those tables are
replicated.
Note
This parameter is available only with the SAP ASE ECC adapter.
Use empty string for values only containing white space Set this parameter to True so that any
column value containing only white space is converted to an empty string; otherwise, the white
space remains. The default value is False.
Sample Code
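A hedged sketch for one of the ECC adapters (Oracle-based, here assumed to be named
OracleECCAdapter); the database-specific properties follow the corresponding Log Reader adapter,
and all names and values below are placeholders.

-- Hedged sketch: adapter and property names are illustrative placeholders.
CREATE REMOTE SOURCE "MyECCOnOracle" ADAPTER "OracleECCAdapter"
AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="pds_host_name">orahost.example.com</PropertyEntry>
  <PropertyEntry name="pds_port_number">1521</PropertyEntry>
  <PropertyEntry name="pds_database_name">ORCL</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>LR_USER</user>
  <password>MyPassword1</password>
</CredentialEntry>';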
There are times when you may want to limit access to the tables in a source database. For data
provisioning log reader adapters, as well as SAP HANA and SAP ECC adapters, an efficient way to
limit access is to create a whitelist.
Restricting access to only those tables that are to be replicated is done by creating a whitelist of source
database objects in a separate table.
Note
The whitelist impacts only the virtual table created and the replications created after the whitelist was
created.
Note
● The whitelist table, which can have any name, must have two columns named
REMOTE_SOURCE_NAME and WHITELIST.
● The whitelist items are separated by a comma.
● You can use an asterisk (*) to represent any character or empty string. However, the asterisk must be
placed at the end of a whitelist item. Otherwise, it is treated as a normal character.
● You can add multiple rows of whitelisted tables for a single remote source.
To add a whitelist for the remote source called “localmssqldb”, insert a row into the whitelist
table (a hedged sketch follows below). A WHITELIST value of object.A, object.B*, and so on, means
that the table (or procedure) object.A and the tables (or procedures) starting with object.B are
whitelisted for the remote source “localmssqldb”.
To add a whitelist for the remote source called “localhadp”, insert a row into the whitelist table:
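A hedged sketch, assuming a whitelist table named MY_WHITELIST with the two required columns; the
table name and object patterns are placeholders.

-- Whitelist entries for two remote sources.
INSERT INTO MY_WHITELIST (REMOTE_SOURCE_NAME, WHITELIST)
  VALUES ('localmssqldb', 'object.A, object.B*');
INSERT INTO MY_WHITELIST (REMOTE_SOURCE_NAME, WHITELIST)
  VALUES ('localhadp', 'SYSTEM.TEST*');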
There are limitations for SQL pushdown operations to SAP ECC pooled and cluster tables.
There is no SQL pushdown for pool tables. For cluster tables, there is limited SQL pushdown.
If SQL pushdown is not available for a given SQL statement, then the SQL is performed within the SAP HANA
system. Pushing down the SQL results in better performance.
Keep the following in mind when using pushdown for cluster tables:
● The SELECT statement’s WHERE clause must contain only fields that are keys in both the parent table
cluster and the contained cluster table.
● The SELECT clause is limited to field names from the cluster table or *.
For example, the cluster table KONV is contained within the table cluster KOCLU. The table cluster KOCLU has
three keys MANDT, KNUMV, and PAGENO. The cluster table KONV has keys MANDT, KNUMV, KPOSN, STUNR,
ZAEHK. The only keys which they have in common are MANDT and KNUMV. So, the WHERE clause cannot
refer to any fields other than MANDT and KNUMV.
However, a statement cannot be pushed down if its SELECT clause contains something other than KONV
field names or * (see the sketch below).
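A hedged illustration of both rules, using the KONV/KOCLU names from above; the key values are
hypothetical.

-- Can be pushed down: the WHERE clause uses only MANDT and KNUMV, the keys shared by
-- the table cluster KOCLU and the cluster table KONV, and the SELECT list contains
-- only KONV fields.
SELECT KPOSN, STUNR FROM KONV WHERE MANDT = '100' AND KNUMV = '0000012345';

-- Cannot be pushed down: the SELECT clause contains an expression that is not a KONV
-- field name or *.
SELECT COUNT(*) FROM KONV WHERE MANDT = '100' AND KNUMV = '0000012345';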
Load cluster and pooled table metadata. This only applies to the SAP ASE ECC adapter.
Before working with SAP ASE ECC cluster or pooled tables, their metadata must be loaded into SAP HANA into
a schema specified by the dictionarySchema attribute. To do this, execute the
replicate_dictionary.sql script, and then create and execute the stored procedures that are listed
below.
Note
We previously created remote subscriptions for the dictionary tables (DD* tables). Because these tables
are typically static, it suffices to materialize these tables once. If there are changes to the contents of the
dictionary tables, you will need to truncate and reload these dictionary tables again by running Step 3.
Note
Beginning with the ABAP Platform 1808/1809 release, cluster and pooled tables are not supported in SAP
S/4 HANA.
Note
<HANA_SCHEMA> should be replaced with the name of the schema where you would replicate the DD*
tables. This schema is also specified as the Dictionary Schema while configuring the remote source.
The source (virtual tables) and target tables must also reside in the same schema as the Dictionary
Schema.
○ call materialize_dictionary_table('<HANA_SCHEMA>','<remote_source_name>','DD02L');
○ call materialize_dictionary_table('<HANA_SCHEMA>','<remote_source_name>','DD03L');
Note
Use this procedure to perform the initial load of the DD03L table if your SAP HANA system has
plenty of free memory.
Note
This procedure performs an initial load of specific rows for the cluster/pooled table in DD03L if
free memory in your SAP HANA target is limited. Be sure that this procedure is run prior to
creating a replication task or flowgraph for this cluster/pooled table.
The SAP HANA adapter provides real-time change data capture capability in order to replicate data from a
remote SAP HANA database to a target SAP HANA database.
Unlike Log Reader adapters, which read a remote database log to get changed data, the SAP HANA
adapter is trigger-based: triggers capture changed data, and the adapter continuously queries the
source database to get the changed data. When a table is subscribed for replication, the adapter
creates three triggers (INSERT, UPDATE, and DELETE) on the table to capture data.
The adapter also creates a shadow table for the subscribed table. Except for a few extra columns for
supporting replication, the shadow table has the same columns as its replicated table. Triggers record changed
data in shadow tables. For each adapter instance (remote source), the adapter creates a Trigger Queue table to
mimic a queue. Each row in shadow tables has a corresponding element (or placeholder) in the queue. The
adapter continuously scans the queue elements and corresponding shadow table rows to get changed data
and replicate them to the target SAP HANA database.
Adapter Functionality
Functionality Supported?
Real-time Yes
Functionality Supported?
ORDER BY Yes
GROUP BY Yes
Related Information
● For real-time change data capture: TRIGGER on source tables or schema of source tables
● For SAP HANA virtual tables used as a source: SELECT
● For SAP HANA virtual tables used as a target (Data Sink) in an .hdbflowgraph: INSERT, UPDATE, and
DELETE
● If <Schema> is not empty and its value is not equal to <User> in Credentials, GRANT CREATE ANY ON
SCHEMA <Schema> TO <User> WITH GRANT OPTION
Set the size of the thread pool that is used to execute jobs that query shadow tables for change
data.
Parameter Description
Thread Pool Size The size of the SAP HANA adapter global thread pool. All SAP HANA adapter remote
sources share the thread pool. The thread pool is used to execute jobs that query shadow tables
for change data.
We recommend that you configure the thread pool size to the number of available processors in the
system, if possible.
Use the SAP HANA adapter to move data from one SAP HANA instance to another. Also included is sample
code for creating a remote source using the SQL console.
Privileges
The following privileges on the schemas that contain the tables to be accessed must be granted to
the configured user on the remote SAP HANA database (a hedged sketch follows this list):
● For real-time change data capture: TRIGGER on source tables or schema of source tables
● For SAP HANA virtual tables used as a source: SELECT
● For SAP HANA virtual tables used as a target (Data Sink) in an .hdbflowgraph: INSERT, UPDATE, and
DELETE
● If <Schema> is not empty and its value is not equal to <User> in Credentials, GRANT CREATE ANY ON
SCHEMA <Schema> TO <User> WITH GRANT OPTION
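A minimal sketch of the grants, assuming a source schema SRC_SCHEMA, a system-object schema
HADP_SCHEMA, and a configured user DP_USER (all placeholder names):

GRANT TRIGGER ON SCHEMA "SRC_SCHEMA" TO "DP_USER";  -- real-time change data capture
GRANT SELECT ON SCHEMA "SRC_SCHEMA" TO "DP_USER";   -- virtual tables used as a source
GRANT INSERT, UPDATE, DELETE ON SCHEMA "SRC_SCHEMA" TO "DP_USER";  -- Data Sink targets
-- Only when <Schema> is not empty and differs from <User> in Credentials:
GRANT CREATE ANY ON SCHEMA "HADP_SCHEMA" TO "DP_USER" WITH GRANT OPTION;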
Database Host Auto Failover Enable auto failover for scale-out SAP HANA. The default is False.
Note
The value of this parameter can be changed when the remote source is suspended.
Port Number The port number of the remote SAP HANA server.
Note
Use an arbitrary port like 1234. Do not put 1433 or 1434 as the instance number.
Auto-Failover Hosts Connection The connection string for scale-out SAP HANA auto failover; the
format is host1:port1;host2:port2;host3:port3. This parameter is configurable if Host Auto-Failover
is True.
Note
The value of this parameter can be changed when the remote source is suspended.
Whitelist Table in Remote Database Enter the name of the table that contains the whitelist in the
remote database.
Schema (Case Sensitive) If <Schema> is not empty and its value is not equal to <User> in
Credentials, the SAP HANA adapter creates a series of system objects under this schema, instead of
under <User>.
Note
This option is no longer required. It is visible solely for backward compatibility purposes. It was
used in previous versions to restrict the viewing of tables to those tables under the given schema.
Now, you can view all tables, regardless of the schema they are located under. For remote sources
that were created in a previous version, this option value must remain unchanged.
distribution=OFF&autocommit=true
Retrieve Last Modified Dates for Objects in Dictionary The process of creating a table dictionary
queries the LastModifiedTimestamp metadata from the source database, which may take a lot of time.
Schema Alias Replacements Schema Alias Schema name to be replaced with the schema given in
Schema Alias Replacement. If given, accessing tables under
it is considered to be accessing tables under the schema
given in Schema Alias Replacement.
Note
The value of this parameter can be changed when the
remote source is suspended.
Schema Alias Replacement Schema name to be used to replace the schema given in
Schema Alias.
Note
If the schema alias is not configured, leave this blank.
Note
The value of this parameter can be changed when the
remote source is suspended.
CDC Properties System Object Prefix (Case Insensitive) The prefix of the names of the SAP HANA
adapter system objects created in the source SAP HANA database by the adapter. We recommend keeping
the default value of HADP_.
In-Memory Sequences If In-Memory Sequences is set to True, the SAP HANA adapter creates sequence
objects with the statement CREATE SEQUENCE ... RESET BY .... The default value is True.
Sequence Cache Size If Sequence Cache Size is set to a value > 1, the SAP HANA adapter creates
sequence objects with the statement CREATE SEQUENCE ... CACHE <cache size> ..., where <cache size>
equals the value of the Sequence Cache Size parameter. If Sequence Cache Size is set to a value
<= 1, no CACHE clause is appended to the CREATE SEQUENCE statement.
Reserve System Sequences Set to True (default) if you want the SAP HANA adapter to reserve the
scan_seq and trigger_seq system sequences, even when all of the subscriptions are dropped or reset.
If you do not want to use this remote source, and you want to remove the environment, first set
this parameter to False, then drop the subscriptions and drop the remote source.
Note
The value of this parameter can be changed when the remote source is suspended.
Minimum Scan Interval in Seconds The minimum interval in seconds at which the adapter scans the
Trigger Queue table to get change data. The default value is 0 (seconds), which means there is no
waiting time before the next scan.
Note
The value of this parameter can be changed when the remote source is suspended.
Maximum Scan Interval in Seconds The maximum interval in seconds at which the adapter scans the
Trigger Queue table to get change data. The default value is 10 (seconds). If the adapter scans the
queue and finds that the queue is empty, it gradually increases the scan interval from the minimum
scan interval to the maximum scan interval.
Note
The value of this parameter can be changed when the remote source is suspended.
DDL Scan Interval in Minutes The interval for detecting DDL changes in the source.
Note
The value of this parameter can be changed when the
remote source is suspended.
Maximum Batch Size The maximum number of consecutive change-data rows on the same table that are
batched together for processing and sending to the Data Provisioning Server. The default value is
128.
Note
The value of this parameter can be changed when the
remote source is suspended.
Batch Queue Size The internal batch queue size. The batch queue size determines the maximum number
of batches of change data that are queued in memory. The default value is 64.
Note
The value of this parameter can be changed when the
remote source is suspended.
Maximum Scan Size The maximum number of rows fetched from the trigger queue table in one scan and
assigned to batch jobs for further processing.
Maintenance User Filter (Case Sensitive) Optional. Enter a source database user name. Source
database transactions (INSERT, UPDATE, and DELETE) conducted by this user are filtered out
(ignored) and not propagated to the SAP HANA target. For example, if you log in to the source
database with this maintenance user and delete a row from a source table that is subscribed for
replication, this row is not deleted from the SAP HANA target table.
Note
Do not use the same name as the SAP HANA database username.
Note
The value of this parameter can be changed when the remote source is suspended. However, the
changed value takes effect only on remote subscriptions created afterward. The existing
subscriptions still use the old value.
Manage System Objects Life-Cycle ● Create and Clear Normally: Normal behavior in the system objects
life cycle (default). Supports dropping and creating system objects.
● Clear Only: Supports dropping system objects, if they exist. This setting is normally used when
unsubscribing tables and cleaning up the environment.
● Create and Reuse: Supports creating system objects, if they do not exist. The SAP HANA adapter
reuses existing objects.
● Reuse Only: No support for dropping or creating system objects. The SAP HANA adapter reuses
existing objects. This setting is normally used in a shadow remote source that retrieves the
subscriptions of another remote source and continues replicating.
Last Committed Sequence Id Last Committed Sequence Id is required only when Manage System Objects
Life-Cycle is set to Reuse Only. You can get its value by executing the following SQL statement in
your target system:
SELECT MAX(LAST_PROCESSED_COMMIT_SEQUENCE_ID)
FROM M_REMOTE_SUBSCRIPTIONS
WHERE SUBSCRIPTION_NAME IN (
  SELECT SUBSCRIPTION_NAME
  FROM "PUBLIC"."REMOTE_SUBSCRIPTIONS"
  WHERE REMOTE_SOURCE_NAME = 'normal_rs'
)
Triggers Record PK Only Set to True to have the triggers record only the primary keys of delta data
during CDC processing. This action may improve DML performance in the source database.
The default value is False. The SAP HANA adapter does not support updating primary key values in
this mode. If you want to enable this functionality, first add a system property in
dpagentconfig.ini:
hanaadapter.recordPKOnly.capture_before_and_after_images=true
Enable Upsert Trigger This option is only valid when Triggers Record PK Only is set to True.
Capture Before and After Images This option is only valid when Triggers Record PK Only is set to
True.
Trigger Queue Table Type Configures the type of trigger queue table.
Source Data Pattern Analysis When set to True, files are created to record the scan history.
You can control the number and size of these files by tuning
the following Agent Adapter Framework Preferences logging
parameters: Log max backup and Log file max file size.
Note
The value of this parameter can be changed when the
remote source is suspended.
Transmit Data in Compact Mode Specifies whether to transmit data in compact mode. If you set
Transmit Data in Compact Mode to True, the SAP HANA adapter packs and sends out the data of one
table together with other tables, which can speed up applying data in the Data Provisioning Server.
However, doing so breaks referential integrity among tables.
Enable Transaction Merge ● True: Transactions on the same remote table are grouped together into
one transaction and replicated to the target.
● False: Transactions are replicated as-is.
Connection Security Enable SSL Encryption Specifies whether to enable SSL encryption on connections
to a remote SAP HANA database. The default value is False.
Note
The value of this parameter can be changed when the
remote source is suspended.
Validate Server Certificate Specifies whether to validate the certificate of a remote SAP
HANA server.
Note
The value of this parameter can be changed when the
remote source is suspended.
Host Name in Server Certificate Controls the verification of the host name field of the server
certificate:
● If not set, the host name used for the connection is used for verification. Note that SSL is
name-based; connecting to an IP address, or to “localhost”, is unlikely to work.
● If set to a string:
○ If the string is “*”, any name matches.
○ If the string starts with “CN=”, it is treated as a common name, and the textual representation
of the common name entry in the certificate must be exactly the same.
○ Otherwise, the host name in the server certificate must match this string (case insensitive).
Note
The value of this parameter can be changed when the
remote source is suspended.
Use Agent Stored Credential Set to True to use credentials that are stored in the Data Provisioning
Agent secure storage.
Note
When you use credentials stored in the agent's secure storage, you must still specify the user name
in Credentials > User. Additionally, the Credentials Mode must not be None or empty.
Credentials Credentials Mode Remote sources support two types of credential modes to access a
remote source: technical user and secondary credentials.
Note
The value of this parameter can be changed when the remote source is suspended.
Basic
The following sample code illustrates how to create a remote source using the SQL console.
Example
Sample Code
Example
Sample Code
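A hedged sketch of creating an SAP HANA adapter remote source; the adapter name HanaAdapter matches
the agent preference examples elsewhere in this guide, while the agent name, connection values, and
CONFIGURATION property names are placeholders.

-- Hedged sketch: property names inside CONFIGURATION are illustrative placeholders.
CREATE REMOTE SOURCE "MyHanaSource" ADAPTER "HanaAdapter"
AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="host">myhanahost</PropertyEntry>
  <PropertyEntry name="port">30015</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>DP_USER</user>
  <password>MyPassword1</password>
</CredentialEntry>';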
Information about DDL propagation when using the SAP HANA adapter.
Enabling DDL propagation can impact the performance of the source SAP HANA database, so setting an
appropriate value for the remote source option DDL Scan Interval in Minutes matters.
From the time the DDL changes occur on the source database to the time they are propagated to the
target SAP HANA database, no DML changes on the affected tables are allowed. At the configured
interval (DDL Scan Interval in Minutes; 10 minutes by default), the SAP HANA adapter queries the
metadata of all subscribed tables from the source SAP HANA database and determines whether DDL
changes have occurred. If changes are detected, it propagates the DDL changes to the target
database through the Data Provisioning Server.
Because the HANA adapter detects DDL changes by querying source HANA system tables, the source
database might be burdened if you configure a small value for the DDL Scan Interval in Minutes option.
However, configuring a large value would increase the latency of DDL propagation. Therefore, you should
experiment to figure out what value works best for you. If changes to the DDL are rare, you might even want to
disable DDL propagation by setting the value of the DDL Scan Interval in Minutes option to zero. Setting to zero
prevents the HANA adapter from querying metadata from the source database periodically.
Limitation
Remember that during the time period between when DDL changes occur on the source database and when
they are replicated to the target HANA, there must be no DML changes on the subscribed source tables.
Related Information
Use a shadow remote source to reduce maintenance while performing real-time replication.
Context
During real-time replication, if exceptions occur that prevent replication under the current remote
source and cannot be ignored, your only option is to drop and re-create the replication tasks. This
limitation can be very cumbersome in production environments. In this scenario, you can create a
shadow remote source to mitigate the problem.
The SAP HANA adapter is based on triggers, and it creates system objects when setting up the environment,
such as triggers, shadow tables, and trigger_queue tables. Every remote source has a trigger_queue table, and
every table has a relevant shadow table. A shadow remote source continues to replicate, so all the
subscriptions under it reuse those system objects.
Procedure
For the Manage System Objects Life-Cycle parameter, choose Reuse Only, and for the Last Committed
Sequence Id parameter, type in the ID. The Schema and System Object Prefix parameters must be the
same as for the normal_rs remote source.
3. Create virtual tables at shadow_rs and create subscriptions (for example, subs1_shadow,
subs2_shadow, subs3_shadow, and so on).
4. QUEUE and DISTRIBUTE your remote subscriptions.
When retrieving existing subscriptions, the SAP HANA adapter checks whether the subscribed tables
are legal. If a user subscribes to the wrong table, the following exception occurs: “Add the
subscription for table [<table_name>] is prohibited when Manage System Objects Life-Cycle is Reuse
Only! Please check dpagent framework.trc for the recovery steps”.
Related Information
There are times when you may want to limit access to the tables in a source database. For data
provisioning log reader adapters, as well as SAP HANA and SAP ECC adapters, an efficient way to
limit access is to create a whitelist.
Restricting access to only those tables that are to be replicated is done by creating a whitelist of source
database objects in a separate table.
Note
The whitelist impacts only the virtual table created and the replications created after the whitelist was
created.
Note
● The whitelist table, which can have any name, must have two columns named
REMOTE_SOURCE_NAME and WHITELIST.
● The whitelist items are separated by a comma.
● You can use an asterisk (*) to represent any character or empty string. However, the asterisk must be
placed at the end of a whitelist item. Otherwise, it is treated as a normal character.
● You can add multiple rows of whitelisted tables for a single remote source.
To add a whitelist for the remote source called “localmssqldb”, insert a row into the whitelist
table (a hedged sketch follows below). A WHITELIST value of object.A, object.B*, and so on, means
that the table (or procedure) object.A and the tables (or procedures) starting with object.B are
whitelisted for the remote source “localmssqldb”.
To add a whitelist for the remote source called “localhadp”, insert a row into the whitelist table:
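A hedged sketch that also creates the whitelist table itself, assuming the name MY_WHITELIST (any
name works, as noted above); the object names are placeholders.

-- Create the whitelist table with the two required column names.
CREATE TABLE MY_WHITELIST (REMOTE_SOURCE_NAME VARCHAR(256), WHITELIST VARCHAR(4000));

-- Whitelist entry for the remote source "localhadp".
INSERT INTO MY_WHITELIST (REMOTE_SOURCE_NAME, WHITELIST)
  VALUES ('localhadp', 'SYSTEM.TEST, SYSTEM.T1*');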
For critical scenarios determined by your business requirements, you can use the agent configuration tool to
disable write-back functionality on supported adapters and run the adapters in read-only mode.
Disabling write-back functionality may help to prevent unexpected modifications to source tables accessed by
an adapter.
Caution
Setting an adapter to read-only mode affects all remote sources that use the adapter.
Procedure
○ In graphical mode, choose Config > Preferences, and then select Adapter Framework.
○ In command-line interactive mode, choose Set Agent Preferences in the Agent Preferences menu.
For example, to disable write-back on the Microsoft SQL Server Log Reader, Oracle Log Reader, and SAP
HANA adapters:
MssqlLogReaderAdapter,OracleLogReaderAdapter,HanaAdapter
Results
The specified adapters are switched to read-only mode and write-back functionality is disabled.
Tip
On adapters that are operating in read-only mode, attempted SQL statements other than SELECT result in
adapter exceptions that are logged in the Data Provisioning Agent framework trace file.
For example:
Related Information
The SDI DB2 Mainframe adapter is designed to replicate transactional operations from IBM DB2 UDB on
z/OS to SAP HANA.
The adapter extracts data from IBM DB2 UDB on z/OS databases as initial load and real-time change data
capture.
Note
The SDI DB2 Mainframe adapter does not come pre-installed with the Data Provisioning Agent; you must
install it separately. Before installing the SDI DB2 Mainframe adapter, you must install the Data Provisioning
Agent.
Adapter Functionality
● Initial load
● Real-time change data capture
Note
DDLs are not supported. Also, transactional operations include only INSERT, UPDATE, and DELETE.
Encoding schemes supported on the source system: ASCII, EBCDIC, and Unicode
● SELECT (WHERE, JOIN, GROUP BY, DISTINCT, TOP or LIMIT, ORDER BY)
Related Information
As you work with the SDI DB2 Mainframe adapter, it is helpful to know how the adapter interacts with the
various SAP HANA and IBM DB2 components.
The following leads you through a workflow of events that take place after the first subscription
is enabled. The Data Provisioning Agent routes this request to the SDI DB2 Mainframe adapter.
Note
This workflow applies to both Auto and Manual modes, except for steps 4 through 11, which apply
only to Auto mode.
Related Information
Ensure that you have made the necessary changes to your mainframe system.
Note
Be sure to grant permissions for these stored procedures to the replication TSO user.
● For manual mode, the following stored procedures must be installed on DB2:
You can use either the command-line utility or the GUI mode to install the adapter.
Context
Install the SDI DB2 Mainframe adapter separately; it is not included in the Data Provisioning Agent.
Procedure
1. Download the SDI DB2 MAINFRAME ADAPTER 2.x.x component TGZ file.
Find this file in the software download area where you commonly find the download for the Data
Provisioning Agent and other components.
2. Extract the TGZ file.
3. To use command line, execute the following command. To use the GUI installer, skip to the next step.
cd HANA_SDI_DB2MainframeAdapter_20_LIN_X86_64
./hdbinst --silent --batch --path=<dpagent installation path>
cd HANA_SDI_DB2MainframeAdapter_20_LIN_X86_64
./hdbsetup
○ <DPAgent_root>/plugins/com.sap.hana.dp.db2mainframelogreaderadapter-n.n.n.jar
○ <DPAgent_root>/lib/libDB2MainframeLogReaderAdapter.so
10. Restart the Data Provisioning Agent.
After Data Provisioning Agent is started, you should be able to see the following messages in
<DPAgent_root>/log/framework.trc:
Related Information
Configuration parameters for the SDI DB2 Mainframe adapter in the Data Provisioning Agent.
Adapter server name The name of the SDI DB2 Mainframe DB2MFAdapterOCSServer
adapter server that receives data
changes from the DB2 Mainframe data
server.
Adapter server port The port number for the SDI DB2 Main 17000
frame adapter's OCS Listener.
Time to wait for Replication Agent to connect (Seconds) The maximum time the SDI DB2 Mainframe
adapter waits for the Replication Agent to start and send the first transaction to the adapter.
Default: 60
Note
Ensure the value of this parameter is set to double or more of the retry parameter value in the
mainframe configuration file.
Time to wait for Replication Agent to reconnect (Seconds) The maximum number of seconds that the
SDI DB2 Mainframe adapter waits for the Replication Agent to reestablish the connection from the
Replication Agent to the SDI DB2 Mainframe adapter. Default: 30
Note
To avoid data loss, rematerialization is required when starting replication again.
Log level (ERROR|WARN|INFO|DEBUG) The log level for the SDI DB2 Mainframe adapter dynamic library.
Default: INFO
Maximum log file size (KB) Limit the size of the dynamic library log file. Default: 100000
Change the OCS Server Port Number Using Command-Line Utility [page 472]
Start the Configuration Tool [Command Line] [page 51]
Install the Replication Agent to work with the SDI DB2 Mainframe adapter.
Context
The GUI installer walks you through the installation process. Provide information about your DB2 system, OCS
Server, logon credentials, and so on.
Procedure
1. Download the SDI DB2 MF ADAPT REP AGENT 2.0 component (REPAGENTSDI*.EXE), and execute the file.
Parameter Notes
For example, if the mainframe ID is “MDAWAR”, change the JCL Line 1 text box to
MDAWARAB JOB CLASS=A,NOTIFY=&SYSUID
JCL Line 2
JCL Line 3
High Level Qualifier You can find all of your Replication Agent JCLs under the PDS whose qualifier starts
with HLQ.* (For example, “MDAWAR.*”)
Volume
Unit
Work Unit
Parameter Notes
DB2 Version
RepAgent System table creator Qualifier of LTMMARKER and LTMOBJECTS system tables
name
Data Sharing
Values for these parameters must be specified only when Replication Agent is started in manual mode;
otherwise the parameters can be left unaltered. In auto mode, these parameters are populated
automatically during run-time.
Parameter Notes
OCSServer Name SDI DB2 Mainframe adapter server name as updated in the
<DPAgent_root>/configuration/com.sap.hana.dp.adapterframework/DB2MainframeLogReaderAdapter/
DB2MainframeLogReaderAdapter.ini file
Parameter Notes
User ID Your mainframe user ID to be used for FTPing product libraries to the main
frame.
Mainframe Host Name Mainframe server that you are using (either host name or IP address)
VOL/UNIT Assignment
Log FTP Session? Provide a file name for a file that is created and used for capturing log
information during FTP. This file helps you analyze issues that might occur during dataset upload.
The file name specified here has to be a location on the local machine where the installer is
being executed.
7. Click Install.
8. Click Installation Complete.
9. Log on to the mainframe, and go to the PDS with the High Level Qualifier used in Step 3.
10. From the above list, open the PDS whose low-level qualifier is JCL.
11. Run the following JCLs in HLQ.*.JCL in order:
○ RECEIVE: This job runs the IKJEFT01 program to use TSO Receive Command to build and populate the
product libraries
○ ALLOC: This job creates TRUNC point Dataset and Generations Data Group (GDG)
12. Execute the following SQL statements:
○ GRANT: To give authorization for BIND and EXECUTE on the plan used
○ SQLINT: To create LTMOBJECTS and LTMMARKER tables
13. Execute LTMBIND job from HLQ.*.JCL PDS
Note
If you get an authorization error, issue the GRANT command for the BIND privilege.
Results
Before starting Replication Agent, be sure that the following prerequisites are established:
● Link libraries must be APF authorized. Contact your mainframe team for APF authorization.
● Tables to be replicated through Replication Agent must be created with the “DATA CAPTURE CHANGES”
option specified.
● The LTMOBJECT and LTMMARKER tables owner name must match the creator parameter value assigned
in the mainframe configuration file.
Related Information
Set up the configuration file for Replication Agent for SDI DB2 Mainframe adapter.
Update some of the parameters in the configuration file to work with the SDI DB2 Mainframe adapter.
Note
In manual mode, the configuration file must be updated manually. When using auto mode, most adapter-
related configuration parameters are updated automatically.
Section Parameters
DB2MFAdapterOCSServer Parameters The following parameters are auto-populated in auto mode; there is
no need to update them manually. However, in manual mode, these parameters should match the remote
source configuration and OCSServer configuration.
● DP_ccid=<HANA CCID>
● DPCsetname=<HANA Code Setname>
Trace Configs The trace configurations are helpful during troubleshooting; however, they are optional
during a normal execution.
Note
Remember to turn off trace functions after you obtain the necessary information. If
you allow trace functions to continue, the trace output files can fill and consume disk
space, causing abends or impaired LTM for MVS performance.
*------------------------RS configs------------------------------------
* Parameter names are not case sensitive.
*----------------------------------------------------------------------
*
*----------------------------------------------------------------------
The OCS port number (and other adapter preference parameters) can be changed by using Data Provisioning
Agent Configuration Tools (both GUI or Command-Line Utility).
Context
The following instructions are for changing the OCS port number by using the Command-Line Utility:
Procedure
dml_test@xiyl50833394a:/usr/sap/dataprovagent> bin/agentcli.sh --configAdapters
A wizard appears where you can change values for the adapter preferences. Press Enter to skip optional
or default settings. If a setting cannot be skipped, it is required.
4. Modify the port number in the Enter Adapter Server Port[17000] parameter.
Results
The changes take effect after you restart the Data Provisioning Agent.
Note
In manual mode, make sure to update the Replication agent configuration file with the updated port
number.
Related Information
Prepare the IBM DB2 JDBC JAR files to use one of the DB2 Mainframe adapters.
To use one of the DB2 Mainframe adapters, copy the following IBM DB2 JDBC JAR files to the /lib
folder of the Data Provisioning Agent installation directory (<DPAgent_root>/lib).
● db2jcc4.jar (Required)
You can download this file here: http://www-01.ibm.com/support/docview.wss?uid=swg21363866 .
Download the JDBC JAR file according to your DB2 database version.
● db2jcc_license_cisuz.jar (Required)
You can find information about this file here: http://www-01.ibm.com/support/docview.wss?
uid=swg21191319
● These JAR files are available in the installation directory after you installed the IBM DB2 client. For
example, on a Windows system, the JAR files are located in C:\Program Files\IBM\SQLLIB\java.
● Download them from the IBM Support and Download Center.
Note
If the source z/OS DB2 system contains a non-English CCSID table space, you are required to update the
JVM to an internationalized version. At a minimum, the charsets.jar file within the current JVM should
contain the required CharToByteCP<XXX>.class, where <XXX> corresponds to the source system’s
language locale.
Options for connecting to the remote mainframe data server. Also included is sample code for creating a
remote source using the SQL console.
Database Name The DB2 database name where the LTMOBJECTS and
LTMMARKER tables reside.
The following cleaned-up excerpt of a DB2 Commands panel shows the -DIS DDF command and its report,
from which you can read the DDF location, TCP port, and IP address:

DB2 COMMANDS                                SSID: P8L0
Position cursor on the command line you want to execute and press ENTER
Cmd 1 ===> -DIS DDF

DSNL080I  -P8L0 DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I  STATUS=STARTD
DSNL082I  LOCATION          LUNAME             GENERICLU
DSNL083I  DDFP8L0           DESAPW00.DB2P8L0   -NONE
DSNL084I  TCPPORT=9023  SECPORT=0  RESPORT=9024  IPNAME=-NONE
DSNL085I  IPADDR=::10.17.200.30
DSNL086I  SQL DOMAIN=ihsapke.wdf.sap.corp
DSNL105I  CURRENT DDF OPTIONS ARE:
DSNL106I  PKGREL = COMMIT
DSNL099I  DSNLTDDF DISPLAY DDF REPORT COMPLETE
z/OS DB2 Additional Info Bind Packages When this option is set to Yes, the DB2 mainframe adapter
automatically checks and binds all of the required missing packages.
Note
If any necessary packages are missing, an error occurs.
Use Auto Mode If you set this parameter to True, the replication agent starts automatically.
If you set the parameter to False, you must start the replication agent manually and create the
LTMMARKER and LTMOBJECTS tables manually on the mainframe.
LTM Configuration File Location The Replication Agent's configuration file location.
Schema Alias Replacements Alias Name The name of the schema in the original system.
Alias Replacements The name of the schema in the current system that replaces the Schema Alias
name.
Security Properties Use SSL Specifies whether you are using SSL.
Credential Properties Credentials Mode Remote sources support two types of credential modes to
access a remote source: technical user and secondary
credentials.
User Name The TSO user that is used to start the replication agent job on the mainframe.
Note
This user name is only used as the owner of the replication agent start job. The qualifier of the
system tables (LTMOBJECTS and LTMMARKER) should match the creator parameter in the configuration
file. The creator parameter in the configuration file does not need to be the same as the user
name parameter in the remote source.
Example
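A hedged sketch of creating a remote source for the SDI DB2 Mainframe adapter in the SQL console.
The adapter name DB2MainframeLogReaderAdapter is inferred from the plugin file name shown earlier;
treat it, the property names, and all values as assumptions to adapt to your landscape.

-- Hedged sketch: adapter and property names are illustrative placeholders.
CREATE REMOTE SOURCE "MyDB2zSource" ADAPTER "DB2MainframeLogReaderAdapter"
AT LOCATION AGENT "MyAgent"
CONFIGURATION '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="configurations">
  <PropertyEntry name="host">mymainframe.example.com</PropertyEntry>
  <PropertyEntry name="port">9023</PropertyEntry>
  <PropertyEntry name="databaseName">MYDB</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="credential">
  <user>TSOUSER</user>
  <password>MyPassword1</password>
</CredentialEntry>';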
Related Information
Using a schema alias can help you manage multiple schemas, remote sources, and tables more easily.
The Schema Alias and Schema Alias Replacement options, available in the remote source configuration
parameters for some Data Provisioning adapters, allow you to switch easily between schemas, remote
sources, and tables. The Schema Alias is the name of the schema in the original system. The Schema
Alias Replacement is the name of the schema in the current system that replaces the Schema Alias
name.
A common use case is to create a remote source pointing to a development database (for example, DB_dev),
and then create virtual tables under that remote source. Afterward, you may switch to the production database
(for example, DB_prod) without needing to create new virtual tables; the same tables exist in both DB_dev and
DB_prod, but under different schema and databases.
During the development phase, you may create a virtual table for a source table OWNER1.MYTABLE in DB_dev,
for example. Note that OWNER1.MYTABLE is the unique name of the source table, and it is a property of the
virtual table. With it, the adapter knows which table in the source database it is expected to access. However,
when you switch to the production database (DB_prod), there is no OWNER1.MYTABLE, only
OWNER2.MYTABLE. The unique name information of the virtual table cannot be changed once created.
You can resolve this problem using the Schema Alias options. In this case, we want to tell the adapter to replace
OWNER1 with OWNER2, so that when we access OWNER1.MYTABLE, the adapter actually accesses
OWNER2.MYTABLE. Here, OWNER1 is the Schema Alias from the perspective of DB_prod, while OWNER2 is the
Schema Alias Replacement.
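For instance, in the scenario above, the remote source that points to DB_prod would carry these two option values (the option names are as they appear in the remote source configuration):
Schema Alias: OWNER1
Schema Alias Replacement: OWNER2
With these values set, a query against the virtual table whose unique name is OWNER1.MYTABLE is answered from OWNER2.MYTABLE in DB_prod.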
Related Information
The SOAP adapter provides access to SOAP web services via SAP HANA SQL.
The SOAP adapter is a SOAP web services client that can talk to a web service using the HTTP protocol to
download the data. The SOAP adapter uses virtual functions instead of virtual tables to expose server-side
operations.
Related Information
Configuration settings for creating a SOAP adapter remote source. Also included is sample code for creating a
remote source using the SQL console.
Connection
WSDL File: The location of the WSDL file. Enter a URL or a path to the local WSDL file on the machine where the Data Provisioning Agent is installed.
Use System Proxy: If set to Yes, the adapter uses the proxy information saved in the dpagentconfig.ini file (http.proxyHost, http.proxyPort) or set up in the DP Agent Configuration Tool.
Socket Timeout (milliseconds): The maximum time of inactivity between two data packets after the connection has been established. The default value is 6000 ms.
Connection Timeout (milliseconds): The time allowed for establishing the connection with the remote host. The default value is 6000 ms.
Treat WebServiceError (SOAP Fault) as failure: If set to Yes, the web services call fails when a failure occurs. The default value is No. By default, the SOAP adapter writes the fault to one of the output columns if the call fails. In certain scenarios, you may want the call itself to fail, for example, when the proxy is incorrectly configured.
● If set to No, the select call succeeds and the actual error is populated in the SOAP_FAULT column.
● If set to Yes, the select call itself fails and the error is returned to the caller.
WS-Security Password Type:
● None: Choose this option if you do not want to use WS-Security.
● plainText: Choose this option if you intend to use WS-Security with a plaintext password.
● Digest: Choose this option if you intend to use WS-Security with an encrypted password.
Credentials
Credentials Mode: Remote sources support two types of credential modes to access a remote source: technical user and secondary credentials.
SQL Example
The following is an example of how to set up the SOAP adapter to use a remote web service (http://
www.webservicex.net/stockquote.asmx?WSDL). You can use the Web-based Development Workbench to
complete some of these tasks (for example, creating a virtual function).
Sample Code
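As a sketch only, the main steps look like the following; all object names are hypothetical, and the CONFIGURATION payload is adapter-specific, so adapt it to the parameters in the table above:
-- Register the agent and the SOAP adapter (hypothetical names)
CREATE AGENT "SOAP_AGENT" PROTOCOL 'TCP' HOST 'agent_host' PORT 5050;
CREATE ADAPTER "SOAPAdapter" AT LOCATION AGENT "SOAP_AGENT";
-- Create the remote source pointing at the stock quote WSDL; the
-- CONFIGURATION string carries the WSDL File, proxy, and timeout
-- parameters described in the table above (elided here)
CREATE REMOTE SOURCE "STOCKQUOTE_WS" ADAPTER "SOAPAdapter"
   AT LOCATION AGENT "SOAP_AGENT"
   CONFIGURATION '...'
   WITH CREDENTIAL TYPE 'PASSWORD' USING '...';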
The Teradata adapter can be used to connect to a Teradata remote source and create a virtual table to read
from and write to.
Note
Before registering the adapter with the SAP HANA system, ensure you have downloaded and installed the
correct JDBC libraries. See the SAP HANA smart data integration Product Availability Matrix (PAM) for
details. Place the files in the <DPAgent_root>/lib folder.
Adapter Functionality
Functionality Supported?
Real-time Yes
ORDER BY Yes
GROUP BY Yes
Related Information
Authentication and user privileges requirements for connecting to a Teradata remote source.
Authentication
If you are using LDAP, provide a user name and password when setting up your remote source.
If you are using Kerberos, the adapter uses the default Kerberos settings of the computer on which it is
deployed. If the Kerberos configuration file (krb5.conf) is in a nonstandard location, the path must be
specified via the java.security.krb5.conf system property by adding it to the dpagent.ini file. The Realm
and KDC connection parameters in the remote source are optional; both must be specified in order to override
the computer's default krb5.conf settings. To use Kerberos, use the Kerberos principal name for the user
name with the corresponding password.
The following database user privileges are required for accessing databases, tables, and so on, so that the
adapter can read metadata. You need SELECT access on the following DBC tables:
● "DBC"."UDTInfo"
● "DBC"."DBase"
● "DBC"."AccessRights"
● "DBC"."TVM"
● "DBC"."TVFields"
In addition, the following privileges are required on the database where the adapter creates its system objects for real-time replication:
● CREATE TABLE
● DROP TABLE
● CREATE TRIGGER
● DROP TRIGGER
● CREATE PROCEDURE
● DROP PROCEDURE
Parameter Description
Thread Pool Size: The size of the Teradata global thread pool, which is shared by Teradata adapter remote sources. The thread pool is used to execute jobs that query shadow tables for change data.
Options for setting up the connection to the remote Teradata data server. Also included is sample code for
creating a remote source using the SQL console.
Configuration parameters
Connection
Host: Host name or IP address on which the remote Teradata data server is running.
Encoding: Session encoding between the adapter and Teradata. Some restrictions apply to UTF-8; for example, character columns with Graphic encoding are not supported.
Encrypt traffic: Specifies whether the traffic between the adapter and the database is encrypted. If turned off, data exchanged between the adapter and the database is unencrypted, and anyone with access to the network may be able to read the data. This setting does not affect logon data, because logon data is always sent encrypted by the Teradata JDBC driver.
JDBC FastExport: Speeds up retrieving large amounts of data from Teradata (disabled when using Kerberos authentication).
Additional Connection Parameters: Extra Teradata JDBC connection options. The parameters must be specified in the following format: key=value,key=value,... For the available options, see http://developer.teradata.com/doc/connectivity/jdbc/reference/current/frameset.html
Note
The value of this parameter can be changed when the re
mote source is suspended.
System Object Prefix: The prefix of the names of the Teradata adapter system objects created in the source Teradata database by the adapter. We recommend keeping the default value of TADP_.
Shadow Table Prefix: The prefix of the names of the Teradata adapter shadow tables created in the source Teradata database by the adapter.
Stored Procedure Suffix: The suffix of the names of the Teradata adapter stored procedures created in the source Teradata database by the adapter.
Trigger Suffix: The suffix of the names of the Teradata adapter triggers created in the source Teradata database by the adapter.
Note
The value of this parameter can be changed when the re
mote source is suspended.
Maximum Scan Interval in Seconds: The maximum interval in seconds at which the adapter scans the Trigger Queue table to get change data. The default value is 10 seconds. If the adapter scans the queue and finds that the queue is empty, it gradually increases the scan interval from the minimum scan interval to the maximum scan interval.
Note
The value of this parameter can be changed when the remote source is suspended.
Minimum Scan Interval in Seconds: The minimum interval in seconds at which the adapter scans the Trigger Queue table to get change data. The default minimum scan interval is 3 seconds, to avoid putting excessive load on the database with frequent repeat scans.
Note
The value of this parameter can be changed when the remote source is suspended.
DDL Scan Interval in Minutes: The interval for detecting DDL changes in the source. A zero or negative integer disables this parameter.
Note
The value of this parameter can be changed when the remote source is suspended.
When querying the trigger queue table, the scanner may encounter a “deadlock exception”. Use this option to set the maximum number of retries before failing (if the retries do not succeed). The default value is 0, which means the adapter does not retry any scans when encountering deadlock exceptions.
Scan Retry Wait Time in Seconds: The number of seconds for the scanner to wait before trying again to query the trigger queue table. A retry occurs only when you encounter a “deadlock exception”. The default value is 30 seconds.
Note
The value of this parameter can be changed when the remote source is suspended.
Connection Security
Use Agent Stored Credential: Set to True to use credentials that are stored in the DP Agent secure storage.
Credentials
Credentials Mode: Remote sources support two types of credential modes to access a remote source: technical user and secondary credentials.
Note
The value of this parameter can be changed when the remote source is suspended.
Example
Sample Code
Related Information
Using Prefix and Suffix Options to Manage System Object Name Lengths [page 488]
Permissions for Accessing Multiple Schemas [page 489]
Teradata DDL Propagation Scan Interval [page 489]
Store Source Database Credentials in Data Provisioning Agent [Graphical Mode] [page 93]
The Teradata adapter creates a number of system objects on the source database in order for it to manage
real-time replication. These objects include shadow tables, triggers, and stored procedures. If your Teradata
database has a 30-character name limit, the default remote source settings can lead to Teradata adapter
system objects with names greater than 30 characters. By default, the Teradata adapter’s system object
prefixes and suffixes add up to 12 extra characters, which means that only tables with names of 18 characters
(or less) are supported.
To maximize the number of table name characters supported, edit the four system object prefix and suffix
properties (System Object Prefix, Shadow Table Prefix, Stored Procedure Suffix, and Trigger Suffix) to one
character each; they cannot be empty. Doing so ensures that the Teradata adapter uses at most five extra
characters when creating its system objects, meaning that table names of up to 25 characters can be
supported when the 30-character database limit is in place.
Note
When upgrading, if the Teradata adapter tries to read those properties and they are not present (for
example, they were not part of the remote source before the upgrade), the adapter uses the default values.
When the user edits the remote source after the upgrade, they see those default values in the remote
source description.
Related Information
Grant the necessary permissions before accessing multiple schemas in a Teradata source.
To access multiple schemas, you need the following permissions assigned to you. In the following example, you
are USER2, and you are accessing tables, creating procedures, executing procedures, and so on, belonging to
USER1.
Note
The EXECUTE PROCEDURE permission allows USER1 to execute the procedures in database USER2.
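As a sketch, the corresponding Teradata GRANT statements for the USER1/USER2 scenario could look like the following; grant only what your scenario actually requires:
Sample Code
GRANT SELECT ON USER1 TO USER2;
GRANT CREATE TABLE ON USER1 TO USER2;
GRANT DROP TABLE ON USER1 TO USER2;
GRANT CREATE TRIGGER ON USER1 TO USER2;
GRANT DROP TRIGGER ON USER1 TO USER2;
GRANT CREATE PROCEDURE ON USER1 TO USER2;
GRANT DROP PROCEDURE ON USER1 TO USER2;
GRANT EXECUTE PROCEDURE ON USER1 TO USER2;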
The DDL Scan Interval in Minutes Adapter Preference option is important to review when setting up DDL
propagation.
Enabling DDL propagation can impact the performance of the source Teradata database, so it is important to
set an appropriate value for the remote source option DDL Scan Interval in Minutes.
From the time a DDL change occurs on the source database to the time it is propagated to the target, no DML
changes on the affected tables are allowed. At configured intervals (DDL Scan Interval in Minutes), the adapter
queries Teradata system tables to detect DDL changes.
Because the Teradata adapter detects DDL changes by querying source Teradata system tables, the source
database might be burdened if you configure a small value for the DDL Scan Interval in Minutes option.
However, configuring a large value would increase the latency of DDL propagation. Therefore, you should
experiment to figure out what value works best for you. If changes to the DDL are rare, you might even want to
disable DDL propagation by setting the value of the DDL Scan Interval in Minutes option to zero. This prevents
the Teradata adapter from querying metadata from the source database periodically.
Related Information
For critical scenarios determined by your business requirements, you can use the agent configuration tool to
disable write-back functionality on supported adapters and run the adapters in read-only mode.
Disabling write-back functionality may help to prevent unexpected modifications to source tables accessed by
an adapter.
Caution
Setting an adapter to read-only mode affects all remote sources that use the adapter.
Procedure
○ In graphical mode, choose Config > Preferences, and then select Adapter Framework.
○ In command-line interactive mode, choose Set Agent Preferences in the Agent Preferences menu.
3. For the Read-only Adapters property, specify the list of adapters for which you want to disable write-back
functionality, separating each adapter with a comma.
For example, to disable write-back on the Microsoft SQL Server Log Reader, Oracle Log Reader, and SAP
HANA adapters:
MssqlLogReaderAdapter,OracleLogReaderAdapter,HanaAdapter
The specified adapters are switched to read-only mode and write-back functionality is disabled.
Tip
On adapters that are operating in read-only mode, attempted SQL statements other than SELECT result in
adapter exceptions that are logged in the Data Provisioning Agent framework trace file.
For example:
Related Information
6.27 Twitter
The Twitter adapter provides access to Twitter data via the Data Provisioning Agent.
Twitter is a social media Web site that hosts millions of tweets every day. The Twitter platform provides access
to this corpus of data by exposing it through a RESTful API that can be consumed with any HTTP client. The
Twitter APIs let you consume tweets in different ways, from getting tweets from a specific user to performing a
public search or subscribing to real-time feeds for specific users or the entire Twitter community.
Adapter Functionality
● SELECT, WHERE
The Twitter adapter is a streaming data provisioning adapter written in Java, and utilizes the Adapter SDK to
provide access to Twitter data via SAP HANA SQL (with or without Data Provisioning parameters) or via virtual
functions.
Using the Adapter SDK and the Twitter4j library, the Twitter adapter consumes tweets from Twitter and
converts them to AdapterRow objects to send to the SAP HANA server. The tweets are exposed to the SAP
HANA server via virtual tables. Each Status table is essentially a map of the JSON data returned from Twitter
into tabular form. Currently, the following columns are exposed in all Status tables:
Id BIGINT
Truncated TINYINT
InReplyToStatusId BIGINT
InReplyToUserId BIGINT
Favorited TINYINT
Retweeted TINYINT
FavoriteCount INTEGER
Retweet TINYINT
RetweetCount INTEGER
RetweetedByMe TINYINT
PossiblySensitive TINYINT
CreatedAt DATE
Latitude DOUBLE
Longitude DOUBLE
UserId BIGINT
CurrentUserRetweetId BIGINT
Configure your Data Provisioning agent and SAP HANA server to use the Twitter adapter.
Though the Twitter adapter is installed with the Data Provisioning agent, you must configure your agent to
communicate with the SAP HANA server. In addition, you must configure your SAP HANA server and create a
remote source.
Configure proxy settings in the dpagentconfig.ini file by adding the following to the file:
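Assuming the same proxy properties referenced for the SOAP adapter (http.proxyHost and http.proxyPort), the entries would look similar to the following, with a hypothetical host and port:
Sample Code
http.proxyHost=proxy.example.com
http.proxyPort=8080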
Related Information
Procedure
Procedure
Results
The following directory structure is created, allowing you to create virtual tables or virtual functions as needed.
NAME
CAP_NON_TRANSACTIONAL_CDC
CAP_WHERE
CAP_LIKE
CAP_SIMPLE_EXPR_IN_WHERE
CAP_OR
CAP_SELECT
CAP_BIGINT_BIND
CAP_TABLE_CAP
CAP_COLUMN_CAP
CAP_METADATA_ATTRIBUTE
See the description of these capabilities in the Javadoc documentation, which can be found in
<DPAgent_root>/doc/javadoc.
Remote source configuration options for the Twitter adapter. Also included is sample code for creating a
remote source using the SQL console.
Option Description
Sample Code
SAP HANA smart data integration adds database objects and communication channels to the SAP HANA
security landscape.
Some aspects of SAP HANA smart data integration require specific security-related considerations such as the
communication channel between SAP HANA and the Data Provisioning Agent. However, in general, SAP HANA
smart data integration follows standard SAP HANA security concepts. For complete information, refer to the
SAP HANA Security Guide.
Related Information
7.1 Authentication
Authentication is the process of verifying the identity of database users accessing SAP HANA. SAP HANA
supports several authentication mechanisms, several of which can be used for the integration of SAP HANA
into single sign-on (SSO) environments.
For complete information about authentication and single sign-on within SAP HANA, refer to the SAP HANA
Security Guide.
For remote source systems accessed by Data Provisioning adapters, user name and password authentication is
supported. That is, users authenticate themselves by entering their user name and password for the remote
source.
For custom adapters, the developer is free to implement any type of authentication.
Some Data Provisioning adapters, such as Hive, Teradata, and Impala, support Kerberos authentication. When
using Kerberos authentication, only encryption types with a key length of less than 256 bits are supported.
This limitation comes from the SAP JVM packaged with the DP Agent. If you need to use strong encryption,
replace the SAP JCE policy files.
Related Information
An overview of how to configure SSL connections in SAP HANA smart data integration.
You can configure SSL connections from the Data Provisioning Agent to SAP HANA server and, depending on
the adapter you are using, from Data Provisioning Agent to your remote database.
Note
To configure SSL for the OData adapter, which does not use the Data Provisioning Agent, refer to Consume
HTTPS OData Services [page 348].
Successful configuration of SSL in SAP HANA smart data integration requires that the following be performed:
1. Create the Data Provisioning Agent keystore and truststore to house the remote source certificates.
For more information, see Configure the Adapter Truststore and Keystore Using the Data Provisioning
Agent Configuration Tool [page 513].
2. Configure your source database.
Configure your source database for SSL. This includes creating CA certificates and importing them into
Data Provisioning Agent.
3. Configure SSL on the remote source that you are creating.
Adapters that support SSL may have different configuration requirements. At a minimum, you need to
enable SSL in the remote source configuration. Other remote source parameters may also need to be
configured, depending on the adapter that you are using and your preferences.
Encryption Strength
If you require stronger encryption than a 128-bit key length, update the existing JCE policy files.
Related Information
Use the Data Provisioning Agent Keystore Configuration utility to configure and set up SSL for SAP HANA by
getting a certificate from a certificate authority (CA). This secures connectivity from the SAP HANA database
to the SAP HANA Data Provisioning Agent via SSL.
Prerequisites
● Command-line access to SAP HANA Server using the HANA adm user account
● Command-line access to the Data Provisioning Agent using the DPAgent user account
● Back up the following files from the Data Provisioning Agent installation directory:
○ sec and secure_storage (encrypted files to store keystore passwords)
○ dpagentconfig.ini (configuration file to tell the Data Provisioning Agent to use SSL)
○ ssl/cacerts (Java keystore to store server and agent certificates)
● Set the PATH variable to include the Data Provisioning Agent sapjvm/bin subdirectory so agentcli can
find the keytool executable.
Example: export PATH=/hana1/sapjvm/bin:$PATH
● Set the DPA_INSTANCE variable to the directory where the Data Provisioning agent is installed.
Example: export DPA_INSTANCE=/hana1/bin/dpagent
Context
The Data Provisioning Agent Keystore Configuration utility is a guided interactive tool used to configure SSL for
SAP HANA. Perform the following steps to get a certificate from a certificate authority (CA).
Procedure
1. Start the Data Provisioning Agent Keystore Configuration utility from the terminal by entering ./agentcli.sh.
Note
Don’t exit the tool when setting up SSL, even when copying certificates between agent and HANA
hosts.
dpagent@vm:/usr/sap/dpagent/bin> ./agentcli.sh
Environment variable DPA_INSTANCE not found
Environment variable DPA_INSTANCE must point to DPAgent's installation root
directory
Example:: export DPA_INSTANCE=/usr/sap/dataprovagent/ then try again
4. In the DPAgent Keystore Configuration Utility, use option 1 to configure SSL for TCP (HANA on-premise).
************************************************************
Configure SSL for TCP (HANA on-premise) (interactive only)
************************************************************
Enter Store Password: (*****) [The default password is changeit]
Enter Register Agent on HANA after SSL Configuration(true): Valid options:
true|false [You should always do this to ensure the setup is correct]
true
Enter Agent name to register with(ProductionAgent): [Agent name that you want
to register in HANA with]
SSLAgent
Enter Hana Server Host name(localhost): [This is hana server name and not
the dpagent server name]
mo-1a6803cc5.mo.sap.corp
Enter Hana Server Port Number(30015): [Hana port usually 3xx15 where xx is
your instance id]
30215
Enter Agent Admin HANA User: [HANA user that have AGENT ADMIN privilege]
system
Enter Password for Agent Admin HANA User:
Enter Password for Agent Admin HANA User: (confirm)
5. The following section defines the Data Provisioning Agent certificate and runs the following command. Use
the same key_password as the store_password.
6. Create a certificate signing request, so that the certificate can be signed by a certificate authority, by
selecting false.
7. The fully qualified domain name (FQDN) must match the Data Provisioning Agent host name and the SAP
HANA Server must be able to ping this machine. The HANA Server validates the Data Provisioning Agent
host name and requires it to match what is set up in the AGENTS table.
The utility creates a Data Provisioning Agent keystore and creates the certificate signing request,
dpagent_CSR.cer. Provide the content of this file to your certificate authority (CA). The CA signs this
certificate, and then provides a signed certificate response. In the response, some certificate authorities
may provide the root, interim, and response all in one file or in multiple files.
8. The Java keystore requires the root certificate to be imported before the signed response. Using the utility,
strip the root certificate from the rest of the chain and import only the root at the first prompt. Save the root
CA file in the same folder.
9. After you import the root, you can import the remaining chain or just the agent.
The Data Provisioning Agent keystore now contains the Data Provisioning Agent certificate, and the utility
exports the certificate to the ssl folder. Now, copy the certificate to the SAP HANA machine.
10. On the SAP HANA Server, create the SAP HANA keystore, get the certificate signed, and import the signed
response.
************************************************************
HANA Configuration
************************************************************
Now for the HANA side setup you need to have HANA Shell Access.
* Once you have access please navigate to
=>cd $SECUDIR
*)If sapcli.pse exists there, the server certificate is created
already. If not, create sapcli.pse via the following
=> sapgenpse get_pse -p sapcli.pse "CN=hostname.fully.qualified,
OU=Support, O=SAP, C=CA"
*) Please request the CA authority to sign the Certificate
Request.
*) Get the CA root certificate as a cer file and save it here
(ex. CA_Root.cer)
*) Get the signed DPAgent Certificate response from the CA.
(ex. CA_Signed_Server.cer
*) Once you have all certificate import the CA Response along
with Optional Root Certificates.
=> sapgenpse import_own_cert -c CA_Signed_Server.cer -p
sapcli.pse <-r optional_CA_Root>
*) Import DPAgent Certificate /hana1/bin/dpagent/ssl/dpagent.cer,
which is required by Client Authentication.
-----BEGIN CERTIFICATE-----
MIIDqzCCApOgAwIBAgIJAOdNZo6S7awNMA0GCSqGSIb3DQEBBQUAMCgxCzAJBgNV
BAYTAlVTMQswCQYDVQQIDAJDQTEMMAoGA1UECgwDU0FQMB4XDTE3MDgyMjIwMjMx
OFoXDTE4MDgyMjIwMjMxOFowgYQxCzAJBgNVBAYTAlVTMQswCQYDVQQIEwJDQTES
MBAGA1UEBxMJUGFsbyBBbHRvMRowGAYDVQQKExFUZXN0IE9yZ2FuaXphdGlvbjEV
MBMGA1UECxMMVGVzdCBQcm9kdWN0MSEwHwYDVQQDExhtby0xYTY4MDNjYzUubW8u
c2FwLmNvcnAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCrI9kfAGlq
baTSttC2I3GrbH4FF95/wJ+aMNpVe9quS3qH4cMpN+Bqh2YYq1qucRzjwOiWH8rN
t3eNd4lYw7HvDEN4u/3uhtHCle2tmoOHVdesGZ8Ui2250RXBBEhY2ug48uyFSHp2
60y0NQBLGfSDdV+8ZqGJZ0zZrxHMW9J5DsKB8Yblp5aC8TZHpu5JP6nC2rVM/BmB
LGX1YkTYmaHkzZaRnglWBwaK9l3x3qNOOiDgSFOxJGPrHBuWDM0LQJOQwibpFu6K
RlTlOV8wTYoiS/ETRzEQ2vcHT998uzqKRuaeKAtnMGq+CDHSRSYDb/Q152sJoMmK
GtOvoZZ2vE1hAgMBAAGjezB5MAkGA1UdEwQCMAAwLAYJYIZIAYb4QgENBB8WHU9w
ZW5TU0wgR2VuZXJhdGVkIENlcnRpZmljYXRlMB0GA1UdDgQWBBQ6lejpKojc4Ilr
DUmw+ade6pXewTAfBgNVHSMEGDAWgBSjkBEgkX3wJIWR8Ms1LHm+GSlswzANBgkq
hkiG9w0BAQUFAAOCAQEANv1NoddAZVWxB8H02lpT3IYb38jpPqPp7wX0xlSPfhQJ
11. After you create the SAP HANA keystore, add the agent certificate, and then export the SAP HANA
certificate and import the SAP HANA certificate into the Data Provisioning Agent.
After exporting the HANA certificate with sapgenpse, copy the HANA certificate hana_server.cer back
to the Data Provisioning Agent host, and put the certificate in the Data Provisioning Agent /ssl
subdirectory (for example, /usr/sap/dataprovagent/ssl). If the Data Provisioning Agent keystore has
a previous alias called hanaserverrootca, delete this alias, because the agentcli imports the HANA
certificate using this alias name.
Note
************************************************************
Post HANA Configuration.
************************************************************
Enter path to Server certificate or CA Root certificate path? ()
/hana1/bin/dpagent/ssl/hana_server.cer
Executing ->[keytool, -importcert, -keystore, /hana1/bin/dpagent/ssl/
cacerts, -file, /hana1/bin/dpagent/ssl/hana_server.cer, -alias,
hanaserverrootca
Certificate was added to keystore
[Storing /hana1/bin/dpagent/ssl/cacerts]
Successfully imported certificate as hanaserverrootca
Use the Data Provisioning Agent Keystore Configuration utility to configure and set up SSL for SAP HANA by
creating a self-signed certificate. This secures connectivity from the SAP HANA database to the SAP HANA
Data Provisioning Agent via SSL.
Prerequisites
● Command-line access to SAP HANA Server using the HANA adm user account
● Command-line access to the Data Provisioning Agent using the DPAgent user account
● Back up the following files from the Data Provisioning Agent installation directory:
○ sec and secure_storage (encrypted files to store keystore passwords)
○ dpagentconfig.ini (configuration file to tell the Data Provisioning Agent to use SSL)
○ ssl/cacerts (Java keystore to store server and agent certificates)
● Set the PATH variable to include the Data Provisioning Agent sapjvm/bin subdirectory so agentcli can
find the keytool executable.
Example: export PATH=/hana1/sapjvm/bin:$PATH
● Set the DPA_INSTANCE variable to the directory where the Data Provisioning agent is installed.
Example: export DPA_INSTANCE=/hana1/bin/dpagent
Context
The Data Provisioning Agent Keystore Configuration utility is a guided interactive tool used to configure SSL for
SAP HANA. Perform the following steps to create a self-signed certificate.
Procedure
1. To start the Data Provisioning Agent Keystore Configuration utility from the terminal, enter ./agentcli.sh.
Note
Don’t exit the tool when setting up SSL, even when copying certificates between agent and HANA
hosts.
dpagent@vm:/usr/sap/dataprovagent/bin> ./agentcli.sh
Environment variable DPA_INSTANCE not found
Environment variable DPA_INSTANCE must point to DPAgent's installation root
directory
Example:: export DPA_INSTANCE=/usr/sap/dataprovagent/ then try again
4. In the DPAgent Keystore Configuration Utility, use option 1 to configure SSL for TCP (HANA on-premise).
************************************************************
Configure SSL for TCP (HANA on-premise) (interactive only)
************************************************************
Enter Store Password: (*****) [The default password is changeit]
Enter Register Agent on HANA after SSL Configuration(true): Valid options:
true|false [You should always do this to ensure the setup is correct]
true
Enter Agent name to register with(ProductionAgent): [Agent name that you want
to register in HANA with]
SSLAgent
Enter Hana Server Host name(localhost): [This is hana server name and not
the dpagent server name]
mo-1a6803cc5.mo.sap.corp
Enter Hana Server Port Number(30015): [Hana port usually 3xx15 where xx is
your instance id]
30215
Enter Agent Admin HANA User: [HANA user that have AGENT ADMIN privilege]
system
Enter Password for Agent Admin HANA User:
Enter Password for Agent Admin HANA User: (confirm)
5. The following section defines the Data Provisioning Agent certificate and runs the following command. Use
the same key_password as the store_password.
7. The fully qualified domain name (FQDN) must match the Data Provisioning Agent host name, and the SAP
HANA Server must be able to ping this machine. The HANA Server validates the Data Provisioning Agent
host name and requires it to match what is set up in the AGENTS table.
The Data Provisioning Agent keystore now contains the Data Provisioning Agent certificate, and the utility
exports the certificate to the ssl folder. Now, copy the certificate to the SAP HANA machine.
8. On the SAP HANA Server, because this certificate is self-signed, create the SAP HANA keystore with a
noreq flag.
************************************************************
HANA Configuration
************************************************************
Now for the HANA side setup you need to have HANA Shell Access.
* Once you have access please navigate to
=>cd $SECUDIR
*)If sapcli.pse exists there, the server certificate is created
already. If not, create sapcli.pse via the following
=> sapgenpse get_pse -p sapcli.pse -noreq
"CN=hostname.fully.qualified, OU=Support, O=SAP, C=CA"
*) Import DPAgent Certificate /hana1/bin/dpagent/ssl/dpagent.cer,
which is required by Client Authentication.
-----BEGIN CERTIFICATE-----
MIIDhTCCAm2gAwIBAgIEclVYkDANBgkqhkiG9w0BAQsFADBzMQswCQYDVQQGEwJV
UzELMAkGA1UECBMCQ0ExEjAQBgNVBAcTCVBhbG8gQWx0bzEMMAoGA1UEChMDU0FQ
MRIwEAYDVQQLEwlTQVAgVG9vbHMxITAfBgNVBAMTGG1vLTFhNjgwM2NjNS5tby5z
9. After you create the SAP HANA keystore, add the agent certificate, and then export the SAP HANA
certificate and import the SAP HANA certificate into the Data Provisioning Agent.
************************************************************
Post HANA Configuration.
************************************************************
Enter path to Server certificate or CA Root certificate path? ()
/hana1/bin/dpagent/ssl/hana_server.cer
Executing ->[keytool, -importcert, -keystore, /hana1/bin/dpagent/ssl/
cacerts, -file, /hana1/bin/dpagent/ssl/hana_server.cer, -alias,
hanaserverrootca
Certificate was added to keystore
[Storing /hana1/bin/dpagent/ssl/cacerts]
Successfully imported certificate as hanaserverrootca
When SAP HANA is installed on-premise, you must obtain a certificate for the agent and import certificates on
both the agent host machine and the SAP HANA system.
Prerequisites
Before configuring the agent, ensure that the SAP HANA system is already configured for SSL. For more
information, see the SAP HANA Security Guide.
You need the password for the keytool Java program to generate a keystore and import an SAP HANA
certificate. You can find the password, commands, and instructions in the keytool.txt file at
<DPAgent_root>\ssl\keytool.txt.
Note
Procedure
<agent_hostname> must be the fully qualified hostname of the machine where the agent is installed. (For
example, machine.company.com)
Note
Use the same passwords later in the procedure when accessing the .jks file with the -keypass and
-storepass options.
The alias and keystore values can be any value. However, if you reference them later, when creating the
certificate request and when importing the signed agent certificate, use the same alias and keystore values
defined here. In other words, the Data Provisioning Agent alias and keystore values should remain consistent
throughout.
Note
After generating the agent request, make sure that the alias entry is of type PrivateKeyEntry and
not trustedCertEntry. If the alias entry isn’t PrivateKeyEntry, we recommend that you delete the
alias and rerun the keytool -genkeypair … command to re-create the key pair, before
generating a certificate request for the Data Provisioning Agent.
3. Import the SAP HANA certificate into the Data Provisioning Agent keystore.
Note
If you’re importing more than one type of certificate, import them in the order shown. You don’t need to
import a self-signed certificate.
You can obtain the certificate by exporting it with the SAP Web Dispatcher. For more information, see SAP
Note 2009483 .
4. Use either the SAP HANA command-line tool (sapgenpse) or the Web Dispatcher user interface to import
the signed agent certificate. On SAP HANA, add the signed agent certificate to the sapcli Personal
Security Environment (PSE).
a. Back up sapcli.pse
mv sapcli.pse sapcli.pse.old
Note
Use sapcli.pse, not the sapsrv.pse trust store; sapcli.pse is used for client authentication.
<DPAgent_root>/bin/dpagent_service.sh stop
<DPAgent_root>/bin/dpagent_service.sh start
Next Steps
Note
You can also add the certificate with the SAP Web Dispatcher. For more information, see SAP Note
2009483 .
If you require stronger encryption than 128-bit key length, update the existing JCE policy files.
Related Information
Specify connection information, user credentials, and SSL configuration information (using the Data
Provisioning Agent configuration tool) when the SAP HANA system is located on-premise and requires a secure
SSL connection.
Prerequisites
● Before you can configure the Data Provisioning Agent to use SSL with SAP HANA on-premise, you must
obtain the SSL certificates and import them to both the agent host machine and the SAP HANA system.
See Configure SSL for SAP HANA On-Premise [Command Line Batch] [page 509] for more information.
● The Agent Admin HANA User must have the following privileges:
Procedure
Note
Start the configuration tool using the Data Provisioning Agent installation owner. The installation owner
is the same user that is used to start the agent service.
The <IP address> is the IP address of the terminal where you want to send the dpagentconfigtool GUI.
3. In the configuration tool, choose Configure SSL, enter the SSL configuration information, and select Enable
SSL for Agent to HANA communication on TCP.
For all “...Path” parameters, enter <DPAgent_root>/ssl/cacerts, or enter the path where dpagent.jks is
located if you used the command-line batch process.
If you don’t specify a distinct Key Password, the value for Keystore Password is used for both the
keystore and individual keys. This value is checked only if a key password was specified during key
generation in the keytool utility.
Tip
To determine the correct port number when SAP HANA is deployed in a multi-database
configuration, execute the following SQL statement:
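Assuming access to the SYS_DATABASES.M_SERVICES view from the system database, one way to list the SQL port of each tenant database is:
SELECT DATABASE_NAME, SERVICE_NAME, SQL_PORT
   FROM SYS_DATABASES.M_SERVICES
   WHERE SQL_PORT <> 0;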
5. Click Register Agent, enter the Agent name, and click the Register button.
Next Steps
● In the Adapters section, select HANAAdapter to perform a test, and click Register Adapter.
● In SAP HANA studio, run SELECT * FROM AGENTS, and check column IS_SSL_ENABLED = true.
Related Information
The Data Provisioning Agent uses certificates stored in the adapter truststore and adapter keystore to manage
SSL connections to adapter-based remote sources. The adapter truststore contains the public keys and
certificates used to verify remote sources, and the adapter keystore contains the agent's own keys and
certificates.
Context
Configure the location, type, and password for both the adapter truststore and the adapter keystore with the
Data Provisioning Agent configuration tool.
Procedure
Note
The installation owner is the same user that is used to start the agent service.
Note
By default, both the adapter truststore and the adapter keystore use the same settings:
○ Location: <DPAgent_root>/ssl/cacerts
○ Type: jks
Tip
Use the Java keytool to change the default password to safeguard your certificates.
5. Click Save.
Next Steps
Use the Java keytool to import remote source CA certificates into the adapter truststore.
For example:
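A representative keytool invocation, with a hypothetical alias, certificate file, and password, might be:
keytool -importcert -alias mysource_ca -file /tmp/mysource_ca.cer
   -keystore <DPAgent_root>/ssl/cacerts -storepass <keystore_password>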
You can change the connection settings or certificates used by the Data Provisioning Agent Configuration tool
to connect to the SAP HANA server.
Context
By default, the Data Provisioning Agent Configuration tool does not validate the SAP HANA server against a
certificate stored in the agent truststore when connecting with SSL.
If you want to change the default behavior, advanced properties can be added to the dpagentconfig.ini file
for your agent:
jdbc.validateCertificate boolean false (when sslEnforce is turned off);
true (when sslEnforce is turned on)
jdbc.hostNameInCertificate string * Host name used to verify the server identity (the CN value in the
certificate)
Note
If you specify * as the host name, this parameter has no effect. Other wildcards are not permitted.
Procedure
1. Add the advanced properties to the dpagentconfig.ini file for your agent.
jdbc.encrypt=true
jdbc.hostNameInCertificate=*
jdbc.validateCertificate=true
When you create and register a Data Provisioning Agent, you can choose to use SSL communication. If you do
not configure SSL during agent creation, you can enable it later.
Prerequisites
Before configuring the agent for SSL, ensure that the SAP HANA system is already configured for SSL. For
more information, see the SAP HANA Security Guide.
Note
You need the password for the keytool Java program to generate a keystore and import a HANA certificate.
You can find the password, commands, and the instructions in the keytool.txt file at <DPAgent_root>
\ssl\keytool.txt.
Procedure
1. Suspend remote source subscriptions from the Data Provisioning Remote Subscription Monitor.
2. Alter the agent registration from the Data Provisioning Agent Monitor.
3. Resume remote source subscriptions from the Data Provisioning Remote Subscription Monitor.
Related Information
If the configuration process does not succeed due to misconfiguration, unopened ports, or other settings, you
can try these troubleshooting steps.
If you require stronger encryption for Kerberos or TLS/SSL implementations, you may need to update the
existing Java Cryptography Extension (JCE) policy files.
Context
Some TLS/SSL implementations, such as the connections between the Data Provisioning Agent and SAP
HANA and other remote sources, and Kerberos implementations require stronger encryption than what the
SAP JVM provides. If you require more than 128-bit key length encryption, update your JCE policy files to the
latest Oracle JCE policy files, which you can find on the Oracle download Web site.
Procedure
Related Information
SAP HANA smart data integration adds entities that are stored as catalog objects in the SAP HANA database.
Catalog objects such as adapters and remote subscriptions follow standard SAP HANA database security
concepts. That is, they follow standard processes for metadata management, system views, public views,
authorizations, and so on.
In addition to the privileges supported by the GRANT statement in the SAP HANA SQL and System Views
Reference, the following privileges are relevant to SAP HANA smart data integration and its associated catalog
objects:
System Privileges
ADAPTER ADMIN: Controls the execution of the following adapter-related commands: CREATE ADAPTER, DROP ADAPTER, and ALTER ADAPTER. Also allows access to the ADAPTERS and ADAPTER_LOCATIONS system views.
AGENT ADMIN: Controls the execution of the following agent-related commands: CREATE AGENT, DROP AGENT, and ALTER AGENT. Also allows access to the AGENTS and ADAPTER_LOCATIONS system views.
Source Privileges
CREATE REMOTE SUBSCRIPTION: This privilege allows the creation of remote subscriptions executed on this source entry. Remote subscriptions are created in a schema and point to a virtual table, or SQL on tables, to capture change data.
PROCESS REMOTE SUBSCRIPTION EXCEPTION: This privilege allows processing exceptions on this source entry. Exceptions that are relevant for all remote subscriptions are created for a remote source entry.
Object Privileges
AGENT MESSAGING: Authorizes the user with which the agent communicates with the Data Provisioning Server using the HTTP protocol.
Activating and Executing Task Flowgraphs and Replication Tasks [page 520]
SAP HANA Security Guide
SAP HANA SQL and System Views Reference
_SYS_REPO requires additional object or schema authorizations to activate and execute objects such as task
flowgraphs and replication tasks.
To activate and execute these objects, _SYS_REPO requires the following authorizations:
For example, the following statement grants all necessary authorizations to _SYS_REPO on a specific schema:
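A typical grant on a hypothetical schema MY_SCHEMA would be along these lines; adjust the privilege list to what your objects actually need:
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE
   ON SCHEMA "MY_SCHEMA" TO _SYS_REPO WITH GRANT OPTION;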
Related Information
Communication channel security between SAP HANA and adapters hosted by the Data Provisioning Agent
depends on the SAP HANA deployment.
Additional components added to the SAP HANA landscape by SAP HANA smart data integration and SAP
HANA smart data quality require security considerations in addition to the information described in the SAP
HANA Security Guide.
When SAP HANA and the Data Provisioning Agent are both installed on-premise, or locally in relation to each
other, communication is performed using TCP/IP encrypted with SSL.
The Data Provisioning Server connects to a port listened to by the agent. The agent generates a key pair and
stores its public certificate in SAP HANA. The Data Provisioning Server then uses this public certificate to
perform SSL server authentication when connecting to the agent.
Caution
Passwords for remote systems accessed by adapters are sent in plain text over this communication
channel. Therefore, encryption is mandatory.
When SAP HANA is in the cloud, or a firewall exists between SAP HANA and the Data Provisioning Agent, the
agent connects to SAP HANA using a proxy XS application. The proxy performs authentication and
authorization before passing messages to or from the Data Provisioning Server.
The agent can connect using the user name and password scheme supported by SAP HANA XS applications.
Related Information
Auditing provides you with visibility on who did what in the SAP HANA database (or tried to do what) and when.
Actions performed on SAP HANA smart data integration objects can be audited using the standard auditing
tools and processes described in the SAP HANA Security Guide.
In addition to the audit actions listed in the SAP HANA SQL and System Views Reference, the following audit
actions are available:
Related Information
SAP HANA provides the technical enablement and infrastructure to allow you to run applications on SAP HANA
to conform to the legal requirements of data protection in the different scenarios in which SAP HANA is used.
SAP HANA smart data integration and SAP HANA smart data quality are applications based on SAP HANA, and
they rely on SAP HANA as the platform for security and data protection. For information about how SAP HANA
provides and enables data protection, see the SAP HANA Security Guide.
Related Information
This section contains information about SQL syntax and system views that can be used in SAP HANA smart
data integration and SAP HANA smart data quality.
For complete information about all SQL statements and system views for SAP HANA and other SAP HANA
contexts, see the SAP HANA SQL and System Views Reference.
For information about the capabilities available for your license and installation scenario, refer to the Feature
Scope Description (FSD) for your specific SAP HANA version on the SAP HANA Platform page.
Related Information
SAP HANA smart data integration and SAP HANA smart data quality support many SQL statements to allow
you to do such tasks as create agents and adapters, administer your system, and so on.
PROCESS REMOTE SUBSCRIPTION EXCEPTION Statement [Smart Data Integration] [page 556]
The PROCESS REMOTE SUBSCRIPTION EXCEPTION statement allows the user to indicate how an
exception should be processed.
SQL Reference for Additional SAP HANA Contexts (SAP HANA SQL and System Views Reference)
The ALTER ADAPTER statement alters an adapter. Refer to CREATE ADAPTER for a description of the AT
LOCATION clause.
Syntax
Syntax Elements
<adapter_name>
<agent_name>
<properties>
Description
The ALTER ADAPTER statement alters an adapter. Refer to CREATE ADAPTER for a description of the AT
LOCATION clause.
Permissions
Role: sap.hana.im.dp.monitor.roles::Operations
Examples
Add or remove an existing adapter at an agent or the Data Provisioning Server. Create two agents and an adapter at the first agent:
CREATE AGENT TEST_AGENT_1 PROTOCOL 'TCP' HOST 'test_host1' PORT 5050;
CREATE AGENT TEST_AGENT_2 PROTOCOL 'HTTP';
CREATE ADAPTER TEST_ADAPTER AT LOCATION AGENT TEST_AGENT_1;
Refresh configuration and query optimization capabilities of an adapter. Read configuration and query optimization capabilities of an adapter from the adapter set up at the agent or the Data Provisioning Server:
ALTER ADAPTER TEST_ADAPTER REFRESH AT LOCATION DPSERVER;
ALTER ADAPTER TEST_ADAPTER REFRESH AT LOCATION AGENT TEST_AGENT_2;
Update the display name property of an adapter. Change the display name for an adapter to 'My Custom Adapter':
ALTER ADAPTER TEST_ADAPTER PROPERTIES 'display_name=My Custom Adapter';
The ALTER AGENT statement changes an agent's host name and/or port and SSL property if it uses the TCP
protocol. It can also assign an agent to an agent group.
Syntax
Syntax Elements
<agent_name>
<agent_hostname>
<agent_port_number>
{ENABLE | DISABLE} SSL
Specifies whether the agent's TCP listener on the specified port uses SSL.
<agent_group_name>
The name of the agent clustering group to which the agent should be attached.
Description
The ALTER AGENT statement changes an agent's host name and/or port if it uses the TCP protocol. It can also
assign an agent to an agent group.
Permissions
Role: sap.hana.im.dp.monitor.roles::Operations
Examples
● Alter TEST_AGENT's hostname test_host and port to 5051, if it uses 'TCP' protocol
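Based on the syntax described above, the statement would be:
ALTER AGENT TEST_AGENT HOST 'test_host' PORT 5051;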
The ALTER REMOTE SOURCE statement modifies the configuration of an external data source connected to
the SAP HANA database.
The ALTER REMOTE SOURCE SQL statement is available for use in other areas of SAP HANA, not only SAP
HANA smart data integration. Refer to the ALTER REMOTE SOURCE topic for complete information. This
information is specific to smart data integration functionality.
Syntax
Syntax Elements
Syntax elements specific to smart data integration are described as follows. For information about syntax
elements that aren’t specific to smart data integration, refer to the ALTER REMOTE SOURCE topic.
<adapter_clause>
Adapter configuration.
ALTER REMOTE SOURCE <remote_source_name> SUSPEND CAPTURE: Suspends the adapter and agent from reading any more changes from the source system. This is helpful when the source system or SAP HANA is preparing for planned maintenance or an upgrade.
ALTER REMOTE SOURCE <remote_source_name> CLEAR OBJECTS: Clears all the data received from the adapter for this remote source from SAP HANA tables.
ALTER REMOTE SOURCE <remote_source_name> START LATENCY MONITORING <ticket_name>: Starts the collection of latency statistics one time or at regular intervals. The user specifies a target latency ticket in the monitoring view.
ALTER REMOTE SOURCE <remote_source_name> STOP LATENCY MONITORING <ticket_name>: Stops the collection of latency statistics into the given latency ticket.
ALTER REMOTE SOURCE <remote_source_name> CLEAR LATENCY HISTORY: Clears the latency statistics (for either one latency ticket or for the whole remote source) from the monitoring view.
Description
The ALTER REMOTE SOURCE statement modifies the configuration of an external data source connected to
the SAP HANA database. Only database users with the object privilege ALTER for remote sources may alter
remote sources.
Note
You may not change a user name while a remote source is suspended.
Permissions
This statement requires the ALTER object privilege on the remote source.
Examples
The configuration clause must be a structured XML string that defines the settings for the remote source. For
example, the CONFIGURATION string in the following example configures a remote source for an Oracle
database.
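As a sketch only, a CONFIGURATION string for an Oracle remote source has the following general shape; the property and credential entry names are illustrative and vary by adapter and version:
Sample Code
ALTER REMOTE SOURCE "MY_ORACLE_SOURCE" ADAPTER "OracleLogReaderAdapter"
   AT LOCATION AGENT "MY_AGENT"
   CONFIGURATION '<?xml version="1.0" encoding="UTF-8"?>
<ConnectionProperties name="configurations">
   <PropertyEntry name="pds_host_name">myhost.example.com</PropertyEntry>
   <PropertyEntry name="pds_port_number">1521</PropertyEntry>
   <PropertyEntry name="pds_database_name">ORCL</PropertyEntry>
</ConnectionProperties>'
   WITH CREDENTIAL TYPE 'PASSWORD'
   USING '<CredentialEntry name="credential"><user>myuser</user><password>mypassword</password></CredentialEntry>';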
Related Information
ALTER REMOTE SOURCE Statement (Access Control) (SAP HANA SQL and System Views Reference)
The ALTER REMOTE SUBSCRIPTION statement allows the QUEUE command to initiate real-time data
processing, and the DISTRIBUTE command applies the changes.
Syntax
Syntax Elements
<subscription_name>
Description
The ALTER REMOTE SUBSCRIPTION statement allows the QUEUE command to initiate real-time data
processing, and the DISTRIBUTE command applies the changes. Typically, the initial load of data is preceded
by the QUEUE command. The DISTRIBUTE command is used when the initial load completes. The RESET
command can be used to reset the real-time process and start from the initial load again.
This statement requires the ALTER object privilege on the remote source.
Example
Now insert or update a material record in the ECC system and see it updated to the TGT_MARA table in SAP
HANA. Reset the real-time process and restart the load.
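A sketch of the sequence, with a hypothetical subscription SUB_MARA, follows:
Sample Code
-- Begin capturing changes before the initial load
ALTER REMOTE SUBSCRIPTION SUB_MARA QUEUE;
-- ... perform the initial load of TGT_MARA ...
-- Apply captured changes once the initial load completes
ALTER REMOTE SUBSCRIPTION SUB_MARA DISTRIBUTE;
-- Start over from the initial load if needed
ALTER REMOTE SUBSCRIPTION SUB_MARA RESET;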
Syntax
Syntax Elements
<task_execution_id>
Specifies the task execution ID to cancel. See the START TASK topic for more information about
TASK_EXECUTION_ID.
<wait_timeout>
Number of seconds to wait for the task to cancel before returning from the command.
Description
The default behavior is for the CANCEL TASK command to return after sending the cancel request. Optionally,
a WAIT value can be specified, in which case the command waits for the task to actually cancel before
returning. If the command has waited the specified amount of time, CANCEL TASK errors out with error code
526 (the request to cancel the task was sent, but the task did not cancel before the timeout was reached).
Note
If the WAIT value is 0, the command returns immediately after sending the cancel request, as it would if no
WAIT value were entered.
Permissions
The user that called START TASK can implicitly CANCEL; otherwise, the CATALOG READ and SESSION ADMIN
privileges are required.
Examples
Assuming that a TASK performTranslation was already started using START TASK and has a task execution ID
of 255, it would be cancelled using the following commands. The behavior is the same for the following two
cases:
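The commands themselves, based on the description above, would take the two equivalent forms:
CANCEL TASK 255;
CANCEL TASK 255 WAIT 0;
The 5-second variant referenced below would be CANCEL TASK 255 WAIT 5;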
If the task was able to cancel within 5 seconds, the CANCEL TASK will return as a success. If it didn't cancel
within 5 seconds, then the return will be the error code 526.
SQL Script
You can call CANCEL TASK within the SQL Script CREATE PROCEDURE. Refer to the SAP HANA SQL Script
Reference for complete details about CREATE PROCEDURE.
CANCEL TASK is not supported in the following contexts:
● Table UDF
● Scalar UDF
● Trigger
● Read-only procedures
Related Information
The CREATE ADAPTER statement creates an adapter that is deployed at the specified location.
Syntax
Syntax Elements
<adapter_name>
<agent_name>
The agent name if the adapter is set up on the agent.
<properties>
AT LOCATION DPSERVER
The adapter runs inside the Data Provisioning Server process in SAP HANA.
AT LOCATION AGENT <agent_name>
Specify an agent, set up outside of SAP HANA, for the adapter to run in.
Description
The CREATE ADAPTER statement creates an adapter that is deployed at the specified location. The adapter
must be set up on the location prior to running this statement. When the statement is executed, the Data
Provisioning Server contacts the adapter to retrieve its configuration details such as connection properties and
query optimization capabilities.
Permissions
Role: sap.hana.im.dp.monitor.roles::Operations
Create an adapter at the Data Provisioning Server. Create an adapter TEST_ADAPTER running in the Data Provisioning Server:
CREATE ADAPTER TEST_ADAPTER AT LOCATION DPSERVER;
The CREATE AGENT statement registers connection properties of an agent that is installed on another host.
Syntax
Syntax Elements
<agent_name>
PROTOCOL
The protocol for the agent.
HTTP: Agent uses the HTTP protocol for communication with the DP server. Use this protocol when the SAP HANA database is in the cloud.
PROTOCOL 'HTTP'
{ENABLE | DISABLE} SSL
Specifies if the agent's TCP listener on the specified port uses SSL.
<agent_group_name>
The name of the agent clustering group to which the agent should belong.
Description
The CREATE AGENT statement registers connection properties of an agent that is installed on another host.
The DP server and agent use these connection properties when establishing the communication channel.
Permissions
Role: sap.hana.im.dp.monitor.roles::Operations
Examples
Create an agent with TCP protocol. Create an agent TEST_AGENT running on test_host and port 5050:
CREATE AGENT TEST_AGENT PROTOCOL 'TCP' HOST 'test_host' PORT 5050;
Create an agent with HTTP protocol in an agent group. Create an agent TEST_AGENT that uses HTTP and belongs to agent clustering group TEST_GROUP, as shown in the sketch below.
The CREATE AGENT GROUP statement creates an agent clustering group to which individual agents can be
assigned.
Syntax
Syntax Elements
<agent_group_name>
Description
The CREATE AGENT GROUP statement creates an agent clustering group to which individual agents can be
assigned. An agent group can be used instead of a single agent to provide fail-over capabilities.
Permissions
Role: sap.hana.im.dp.monitor.roles::Operations
Examples
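Following the syntax above, creating a group named TEST_GROUP would be:
Sample Code
CREATE AGENT GROUP TEST_GROUP;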
Related Information
The CREATE AUDIT POLICY statement creates a new audit policy, which can then be enabled and cause the
specified audit actions to occur.
The CREATE AUDIT POLICY SQL statement is available for use in other areas of SAP HANA, not only SAP
HANA smart data integration. Refer to the CREATE AUDIT POLICY topic for complete information. The
information below is specific to smart data integration functionality.
Syntax
Refer to the SAP HANA SQL and System Views Reference for complete information about CREATE AUDIT
POLICY syntax.
Syntax Elements
Syntax elements specific to smart data integration are described below. For information about syntax elements
that are not specific to smart data integration, refer to the SAP HANA SQL and System Views Reference.
Description
The CREATE AUDIT POLICY statement creates a new audit policy. This audit policy can then be enabled and
cause the auditing of the specified audit actions to occur.
Permissions
Only database users with the CATALOG READ or INIFILE ADMIN system privilege can view information in the
M_INIFILE_CONTENTS view. For other database users, this view is empty. Users with the AUDIT ADMIN
privilege can see audit-relevant parameters.
Related Information
CREATE AUDIT POLICY Statement (Access Control) (SAP HANA SQL and System Views Reference)
The CREATE REMOTE SOURCE statement defines an external data source connected to SAP HANA database.
The CREATE REMOTE SOURCE SQL statement is available for use in other areas of SAP HANA, not only SAP
HANA smart data integration. Refer to the CREATE REMOTE SOURCE topic for complete information. The
information below is specific to smart data integration functionality.
Syntax
Refer to the SAP HANA SQL and System Views Reference for complete information about CREATE REMOTE
SOURCE syntax.
Syntax Elements
Syntax elements specific to smart data integration are described below. For information about syntax elements
that are not specific to smart data integration, refer to the SAP HANA SQL and System Views Reference.
<adapter_clause>
Configures the adapter.
Description
The CREATE REMOTE SOURCE statement defines an external data source connected to SAP HANA database.
Only database users having the system privilege CREATE SOURCE or DATA ADMIN are allowed to add a new
remote source.
Permissions
CREATE REMOTE SOURCE Statement (Access Control) (SAP HANA SQL and System Views Reference)
The CREATE REMOTE SUBSCRIPTION statement creates a remote subscription in SAP HANA to capture
changes specified on the entire virtual table or part of a virtual table using a subquery.
Syntax
Syntax Elements
<subscription_name>
ON [<schema_name>.]<virtual_table_name>
See "Remote subscription for TARGET TASK or TARGET TABLE using ON Clause"
below.
AS (<subquery>)
See "Remote subscription for TARGET TASK or TARGET TABLE using AS Clause"
below.
[WITH [RESTRICTED] SCHEMA CHANGES]
Include this clause to propagate source schema changes to the SAP HANA virtual table
and remote subscription target table.
WITH SCHEMA CHANGES corresponds to the replication task options Initial + realtime
with structure or Realtime only with structure and the flowgraph options Real-time and
with Schema Change.
<load_behavior>
For a target table that logs the loading history, these parameters specify the target
column names that will show the change type and corresponding timestamp for each
operation. The CHANGE TYPE COLUMN <column_name> displays I, U, or D for INSERT,
UPSERT, or DELETE. When multiple operations of the same type occur on the same source row with the same timestamp (because the operations are in the same transaction), use the CHANGE SEQUENCE COLUMN <column_name>, which adds an incremental digit to distinguish the operations.
The following example for INSERT is for the same remote subscription and includes the
CHANGE_TIME column.
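A hedged reconstruction of what such an example might look like; the subscription, table, and column names are placeholders, and the exact clause order may vary by release:

-- Placeholder names throughout. CHANGE TYPE COLUMN records I, U, or D, and
-- the CHANGE_TIME column records when each change was applied.
CREATE REMOTE SUBSCRIPTION "SUB_EMPLOYEE"
ON "SYSTEM"."VT_EMPLOYEE"
TARGET TABLE "SYSTEM"."T_EMPLOYEE"
CHANGE TYPE COLUMN "CHANGE_TYPE"
CHANGE TIME COLUMN "CHANGE_TIME"
INSERT;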
<task_spec>
<start_task_var> specifies the name and value for a start task variable.
<var_name> is the name of variable that was defined within the task plan.
Variable values provided in this section will be used at runtime (for example, when
executing the task using START TASK).
<var_value> is the value that should be used in place of the variable name specified
when executing the task.
If the task uses table types for input and/or output, then the task expects actual table,
virtual table, or view names at runtime. These actual tables, virtual tables, or view
names are specified as task parameters. Depending on the type of remote subscription
being created, the task parameters may or may not need actual table, virtual table, or
view names for specific parameters (see below for more details).
<proc_spec>
{ PROCEDURE [<schema_name>.]<proc_name>[(<param_list>)] }
Description
The CREATE REMOTE SUBSCRIPTION statement creates a remote subscription in SAP HANA to capture
changes specified on the entire virtual table or part of a virtual table using a subquery. The changed data can
be applied to an SAP HANA target table or passed to a TASK or PROCEDURE if the changes require
transformation. The owner of the remote subscription must have the following privileges:
Note
If you create a remote subscription using the CREATE REMOTE SUBSCRIPTION SQL statement, use a technical user for the Credentials Mode parameter when creating the remote source.
Permissions
This statement requires the CREATE REMOTE SUBSCRIPTION object privilege on the remote source.
Each parameter in <param_list> is used in comparing its columns with the columns of the corresponding table type defined in the task plan. Hence, the order of parameters in <param_list> must match the order of table types defined in the task plan for input and output sources.
The AS (<subquery>) part of the syntax lets you define the SQL and the columns to use for the subscription. The subquery should be a simple SELECT <column_list> FROM <virtual_table> and should not contain a WHERE clause. The <column_list> should match the target table schema in column order and name.
<param_list> must contain one of the parameters as a table type, and this table type (schema and name) must be the same as the one defined in the task plan. This table type must also have the same columns as the one defined in the task plan.
Example
Create a remote subscription on a virtual table and apply changes using a real-time task.
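A sketch under assumed names (virtual table VT_ORDERS, real-time task RT_TASK, target table T_ORDERS); depending on the task definition, the PROCEDURE PARAMETERS clause supplies the actual input and output tables:

-- All object names here are hypothetical.
CREATE REMOTE SUBSCRIPTION "SUB_ORDERS"
ON "SYSTEM"."VT_ORDERS"
TARGET TASK "SYSTEM"."RT_TASK"
PROCEDURE PARAMETERS ("SYSTEM"."VT_ORDERS", "SYSTEM"."T_ORDERS");

The subscription is then activated with ALTER REMOTE SUBSCRIPTION ... QUEUE followed by DISTRIBUTE.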
Related Information
SQL Notation Conventions (SAP HANA SQL and System Views Reference)
Data Types (SAP HANA SQL and System Views Reference)
Creates a virtual procedure using the specified programming language that allows execution of the procedure
body at the specified remote source.
The CREATE VIRTUAL PROCEDURE SQL statement is available for use in other areas of SAP HANA, not only
SAP HANA smart data integration. Refer to the CREATE VIRTUAL PROCEDURE Statement (Procedural) topic
for complete information. The information below is specific to smart data integration functionality.
Syntax
CONFIGURATION <configuration_json_string>
Syntax Elements
<configuration_json_string>
Description
The CREATE VIRTUAL PROCEDURE statement creates a new virtual procedure from a remote source
procedure. When creating a virtual procedure using the SQL Console:
1. Return the metadata of the source procedure [number, types, and configuration (JSON) string] by invoking
the built-in SAP HANA procedure:
"PUBLIC"."GET_REMOTE_SOURCE_FUNCTION_DEFINITION"
('<remote_source_name>','<remote_object_unique_name>',?,?,?);
2. Edit the CONFIGURATION JSON string to include the appropriate parameter values.
Permissions
This statement requires the CREATE VIRTUAL PROCEDURE object privilege on the remote source.
If you use the SQL Console to create a virtual procedure, the following example illustrates an ABAP adapter. The statement below is a sketch: the procedure, parameter, and remote source names are placeholders, and the CONFIGURATION string is the edited JSON returned in step 1.
-- Placeholder names: GET_BANKS, BANKS, ABAP_SOURCE. Replace
-- <configuration_json_string> with the edited JSON from step 1.
CREATE VIRTUAL PROCEDURE GET_BANKS (
    OUT BANKS TABLE (
        BANK_CTRY NVARCHAR(6),
        BANK_KEY NVARCHAR(30),
        BANK_NAME NVARCHAR(120),
        CITY NVARCHAR(70)
    )
)
CONFIGURATION '<configuration_json_string>'
AT "ABAP_SOURCE";
For more information about using the SQL Console, see the SAP HANA Administration Guide.
Syntax
Syntax Elements
<adapter_name>
<drop_option>
Description
The DROP ADAPTER statement removes an adapter from the SAP HANA system.
Permissions
Role: sap.hana.im.dp.monitor.roles::Operations
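Example
A minimal sketch, assuming a placeholder adapter name:

-- CASCADE also drops dependent objects, such as remote sources that use the adapter.
DROP ADAPTER "CUSTOM_ADAPTER" CASCADE;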
Syntax
Syntax Elements
<agent_name>
<drop_option>
RESTRICT drops the agent only if it does not have any dependent objects.
Description
The DROP AGENT statement removes an agent from the SAP HANA system.
Permissions
Role: sap.hana.im.dp.monitor.roles::Operations
Example
Create an agent TEST_AGENT and an adapter CUSTOM_ADAPTER on the agent, and then drop the agent. Make sure that the custom adapter is set up on the agent.
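A sketch of that sequence; the host and port values are placeholders:

CREATE AGENT TEST_AGENT PROTOCOL 'TCP' HOST 'test_host' PORT 5050;
CREATE ADAPTER CUSTOM_ADAPTER AT LOCATION AGENT TEST_AGENT;
-- CASCADE drops the agent together with dependent objects such as the adapter location.
DROP AGENT TEST_AGENT CASCADE;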
Syntax
Syntax Elements
<agent_group_name>
Description
The DROP AGENT GROUP statement removes an agent clustering group. All dependent objects must be removed before an agent clustering group can be dropped.
Permissions
Role: sap.hana.im.dp.monitor.roles::Operations
Example
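A minimal sketch, assuming the group TEST_GROUP exists and has no dependent objects:

DROP AGENT GROUP TEST_GROUP;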
Syntax
Syntax Elements
<subscription_name>
Description
The DROP REMOTE SUBSCRIPTION statement drops an existing remote subscription. If the remote
subscription is actively receiving changes from source table, then a RESET command is automatically called
before dropping it.
Permissions
This statement requires the DROP object privilege on the remote source.
Example
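A minimal sketch, assuming a placeholder subscription name:

-- If the subscription is actively receiving changes, RESET is called automatically first.
DROP REMOTE SUBSCRIPTION "SUB_ORDERS";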
GRANT is used to grant privileges and structured privileges to users and roles. GRANT is also used to grant
roles to users and other roles.
The GRANT SQL statement is available for use in other areas of SAP HANA, not only SAP HANA smart data
integration. Refer to the GRANT topic for complete information. The information below is specific to smart data
integration functionality.
Syntax
Refer to the GRANT topic for complete information about GRANT syntax.
Syntax Elements
Syntax elements specific to smart data integration are described below. For information about syntax elements that are not specific to smart data integration, refer to the GRANT topic.
<system_privilege>
<source_privilege>
Source privileges are used to restrict the access and modifications of a source entry.
CREATE REMOTE SUBSCRIPTION: This privilege allows the creation of remote subscriptions executed on this source entry. Remote subscriptions are created in a schema and point to a virtual table or SQL on tables to capture change data.
PROCESS REMOTE SUBSCRIPTION EXCEPTION: This privilege allows processing exceptions on this source entry. Exceptions that are relevant for all remote subscriptions are created for a remote source entry.
<object_privilege>
Object privileges are used to restrict the access and modifications on database objects.
Database objects are tables, views, sequences, procedures, and so on.
AGENT MESSAGING (DDL): Authorizes the user with which the agent communicates with the Data Provisioning Server using the HTTP protocol.
Not all object privileges are applicable to all kinds of database objects. To learn which
object types allow which privilege to be used, see the table below.
Privilege | Schema | Table | View | Sequence | Function / Procedure | Remote Subscription | Agent
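Examples
Hedged sketches of granting the privileges described above; the user and remote source names are placeholders:

-- Source privileges are granted on a specific remote source.
GRANT CREATE REMOTE SUBSCRIPTION ON REMOTE SOURCE "MY_SOURCE" TO SUB_USER;
GRANT PROCESS REMOTE SUBSCRIPTION EXCEPTION ON REMOTE SOURCE "MY_SOURCE" TO SUB_USER;
-- AGENT ADMIN and ADAPTER ADMIN are smart data integration system privileges.
GRANT AGENT ADMIN TO DP_ADMIN;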
Related Information
GRANT Statement (Access Control) (SAP HANA SQL and System Views Reference)
The PROCESS REMOTE SUBSCRIPTION EXCEPTION statement allows the user to indicate how an exception
should be processed.
Syntax
Syntax Elements
<exception_id>
RETRY Indicates to retry the current failed operation. If the failure is due to opening a
connection to a remote source, then the connection is established. If the failure
happens when applying changed data to a target table, then the changed data is retried.
IGNORE Indicates to ignore the current failure. If the failure happens when applying
changed data to a target table, then the IGNORE operation skips the current
transaction and proceeds with the next transaction. The exception is cleared.
Description
The PROCESS REMOTE SUBSCRIPTION EXCEPTION statement allows the user to indicate how an exception
should be processed.
Permissions
This statement requires the PROCESS REMOTE SUBSCRIPTION EXCEPTION object privilege on the remote
source.
Example
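A minimal sketch; the exception ID (here 101, a placeholder) would come from the REMOTE_SUBSCRIPTION_EXCEPTIONS view:

-- Retry the failed operation recorded for exception 101.
PROCESS REMOTE SUBSCRIPTION EXCEPTION 101 RETRY;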
SESSION_CONTEXT is available for use in other areas of SAP HANA, not only SAP HANA smart data
integration. Refer to the SESSION_CONTEXT topic for complete information. The information below is specific
to smart data integration functionality.
Syntax
SESSION_CONTEXT(<session_variable>)
A predefined session variable that is set by the server and is read-only (cannot be SET or UNSET) is ‘TASK_EXECUTION_ID’.
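For example, reading the predefined variable from the current session:

SELECT SESSION_CONTEXT('TASK_EXECUTION_ID') FROM DUMMY;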
Related Information
SESSION_CONTEXT Function (Miscellaneous) (SAP HANA SQL and System Views Reference)
Starts a task.
Syntax
Syntax Elements
<task_name>
<var_list>
Specifies one or more start task variables. Variables passed to a task are scalar
constants. Scalar parameters are assumed to be NOT NULL.
<start_task_var> Specifies the name and value for a start task variable. A task
can contain variables that allow for dynamic replacement of
task plan parameters. This section is where, at run time during
START TASK, the values for those variables are provided.
<param_list>
Task parameters. If the task uses table types for input and/or output, then those need
to be specified within this section. For more information about these data types, see
BNF Lowest Terms Representations and Data Types in the Notation topic.
Parameters are implicitly defined as either IN or OUT, as inferred from the task plan.
Arguments for IN parameters could be anything that satisfies the schema of the input
table type (for example, a table variable internal to the procedure, or a temporary
table). The actual value passed for tabular OUT parameters can be, for example, '?', a
physical table name, or a table variable defined inside the procedure.
Description
Starts a task.
When executed by a client, START TASK behaves in a way consistent with standard SQL semantics; for example, Java clients can call a procedure using a JDBC CallableStatement. Scalar output variables are scalar values that can be retrieved directly from the callable statement.
Note
Unquoted identifiers are implicitly treated as uppercase. Quoting identifiers will respect capitalization and
allow for using white spaces which are normally not allowed in SQL identifiers.
Permissions
This statement requires the EXECUTE privilege on the schema in which the task was created.
Examples
The TASK performTranslation was already created, and the task plan has two table type input parameters and a
single table type output parameter. You call the performTranslation task passing in the table types to use for
execution.
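A hedged sketch of that call; the table names passed in PROCEDURE PARAMETERS are placeholders for actual tables that satisfy the task's table types:

-- Two input tables and one output table, matching the order of the
-- table types defined in the task plan.
START TASK performTranslation PROCEDURE PARAMETERS (inTable1, inTable2, outTable);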
SQL Script
You can call START TASK within the SQL Script CREATE PROCEDURE. Refer to the SAP HANA SQL Script
Reference for complete details about CREATE PROCEDURE.
However, START TASK is not supported in the following read-only contexts:
● Table UDF
● Scalar UDF
● Trigger
● Read-only procedures
The TASK_EXECUTION_ID session variable provides a unique task execution ID. Knowing the proper task
execution ID is critical for various pieces of task functionality including querying for side-effect information and
task processing status, and canceling a task.
TASK_EXECUTION_ID is a read-only session variable. Only the internal start task code updates the value.
The value of TASK_EXECUTION_ID is set during execution of the START TASK command. In the case of asynchronous execution (START TASK ASYNC), the value is updated before the command returns, so it can be used to monitor the asynchronously running task.
Users can obtain the value of TASK_EXECUTION_ID by using either of the following:
● The already existing SESSION_CONTEXT() function. If this function is used and if no tasks have been run
or a task was run and it was unsuccessful, then a NULL value will be returned.
● The M_SESSION_CONTEXT monitoring view. This would need to be queried using a KEY value of
“TASK_EXECUTION_ID”. If no row exists with that key, then that means that the session variable hasn’t
been set (no tasks run or last task execution was unsuccessful).
Note
Session variables are string values. The user needs to cast appropriately based on how they want to use the
value.
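For example, a sketch that reads the value for the current session from M_SESSION_CONTEXT and casts it, as described above:

-- Session variables are strings, so cast the value before using it as an ID.
SELECT CAST(VALUE AS BIGINT) AS TASK_EXECUTION_ID
FROM M_SESSION_CONTEXT
WHERE KEY = 'TASK_EXECUTION_ID'
  AND CONNECTION_ID = CURRENT_CONNECTION;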
Related Information
SQL Notation Conventions (SAP HANA SQL and System Views Reference)
System views allow you to query for various information about the system state using SQL commands. The
results appear as tables.
System views are located in the SYS schema. In a system with tenant databases, every database has a SYS
schema with system views that contain information about that database only. In addition, the system database
has a further schema, SYS_DATABASES, which contains views for monitoring the system as a whole. The views
in the SYS_DATABASES schema provide aggregated information from a subset of the views available in the SYS
schema of all tenant databases in the system. These union views have the additional column DATABASE_NAME
to allow you to identify to which database the information refers. To be able to view information in these views,
you need the system privilege CATALOG READ or DATABASE ADMIN.
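For example, a user with CATALOG READ could check the registered agents through one of the views described below:

SELECT * FROM "SYS"."M_AGENTS";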
Related Information
System Views Reference for Additional SAP HANA Contexts (SAP HANA SQL and System Views Reference)
Structure
Structure
Structure
Agent configuration
Structure
Structure
IS_SSL_ENABLED VARCHAR(5) Specifies whether the agent listening on TCP port uses SSL
Provides the status of all agents registered in the SAP HANA database.
Structure
LAST_CONNECT_TIME TIMESTAMP The last time the session cookie was used for successful reconnection
Stores dictionary status information, remote source owner information, and the status of data collection.
Note
This system view is for keeping track of the status of metadata dictionaries for remote sources. If there is
no dictionary for a given remote source, it will not appear in the view.
For basic remote source information you can select from REMOTE_SOURCES. It includes the following.
● REMOTE_SOURCE_NAME
● ADAPTER_NAME
● CONNECTION_INFO
● AGENT_GROUP_NAME
Structure
● STARTED
● COMPLETED
● RUNNING (GET OBJECTS)
● RUNNING (GET OBJECT DETAILS)
● FAILED
● CANCELLED
● CLEARED
Provides current processing details of a remote subscription (for example, the number of messages or transactions received and applied since the start of the SAP HANA database).
Structure
Structure
1 - CLUSTER
2 - POOL
Note
The M_SESSION_CONTEXT view is available for use in other areas of SAP HANA, not only SAP HANA
smart data integration. Refer to the M_SESSION_CONTEXT topic for complete information. The
information below is specific to smart data integration functionality.
Each variable is categorized in the SECTION column as USER (a user-defined variable set using the SET command or a client API call) or SYSTEM (a predefined variable or system property).
Related Information
M_SESSION_CONTEXT System View (SAP HANA SQL and System Views Reference)
If the adapter can provide column-level information for each table in the remote system, then once this dictionary is built, you can search for relationships between tables. This view is useful for analyzing relationships between tables in the remote source.
Structure
IS_AUTOINCREMENT
Structure
Stores browsable nodes as well as importable objects (virtual tables). This view is built from remote source
metadata dictionaries.
Structure
Remote sources
Related Information
REMOTE_SOURCES System View (SAP HANA SQL and System Views Reference)
Provides details about an exception that occurred during the execution of a remote subscription. The
exceptions can be processed using the PROCESS REMOTE SUBSCRIPTION EXCEPTION SQL statement.
Structure
Structure
Provides the client mapping when a task is created by the ABAP API.
CLIENT NVARCHAR(128) Name of the client that created the task with the ABAP API
Structure
TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation
COLUMN_NAME NVARCHAR(128) Name of the column used in the task plan within a table
MAPPED_NAME NVARCHAR(128) Mapped name of the column used in a task plan within a table
Data in this view is updated while the task is in progress. For example, STATUS, PROCESSED_RECORDS, and
TOTAL_PROGRESS_PERCENT are continuously updated until the task is complete.
Users may view information only for tasks that they ran themselves or were granted permissions to view.
PROCEDURE_PARAMETERS NVARCHAR(5000) Displays the input <param-list> values that were specified in the START TASK SQL command
HAS_SIDE_EFFECTS VARCHAR(5) 'TRUE' if the task produces side effect data, else 'FALSE'
Contains all operations that exist for a given task, as well as details about those operations.
Structure
Data in this view is updated while the task is in progress. For example, STATUS, PROCESSED_RECORDS, and
OPERATIONS_PROGRESS_PERCENT are continuously updated until the task is complete.
Structure
● STARTING
● RUNNING
● FAILED
● COMPLETED
● CANCELLING
● CANCELLED
HAS_SIDE_EFFECTS VARCHAR(5) 'TRUE' if the task produces side effect data, else 'FALSE'
Contains all of the tables used by the various side-effect producing operations.
Structure
TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for an operation
IS_PRIMARY_TABLE TINYINT Specifies whether this table is the primary table in a relationship
Structure
TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for an operation
RELATED_TABLE_NAME NVARCHAR(128) Name of the table to which the table specified in TABLE_NAME is related
FROM_ATTRIBUTE NVARCHAR(128) Name of the column in the TABLE_NAME table that relates
to the TO_ATTRIBUTE
Structure
PLAN NCLOB Task plan used to define the task, or task plan generated to call the procedure
HAS_TABLE_TYPE_INPUT VARCHAR(5) 'TRUE' if the task is modeled with a table type as input, meaning data would need to be passed at execution time
HAS_SDQ VARCHAR(5) 'TRUE' if the task contains SDQ (smart data quality) functionality
IS_READ_ONLY VARCHAR(5) 'TRUE' if the task is read only (has only table type outputs), 'FALSE' if it writes to non-table-type outputs
SQL_SECURITY VARCHAR(7) Security model for the task, either 'DEFINER' or 'INVOKER'
Lists the properties of the columns in a virtual table sent by the adapter via the CREATE VIRTUAL TABLE SQL statement.
Lists the properties of a virtual table sent by the adapter via the CREATE VIRTUAL TABLE SQL statement.
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
Contains governance information for every column in every record that is updated in the best record process.
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
DST_ROW_TYPE NVARCHAR(1) Identifies how the record was updated or if it was newly created
STRATEGY_GROUP_ID INTEGER Identification number that identifies the best record strategy
group
BEST_RECORD_RULE NVARCHAR(256) Name of the rule that updates one or more columns as it is
defined in the best record configuration
UPDATE_NUM INTEGER Number of times the column was updated in the best record
process
OPERATION_TYPE NVARCHAR(1) Identifies how the record was updated in the best record
process
Contains information on which strategies are used in each strategy group and in which order.
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
STRATEGY_GROUP_NAME NVARCHAR(256) Name of the strategy group as defined in the best record
configuration
STRATEGY_NAME NVARCHAR(256) Name of the strategy as defined in the best record configuration
Describes how well an address was assigned as well as the type of address.
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation
ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan
ASSIGNMENT_LEVEL NVARCHAR(4) Code that represents the level to which the address matched
data in the address reference data
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation
ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan
ENTITY_ID NVARCHAR(12) Identifier describing the type of record that was processed
Identifies the location of parsed data elements in the input and output.
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
TABLE_NAME NVARCHAR(128) Name of the input table where the component element was
found
ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan
COLUMN_NAME NVARCHAR(128) Name of the column in the input table where the component
element was found
OUTPUT_TABLE_NAME NVARCHAR(128) Name of the output table where the component element was
written
OUTPUT_COLUMN_NAME NVARCHAR(128) Name of the column in the output table where the component element was written
Contains one row per info code generated by the cleansing process.
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation
ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan
ENTITY_ID NVARCHAR(12) Identifier describing the type of record that was processed
INFO_CODE NVARCHAR(10) Information code that gives information about the processing of the record
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
ENTITY_ID NVARCHAR(12) Identifier describing the type of record that was processed
NUM_RECORDS BIGINT Total number of records processed for the entity instance
NUM_VALIDS BIGINT Number of valid records processed for the entity instance
NUM_SUSPECTS BIGINT Number of suspect records processed for the entity instance
NUM_BLANKS BIGINT Number of blank records processed for the entity instance
NUM_HIGH_SIGNIFICANT_CHANGES BIGINT Number of records with high significance changes for the entity instance
Contains one row per info code generated by the geocode transformation process.
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation
ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation
ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
Contains one row for each match decision made during the matching process.
Structure
TASK_EXECUTION_ID BIGINT Unique identifier for a particular run of a task plan created
when START TASK is called
TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for the operation
ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan
RELATED_TABLE_NAME NVARCHAR(128) Name of the table defined in the task plan for an operation
RELATED_ROW_ID BIGINT Unique identifier of the row processed for this execution of
the task plan
POLICY_NAME NVARCHAR(256) Name of the match policy that processed the related rows
RULE_NAME NVARCHAR(256) Name of the match rule that processed the related rows