SAP HANA Definitions
Audience
This tutorial has been prepared for anyone who has a basic knowledge of
SQL. After completing this tutorial, you will find yourself at a moderate level
of expertise in administration and operations or implementation and modeling
using SAP HANA.
Prerequisites
Before you start proceeding with this tutorial, we assume that you are well-
versed with basic database concepts. You should have a good exposure to
SQL, as SAP HANA is like a relational database. If you are not aware of these
concepts, then we recommend that you first go through our short tutorials
on SQL.
According to former SAP executive Dr. Vishal Sikka, HANA stands for Hasso's
New Architecture. HANA gained wide interest by mid-2011, and various Fortune
500 companies started considering it as an option for their Business
Warehouse needs after that.
Features of SAP HANA
The main features of SAP HANA are given below:
It is written in C++ and runs on only one operating system, SUSE Linux
Enterprise Server 11 SP1/SP2.
By contrast, conventional disk-based databases involve high maintenance costs
for IT companies that store and maintain large data volumes, and due to the
unavailability of real-time data, their analysis and processing results are
delayed.
SAP HANA appliances are delivered by certified hardware partners, including:
IBM
Dell
HP
Cisco
Fujitsu
Lenovo (China)
NEC
Huawei
It combines row-based, column-based, and object-oriented database
technology. Memory reads in the HANA database are up to a million times
faster than conventional hard-disk reads.
Analysts want to see current data immediately in real time and do not want
to wait for data until it is loaded into the SAP BW system. SAP HANA in-memory
processing allows loading of real-time data with the use of various data
provisioning techniques.
Advantages of In-Memory Database
The HANA database takes advantage of in-memory processing to deliver the
fastest data-retrieval speeds, which is enticing to companies struggling with
high-scale online transactions or timely forecasting and planning.
Disk-based storage is still the enterprise standard, but the price of RAM has
been declining steadily, so memory-intensive architectures will eventually
replace slow, mechanical spinning disks and will lower the cost of data storage.
The speed advantages offered by RAM storage are further enhanced by the use
of multi-core CPUs, multiple CPUs per node, and multiple nodes per server in a
distributed environment.
SAP HANA Studio is a client tool, which can be used to access a local or
remote HANA system. It runs on Microsoft Windows 32- and 64-bit versions of
Windows XP, Windows Vista, and Windows 7.
Right-click in the Navigator space and click on Add System. Enter the HANA
system details, i.e., host name and instance number, and click Next.
Enter the database user name and password to connect to the SAP HANA
database. Click on Next and then Finish.
Once you click on Finish, the HANA system will be added to the System View
for administration and modeling purposes. Each HANA system has two main
sub-nodes, Catalog and Content.
Content
The Content node contains the design-time repository, which holds all
information about data models created with the HANA Modeler. These models
are organized in packages. The Content node provides different views of the
same physical data.
SAP HANA - System Monitor
System Monitor in HANA Studio provides an overview of all your HANA systems
at a glance. From System Monitor, you can drill down into the details of an
individual system in the Administration Editor. It reports on the data disk,
log disk, trace disk, and alerts on resource usage with priority.
It is also possible to use third-party tools like MS Excel to connect to HANA
and create reports.
Attribute View
Analytic View
Calculation View
Column Store
In a column-store table, data is stored vertically, so values of the same
column are placed together. This provides faster memory read and write
operations with the help of the In-Memory Computing Engine.
Data Compression
It offers faster read and write access to tables as compared to conventional
row-based storage.
There are various methods and algorithms by which data can be stored in a
column-based structure: dictionary encoding, run-length encoding, and many
more.
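The two compression schemes named above can be sketched in a few lines of Python. This is an illustrative toy, not HANA's actual implementation, and the sample city column is invented:

```python
# Toy sketch of two column-compression techniques used by column stores:
# dictionary encoding and run-length encoding (RLE).

def dictionary_encode(column):
    """Replace each value with a small integer ID into a value dictionary."""
    dictionary = sorted(set(column))
    ids = {value: i for i, value in enumerate(dictionary)}
    return dictionary, [ids[value] for value in column]

def run_length_encode(column):
    """Collapse runs of repeated values into (value, run_length) pairs."""
    encoded = []
    for value in column:
        if encoded and encoded[-1][0] == value:
            encoded[-1] = (value, encoded[-1][1] + 1)
        else:
            encoded.append((value, 1))
    return encoded

# A column with few distinct values compresses well in both schemes.
city_column = ["Pune", "Pune", "Pune", "Delhi", "Delhi", "Pune"]
dictionary, encoded_ids = dictionary_encode(city_column)
print(dictionary)                      # ['Delhi', 'Pune']
print(encoded_ids)                     # [1, 1, 1, 0, 0, 1]
print(run_length_encode(encoded_ids))  # [(1, 3), (0, 2), (1, 1)]
```

Because similar values sit next to each other in a column store, both encodings shrink the data far more than they would on interleaved row data.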
Row-based storage is preferred when the output has to return a complete row.
For example, while running an aggregate function (Sum) on the Sales column
with a Where clause, the SQL query only needs the Date and Sales columns, so
a column-based storage table is performance-optimized and faster, as data is
required from only two columns.
While running a simple Select query that returns full rows, the complete row
has to be produced in the output, so it is advisable to store the table as
row-based in this scenario.
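The trade-off described above can be sketched in plain Python. The table data and column names here are made up for illustration:

```python
# Sketch of the row-store vs column-store trade-off.

# Row store: each entry is one complete row.
row_store = [
    {"Date": "2024-01-01", "Product": "A", "Sales": 100},
    {"Date": "2024-01-02", "Product": "B", "Sales": 150},
    {"Date": "2024-01-02", "Product": "A", "Sales": 200},
]

# Column store: the same table, one contiguous list per column.
column_store = {
    "Date":    ["2024-01-01", "2024-01-02", "2024-01-02"],
    "Product": ["A", "B", "A"],
    "Sales":   [100, 150, 200],
}

# Aggregate with a WHERE clause: only the Date and Sales columns are read,
# so a column store scans two arrays and never touches Product.
total = sum(sales
            for date, sales in zip(column_store["Date"], column_store["Sales"])
            if date == "2024-01-02")
print(total)  # 350

# SELECT * of one row: the row store returns a complete row in one read,
# while a column store must reassemble it from every column.
print(row_store[1])
```

The same asymmetry holds at scale: analytical aggregates favor the columnar layout, while full-row lookups favor the row layout.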
Analytic View
Analytic Views use the power of SAP HANA to perform calculations and
aggregation functions on the tables in the database. An Analytic View has at
least one fact table, which holds measures and the primary keys of the
dimension tables, and it is surrounded by dimension tables that contain
master data.
Important features are:
Analytic Views contain at least one fact table and multiple dimension tables
with master data, and perform calculations and aggregations.
They are similar to InfoCubes and InfoObjects in SAP BW.
Analytic Views can be created on top of Attribute Views and fact tables and
perform calculations like number of units sold, total price, etc.
Calculation Views
Calculation Views are used on top of Analytic and Attribute Views to perform
complex calculations that are not possible with Analytic Views. A Calculation
View is a combination of base column tables, Attribute Views, and Analytic
Views that provides business logic.
Calculation Views are defined either graphically using the HANA modeling
feature or scripted in SQL.
A Calculation View is created to perform complex calculations that are not
possible with the other views, the Attribute and Analytic Views of the SAP
HANA Modeler.
One or more Attribute Views and Analytic Views are consumed with the help of
built-in nodes like Projection, Union, Join, and Rank in a Calculation View.
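These node types can be illustrated with a small Python sketch. These are plain-Python stand-ins for the Projection, Union, and Join nodes, not HANA's engine, and the data and column names are invented:

```python
# Stand-ins for Calculation View nodes: Projection, Union, Join.

sales_2023 = [{"region": "EU", "amount": 10, "note": "x"},
              {"region": "US", "amount": 20, "note": "y"}]
sales_2024 = [{"region": "EU", "amount": 30}]
region_dim = {"EU": "Europe", "US": "United States"}  # dimension lookup

# Projection node: keep only the needed columns.
projected = [{"region": r["region"], "amount": r["amount"]} for r in sales_2023]

# Union node: stack the two inputs into one result set.
unioned = projected + sales_2024

# Join node plus aggregation: enrich each row with the dimension's
# descriptive name, then total the amounts per region.
totals = {}
for row in unioned:
    name = region_dim[row["region"]]
    totals[name] = totals.get(name, 0) + row["amount"]
print(totals)  # {'Europe': 40, 'United States': 20}
```

A graphical Calculation View composes exactly this kind of pipeline, with each node's output feeding the next.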
SAP HANA - Core Architecture
SAP HANA was initially developed in Java and C++ and designed to run on only
one operating system, SUSE Linux Enterprise Server 11. The SAP HANA system
consists of multiple components that together deliver the computing power of
the HANA system.
The most important component of the SAP HANA system is the Index Server,
which contains the SQL/MDX processor to handle query statements for the
database.
The HANA system also contains the Name Server, Preprocessor Server,
Statistics Server, and the XS Engine, which is used to communicate with and
host small web applications, plus various other components.
Index Server
The Index Server is the heart of the SAP HANA database system. It contains
the actual data and the engines for processing that data. When SQL or MDX is
fired at the SAP HANA system, the Index Server takes care of all these
requests and processes them. All HANA processing takes place in the Index
Server.
The Index Server contains data engines to handle all SQL/MDX statements that
come to the HANA database system. It also has a Persistence Layer that is
responsible for the durability of the HANA system and ensures that the HANA
system is restored to the most recent state after a restart or system failure.
The Index Server also has a Session and Transaction Manager, which manages
transactions and keeps track of all running and closed transactions.
It also ensures that all SQL/MDX requests are authorized, and it provides
error handling for efficient processing of these statements. It contains
several engines and processors for query execution.
MDX (MultiDimensional Expressions) is a query language for OLAP systems,
like SQL is for relational databases. The MDX Engine is responsible for
handling queries and manipulating multidimensional data stored in OLAP cubes.
Persistence Layer
It is responsible for durability and atomicity of transactions in HANA system.
Persistence layer provides built in disaster recovery system for HANA
database.
It ensures that the database is restored to the most recent state and that
all transactions are completed or undone in case of a system failure or
restart.
It is also responsible for managing the data and transaction logs and
contains the data backup, log backup, and configuration backup of the HANA
system. Backups are stored as savepoints in the data volumes via a savepoint
coordinator, which is normally set to take a savepoint every 5-10 minutes.
Preprocessor Server
Preprocessor Server in SAP HANA system is used for text data analysis.
Index Server uses preprocessor server for analyzing text data and extracting
the information from text data when text search capabilities are used.
Name Server
The Name Server contains the system landscape information of the HANA
system. In a distributed environment, there are multiple nodes, each with
multiple CPUs. The Name Server holds the topology of the HANA system and has
information about all the running components and how data is spread across
the components.
Statistics Server
This server checks and analyzes the health of all components in the HANA
system. The Statistics Server is responsible for collecting data related to
system resources, their allocation and consumption, and the overall
performance of the HANA system.
XS Engine
The XS Engine helps external Java- and HTML-based applications access the
HANA system with the help of an XS client. The SAP HANA system contains a web
server that can be used to host small Java/HTML-based applications.
The XS Engine transforms the persistence model stored in the database into a
consumption model for clients, exposed via HTTP/HTTPS.
LM Structure
The LM structure of the SAP HANA system contains information about the
current installation details. This information is used by the Software Update
Manager to install automatic updates on HANA system components.
HANA modeling is done on top of the tables available in the Catalog node
under a schema in HANA Studio, and all views are saved under the Content node
under a package.
You can create a new package under the Content node in HANA Studio by
right-clicking on Content and selecting New.
All modeling views created inside one package come under that package in
HANA Studio and are categorized according to view type.
Each view treats dimension and fact tables differently. Dim tables are
defined with master data, and the fact table has the primary keys of the
dimension tables and measures like number of units sold, average delay time,
total price, etc.
A dimension table contains master data and is joined with one or more fact
tables to model some business logic. Dimension tables are used to create
schemas with fact tables, and they can be normalized.
The fact table lists events that happen in our company (or at least the events
that we want to analyze- No of Unit Sold, Margin, and Sales Revenue). The
Dimension tables list the factors (Customer, Time, and Product) by which we
want to analyze the data.
SAP HANA - Schema in Data Warehouse
Schemas are logical descriptions of tables in a data warehouse. Schemas are
created by joining multiple fact and dimension tables to meet some business
logic.
A database uses a relational model to store data, whereas a data warehouse
uses schemas that join dimension and fact tables to meet business logic.
There are three types of schemas used in a data warehouse:
Star Schema
Snowflake Schema
Galaxy Schema
Star Schema
In a star schema, each dimension is joined to a single fact table. Each
dimension is represented by only one dimension table, which is not further
normalized.
A dimension table contains the set of attributes that are used to analyze
the data.
Example: In the example given below, we have a fact table FactSales that has
primary keys for all the dim tables and the measures units_sold and
dollars_sold for analysis.
Facts/measures in the fact table are used for analysis along with the
attributes in the dimension tables.
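The FactSales example can be made concrete with a runnable sketch. Here SQLite stands in for HANA, HANA-specific DDL such as CREATE COLUMN TABLE is omitted, and the dimension tables and the sample values are invented:

```python
# Star schema sketch: one fact table joined directly to each dimension.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DimProduct  (Prod_Id INTEGER PRIMARY KEY, Prod_Name TEXT);
CREATE TABLE DimCustomer (Cust_Id INTEGER PRIMARY KEY, Cust_Name TEXT);
CREATE TABLE FactSales (
    Prod_Id INTEGER REFERENCES DimProduct(Prod_Id),
    Cust_Id INTEGER REFERENCES DimCustomer(Cust_Id),
    units_sold INTEGER,
    dollars_sold REAL
);
INSERT INTO DimProduct  VALUES (1, 'Laptop'), (2, 'Phone');
INSERT INTO DimCustomer VALUES (10, 'Acme'), (11, 'Globex');
INSERT INTO FactSales   VALUES (1, 10, 5, 5000.0), (2, 10, 3, 1500.0),
                               (1, 11, 2, 2000.0);
""")

# Star-schema query: aggregate the measures by a dimension attribute.
rows = con.execute("""
    SELECT p.Prod_Name, SUM(f.units_sold), SUM(f.dollars_sold)
    FROM FactSales f JOIN DimProduct p ON f.Prod_Id = p.Prod_Id
    GROUP BY p.Prod_Name ORDER BY p.Prod_Name
""").fetchall()
print(rows)  # [('Laptop', 7, 7000.0), ('Phone', 3, 1500.0)]
```

The query shape, fact table joined to each dimension and grouped by dimension attributes, is exactly what an Analytic View generates under the hood.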
Snowflake Schema
In a snowflake schema, some of the dimension tables are further normalized,
and the dim tables are connected to a single fact table. Normalization is
used to organize the attributes and tables of the database to minimize data
redundancy.
Galaxy Schema
In a galaxy schema, there are multiple fact tables and dimension tables.
Each fact table stores the primary keys of a few dimension tables and the
measures/facts for analysis.
In the example, there are two fact tables, FactSales and FactShipping, and
multiple dimension tables joined to the fact tables. Each fact table contains
the primary keys of the joined dim tables and the measures/facts to perform
analysis.
SAP HANA - Tables
Tables in the HANA database can be accessed from HANA Studio in the Catalog
node under Schemas. New tables can be created using the two methods given
below.
Once the SQL Editor is opened, the schema name can be confirmed from the name
written at the top of the SQL Editor. A new table can be created using a SQL
Create Table statement.
In this SQL statement, we create a column table Test1 and define the data
types of the table and the primary key.
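The statement itself is not reproduced in the text; the following is an assumed reconstruction with illustrative column names, shown with SQLite so the sketch is runnable anywhere. In HANA you would write CREATE COLUMN TABLE instead of CREATE TABLE:

```python
# Assumed reconstruction of the Test1 example. The HANA form would be:
#   CREATE COLUMN TABLE Test1 (ID INTEGER PRIMARY KEY, NAME VARCHAR(20));
# SQLite has no column/row store distinction, so plain CREATE TABLE is used.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Test1 (ID INTEGER PRIMARY KEY, NAME VARCHAR(20))")

# Insert a row and read it back, mirroring the Insert/Data Preview steps.
con.execute("INSERT INTO Test1 (ID, NAME) VALUES (?, ?)", (1, "First"))
con.commit()
print(con.execute("SELECT ID, NAME FROM Test1").fetchall())  # [(1, 'First')]
```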
Once you write the Create Table SQL query, click the Execute option at the
top right of the SQL editor. Once the statement is executed, we get a
confirmation message.
The execution message also tells the time taken to execute the statement.
Once the statement is successfully executed, right-click on the Tables node
under the schema name in the System View and refresh. The new table will be
reflected in the list of tables under the schema name.
The Insert statement is used to enter data in the table using the SQL editor.
Click on Execute.
You can right-click on the table name and use Open Data Definition to see
the data types of the table, and Open Data Preview/Open Content to see the
table contents.
Creating Table using GUI Option
Another way to create a table in HANA database is by using GUI option in
HANA Studio.
Right-click on the Tables node under a schema → select the New Table option.
Once you click on New Table, it opens a window to enter the table name,
choose the schema name from the drop-down, and define the table type from the
drop-down list: Column Store or Row Store.
Once you Execute (F8), right-click on the Tables node → Refresh. The new
table will be reflected in the list of tables under the chosen schema. The
Insert option can be used to insert data in the table; use a Select statement
to see the content of the table.
Inserting Data in a table using GUI in HANA studio
To use tables from one schema to create views, we should grant access on the
schema to the default user who runs all the views in HANA modeling. This is
done by going to the SQL editor and running a grant query.
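The exact query is not reproduced in the text. The commonly used form grants the repository user _SYS_REPO read access on the schema; the schema name below is a placeholder to replace with your own:

```sql
-- Grant read access on the schema to the repository user (schema name is a
-- placeholder); WITH GRANT OPTION lets _SYS_REPO re-grant it to activated views.
GRANT SELECT ON SCHEMA "MY_SCHEMA" TO _SYS_REPO WITH GRANT OPTION;
```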
SAP HANA Packages are shown under Content tab in HANA studio. All HANA
modeling is saved inside Packages.
You can create a new package by right-clicking on the Content node → New →
Package.
You can also create a sub-package under a package by right-clicking on the
package name. When we right-click on the package, we get seven options. We
can create HANA views (Attribute Views, Analytic Views, and Calculation
Views) under a package.
You can also create a Decision Table, define an Analytic Privilege, and
create Procedures in a package.
To create sub-packages in a package, right-click on the package and click
New. You have to enter a package name and description while creating a
package.
SAP HANA - Attribute View
Attribute Views in SAP HANA modeling are created on top of dimension tables.
They are used to join dimension tables or other Attribute Views. You can also
copy a new Attribute View from an already existing Attribute View inside
another package, but that doesn't let you change the view attributes.
Attribute Views are used in Analytic and Calculation Views for analysis, to
pass master data.
Attribute Views are used for performance optimization with large dimension
tables; you can limit the number of attributes in an Attribute View, which
are then used for reporting and analysis.
Attribute Views are used to model master data to give it some context.
The Time subtype of an Attribute View is a special type of Attribute View
that adds a time dimension to the Data Foundation. When you enter the
attribute name, type, and subtype and click Finish, it opens three work panes:
The Scenario pane, which holds the Data Foundation and the Semantics node.
The Details pane, which shows the attributes of all tables added to the Data
Foundation and the joins between them.
The Output pane, where we can add attributes from the Details pane to filter
in the report.
You can add objects to the Data Foundation by clicking on the + sign next to
Data Foundation. You can add multiple dimension tables and Attribute Views in
the Scenario pane and join them using a primary key.
When you click on Add Object in Data Foundation, you get a search bar from
where you can add dimension tables and Attribute Views to the Scenario pane.
Once tables or Attribute Views are added to the Data Foundation, they can be
joined using a primary key in the Details pane.
Once the joins are done, choose multiple attributes in the Details pane,
right-click, and select Add to Output. All the columns will be added to the
Output pane. Now click on the Activate option and you will get a confirmation
message in the job log.
Now you can right-click on the Attribute View and go for Data Preview.
Note: When a view is not activated, it has a diamond mark on it. Once you
activate it, that diamond disappears, which confirms that the view has been
activated successfully.
Once you click on Data Preview, it shows all the attributes that have been
added to the Output pane under Available Objects.
These objects can be added to the Labels and Value axes by right-clicking and
adding them, or by dragging the objects.
SAP HANA - Analytic View
An Analytic View is in the form of a star schema, wherein we join one fact
table to multiple dimension tables. Analytic Views use the real power of SAP
HANA to perform complex calculations and aggregate functions by joining
tables in the form of a star schema and by executing star-schema queries.
Analytic Views are used to perform complex calculations and aggregate
functions like Sum, Count, Min, Max, etc.
Each Analytic View has one fact table surrounded by multiple dimension
tables. The fact table contains a primary key for each dim table, plus
measures.
Analytic Views are similar to InfoObjects and InfoSets of SAP BW.
Click on Data Foundation to add dimension and fact tables. Click on Star
Join to add Attribute Views.
Add the dim and fact tables to the Data Foundation using the + sign. In the
example given below, 3 dim tables have been added, DIM_CUSTOMER, DIM_PRODUCT,
and DIM_REGION, along with 1 fact table FCT_SALES, to the Details pane.
Join the dim tables to the fact table using the primary keys stored in the
fact table.
Select attributes from the dim and fact tables to add to the Output pane.
Now change the data type of the facts from the fact table to measures.
Click on the Semantic layer, choose the facts, and click on the measures
sign to change their data type to measures, then activate the view.
Once you activate the view and click on Data Preview, all attributes and
measures will be added under the list of Available Objects. Add attributes to
the Labels axis and measures to the Value axis for analysis.
Calculation Views are used to consume Analytic, Attribute and other Calculation
Views.
They are used to perform complex calculations, which are not possible with other
Views.
There are two ways to create Calculation Views: the SQL Editor or the
Graphical Editor.
Data Category
Cube: with this data category, the default node is Aggregation. You can
choose Star Join together with the Cube data category.
Example
The following example shows how we can use a Calculation View with a star
join. You have four tables: two dim tables and two fact tables. You have to
find the list of all employees with their joining date, employee name, empId,
salary, and bonus.
Copy and paste the below script in SQL editor and execute.
Create column table Empfact1 (empId nvarchar(3), Empdate date, Sal integer );
Insert into Empfact1 values('AA1','20100101',5000);
Insert into Empfact1 values('BB1','20110101',10000);
Insert into Empfact1 values('CC1','20120101',12000);
Create column table Empfact2 (empId nvarchar(3), deptName nvarchar(20), Bonus integer);
Insert into Empfact2 values ('AA1','SAP', 2000);
Insert into Empfact2 values ('BB1','Oracle', 2500);
Insert into Empfact2 values ('CC1','JAVA', 1500);
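Before modeling the view, the expected result of the star join can be checked outside HANA. Here the same tables and rows from the script above are loaded into SQLite (standing in for HANA) and joined on empId:

```python
# Verify the expected output of the Empfact1/Empfact2 join using SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Empfact1 (empId TEXT, Empdate TEXT, Sal INTEGER);
INSERT INTO Empfact1 VALUES ('AA1','20100101',5000),
                            ('BB1','20110101',10000),
                            ('CC1','20120101',12000);
CREATE TABLE Empfact2 (empId TEXT, deptName TEXT, Bonus INTEGER);
INSERT INTO Empfact2 VALUES ('AA1','SAP',2000),
                            ('BB1','Oracle',2500),
                            ('CC1','JAVA',1500);
""")

# Join the two fact tables on empId: joining date, salary, and bonus per employee.
rows = con.execute("""
    SELECT f1.empId, f1.Empdate, f1.Sal, f2.Bonus
    FROM Empfact1 f1 JOIN Empfact2 f2 ON f1.empId = f2.empId
    ORDER BY f1.empId
""").fetchall()
print(rows)
# [('AA1', '20100101', 5000, 2000), ('BB1', '20110101', 10000, 2500),
#  ('CC1', '20120101', 12000, 1500)]
```

The Calculation View built in the next steps should return this same result set.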
Now we have to implement the Calculation View with a star join. First change
both dim tables to Dimension Calculation Views.
Create a Calculation View with a star join. In the graphical pane, add 2
Projections for the 2 fact tables. Add the fact tables to the Projections and
add the attributes of these Projections to the Output pane.
Add a Join from the default node and join both fact tables. Add the
parameters of the fact Join to the Output pane.
In the Star Join, add both Dimension Calculation Views and add the fact Join
to the Star Join. Choose the parameters in the Output pane and activate the
view.
Alternatively, create 2 Analytic Views on the fact tables: add both Attribute
Views and Fact1/Fact2 to the Data Foundation in each Analytic View.
Now create a Calculation View of type Dimension (Projection). Create
Projections of both Analytic Views and join them. Add the attributes of this
Join to the Output pane. Then join to a Projection and add the output again.
Sometimes, it is required that data in the same view should not be accessible
to other users who do not have any relevant requirement for that data.
Example
Suppose you have an Analytic View EmpDetails that has details about the
employees of a company: Emp name, Emp Id, Dept, Salary, Date of Joining, Emp
logon, etc. If you do not want your report developer to see the Salary or Emp
logon details of all employees, you can hide these by using the Analytic
Privileges option.
Analytic Privileges are used to control read access on SAP HANA information
views.
We can restrict data by attributes such as Empname, EmpId, Emp logon, or Emp
Dept, but not by numerical values like salary or bonus.
You can click on the Next button and add a modeling view in this window
before you click on Finish. There is also an option to copy an existing
Analytic Privilege package.
Once you click on the Add button, it shows all the views under the Content
node.
Choose the view that you want to add to the Analytic Privilege and click OK.
The selected view will be added under the reference models.
Now, to add attributes from the selected view to the Analytic Privilege,
click on the Add button in the Associated Attributes Restrictions window.
Add the objects you want to add to the Analytic Privilege from the Select
Object option and click OK.
The Assign Restrictions option allows you to add the values you want to hide
in the modeling view from a specific user. You can add the object value that
will not be reflected in the Data Preview of the modeling view.
Search for the Analytic Privilege you want to apply by name and click OK.
That view will be added to the user role under Analytic Privileges.
To delete Analytic Privileges for a specific user, select the view under the
tab and use the red Delete option. Use Deploy (the arrow mark at the top, or
F8) to apply this to the user profile.
SAP HANA - Information Composer
SAP HANA Information Composer is a self-service modeling environment for end
users to analyze data sets. It allows you to import data from workbook
formats (.xls, .csv) into the HANA database and to create modeling views for
analysis.
Information Composer is very different from HANA Modeler, and the two are
designed to target separate sets of users. Technically sound people who have
strong experience in data modeling use HANA Modeler. A business user who does
not have any technical knowledge uses Information Composer. It provides
simple functionality with an easy-to-use interface.
http://<server>:<port>/IC
Log in to SAP HANA Information Composer. You can perform data loading or
manipulation using this tool.
Once data is published to the HANA database, you cannot rename the table. In
that case, you have to delete the table from the schema in the HANA database.
Using Clipboard
Another way to upload data in IC is by use of the clipboard. Copy the data to
the clipboard and upload it with the help of Information Composer.
Information Composer also allows you to preview the data and even provides a
summary of the data in temporary storage. It has an inbuilt data-cleansing
capability that is used to remove any inconsistency in the data.
Once the data is cleansed, you need to classify the data as attributes or
measures; IC has an inbuilt feature to check the data types of the uploaded
data.
The final step is to publish the data to physical tables in the HANA
database. Provide a technical name and description of the table, and it will
be loaded inside the IC_Tables schema.
IC_PUBLIC allows users to view information views created by other users. This
role does not allow the user to upload or create any information views using IC.
The Information Composer Server must be physically located next to the HANA
server.
This option can be accessed from the File menu at the top, or by
right-clicking on any table or information model in HANA Studio.
Users can use this option to export all the packages that make up a delivery
unit, and the relevant objects contained in them, to a HANA server or to a
local client location.
This can be done through HANA Modeler → Delivery Unit → Select System and
Next → Create → Fill in the details like Name, Version, etc. → OK → Add
Packages to the Delivery Unit → Finish.
Once the Delivery Unit is created and the packages are assigned to it, the
user can see the list of packages by using the Export option.
You can see the list of all packages assigned to the Delivery Unit. It gives
an option to choose the export location:
Export to Server
Export to Client
You can export the Delivery Unit either to a HANA server location or to a
client location.
The user can restrict the export through Filter by time, which means only
Information views updated within the mentioned time interval will be
exported.
Select the Delivery Unit and export location, then click Next → Finish. This
will export the selected Delivery Unit to the specified location.
Developer Mode
This option can be used to export individual objects to a location on the
local system. The user can select a single information view or a group of
views and packages, select the local client location for the export, and
click Finish.
Tables: This option can be used to export tables along with their content.
Select the source file by browsing the local system. It also gives an option
to keep the header row, and an option to create a new table under an existing
schema or to import data from a file into an existing table.
When you click on Next, it gives options to define the primary key, change
the data types of columns, define the storage type of the table, and change
the proposed structure of the table.
When you click on Finish, the table will be populated in the list of tables
in the mentioned schema. You can do a data preview and check the data
definition of the table; it will be the same as that of the .xls file.
Delivery Unit
Select the Delivery Unit by going to File → Import → Delivery Unit. You can
choose from a server or a local client.
You can select Overwrite inactive versions, which allows you to overwrite
any inactive version of objects that exist. If the user selects Activate
objects, then after the import all the imported objects will be activated by
default; the user need not trigger the activation manually for the imported
views.
We know that with the use of the information modeling feature in SAP HANA,
we can create different information views: Attribute Views, Analytic Views,
and Calculation Views. These views can be consumed by different reporting
tools like SAP BusinessObjects, SAP Lumira, Design Studio, Office Analysis,
and even third-party tools like MS Excel.
This generates the need to consume HANA modeling views in different
reporting tools to generate reports and dashboards that are easy for end
users to understand.
The main tools used for designing interactive dashboards are Design Studio
and Dashboard Designer. Design Studio is the strategic tool for designing
dashboards, and it consumes HANA views via a BI Consumer Services (BICS)
connection. Dashboard Design (Xcelsius) uses IDT to consume schemas in the
HANA database with a relational or OLAP connection.
SAP Lumira has an inbuilt feature for directly connecting to or loading data
from the HANA database. HANA views can be directly consumed in Lumira for
visualization and creating stories.
The BI tools that can be directly connected and integrated with SAP HANA
using an OLAP connection are shown with solid lines in the architecture
picture; the tools that need a relational connection using IDT to connect to
HANA are shown with dotted lines.
You can create a direct connection to SAP HANA using JDBC or ODBC drivers.
SAP HANA - Crystal Reports
Crystal Reports for Enterprise
In Crystal Reports for Enterprise, you can access SAP HANA data by using an
existing relational connection created using the information design tool.
You can also connect to SAP HANA using an OLAP connection created using
information design tool or CMC.
Design Studio
Design Studio can access SAP HANA data by using an existing OLAP connection
created in the Information Design Tool or CMC, in the same way as Office
Analysis.
Dashboards
Dashboards can connect to SAP HANA only through a relational Universe.
Customers using Dashboards on top of SAP HANA should strongly consider
building their new dashboards with Design Studio.
Web Intelligence
Web Intelligence can connect to SAP HANA only through a Relational
Universe.
SAP Lumira
Lumira can connect directly to SAP HANA Analytic and Calculation views. It
can also connect to SAP HANA through SAP BI Platform using a relational
Universe.
Explorer
You can create an information space based on an SAP HANA view using JDBC
drivers.
Creating an OLAP Connection in CMC
We can create an OLAP connection for all the BI tools that we want to use on
top of HANA views, like Analysis for OLAP, Crystal Reports for Enterprise,
and Design Studio. A relational connection through IDT is used to connect Web
Intelligence and Dashboards to the HANA database.
These connections can be created using IDT as well as CMC, and both types of
connections are saved in the BO repository.
Click on Connect and choose the modeling view by entering the user name and
password.
Predefined: it will not ask for the user name and password again while using
this connection.
Enter the user name and password for the HANA system and save; the new
connection will be added to the existing list of connections.
Now open the BI Launchpad to open the BI platform reporting tools, like
Office Analysis for OLAP, and it will ask you to choose a connection. By
default, it shows the information view if you specified one while creating
the connection; otherwise, click on Next and go to folders → choose views
(Analytic or Calculation Views).
SAP Lumira connectivity with HANA system
Open SAP Lumira from the Start menu, click on the File menu → New → Add new
dataset → Connect to SAP HANA → Next.
The difference between Connect to SAP HANA and Download from SAP HANA is
that the latter downloads data from the HANA system to the BO repository, and
the data will not refresh with changes in the HANA system. Enter the HANA
server name and instance number. Enter the user name and password, then click
on Connect.
It will show all views. You can search by the view name → choose a view →
Next. It shows all measures and dimensions; you can choose from these
attributes if you want, then click on the Create option.
Prepare: You can see the data and do any custom calculations.
Visualize: You can add graphs and charts. Click on the X axis and Y axis +
signs to add attributes.
Share: If a story is built on SAP HANA, it can only be published to the SAP
Lumira server. Otherwise, you can also publish a story from SAP Lumira to the
SAP Community Network (SCN) or the BI Platform.
Creating a Relational Connection in IDT to use with HANA views in WebI and
Dashboard
You can also test this connection by clicking on Test Connection option.
.cnx represents a local, unsecured connection. If you use this connection
while creating and publishing a Universe, it will not allow you to publish to
the repository.
Choose the .cns connection type → right-click on it → New Data Foundation →
enter the name of the Data Foundation → Next → single source/multi source →
Next → Finish.
It will show all the tables in HANA database with Schema name in the middle
pane.
Import all tables from the HANA database into the master pane to create a Universe. Join the Dim and Fact tables on the primary keys in the Dim tables to create a schema. Double-click on the joins and detect cardinality → Detect → OK → Save All at the top. Now we have to create a new Business Layer on the data foundation that will be consumed by the BI application tools.
Right-click on the .dfx file and choose New Business Layer → enter a name → Finish. It will show all the objects automatically under the master pane. Change dimensions to measures as required (Type − Measure; change the projection as required) → Save All.
Now open a WebI report from BI Launchpad, or the WebI Rich Client from the BI Platform client tools → New → select the Universe TEST_SAP_HANA → OK. All objects will be added to the Query Panel. You can choose attributes and measures from the left pane and add them to Result Objects. Run Query will run the SQL query, and the output will be generated in the form of a WebI report.
Microsoft Excel is considered the most common BI reporting and analysis tool by many organizations. Business managers and analysts can connect it to the HANA database to draw pivot tables and charts for analysis.
It will give you a drop-down list of all packages that are available in the HANA system. You can choose an Information View → click Next → select Pivot Table/others → OK.
All attributes from the Information View will be added to MS Excel. You can choose different attributes and measures to report on, and different charts such as pie charts and bar charts from the Design option at the top.
SAP HANA supports multiple databases in a single HANA system; these are known as multitenant database containers. A multiple-container system always has exactly one system database and any number of tenant databases. An SAP HANA system installed in this environment is identified by a single system ID (SID). Databases in a HANA system are identified by the SID and the database name. The SAP HANA client, HANA studio, connects to a specific database.
Authorization
If SAP HANA is integrated with BI platform tools and acts as the reporting database, then end users and roles are managed in the application server.
If the end user connects directly to the SAP HANA database, then users and roles in the database layer of the HANA system are required for both end users and administrators.
Every user who wants to work with the HANA database must have a database user with the necessary privileges. A user accessing the HANA system can be either a technical user or an end user, depending on the access requirement. After a successful logon to the system, the user's authorization to perform the requested operation is verified. Executing that operation depends on the privileges the user has been granted. These privileges can be granted using roles in HANA security. HANA studio is one of the most powerful tools to manage users and roles for a HANA database system.
User Types
User types vary according to security policies and the privileges assigned to the user profile. A user can be a technical database user or an end user who needs access to the HANA system for reporting or data manipulation.
Standard Users
Standard users are users who can create objects in their own schemas and have read access to system Information models. Read access is provided by the PUBLIC role, which is assigned to every standard user.
Restricted Users
Restricted users are users who access the HANA system through certain applications and do not have SQL privileges on the HANA system. When these users are created, they do not have any access initially.
Restricted users cannot create objects in the HANA database or in their own schemas. They do not have access to view any data in the database, as they don't have the generic PUBLIC role added to their profile like standard users.
Technical database users are used only for administrative purposes, such as creating new objects in the database and assigning privileges to other users, on packages, applications, etc.
Create Users
When you expand the Security tab, it gives the options Users and Roles. To create a new user, right-click on Users and go to New User. A new window will open where you define the user and user parameters.
The specified name must not be identical to the name of an existing user or role. The password rules include a minimal password length and a definition of which character types (lower case, upper case, digits, special characters) have to be part of the password.
Different authentication methods can be configured, such as SAML, X.509 certificates, SAP Logon tickets, etc. Users in the database can be authenticated by various mechanisms.
It also gives an option to define the validity of a user; you can specify a validity interval by selecting the dates. The validity specification is an optional user parameter.
Some users that are delivered by default with the SAP HANA database are SYS, SYSTEM, _SYS_REPO and _SYS_STATISTICS.
Once this is done, the next step is to define privileges for user profile. There
are different types of privileges that can be added to a user profile.
Granted Roles to a User
This is used to add built-in SAP HANA roles to a user profile, or to add custom roles created under the Roles tab. Custom roles allow you to define roles as per access requirements, and you can add these roles directly to a user profile. This removes the need to remember and add objects to a user profile every time for different access types.
Modeling
It contains all privileges required for using the information modeler in the
SAP HANA studio.
System Privileges
There are different types of system privileges that can be added to a user profile. To add a system privilege to a user profile, click on the + sign.
System privileges are used for Backup/Restore, User Administration,
Instance start and stop, etc.
Content Admin
It contains similar privileges to the MODELING role, with the addition that this role is allowed to grant these privileges to other users. It also contains the repository privileges to work with imported objects.
Data Admin
This is a type of privilege, required for adding Data from objects to user
profile.
Audit Admin
Controls the execution of the following auditing-related commands: CREATE AUDIT POLICY, DROP AUDIT POLICY and ALTER AUDIT POLICY, as well as changes to the auditing configuration. It also allows access to the AUDIT_LOG system view.
Audit Operator
It authorizes the execution of the following command: ALTER SYSTEM CLEAR AUDIT LOG. It also allows access to the AUDIT_LOG system view.
Backup Admin
It authorizes BACKUP and RECOVERY commands for defining and initiating
backup and recovery procedures.
Backup Operator
It authorizes the BACKUP command to initiate a backup process.
Catalog Read
It authorizes users to have unfiltered read-only access to all system views.
Normally, the content of these views is filtered based on the privileges of the
accessing user.
Create Schema
It authorizes the creation of database schemas using the CREATE SCHEMA
command. By default, each user owns one schema, with this privilege the
user is allowed to create additional schemas.
Data Admin
It authorizes reading all data in the system views. It also enables execution of any Data Definition Language (DDL) commands in the SAP HANA database. A user having this privilege cannot select or change data stored in tables for which they do not have access privileges, but they can drop tables or modify table definitions.
Database Admin
It authorizes all commands related to databases in a multi-database, such as
CREATE, DROP, ALTER, RENAME, BACKUP, RECOVERY.
Export
It authorizes the export activity in the database via the EXPORT TABLE command. Note that besides this privilege, the user requires the SELECT privilege on the source tables to be exported.
Import
It authorizes the import activity in the database using the IMPORT
commands.
Note that besides this privilege, the user requires the INSERT privilege on the target tables to be imported.
Inifile Admin
It authorizes changing of system settings.
License Admin
It authorizes the SET SYSTEM LICENSE command to install a new license.
Log Admin
It authorizes the ALTER SYSTEM LOGGING [ON|OFF] commands to enable or
disable the log flush mechanism.
Monitor Admin
It authorizes the ALTER SYSTEM commands for EVENTs.
Optimizer Admin
It authorizes the ALTER SYSTEM commands concerning SQL PLAN CACHE and
ALTER SYSTEM UPDATE STATISTICS commands, which influence the
behavior of the query optimizer.
Resource Admin
This privilege authorizes commands concerning system resources. For
example, ALTER SYSTEM RECLAIM DATAVOLUME and ALTER SYSTEM RESET
MONITORING VIEW. It also authorizes many of the commands available in
the Management Console.
Role Admin
This privilege authorizes the creation and deletion of roles using the CREATE
ROLE and DROP ROLE commands. It also authorizes the granting and
revocation of roles using the GRANT and REVOKE commands.
Savepoint Admin
It authorizes the execution of a savepoint process using the ALTER SYSTEM
SAVEPOINT command.
Components of the SAP HANA database can create new system privileges.
These privileges use the component-name as first identifier of the system
privilege and the component-privilege-name as the second identifier.
Object/SQL Privileges
Object privileges are also known as SQL privileges. These privileges are used
to allow access on objects like Select, Insert, Update and Delete of tables,
Views or Schemas.
Given below are possible types of Object Privileges
Object/SQL Privileges are collection of all DDL and DML privileges on database
objects.
There are multiple database objects in HANA database, so not all the
privileges are applicable to all kinds of database objects.
Analytic privileges are used to limit access to HANA Information Views at the object level. Row- and column-level security can be applied in Analytic Privileges, allowing allocation of row- and column-level security for specific value ranges.
Click on the Package Privileges tab in HANA studio under user creation → choose + to add one or more packages. Use the Ctrl key to select multiple packages.
In the Select Repository Package dialog, use all or part of the package name to locate the repository package that you want to authorize access to.
Select one or more repository packages that you want to authorize access to; the selected packages appear in the Package Privileges tab.
Given below are the grant privileges, which are used on repository packages to authorize a user to modify the objects −
Grantable to Others − If you choose Yes for this, the assigned user can pass this authorization on to other users.
Application Privileges
Application privileges in a user profile are used to define authorization for access to a HANA XS application. They can be assigned to an individual user or to a group of users. Application privileges can also be used to provide different levels of access to the same application, for example, advanced functions for database administrators and read-only access to normal users.
User name/Password
Kerberos
SAML 2.0
X.509
User Name/Password
This method requires a HANA user to enter a user name and password to log in to the database. The user profile is created under user management in the HANA studio Security tab.
You can change the password policy as per your organization's security standards. Note that the password policy cannot be deactivated.
Kerberos
All users who connect to the HANA database system using an external authentication method should also have a database user, because the external login must be mapped to an internal database user.
Kerberos also allows HTTP access in HANA Extended Services via the HANA XS engine, which uses the SPNEGO mechanism for Kerberos authentication.
SAML
SAML stands for Security Assertion Markup Language and can be used to authenticate users accessing the HANA system directly from ODBC/JDBC clients. It can also be used to authenticate users coming into the HANA system via HTTP through the HANA XS engine.
SAML is used only for authentication purposes and not for authorization.
There are different types of privileges used in SAP HANA, as mentioned under user and role management −
System Privileges
They are applicable to system and database authorization for users and
control system activities. They are used for administrative tasks such as
creating Schemas, data backups, creating users and roles and so on. System
privileges are also used to perform Repository operations.
Object Privileges
They are applicable to database operations and apply to database objects like
tables, Schemas, etc. They are used to manage database objects such as
tables and views. Different actions like Select, Execute, Alter, Drop, Delete
can be defined based on database objects.
They are also used to control remote data objects, which are connected through Smart Data Access to SAP HANA.
Analytic Privileges
They are applicable to data inside all the packages that are created in HANA
repository. They are used to control modeling views that are created inside
packages like Attribute View, Analytic View, and Calculation View. They apply
row and column level security to attributes that are defined in modeling views
in HANA packages.
Package Privileges
They grant access to, and the ability to use, packages that are created in the repository of the HANA database. A package contains different modeling views, like Attribute, Analytic and Calculation views, as well as Analytic Privileges defined in the HANA repository database.
Application Privileges
They are applicable to HANA XS application that access HANA database via
HTTP request. They are used to control access on applications created with
HANA XS engine.
Permanent License Keys − Permanent license keys are valid only till the predefined expiration date. License keys specify the amount of memory licensed for the target HANA installation. They can be installed from SAP Marketplace under the Keys and Requests tab. When a permanent license key expires, a temporary license key is issued, which is valid for only 28 days. During this period, you have to install a permanent license key again.
There are two types of permanent license keys for a HANA system.
The License tab tells about the license type, start date and expiration date, memory allocation, and the information (hardware key, system ID) that is required to request a new license through SAP Marketplace.
Install License Key → Browse → Enter Path is used to install a new license key, and the Delete option is used to delete any old expired key.
The All Licenses tab under License tells about the product name, description, hardware key, first installation time, etc.
SAP HANA - Auditing
The SAP HANA audit policy specifies the actions to be audited and the conditions under which an action must be performed to be relevant for auditing. The audit policy defines which activities have been performed in the HANA system, who performed them, and at what time.
You can also choose audit trail targets. The following audit trail targets are possible −
CSV text file − This type of audit trail is only used for test purposes in a non-production environment.
You can also create a new audit policy: in the Audit Policies area, choose Create New Policy. Enter the policy name and the actions to be audited.
Save the new policy using the Deploy button. A new policy is enabled automatically. When an action condition is met, an audit entry is created in the audit trail table. You can disable a policy by changing its status to disabled, or you can delete the policy.
SAP HANA - Data Replication Overview
SAP HANA replication allows the migration of data from source systems to the SAP HANA database. A simple way to move data from an existing SAP system to HANA is by using one of the various data replication techniques.
System replication can be set up on the console via the command line or by using HANA studio. The primary ECC or transaction systems can stay online during this process. There are three types of data replication methods in a HANA system −
It also provides data transformation and filtering capability before loading to HANA
database.
It allows real-time data replication, replicating only relevant data into HANA from
SAP and non-SAP source systems.
Open the SAP ECC system using SAP Logon. Enter transaction code SM59 (the transaction to create a new trusted RFC connection) → click on the third icon to open a new connection wizard → click on Create and a new window will open.
Go to Technical Settings. Enter the target host (ECC system name, IP) and enter the system number.
Go to the Logon & Security tab; enter the language, client, and the ECC system user name and password.
Click on New → a new window will open → enter the configuration name → click Next → enter the RFC destination (the connection name created earlier), use the search option, choose the name and click Next.
In Specify Target System, enter the HANA system admin user name and password, host name and instance number, and click Next. Enter the number of data transfer jobs, for example 007 (it cannot be 000) → Next → Create Configuration.
It enables reading the business data at the application layer. You need to define data flows in Data Services, schedule a replication job, and define the source and target systems in data stores in the Data Services Designer.
This data store will appear under the Local Object Library; if you expand it, there are no tables inside it yet.
Right-click on Tables → Import By Name → enter the ECC table to import from the ECC system (MARA is a standard table in an ECC system) → Import → now expand the table MARA → right-click → View Data. If data is displayed, the data store connection is fine.
Now, to choose the target system as the HANA database, create a new data store.
Create Data Store → name of the data store SAP_HANA_TEST → data store type (Database) → database type SAP HANA → database version HANA 1.x.
Enter the HANA server name, user name and password for the HANA system and click OK.
This data store will be added to the Local Object Library. You can add a table if you want to move data from a source table to a specific table in the HANA database. Note that the target table should have data types similar to those of the source table.
Creating a Replication Job
Create a new project → enter the project name → right-click on the project name → New Batch Job → enter the job name.
From the right-side tab, choose Work Flow → enter the work flow name → double-click to add it under the batch job → choose Data Flow → enter the data flow name → double-click to add it under the batch job in the project area → Save All at the top.
Drag the table from the first data store, ECC (MARA), to the work area. Select it and right-click → Add New → Template Table to create a new table with similar data types in the HANA DB → enter the table name, data store ECC_HANA_TEST2, owner name (schema name) → OK.
Drag the table to the front and connect both tables → Save All. Now go to the batch job → right-click → Execute → Yes → OK.
Once you execute the Replication job, you will get a confirmation that job has
been completed successfully.
Choose the repository from the left side → navigate to the Batch Job Configuration tab, where you will see the list of jobs → against the job you want to schedule, click on Add Schedule → enter the schedule name, set the parameters (time, date, recurrence, etc.) as appropriate and click on Apply.
SAP HANA - Log Based Replication
The SAP Host Agent, which is part of the source system, manages the authentication between the source system and the target system. The Sybase Replication Agent detects any data changes at the time of the initial load and ensures that every single change is captured. When there is a change, update or delete in the entries of a table in the source system, a table log is created. This table log moves the data from the source system to the HANA database.
This method was part of the initial offering for SAP HANA replication, but it is no longer positioned or supported due to licensing issues and complexity, and also because SLT provides the same features.
Note − This method only supports an SAP ERP system as the data source and DB2 as the database.
DXC is a batch-driven process, and extracting data using DXC at certain intervals is enough in many cases. You can set an interval at which the batch job executes, for example every 20 minutes, and in most cases it is sufficient to extract data using these batch jobs at such time intervals.
The DXC method reduces the complexity of data modeling in SAP HANA, as data is sent to HANA after all business extractor logic has been applied in the source system.
It provides semantically rich data from the SAP Business Suite to SAP HANA.
It requires a Business Suite system based on NetWeaver 7.0 or higher with at least the following SP: Release 700 SAPKW70021 (SP stack 19, from Nov 2008).
It enables the ICM Web Dispatcher service in the HANA system. The Web Dispatcher uses the ICM method for data read and load in the HANA system.
Import the delivery unit using the Import dialog in the SAP HANA Content node → configure the XS application server to utilize DXC → change the application_container value to libxsdxc.
Creating an HTTP connection in SAP BW − Now we need to create an HTTP connection in SAP BW using transaction code SM59.
Input parameters − Enter the name of the RFC connection, the HANA host name and <Instance Number>.
In the Logon & Security tab, enter the DXC user created in HANA studio, using the basic authentication method.
Prepare the data and save it in CSV format. Now create a file with the .ctl extension with the following syntax −
---------------------------------------
import data into table Schema."Table name"
from 'file.csv'
records delimited by '\n'
fields delimited by ','
optionally enclosed by '"'
error log 'table.err'
---------------------------------------
Transfer this .ctl file to the FTP location and execute it to import the data.
SAP HANA supports both the SQL and MDX query languages. Both languages can be used: JDBC and ODBC for SQL, and ODBO for MDX processing. Excel pivot tables use MDX as the query language to read data from the SAP HANA system. MDX is defined as part of the ODBO (OLE DB for OLAP) specification from Microsoft and is used for data selection, calculations and layout. MDX supports the multidimensional data model and supports reporting and analysis requirements.
The MDX provider enables the consumption of Information Views defined in HANA studio by SAP and non-SAP reporting tools. Existing physical tables and schemas present the data foundation for Information models.
Once you choose the SAP HANA MDX provider from the list of data sources you want to connect to, pass the HANA system details such as host name, instance number, user name and password.
Once the connection is successful, you can choose a package name → HANA modeling views to generate pivot tables.
MDX is tightly integrated into HANA database. Connection and Session
management of HANA database handles statements that are executed by
HANA. When these statements are executed, they are parsed by MDX
interface and a calculation model is generated for each MDX statement. This
calculation model creates an execution plan that generates standard results
for MDX. These results are directly consumed by OLAP clients.
To make MDX connection to HANA database, HANA client tools are required.
You can download this client tool from SAP market place. Once installation of
HANA client is done, you will see the option of SAP HANA MDX provider in the
list of data source in MS Excel.
The priority of an alert raised in the HANA system tells the criticality of the problem, and it depends on the check that is performed on the component. Example − If CPU usage is 80%, a low-priority alert is raised. However, if it reaches 96%, the system raises a high-priority alert.
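The mapping from a resource check to an alert priority can be sketched as a small function. This is purely illustrative (not SAP code); the function name is hypothetical and the thresholds are taken from the example above.

```python
# Illustrative sketch only: map a CPU usage reading to an alert priority,
# using the example thresholds from the text (80% -> low, 96% -> high).
def alert_priority(cpu_usage_percent: float) -> str:
    """Return a hypothetical alert priority for a CPU usage reading."""
    if cpu_usage_percent >= 96:
        return "HIGH"
    elif cpu_usage_percent >= 80:
        return "LOW"
    return "NONE"

print(alert_priority(80))   # low-priority threshold reached
print(alert_priority(96))   # high-priority threshold reached
```

The real HANA checks cover many components (disk, memory, services), each with its own configurable thresholds; this sketch only shows the general idea of threshold-based priorities.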
The System Monitor is the most common way to monitor a HANA system and to verify the availability of all your SAP HANA system components. The System Monitor is used to check all key components and services of a HANA system.
You can also drill down into the details of an individual system in the Administration Editor. It tells about the data disk, log disk, trace disk, and alerts on resource usage, with priority.
The Alerts tab in the Administration Editor is used to check the current alerts and all alerts in the HANA system.
It also tells the time when an alert was raised, the description of the alert, its priority, etc.
The SAP HANA monitoring dashboard tells the key aspects of system health and configuration −
High and Medium priority alerts.
Data backups
It ensures that the database can be restored to the most recent committed state after a restart or a system crash, and that transactions are either executed completely or completely undone. The SAP HANA persistence layer is part of the Index Server; it manages the data and transaction log volumes for the HANA system, and in-memory data is regularly saved to these volumes. There are services in the HANA system that have their own persistence. The persistence layer also provides savepoints and logs for all the database transactions since the last savepoint.
Data and Transaction Log Volumes
To ensure that the database can always be restored to its most recent state, changes to data in the database are regularly copied to disk. Log files containing data changes and certain transaction events are also saved regularly to disk. The data and logs of a system are stored in data and log volumes.
Data volumes store SQL data, undo log information and SAP HANA information modeling data. This information is stored in data pages, which are called blocks. These blocks are written to the data volumes at regular time intervals, known as savepoints.
Log volumes store the information about data changes. Changes that are made between two log points are written to the log volumes as log entries. They are written to disk from the log buffer when a transaction is committed.
Savepoints
In the SAP HANA database, changed data is automatically saved from memory to disk at regular intervals. These intervals are called savepoints, and by default they are set to occur every five minutes. The persistence layer in the SAP HANA database performs these savepoints at regular intervals. During this operation, changed data is written to disk and redo logs are saved to disk as well.
The data belonging to a savepoint represents a consistent state of the data on disk and remains there until the next savepoint operation has completed. Redo log entries are written to the log volumes for all changes to persistent data. In the event of a database restart, data from the last completed savepoint can be read from the data volumes, and the redo log entries written to the log volumes since then are replayed.
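The five-minute default savepoint interval is controlled in the system configuration. A sketch of the relevant global.ini section, assuming the standard parameter name savepoint_interval_s (value in seconds); verify the exact parameter name against your HANA version before changing it:

```ini
; global.ini (sketch) - persistence settings
[persistence]
; savepoint interval in seconds; 300 = the five-minute default
savepoint_interval_s = 300
```

In practice such parameters are changed through the Configuration tab of the Administration Editor in HANA studio rather than by editing the file directly.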
/usr/sap/<SID>/SYS/global/hdb/data
/usr/sap/<SID>/SYS/global/hdb/log
These directories are defined in global.ini file and can be changed at later
stage.
Overview Tab
It tells the status of currently running data backup and last successful data
backup.
Configuration Tab
It tells about the Backup interval settings, file based data backup settings
and log based data backup setting.
Backint Settings
Backint settings give an option to use a third-party tool for data and log backup, with configuration of the backint agent.
You can also limit the size of data backup files. If a data backup exceeds the set file size, it will be split across multiple files.
Log backup settings tell the destination folder where you want to save the log backup on an external server. You can choose a destination type for the log backup.
You can choose the backup interval from the drop-down; it tells the longest amount of time that can pass before a new log backup is written. The backup interval can be in seconds, minutes or hours.
The Enable Automatic Log Backup option helps you to keep the log area vacant. If you disable it, the log area will continue to fill, and that can cause the database to hang.
Backup wizard is used to specify backup settings. It tells the Backup type,
destination Type, Backup Destination folder, Backup prefix, size of backup,
etc.
A disk in the data area is unusable or disk in the log area is unusable.
Point in Time − Used for recovering the database to a specific point in time. For this recovery, a data backup and the log backups made since that data backup, as well as the log area, are required.
Specific Log Position − This recovery type is an advanced option that can be used in exceptional cases where a previous recovery failed.
SAP HANA high availability provides fault tolerance and the ability of the system to resume operations after an outage with minimum business loss.
The phases of high availability in a HANA system are as follows −
The first phase is being prepared for the fault. A fault can be detected automatically or by an administrative action. Data is backed up and standby systems take over the operations. A recovery process is put in action; it includes the repair of the faulty system and restoring the original system to its previous configuration.
During a log backup process, only the actual data of the log segments is
written from the log area to service-specific log backup files or to a third-
party backup tool.
After a system failure, you may need to redo log entries from log backups to
restore the database to the desired state.
Log backup to File and backup mode NORMAL are the default settings for the automatic log backup function after installation of the SAP HANA system. Automatic log backup only works if at least one complete data backup has been performed.
Once the first complete data backup has been performed, the automatic log backup function is active. SAP HANA studio can be used to enable or disable the automatic log backup function. It is recommended to keep automatic log backup enabled; otherwise, the log area will continue to fill. A full log area can result in a database freeze in the HANA system.
System management
Session management
Transaction management
The set of SQL extensions that allows developers to push data-intensive logic into the database is called SQLScript.
HANA calculation views can be created in two ways − graphically or using SQL script. When we create more complex calculation views, we might have to use direct SQL scripts.
SAP HANA can act both as a relational and as an OLAP database. When we use BW on HANA, we create cubes in BW on HANA, which acts as a relational database and always produces a SQL statement. However, when we directly access HANA views using an OLAP connection, it acts as an OLAP database and MDX is generated.
SAP HANA - Data Types
You can create row or column store tables in SAP HANA using the Create Table option. A table can be created by executing a data definition CREATE TABLE statement or by using the graphical option in HANA studio.
When you create a table, you also need to define the attributes inside it: the names of the columns and their SQL data types. The Dimension field tells the length of the value, and the Key option defines a column as the primary key.
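The same idea — typed columns plus a primary key — can be tried in any SQL engine. A minimal sketch using Python's built-in sqlite3 module; the table and column names are invented, and the commented statement is only an approximation of HANA's column-store DDL (HANA would use CREATE COLUMN TABLE with types such as INTEGER and NVARCHAR):

```python
import sqlite3

# In SAP HANA the equivalent DDL would look roughly like (hypothetical names):
#   CREATE COLUMN TABLE "MYSCHEMA"."CUSTOMER" (
#       CUST_ID   INTEGER PRIMARY KEY,
#       CUST_NAME NVARCHAR(40),
#       CITY      NVARCHAR(40)
#   );
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer (
        cust_id   INTEGER PRIMARY KEY,  -- the Key option in HANA studio
        cust_name TEXT,
        city      TEXT
    )
""")
conn.execute("INSERT INTO customer VALUES (1, 'Alice', 'Berlin')")
row = conn.execute(
    "SELECT cust_name, city FROM customer WHERE cust_id = 1"
).fetchone()
print(row)  # ('Alice', 'Berlin')
```

SQLite has no row/column-store distinction; that choice is specific to HANA's CREATE ROW TABLE / CREATE COLUMN TABLE statements.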
Numeric
Character/ String
Boolean
Date Time
Binary
Large Objects
Multi-Valued
The following table gives the list of data types in each category
Date Time
These data types are used to store date and time in a table in the HANA database.
The DATE data type consists of year, month and day information to represent a date value in a column. The default format for the DATE data type is YYYY-MM-DD.
The TIME data type consists of hours, minutes and seconds values in a table in the HANA database. The default format for the TIME data type is HH:MI:SS.
The SECONDDATE data type consists of year, month, day, hour, minute and second values in a table in the HANA database. The default format for the SECONDDATE data type is YYYY-MM-DD HH:MM:SS.
The TIMESTAMP data type consists of date and time information in a table in the HANA database. The default format for the TIMESTAMP data type is YYYY-MM-DD HH:MM:SS:FFn, where FFn represents fractions of a second.
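These default formats can be illustrated with Python's datetime module (illustrative only, not HANA code — the sample date is made up):

```python
from datetime import datetime

ts = datetime(2024, 3, 15, 13, 45, 30)

# DATE default format: YYYY-MM-DD
print(ts.strftime("%Y-%m-%d"))           # 2024-03-15
# TIME default format: HH:MI:SS
print(ts.strftime("%H:%M:%S"))           # 13:45:30
# SECONDDATE-style format: YYYY-MM-DD HH:MM:SS
print(ts.strftime("%Y-%m-%d %H:%M:%S"))  # 2024-03-15 13:45:30
```

TIMESTAMP additionally carries the fractional-second part (FFn), which a plain datetime would expose via its microsecond field.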
Numeric
TINYINT stores an 8-bit unsigned integer. Min value: 0, max value: 255.
SMALLINT stores a 16-bit signed integer. Min value: -32,768, max value: 32,767.
INTEGER stores a 32-bit signed integer. Min value: -2,147,483,648, max value: 2,147,483,647.
SMALLDECIMAL and DECIMAL − Min value: -10^38 + 1, max value: 10^38 - 1.
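The integer ranges follow directly from the bit widths; a quick check in Python:

```python
# Ranges implied by the bit widths of the integer types above
tinyint_min, tinyint_max   = 0, 2**8 - 1        # 8-bit unsigned
smallint_min, smallint_max = -2**15, 2**15 - 1  # 16-bit signed
integer_min, integer_max   = -2**31, 2**31 - 1  # 32-bit signed

print(tinyint_min, tinyint_max)    # 0 255
print(smallint_min, smallint_max)  # -32768 32767
print(integer_min, integer_max)    # -2147483648 2147483647
```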
Boolean
Boolean data types store Boolean values, which are TRUE and FALSE.
Character
SHORTTEXT stores a variable-length character string which supports text search features and string search features.
Binary
Large Objects
TEXT − It enables text search features. This data type can be defined only for column tables, not for row store tables.
BINTEXT − It supports text search features, but it is possible to insert binary data.
Multivalued
Multivalued data types are used to store collection of values with same data
type.
Array
Arrays store collections of values with the same data type. They can also
contain NULL values.
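The categories above can be combined in a single table definition. A hedged
sketch (table and column names are hypothetical):

```sql
-- Hypothetical column table covering several data type categories
CREATE COLUMN TABLE "MYSCHEMA"."ORDERS" (
   "ORDER_ID"   INTEGER,            -- numeric
   "CREATED_AT" TIMESTAMP,          -- date/time
   "IS_PAID"    BOOLEAN,            -- boolean
   "NOTE"       SHORTTEXT(200),     -- character/string with text search
   "AMOUNTS"    DECIMAL(10,2) ARRAY -- multi-valued
);
```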
SAP HANA - SQL Operators
An operator is a special character or reserved word used primarily in the
WHERE clause of a SQL statement to perform operations such as comparisons
and arithmetic. Operators are used to specify conditions in a SQL query.
Arithmetic Operators
Comparison/Relational Operators
Logical Operators
Set Operators
Arithmetic Operators
Arithmetic operators are used to perform simple calculations such as
addition, subtraction, multiplication, and division.
Operator Description
+ Addition
- Subtraction
* Multiplication
/ Division
Comparison Operators
Comparison operators are used to compare the values in SQL statement.
Operator Description
= Checks if the values of two operands are equal or not, if yes then
condition becomes true.
!= Checks if the values of two operands are equal or not, if values are not
equal then condition becomes true.
<> Checks if the values of two operands are equal or not, if values are not
equal then condition becomes true.
> Checks if the value of left operand is greater than the value of right
operand, if yes then condition becomes true.
< Checks if the value of left operand is less than the value of right
operand, if yes then condition becomes true.
>= Checks if the value of left operand is greater than or equal to the
value of right operand, if yes then condition becomes true.
<= Checks if the value of left operand is less than or equal to the value of
right operand, if yes then condition becomes true.
!< Checks if the value of left operand is not less than the value of right
operand, if yes then condition becomes true.
!> Checks if the value of left operand is not greater than the value of
right operand, if yes then condition becomes true.
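For illustration, assuming a hypothetical EMPLOYEE table with a SALARY
column, comparison operators are used like this:

```sql
-- Rows where salary is at least 3000 but not equal to 5000
SELECT * FROM "EMPLOYEE"
WHERE "SALARY" >= 3000 AND "SALARY" <> 5000;
```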
Logical operators
Logical operators are used to pass multiple conditions in SQL statement or
are used to manipulate the results of conditions.
Operator Description
ALL The ALL Operator is used to compare a value to all values in another
value set.
ANY The ANY operator is used to compare a value to any applicable value
in the list according to the condition.
BETWEEN The BETWEEN operator is used to search for values that are within a
set of values, given the minimum value and the maximum value.
EXISTS The EXISTS operator is used to search for the presence of a row in a
specified table that meets certain criteria.
LIKE The LIKE operator is used to compare a value to similar values using
wildcard operators.
NOT The NOT operator reverses the meaning of the logical operator with
which it is used, e.g. NOT EXISTS, NOT BETWEEN, NOT IN, etc. It is a
negate operator.
IS NULL The IS NULL operator is used to compare a value with a NULL value.
UNIQUE The UNIQUE operator searches every row of a specified table for
uniqueness (no duplicates).
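A sketch combining several logical operators against the same hypothetical
EMPLOYEE table:

```sql
-- Combine BETWEEN, IN, LIKE and IS NULL conditions in one query
SELECT * FROM "EMPLOYEE"
WHERE "SALARY" BETWEEN 2000 AND 4000
  AND "DEPT" IN ('SALES', 'HR')
  AND "NAME" LIKE 'A%'
  AND "MANAGER_ID" IS NOT NULL;
```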
Set Operators
Set operators are used to combine the results of two queries into a single
result. The number of columns and their data types should match in both
queries.
UNION This operator combines the results of two queries and removes the
duplicate rows.
UNION ALL This operator is similar to UNION, but it also shows the duplicate
rows.
MINUS The MINUS operator combines the results of two SELECT statements and
returns only those rows that belong to the first result set, eliminating the
rows of the second statement from the output of the first.
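A hedged sketch, assuming two hypothetical tables with compatible columns:

```sql
-- Customers present in CUSTOMERS_2022 but not in CUSTOMERS_2023
SELECT "ID", "NAME" FROM "CUSTOMERS_2022"
MINUS
SELECT "ID", "NAME" FROM "CUSTOMERS_2023";
```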
SAP HANA - SQL Functions
Numeric Functions
String Functions
Fulltext Functions
Datetime Functions
Aggregate Functions
Window Functions
Miscellaneous Functions
Numeric Functions
These are inbuilt numeric functions in SQL that can be used in scripting.
They take numeric values, or strings with numeric characters, and return
numeric values.
ACOS, ASIN, ATAN, ATAN2 (These functions return trigonometric value of the
argument)
CEIL It returns the smallest integer that is greater than or equal to the passed value.
COS, COSH, COT (These functions return trigonometric value of the argument)
EXP It returns the result of the base of natural logarithms e raised to the power
of passed value.
FLOOR It returns the largest integer not greater than the numeric argument.
LOG It returns the logarithm value of a passed positive value. Both the base
and the argument should be positive.
Various other numeric functions can also be used MOD, POWER, RAND,
ROUND, SIGN, SIN, SINH, SQRT, TAN, TANH, UMINUS
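These functions can be tried directly against the built-in DUMMY table:

```sql
-- Sample numeric functions; each expression returns a single value
SELECT CEIL(4.2), FLOOR(4.8), ROUND(3.567, 2), POWER(2, 10), MOD(10, 3)
FROM DUMMY;
```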
String Functions
Various SQL string functions can be used in HANA with SQL scripting. The
most common string functions include CONCAT, LENGTH, LOWER, UPPER, LOCATE,
and REPLACE.
Other string functions that can be used are LPAD, LTRIM, RTRIM,
STRTOBIN, SUBSTR_AFTER, SUBSTR_BEFORE, SUBSTRING, TRIM,
UNICODE, RPAD, BINTOSTR
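A quick sketch of common string functions using the DUMMY table:

```sql
-- Common string functions evaluated against literal values
SELECT CONCAT('SAP ', 'HANA'), UPPER('hana'), LENGTH('HANA'),
       SUBSTRING('DATABASE', 1, 4)
FROM DUMMY;
```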
The most common data type conversion functions used in HANA SQL scripts
include CAST, TO_VARCHAR, TO_DATE, TO_DECIMAL, and TO_INTEGER.
There are also various window functions and other miscellaneous functions
that can be used in HANA SQL scripts.
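A sketch of the conversion functions named above, again using DUMMY:

```sql
-- Sample data type conversions
SELECT TO_VARCHAR(CURRENT_DATE, 'YYYY-MM-DD'),
       TO_INTEGER('42'),
       CAST(3.14 AS DECIMAL(4,2))
FROM DUMMY;
```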
Case Expressions
Function Expressions
Aggregate Expressions
Subqueries in Expressions
Case Expression
This is used to pass multiple conditions in a SQL expression. It allows the use
of IF-ELSE-THEN logic without using procedures in SQL statements.
Example
SELECT COUNT( CASE WHEN sal < 2000 THEN 1 ELSE NULL END ) count1,
COUNT( CASE WHEN sal BETWEEN 2001 AND 4000 THEN 1 ELSE NULL END ) count2,
COUNT( CASE WHEN sal > 4000 THEN 1 ELSE NULL END ) count3 FROM emp;
This statement will return count1, count2, count3 with integer value as per
passed condition.
Function Expressions
Function expressions involve SQL inbuilt functions to be used in Expressions.
Aggregate Expressions
Aggregate functions are used to perform complex calculations like Sum,
Percentage, Min, Max, Count, Mode, Median, etc. Aggregate Expression uses
Aggregate functions to calculate single value from multiple values.
Count ()
Maximum ()
Median ()
Minimum ()
Mode ()
Sum ()
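For example, against a hypothetical EMPLOYEE table, an aggregate expression
computes single values from many rows:

```sql
-- Single values computed from all rows of the table
SELECT COUNT(*), MIN("SALARY"), MAX("SALARY"), SUM("SALARY")
FROM "EMPLOYEE";
```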
Subqueries in Expressions
A subquery used as an expression is a SELECT statement. When it is used in
an expression, it returns zero values or a single value.
A subquery is used to return data that will be used in the main query as a
condition to further restrict the data to be retrieved.
Subqueries can be used with the SELECT, INSERT, UPDATE, and DELETE
statements along with the operators like =, <, >, >=, <=, IN, BETWEEN etc.
A subquery can have only one column in the SELECT clause, unless multiple
columns are in the main query for the subquery to compare its selected columns.
An ORDER BY cannot be used in a subquery, although the main query can use an
ORDER BY. The GROUP BY can be used to perform the same function as the
ORDER BY in a subquery.
Subqueries that return more than one row can only be used with multiple value
operators, such as the IN operator.
The SELECT list cannot include any references to values that evaluate to a BLOB,
ARRAY, CLOB, or NCLOB.
Example
SELECT * FROM CUSTOMERS
WHERE ID IN (SELECT ID
FROM CUSTOMERS
WHERE SALARY > 4500) ;
+----+----------+-----+---------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+---------+----------+
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 5 | Hardik | 27 | Bhopal | 8500.00 |
| 7 | Muffy | 24 | Indore | 10000.00 |
+----+----------+-----+---------+----------+
Stored procedures can return data in the form of output parameters (integer
or character) or a cursor variable. They can also return result sets from
SELECT statements, which can be consumed by other stored procedures.
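A minimal SQLScript sketch of such a procedure (procedure and table names
are hypothetical):

```sql
-- Hypothetical procedure returning a count through an output parameter
CREATE PROCEDURE "GET_EMPLOYEE_COUNT" (OUT emp_count INTEGER)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
   SELECT COUNT(*) INTO emp_count FROM "EMPLOYEE";
END;
```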
Triggers can be defined on the table, view, schema, or database with which
the event is associated.
Benefits of Triggers
Triggers can be written for the following purposes
Auditing
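A hedged sketch of an auditing trigger (all table and trigger names are
hypothetical):

```sql
-- Log every new employee row into a hypothetical audit table
CREATE TRIGGER "TRG_EMP_AUDIT"
AFTER INSERT ON "EMPLOYEE"
REFERENCING NEW ROW newrow FOR EACH ROW
BEGIN
   INSERT INTO "EMPLOYEE_AUDIT" ("EMP_ID", "CHANGED_AT")
   VALUES (:newrow."ID", CURRENT_TIMESTAMP);
END;
```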
Example
There is a table Customer in the efashion schema, located on Server1. To
access this table from Server2, a client application would have to use the
name Server1.efashion.Customer. If we now change the location of the
Customer table, the client application would have to be modified to reflect
the change. A synonym avoids this by providing a stable alternative name.
Public Synonyms
Public Synonyms are owned by PUBLIC schema in a database. Public
synonyms can be referenced by all users in the database. They are created
by the application owner for the tables and other objects such as procedures
and packages so the users of the application can see the objects.
Syntax
CREATE PUBLIC SYNONYM Cust_table for efashion.Customer;
Private Synonyms
Private synonyms can be referenced only by the schema that owns the table
or object.
Syntax
CREATE SYNONYM Cust_table FOR efashion.Customer;
Drop a Synonym
Synonyms can be dropped using the DROP SYNONYM command. If you are
dropping a public synonym, you have to use the keyword PUBLIC in the drop
statement.
Syntax
DROP PUBLIC Synonym Cust_table;
DROP Synonym Cust_table;
SAP HANA - SQL Explain Plans
SQL explain plans are used to generate a detailed explanation of SQL
statements. They are used to evaluate the execution plan that the SAP HANA
database follows to execute the SQL statements.
SQL Explain Plans cannot be used with DDL and DCL SQL statements.
COLUMN SEARCH value tells the starting position of column engine operators.
ROW SEARCH value tells the starting position of row engine operators.
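An explain plan is generated with the EXPLAIN PLAN statement and read back
from the EXPLAIN_PLAN_TABLE system view; a sketch (the query itself is
hypothetical):

```sql
-- Generate a plan for a named statement
EXPLAIN PLAN SET STATEMENT_NAME = 'my_stmt' FOR
   SELECT * FROM "EMPLOYEE" WHERE "SALARY" > 3000;

-- Inspect the generated plan
SELECT OPERATOR_NAME, OPERATOR_DETAILS
FROM EXPLAIN_PLAN_TABLE
WHERE STATEMENT_NAME = 'my_stmt';
```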
It removes incorrect or incomplete data and improves data quality before the
data is loaded into the data warehouse.
The Data Profiling task computes profiles that help you understand a data
source and identify problems in the data that have to be fixed.
You can use the Data Profiling task inside an Integration Services package to
profile data that is stored in SQL Server and to identify potential problems
with data quality.
Note: The Data Profiling task works only with SQL Server data sources and
does not support file-based or third-party data sources.
Access Requirement
To run a package that contains the Data Profiling task, the user account
must have read/write permissions with CREATE TABLE permission on the tempdb
database.
Wildcard columns
While configuring a profile request, the task accepts * wildcard in place of a
column name. This simplifies the configuration and makes it easier to
discover the characteristics of unfamiliar data. When the task runs, the task
profiles every column that has an appropriate data type.
Quick Profile
You can select Quick Profile to configure the task quickly. A Quick Profile
profiles a table or view by using all the default profiles and settings.
The Data Profiling task can compute eight different data profiles. Five of
these profiles check individual columns, and the remaining three analyze
multiple columns or relationships between columns.
You can save a local copy of the schema and view the local copy of the schema
in Microsoft Visual Studio or another schema editor, in an XML editor or in a
text editor such as Notepad.
SAP HANA - SQL Script
SQL Script is a set of SQL statements for the HANA database that allows
developers to pass complex logic into the database. SQL Script is known as a
collection of SQL extensions. These extensions are Data Extensions, Function
Extensions, and Procedure Extensions.
SQL Script supports stored Functions and Procedures and that allows pushing
complex parts of Application logic to database.
Calculations are executed at database layer to get benefits of HANA database like
column operations, parallel processing of queries, etc.
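As a hedged sketch of pushing such logic into the database with SQLScript
(function, table, and column names are hypothetical):

```sql
-- Hypothetical table function usable as a data source for a
-- script-based calculation view
CREATE FUNCTION "F_HIGH_EARNERS" (IN min_salary INTEGER)
RETURNS TABLE ("ID" INTEGER, "NAME" VARCHAR(40))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
   RETURN SELECT "ID", "NAME"
          FROM "EMPLOYEE"
          WHERE "SALARY" > :min_salary;
END;
```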
Select the calculation view type from the Type dropdown list and select SQL
Script.
Set Parameter Case Sensitive to True or False based on how you require
the naming convention for the output parameters of the calculation view.
Choose Finish.
Select the default schema: select the Semantics node, choose the View
Properties tab, and in the Default Schema dropdown list, select the default
schema.
Choose the SQL Script node in the Semantics node and define the output
structure. In the output pane, choose Create Target. Add the required output
parameters and specify their length and type.
To add multiple columns that are part of existing information views or catalog
tables or table functions to the output structure of script-based calculation
views
In the Output pane, choose New > Add Columns from <name of the object that
contains the columns you want to add to the output>. Select one or more
objects from the dropdown list and choose Next.
In the Source pane, choose the columns that you want to add to the output.
To add selective columns to the output, select those columns and choose
Add. To add all columns of an object to the output, select the object and
choose Add. Choose Finish.
Save and activate all to activate the current view along with the required
and affected objects.