SAP HANA SQL Script Reference en
3 What is SQLScript?
3.1 SQLScript Security Considerations
3.2 SQLScript Processing Overview
 Orchestration Logic
 Declarative Logic
15 Supportability
15.1 M_ACTIVE_PROCEDURES
15.2 Query Export
 SQLScript Query Export
15.3 Type and Length Check for Table Parameters
15.4 SQLScript Debugger
 Conditional Breakpoints
 Watchpoints
 Break on Error
 Save Table
15.5 EXPLAIN PLAN for Call
18 Appendix
18.1 Example code snippets
 ins_msg_proc
This reference describes how to use the SQL extension SAP HANA SQLScript to embed data-intensive
application logic into SAP HANA.
SQLScript is a collection of extensions to the Structured Query Language (SQL). The extensions include:
● Data extension, which allows the definition of table types without corresponding tables
● Functional extension, which allows the definition of (side-effect free) functions which can be used to
express and encapsulate complex data flows
● Procedural extension, which provides imperative constructs executed in the context of the database
process.
The motivation behind SQLScript is to embed data-intensive application logic into the database. Currently,
applications offload only very limited functionality into the database using SQL; most of the application logic is
executed on an application server. As a result, the data to be operated on must be copied from the database to
the application server and back. When executing data-intensive logic, this copying of data can be very
expensive in terms of processor and data transfer time. Moreover, when using an imperative language like
ABAP or Java for processing data, developers tend to write algorithms that follow one-tuple-at-a-time
semantics (for example, looping over the rows in a table). Such algorithms are hard to optimize and parallelize
compared to declarative set-oriented languages like SQL.
The SAP HANA database is optimized for modern technology trends and takes advantage of modern hardware,
for example, by having data residing in the main memory and allowing massive parallelization on multi-core
CPUs. The goal of the SAP HANA database is to support application requirements by making use of such
hardware. The SAP HANA database exposes a very sophisticated interface to the application, consisting of
many languages. The expressiveness of these languages far exceeds that attainable with OpenSQL. The set of
SQL extensions for the SAP HANA database, which allows developers to push data-intensive logic to the
database, is called SQLScript. Conceptually, SQLScript is related to stored procedures as defined in the SQL
standard, but it is designed to provide superior optimization possibilities. SQLScript should be used in
cases where other modeling constructs of SAP HANA, for example analytic views or attribute views, are not
sufficient. For more information on how to best exploit the different view types, see "Exploit Underlying Engine".
The set of SQL extensions are the key to avoiding massive data copies to the application server and to
leveraging sophisticated parallel execution strategies of the database. SQLScript addresses the following
problems:
● Decomposing an SQL query can only be performed by using views. However, when decomposing complex
queries by using views, all intermediate results are visible and must be explicitly typed. Moreover, SQL
views cannot be parameterized, which limits their reuse. In particular they can only be used like tables and
embedded into other SQL statements.
● SQL queries do not have features to express business logic (for example a complex currency conversion).
As a consequence, such business logic cannot be pushed down into the database (even if it is mainly based
on standard aggregations like SUM(Sales), and so on).
● An SQL query can only return one result at a time. As a consequence, the computation of related result
sets must be split into separate, usually unrelated, queries.
● Although SQLScript encourages developers to implement algorithms using a set-oriented paradigm rather
than a one-tuple-at-a-time paradigm, some algorithms, for example iterative approximation algorithms, still
require imperative logic. SQLScript therefore makes it possible to mix imperative constructs known from
stored procedures with declarative ones.
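The first three points can be illustrated with a short sketch. The table and column names below are hypothetical; the point is that a single parameterized procedure can return two related result sets computed from one shared intermediate result, which a plain SQL query cannot express:

```sql
-- Hypothetical example: one shared intermediate, two tabular outputs
CREATE PROCEDURE sales_overview (
    IN  year INT,
    OUT by_region  TABLE (region NVARCHAR(40), total DECIMAL(15,2)),
    OUT by_product TABLE (product NVARCHAR(40), total DECIMAL(15,2)))
  LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  -- Common sub-expression, assigned once to a table variable
  lt_sales = SELECT region, product, amount
               FROM sales WHERE fiscal_year = :year;
  by_region  = SELECT region,  SUM(amount) AS total
                 FROM :lt_sales GROUP BY region;
  by_product = SELECT product, SUM(amount) AS total
                 FROM :lt_sales GROUP BY product;
END;
```

The optimizer decides whether :lt_sales is materialized or inlined into both consuming queries.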
Related Information
You can develop secure procedures using SQLScript in SAP HANA by observing the following
recommendations.
Using SQLScript, you can read and modify information in the database. In some cases, depending on the
commands and parameters you choose, you can create a situation in which data leakage or data tampering
can occur. To prevent this, SAP recommends using the following practices in all procedures.
● Mark each parameter using the keywords IN or OUT. Avoid using the INOUT keyword.
● Use the INVOKER keyword when you want the user to have the assigned privileges to start a procedure.
The default keyword, DEFINER, allows only the owner of the procedure to start it.
● Mark read-only procedures using READS SQL DATA whenever it is possible. This ensures that the data and
the structure of the database are not altered.
Tip
● Ensure that the types of parameters and variables are as specific as possible. Avoid using VARCHAR, for
example. By reducing the length of variables you can reduce the risk of injection attacks.
● Perform validation on input parameters within the procedure.
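A procedure header that follows these recommendations might look as sketched below; the table and parameter names are illustrative:

```sql
CREATE PROCEDURE get_orders (
    IN  customer_id INT,        -- specific type instead of a wide VARCHAR
    OUT orders TABLE (order_id INT, order_date DATE))
  LANGUAGE SQLSCRIPT
  SQL SECURITY INVOKER          -- run with the caller's privileges
  READS SQL DATA AS             -- read-only: no DDL or DML possible
BEGIN
  orders = SELECT order_id, order_date
             FROM orders_tab WHERE customer = :customer_id;
END;
```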
Dynamic SQL
In SQLScript you can create dynamic SQL using one of the following commands: EXEC and EXECUTE
IMMEDIATE. These commands allow the use of variables in places where SQLScript does not otherwise
support them. In these situations you risk injection attacks unless you perform input validation within the
procedure. In some cases injection attacks can occur by way of data from another database table.
To avoid potential vulnerability from injection attacks, consider using the following methods instead of dynamic
SQL:
● Use static SQL statements. For example, use the static statement, SELECT instead of EXECUTE
IMMEDIATE and passing the values in the WHERE clause.
● Use server-side JavaScript to write this procedure instead of using SQLScript.
● Perform validation on input parameters within the procedure using either SQLScript or server-side
JavaScript.
● Use APPLY_FILTER if you need a dynamic WHERE condition
● Use the SQL Injection Prevention Function
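For example, a dynamic WHERE condition can often be expressed with APPLY_FILTER instead of concatenating a statement string for EXECUTE IMMEDIATE. A sketch with illustrative names (the filter string should still be validated before use):

```sql
CREATE PROCEDURE filter_products (
    IN  filter NVARCHAR(512),
    OUT result TABLE (product_id INT, price DECIMAL(10,2)))
  LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  lt_all = SELECT product_id, price FROM products;
  -- APPLY_FILTER applies the condition to the table variable without
  -- building a complete SQL statement from user input
  result = APPLY_FILTER(:lt_all, :filter);
END;
```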
Escape Code
You might need to use some SQL statements that are not supported in SQLScript, for example, the GRANT
statement. In other cases you might want to use the Data Definition Language (DDL) in which some <name>
To avoid potential vulnerability from injection attacks, consider using the following methods instead of escape
code:
Tip
For more information about security in SAP HANA, see the SAP HANA Security Guide.
Related Information
To better understand the features of SQLScript and their impact on execution, it can be helpful to understand
how SQLScript is processed in the SAP HANA database.
When a user defines a new procedure, for example using the CREATE PROCEDURE statement, the SAP HANA
database query compiler processes the statement in a similar way to an SQL statement. A step-by-step
analysis of the process flow follows:
When the procedure starts, the invoke activity can be divided into two phases:
1. Compilation
○ Code generation - for declarative logic the calculation models are created to represent the data flow
defined by the SQLScript code. It is optimized further by the calculation engine, when it is instantiated.
For imperative logic the code blocks are translated into L-nodes.
○ The calculation models generated in the previous step are combined into a stacked calculation model.
2. Execution - the execution commences with binding actual parameters to the calculation models. When the
calculation models are instantiated they can be optimized based on concrete input provided. Optimizations
include predicate or projection embedding in the database. Finally, the instantiated calculation model is
executed by using any of the available parts of the SAP HANA database.
With SQLScript you can implement applications by using both imperative orchestration logic and (functional)
declarative logic, and this is also reflected in the way SQLScript processing works for those two coding styles.
Orchestration logic is used to implement data-flow and control-flow logic using imperative language constructs
such as loops and conditionals. The orchestration logic can also execute declarative logic, which is defined in
the functional extension by calling the corresponding procedures. In order to achieve an efficient execution on
both levels, the statements are transformed into a dataflow graph to the maximum extent possible. The
compilation step extracts data-flow oriented snippets out of the orchestration logic and maps them to data-
flow constructs. The calculation engine serves as execution engine of the resulting dataflow graph. Since the
language L is used as intermediate language for translating SQLScript into a calculation model, the range of
mappings may span the full spectrum – from a single internal L-node for a complete SQLScript script in its
simplest form, up to a fully resolved data-flow graph without any imperative code left. Typically, the dataflow
graph provides more opportunities for optimization and thus better performance.
To transform the application logic into a complex data-flow graph two prerequisites have to be fulfilled:
● All data flow operations have to be side-effect free, that is they must not change any global state either in
the database or in the application logic.
● All control flows can be transformed into a static dataflow graph.
In SQLScript the optimizer will transform a sequence of assignments of SQL query result sets to table variables
into parallelizable dataflow constructs. The imperative logic is usually represented as a single node in the
dataflow graph, and thus it is executed sequentially.
This procedure features a number of imperative constructs including the use of a cursor (with associated
state) and local scalar variables with assignments.
Declarative logic is used for efficient execution of data-intensive computations. This logic is represented
internally as data flows, which can be executed in a parallel manner. As a consequence, operations in a data-
flow graph have to be free of side effects. This means they must not change any global state, either in the
database or in the application. The first condition is ensured by only allowing changes to the data set that is
passed as input to the operator. The second condition is achieved by allowing only a limited subset of language
features to express the logic of the operator. If those prerequisites are fulfilled, the following types of operators
are available:
Logically each operator represents a node in the data-flow graph. Custom operators have to be implemented
manually by SAP.
This document uses BNF (Backus Naur Form), the notation technique used to define programming languages.
BNF describes the syntax of a grammar using a set of production rules and a set of symbols.
Symbol Description
<> Angle brackets are used to surround the name of a syntax element (BNF non-terminal) of the SQL
language.
::= The definition operator is used to provide definitions of the element appearing on the left side of
the operator in a production rule.
[] Square brackets are used to indicate optional elements in a formula. Optional elements may be
specified or omitted.
{} Braces group elements in a formula. Repetitive elements (zero or more elements) can be specified
within brace symbols.
| The alternative operator indicates that the portion of the formula following the bar is an alternative
to the portion preceding the bar.
... The ellipsis indicates that the element may be repeated any number of times. If the ellipsis appears
after grouped elements, the grouped elements enclosed with braces are repeated. If the ellipsis
appears after a single element, only that element is repeated.
!! Introduces normal English text. This is used when the definition of a syntactic element is not
expressed in BNF.
Throughout the BNF used in this document each syntax term is defined to one of the lowest term
representations shown below.
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
<letter> ::= a | b | c | d | e | f | g | h | i | j | k | l | m | n | o | p | q |
r | s | t | u | v | w | x | y | z
| A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q |
R | S | T | U | V | W | X | Y | Z
<comma> ::= ,
<dollar_sign> ::= $
<hash_symbol> ::= #
<left_bracket> ::= [
<period> ::= .
<pipe_sign> ::= |
<right_bracket> ::= ]
<right_curly_bracket> ::= }
<sign> ::= + | -
<underscore> ::= _
Besides the built-in scalar SQL data types, SQLScript allows you to use user-defined types for tabular values.
The SQLScript type system is based on the SQL-92 type system. It supports the following primitive data types:
Note
This also holds true for SQL statements, apart from the TEXT and SHORTTEXT types.
Note
SQLScript currently allows a length of 8388607 characters for the NVARCHAR and the VARCHAR data
types, unlike SQL where the length of that data type is limited to 5000.
For more information on scalar types, see SAP HANA SQL and System Views Reference, Data Types.
The SQLScript data type extension allows the definition of table types. These types are used to define
parameters for procedures representing tabular results.
Syntax
Syntax Elements
Identifies the table type to be created and, optionally, in which schema it should be created.
For more information on data types, see Scalar Data Types [page 16].
Description
Example
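The original listing does not appear in this copy; a minimal sketch of a table type definition (the type name and columns are illustrative) looks like this:

```sql
-- Table type with two columns; no corresponding table is created
CREATE TYPE tt_publishers AS TABLE (
    publisher INT,
    name      NVARCHAR(50));
```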
Syntax
Syntax Elements
The identifier of the table type to be dropped, with optional schema name
When <drop_option> is not specified, a non-cascaded drop is performed. This drops only the specified
type; dependent objects of the type are invalidated but not dropped.
The invalidated objects can be revalidated when an object with the same schema and object name is created.
Example
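The original listing does not appear in this copy; for example, a non-cascaded drop of a table type named tt_publishers would be:

```sql
-- Dependent objects are invalidated but not dropped
DROP TYPE tt_publishers;
```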
You can declare a row type variable, which is a collection of scalar data types, and use it to easily fetch a single
row from a table.
To declare row type variable, you can enumerate a list of columns, or use the TYPE LIKE keyword.
To assign values to a row type variable or to reference values of a row variable, proceed as follows.
DO BEGIN
DECLARE x, y ROW (a INT, b VARCHAR(16), c TIMESTAMP);
x = ROW(1, 'a', '2000-01-01');
x.a = 2;
y = :x;
SELECT :y.a, :y.b, :y.c FROM DUMMY;
-- Returns [2, 'a', '2000-01-01']
END;
You can fetch or select multiple values into a single row variable.
DO BEGIN
DECLARE CURSOR cur FOR SELECT 1 as a, 'a' as b, to_timestamp('2000-01-01')
as c FROM DUMMY;
DECLARE x ROW LIKE :cur;
OPEN cur;
FETCH cur INTO x;
SELECT :x.a, :x.b, :x.c FROM DUMMY;
-- Returns [1, 'a', '2000-01-01']
SELECT 2, 'b', '2000-02-02' INTO x FROM DUMMY;
SELECT :x.a, :x.b, :x.c FROM DUMMY;
-- Returns [2, 'b', '2000-02-02']
END;
Limitations
In SQLScript there are two different logic containers: Procedure and User-Defined Function.
The User-Defined Function container is separated into Scalar User-Defined Function and Table User-Defined
Function.
The following sections provide an overview of the syntactical language description for both containers.
6.1 Procedures
Procedures allow you to describe a sequence of data transformations on data passed as input and on
database tables.
Data transformations can be implemented as queries that follow the SAP HANA database SQL syntax, or by
calling other procedures. Read-only procedures can only call other read-only procedures.
● You can parameterize and reuse calculations and transformations described in one procedure in other
procedures.
● You can use and express knowledge about relationships in the data; related computations can share
common sub-expressions, and related results can be returned using multiple output parameters.
● You can define common sub-expressions. The query optimizer decides if a materialization strategy (which
avoids recomputation of expressions) or other optimizing rewrites are best to apply. In any case, it eases
the task of detecting common sub-expressions and improves the readability of the SQLScript code.
● You can use scalar variables or imperative language features if required.
Syntax
Note
The default is IN. Each parameter is marked using the keyword IN, OUT, or INOUT. Input and output
parameters must be explicitly assigned a type (that means that tables without a type are not
supported).
● The input and output parameters of a procedure can have any of the primitive SQL types or a table type.
INOUT parameters can only be of the scalar or the array type.
Array variables or constant arrays can be passed to procedures as input, output, and inout parameters
with the following limitations:
○ LOB type array parameter is not supported.
○ DEFAULT VALUE for an array parameter is not supported.
○ Using an array parameter in the USING clause of Dynamic SQL is not supported.
Note
For more information on data types see Data Types in the SAP HANA SQL and System Views Reference
on the SAP Help Portal.
● A table type previously defined with the CREATE TYPE command, see CREATE TYPE [page 17].
LANGUAGE <lang>
<lang> ::= SQLSCRIPT | R
Tip
● Indication that the execution of the procedure is performed with the privileges of the definer of the
procedure
DEFINER
● Indication that the execution of the procedure is performed with the privileges of the invoker of the
procedure
INVOKER
● Specifies the schema for unqualified objects in the procedure body; if nothing is specified, then the
current_schema of the session is used.
● Marks the procedure as being read-only and side-effect free - the procedure does not make modifications
to the database data or its structure. This means that the procedure does not contain DDL or DML
statements and that it only calls other read-only procedures. The advantage of using this parameter is that
certain optimizations are available for read-only procedures.
● By default, every SQLScript procedure or function runs with AUTOCOMMIT mode OFF and AUTOCOMMIT
DDL mode OFF. Now you can explicitly specify whether the procedure should be run with AUTOCOMMIT
DDL mode ON or OFF.
Caution
In some cases AUTOCOMMIT DDL mode ON may be required. For example, in administrative
operations, like IMPORT, which cannot run with DDL AUTOCOMMIT mode OFF.
You can find out the AUTOCOMMIT DDL mode for each procedure by using the column
'AUTO_COMMIT_DDL' in the monitoring view 'PROCEDURES'.
The following restrictions apply:
○ It cannot be used in functions
○ It cannot be used in non-SQLScript procedures
● This statement forces sequential execution of the procedure logic. No parallelism takes place.
SEQUENTIAL EXECUTION
For more information on inserting, updating and deleting data records, see Modifying the Content of Table
Variables [page 115].
● You can modify a data record at a specific position. There are two equivalent syntax options:
● You can delete data records from a table variable. With the following syntax you can delete a single record.
● To delete blocks of records from table variables, you can use the following syntax:
● Sections of your procedures can be nested using BEGIN and END terminals
● Assignment of values to variables - an <expression> can be either a simple expression, such as a character,
a date, or a number, or it can be a scalar function or a scalar user-defined function.
● The ARRAY_AGG function returns the array by aggregating the set of elements in the specified column of
the table variable. Elements can optionally be ordered.
The CARDINALITY function returns the number of the elements in the array, <array_variable_name>.
The TRIM_ARRAY function returns the new array by removing the given number of elements,
<numeric_value_expression>, from the end of the array, <array_value_expression>.
The ARRAY function returns an array whose elements are specified in the list <array_variable_name>. For
more information see the chapter Array Variables [page 202].
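A brief anonymous block can illustrate these array functions; this is a sketch, and the exact result formatting may differ:

```sql
DO BEGIN
  DECLARE arr INT ARRAY;
  DECLARE n INT;
  arr = ARRAY(10, 20, 30);      -- build an array from a list of elements
  n   = CARDINALITY(:arr);      -- number of elements: 3
  arr = TRIM_ARRAY(:arr, 1);    -- remove 1 element from the end
  SELECT :n AS cnt, CARDINALITY(:arr) AS trimmed FROM DUMMY;
END;
```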
● Assignment of values to a list of variables with only one function evaluation. For example,
<function_expression> must be a scalar user-defined function and the number of elements in
<var_name_list> must be equal to the number of output parameters of the scalar user-defined function.
● The MAP_MERGE operator is used to apply each row of the input table to the mapper function and unite all
intermediate result tables. For more information, see Map Merge Operator [page 98].
● For more information about the CE operators, see Calculation Engine Plan Operators [page 217].
● The UNNEST function returns a table including a row for each element of the specified array.
WITH ORDINALITY
● You use WHILE to repeatedly call a set of trigger statements while a condition is true.
● You use FOR - EACH loops to iterate over all elements in a set of data.
● Terminates a loop
● You use the SIGNAL statement to explicitly raise an exception from within your trigger procedures.
● You use the RESIGNAL statement to raise an exception on the action statement in an exception handler. If
an error code is not specified, RESIGNAL will throw the caught exception.
● You use SET MESSAGE_TEXT to deliver an error message to users when specified error is thrown during
procedure execution.
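A hedged sketch of SIGNAL with SET MESSAGE_TEXT follows; the error code is arbitrary, chosen from the user-defined range 10000 to 19999:

```sql
CREATE PROCEDURE check_positive (IN val INT)
  LANGUAGE SQLSCRIPT AS
BEGIN
  IF :val <= 0 THEN
    -- raise a user-defined error with an explanatory message
    SIGNAL SQL_ERROR_CODE 10001
      SET MESSAGE_TEXT = 'Input must be positive';
  END IF;
END;
```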
For information on <insert_stmt>, see INSERT in the SAP HANA SQL and System Views Reference.
For information on <delete_stmt>, see DELETE in the SAP HANA SQL and System Views Reference.
For information on <update_stmt>, see UPDATE in the SAP HANA SQL and System Views Reference.
For information on <replace_stmt> and <upsert_stmt>, see REPLACE and UPSERT in the SAP HANA
SQL and System Views Reference.
For information on <truncate_stmt>, see TRUNCATE in the SAP HANA SQL and System Views Reference.
● <var_name> is a scalar variable. You can assign selected item value to this scalar variable.
● Cursor operations
● Procedure call. For more information, see CALL: Internal Procedure Call [page 35]
Description
The CREATE PROCEDURE statement creates a procedure by using the specified programming language
<lang>.
Example
The procedure features a number of imperative constructs including the use of a cursor (with associated state)
and local scalar variables with assignments.
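The original listing does not appear in this copy. A comparable sketch, featuring a cursor (with its associated state) and local scalar variables with assignments, might look as follows; the table and names are illustrative:

```sql
CREATE PROCEDURE count_large_orders (OUT cnt INT)
  LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  DECLARE total DECIMAL(15,2) := 0;
  -- cursor with associated state, iterated imperatively
  DECLARE CURSOR c FOR SELECT amount FROM orders_tab;
  cnt = 0;
  FOR r AS c DO
    IF r.amount > 1000 THEN
      cnt   = :cnt + 1;
      total = :total + r.amount;
    END IF;
  END FOR;
END;
```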
Syntax
Syntax Elements
If you do not specify the <drop_option>, the system performs a non-cascaded drop. This will only drop the
specified procedure; dependent objects of the procedure will be invalidated but not dropped.
The invalidated objects can be revalidated when an object that uses the same schema and object name is
created.
CASCADE
RESTRICT
This parameter drops the procedure only when no dependent objects exist. If this drop option is used and a
dependent object exists, an error is returned.
Description
This statement drops a procedure created using CREATE PROCEDURE from the database catalog.
Examples
You drop a procedure called my_proc from the database using a non-cascaded drop.
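The statement itself is not shown in this copy; a non-cascaded drop is simply:

```sql
DROP PROCEDURE my_proc;
```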
You can use ALTER PROCEDURE if you want to change the content and properties of a procedure without
dropping the object.
For more information about the parameters, see CREATE PROCEDURE [page 21].
For instance, with ALTER PROCEDURE you can change the content of the body itself. Consider the following
GET_PROCEDURES procedure that returns all procedure names on the database.
The procedure GET_PROCEDURES should now be changed to return only valid procedures. In order to do so, use
ALTER PROCEDURE:
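The listings are missing from this copy. A plausible reconstruction based on the surrounding description, using the SYS.PROCEDURES view, which exposes PROCEDURE_NAME and IS_VALID columns:

```sql
CREATE PROCEDURE GET_PROCEDURES (
    OUT procedures TABLE (schema_name NVARCHAR(256), name NVARCHAR(256)))
AS
BEGIN
  procedures = SELECT schema_name, procedure_name AS name
                 FROM SYS.PROCEDURES;
END;

-- Change the body so that only valid procedures are returned
ALTER PROCEDURE GET_PROCEDURES (
    OUT procedures TABLE (schema_name NVARCHAR(256), name NVARCHAR(256)))
AS
BEGIN
  procedures = SELECT schema_name, procedure_name AS name
                 FROM SYS.PROCEDURES
                WHERE is_valid = 'TRUE';
END;
```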
Besides changing the procedure body, you can also change the language <lang> of the procedure, the default
schema <default_schema_name> as well as change the procedure to read only mode (READS SQL DATA).
Note
If the default schema and read-only mode are not explicitly specified, they will be removed. The default
language is SQLScript.
Note
You must have the ALTER privilege for the object you want to change.
Syntax
Syntax Elements
The identifier of the procedure to be altered, with the optional schema name.
Description
Example
You trigger the recompilation of the my_proc procedure to produce debugging information.
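The statement is not shown in this copy; based on the ALTER PROCEDURE syntax it would read:

```sql
-- Recompile and retain the execution plan for debugging purposes
ALTER PROCEDURE my_proc RECOMPILE WITH PLAN;
```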
A procedure can be called either by a client on the outer-most level, using any of the supported client
interfaces, or within the body of a procedure.
Recommendation
SAP recommends that you use parameterized CALL statements for better performance. The advantages
follow.
● The parameterized query compiles only once, thereby reducing the compile time.
● A stored query string in the SQL plan cache is more generic and a precompiled query plan can be
reused for the same procedure call with different input parameters.
6.1.5.1 CALL
Syntax
Syntax Elements
Procedure parameters
For more information on these data types, see Backus Naur Form Notation [page 14] and Scalar Data Types
[page 16].
Parameters passed to a procedure are scalar constants and can be passed either as IN, OUT, or INOUT
parameters. Scalar parameters are assumed to be NOT NULL. Arguments for IN parameters of type table can
be either physical tables or views. The actual value passed for tabular OUT parameters must be `?`.
WITH OVERVIEW
Defines that the result of a procedure call will be stored directly into a physical table.
Calling a procedure WITH OVERVIEW returns one result set that contains information about which table
contains the result of which table output variable. Scalar outputs are returned as regular scalar output
parameters. When you pass existing tables to the output parameters, WITH OVERVIEW inserts the result-set
tuples of the procedure into the provided tables. When you pass '?' to the output parameters, temporary tables
holding the result sets are generated. These tables are dropped automatically once the database session is
closed.
CALL returns a list of result sets with one entry for every tabular result. An iterator can be used to iterate over
these results sets. For each result set you can iterate over the result table in the same manner as you do for
query results. SQL statements that are not assigned to any table variable in the procedure body are added as
result sets at the end of the list of result sets. The type of the result structures will be determined during
compilation time but will not be visible in the signature of the procedure.
When CALL is executed by a client, the syntax behaves in a way consistent with SQL standard semantics; for
example, Java clients can call a procedure using a JDBC CallableStatement. Scalar output variables are scalar
values that can be retrieved directly from the callable statement.
Note
Unquoted identifiers are implicitly treated as upper case. Quoting identifiers will respect capitalization and
allow for using white spaces that are normally not allowed in SQL identifiers.
Examples
It is also possible to use a scalar user-defined function as a parameter for a procedure call:
CALL proc(udf(), 'EUR', ?, ?);
CALL proc(udf() * udf() - 55, 'EUR', ?, ?);
In this example, udf() is a scalar user-defined function. For more information about scalar user-defined
functions, see CREATE FUNCTION [page 47]
Syntax:
Syntax Elements:
Note
Description:
For an internal procedure, in which one procedure calls another procedure, all existing variables of the caller or
literals are passed to the IN parameters of the callee and new variables of the caller are bound to the OUT
parameters of the callee. The result is implicitly bound to the variable given in the function call.
Example:
When the procedure addDiscount is called, the variable <:lt_expensive_books> is assigned to the
function and the variable <lt_on_sales> is bound by this function call.
You can call a procedure passing named parameters by using the token =>.
For example:
When you use named parameters, you can ignore the order of the parameters in the procedure signature. Run
the following commands and you can try some of the examples below.
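A sketch of a named-parameter call; the procedure and parameter names are illustrative:

```sql
-- The order of named arguments need not match the signature
CALL proc(ev_sum => ?, iv_currency => 'EUR', iv_amount => 100);
```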
Parameter Modes
The following table lists the parameters you can use when defining your procedures.
Parameter modes
Mode Description
IN An input parameter
OUT An output parameter
INOUT Specifies a parameter that passes in and returns data to and from the procedure
Note
This is only supported for scalar values. The parameter needs to be parameterized when you
call the procedure, for example CALL PROC ( inout_var=>?). A non-parameterized
call of a procedure with an INOUT parameter is not supported.
Both scalar and table parameter types are supported. For more information on data types, see Data Type
Extension
Related Information
Scalar Parameters
Table Parameters
You can pass tables and views to the table parameters of a procedure.
Note
You should always use SQL special identifiers when binding a value to a table variable.
Note
In the signature you can define default values for input parameters by using the DEFAULT keyword:
The use of default values is illustrated in the next example, which requires the following tables:
The procedure in the example generates a FULLNAME from the given input table and delimiter. Default
values are used for both input parameters:
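The listings are missing from this copy. A plausible reconstruction consistent with the surrounding description, with table, type, and column names inferred from the text:

```sql
CREATE TABLE NAMES   (firstname NVARCHAR(20), lastname NVARCHAR(20));
INSERT INTO NAMES   VALUES ('JOHN',  'DOE');
CREATE TABLE MYNAMES (firstname NVARCHAR(20), lastname NVARCHAR(20));
INSERT INTO MYNAMES VALUES ('ALICE', 'DOE');

CREATE TYPE tt_names AS TABLE (firstname NVARCHAR(20), lastname NVARCHAR(20));

CREATE PROCEDURE FULLNAME (
    IN  intab     tt_names    DEFAULT NAMES,   -- tabular default
    IN  delimiter NVARCHAR(1) DEFAULT ',',     -- scalar default
    OUT outtab TABLE (fullname NVARCHAR(41)))
AS
BEGIN
  outtab = SELECT lastname || :delimiter || firstname AS fullname
             FROM :intab;
END;
```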
For the tabular input parameter INTAB the default table NAMES is defined, and for the scalar input parameter
DELIMITER the character ',' is defined as the default. To use the default values, call the procedure without
specifying these parameters. The result is:
FULLNAME
--------
DOE,JOHN
If you now want to pass a different table, for example MYNAMES, but still use the default delimiter value, the
call looks as follows:
The result shows that the table MYNAMES was used:
FULLNAME
--------
DOE,ALICE
Note
Default values are not supported for output parameters.
Related Information
For a tabular IN and OUT parameter the EMPTY keyword can be used to define an empty input table as a
default:
Although general default value handling is supported for input parameters only, DEFAULT EMPTY is
supported for both tabular IN and OUT parameters.
The following example uses DEFAULT EMPTY for the tabular output parameter to declare a procedure with
an empty body.
END;
call CHECKINPUT(result=>?)
OUT(1)
-----------------
'Input is empty'
For functions, only tabular input parameters support the EMPTY keyword.
An example of calling the function without passing an input table looks as follows:
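A sketch of a function with an EMPTY default and a call that omits the input table might look like this (names are illustrative):

```sql
CREATE FUNCTION COUNT_ROWS (IN intab TABLE (id INT) DEFAULT EMPTY)
RETURNS cnt INT AS
BEGIN
    SELECT COUNT(*) INTO cnt FROM :intab;  -- empty table when omitted
END;

-- Calling the function without passing an input table:
SELECT COUNT_ROWS() AS cnt FROM DUMMY;
```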
When a procedure is created, information about the procedure can be found in the database catalog. You can
use this information for debugging purposes.
The procedures observable in the system views vary according to the privileges that a user has been granted.
The following visibility rules apply:
Procedures can be exported and imported, as tables are; see the SQL Reference documentation for details. For
more information, see Data Import Export Statements in the SAP HANA SQL and System Views Reference.
Related Information
6.1.7.1 SYS.PROCEDURES
Structure
Structure
6.1.7.3 SYS.OBJECT_DEPENDENCIES
Dependencies between objects, for example, views that refer to a specific table
Structure
● 0: NORMAL (default)
● 1: EXTERNAL_DIRECT (direct dependency between dependent object and base object)
● 2: EXTERNAL_INDIRECT (indirect dependency between dependent object and base object)
● 5: REFERENTIAL_DIRECT (foreign key dependency between tables)
This section explores the ways in which you can query the OBJECT_DEPENDENCIES system view.
Find all the (direct and indirect) base objects of the DEPS.GET_TABLES procedure using the following
statement.
Look at the DEPENDENCY_TYPE column in more detail. You obtained the results in the table above using a
select on all the base objects of the procedure; the objects shown include both persistent and transient
objects. You can distinguish between these object dependency types using the DEPENDENCY_TYPE column,
as follows:
To obtain only the base objects that are used in DEPS.MY_PROC, use the following statement.
Finally, to find all the dependent objects that are using DEPS.MY_PROC, use the following statement.
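The statements referred to above could be sketched as follows, assuming the documented columns of the OBJECT_DEPENDENCIES view (BASE_*, DEPENDENT_*, DEPENDENCY_TYPE):

```sql
-- All (direct and indirect) base objects of DEPS.GET_TABLES:
SELECT base_schema_name, base_object_name, base_object_type, dependency_type
  FROM OBJECT_DEPENDENCIES
 WHERE dependent_schema_name = 'DEPS'
   AND dependent_object_name = 'GET_TABLES';

-- Only the base objects directly used in DEPS.MY_PROC
-- (DEPENDENCY_TYPE = 1, EXTERNAL_DIRECT):
SELECT base_schema_name, base_object_name
  FROM OBJECT_DEPENDENCIES
 WHERE dependent_schema_name = 'DEPS'
   AND dependent_object_name = 'MY_PROC'
   AND dependency_type = 1;

-- All dependent objects that use DEPS.MY_PROC:
SELECT dependent_schema_name, dependent_object_name
  FROM OBJECT_DEPENDENCIES
 WHERE base_schema_name = 'DEPS'
   AND base_object_name = 'MY_PROC';
```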
PROCEDURE_PARAMETER_COLUMNS provides information about the columns used in table types which
appear as procedure parameters. The information is provided for all table types in use, in-place types and
externally defined types.
There are two different kinds of user-defined functions (UDF): table user-defined functions and scalar user-defined
functions, referred to as table UDF and scalar UDF in the following comparison. They differ in terms of calling, output, and supported functionality.
Calling
● Table UDF: can only be called in the FROM clause of an SQL statement, in the same positions as table names. For example: SELECT * FROM myTableUDF(1)
● Scalar UDF: can be called in SQL statements in the same positions as table column names, that is, in the SELECT and WHERE clauses of SQL statements. For example: SELECT myScalarUDF(1) AS myColumn FROM DUMMY
Output
● Table UDF: must return a table whose type is defined in <return_type>.
● Scalar UDF: must return scalar values specified in <return_parameter_list>.
Supported functionality
● Table UDF: the function is tagged as read-only by default. DDL and DML statements are not allowed, and only other read-only functions can be called.
● Scalar UDF: the function is tagged as a read-only function by default.
This SQL statement creates read-only user-defined functions that are free of side effects. This means that
neither DDL, nor DML statements (INSERT, UPDATE, and DELETE) are allowed in the function body. All
functions or procedures selected or called from the body of the function must be read-only.
Syntax
Syntax Elements
Scalar user-defined functions (SUDF) support the following primitive SQL types. Table types (table variables,
physical tables, or views) are also supported as input in SUDFs. Arrays are supported as input and return types.
SUDFs with table parameters can be used like any other SUDF, with the following exceptions:
Note
Take into consideration the following note on performance: SUDFs operate on table data row by row. In
the following example, the complexity of the operation would be at least O(record_count(t1) * record_count(t2)).
Table user-defined functions (TUDF) allow the following range of primitive SQL types. They also support table
types and array types as input.
To look at a table type previously defined with the CREATE TYPE command, see CREATE TYPE [page 17].
Table UDFs must return a table whose type is defined by <return_table_type>, and scalar UDFs must return
scalar values specified in <return_parameter_list>.
The following expression defines the structure of the returned table data.
LANGUAGE <lang>
<lang> ::= SQLSCRIPT
Default: SQLSCRIPT
Note
DEFINER
Specifies that the execution of the function is performed with the privileges of the definer of the function.
INVOKER
Specifies that the execution of the function is performed with the privileges of the invoker of the function.
Specifies the schema for unqualified objects in the function body. If nothing is specified, then the
current_schema of the session is used.
Defines the main body of the table user-defined functions and scalar user-defined functions. Since the function
is flagged as read-only, neither DDL, nor DML statements (INSERT, UPDATE, and DELETE), are allowed in the
Note
Scalar functions can be marked as DETERMINISTIC, if they always return the same result any time they are
called with a specific set of input parameters.
Defines one or more local variables with associated scalar type or array type.
An array type has <type> as its element type. An array has a range from 1 to 2,147,483,647, which is the
limit of the underlying structure.
You can assign default values by specifying <expression>s. For more information, see Expressions in the SAP
HANA SQL and System Views Reference on the SAP Help Portal.
For more information on the definitions in <func_stmt>, see CREATE PROCEDURE [page 21].
Note
Statements that require DDL AUTOCOMMIT ON, like imports, cannot be used in functions. For more
information, see CREATE PROCEDURE [page 21].
How to call the table function scale is shown in the following example:
The following example shows how to create a scalar function named func_add_mul that takes two values of
type DOUBLE and returns two values of type DOUBLE:
In a query, you can use the scalar function either in the projection list or in the WHERE clause. In the following
example, func_add_mul is used in the projection list:
Besides using the scalar function in a query, you can also use it in a scalar assignment, for example:
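A sketch of the func_add_mul function and both usage styles might look like this (the body is an assumption based on the description above):

```sql
CREATE FUNCTION func_add_mul (x DOUBLE, y DOUBLE)
RETURNS result_add DOUBLE, result_mul DOUBLE
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
    result_add = :x + :y;
    result_mul = :x * :y;
END;

-- In the projection list, individual results are accessed by name:
SELECT func_add_mul(1, 2).result_add AS added FROM DUMMY;

-- In a scalar assignment inside an anonymous block:
DO BEGIN
    DECLARE s DOUBLE;
    s = func_add_mul(3, 4).result_mul;
    SELECT :s AS multiplied FROM DUMMY;
END;
```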
You can use ALTER FUNCTION if you want to change the content and properties of a function without dropping
the object.
For more information about the parameters, refer to CREATE FUNCTION. For instance, with ALTER
FUNCTION you can change the content of the body itself. Consider the following function GET_FUNCTIONS,
which returns all function names in the database.
AS
BEGIN
return SELECT schema_name AS schema_name,
function_name AS name
FROM FUNCTIONS;
END;
The function GET_FUNCTIONS should now be changed to return only valid functions. In order to do so, we will
use ALTER FUNCTION:
AS
BEGIN
return SELECT schema_name AS schema_name,
function_name AS name
FROM FUNCTIONS
WHERE IS_VALID = 'TRUE';
END;
Besides changing the function body, you can also change the default schema <default_schema_name>.
Note
Note
You need the ALTER privilege for the object you want to change.
Syntax
Syntax Elements
When <drop_option> is not specified, a non-cascaded drop is performed. This drops only the specified
function; dependent objects of the function are invalidated but not dropped.
The invalidated objects can be revalidated when an object with the same schema and object name is created.
CASCADE
Drops the function and its dependent objects.
RESTRICT
Drops the function only when no dependent objects exist. If this drop option is used and a dependent
object exists, an error is thrown.
Description
Drops a function created using CREATE FUNCTION from the database catalog.
Examples
You drop a function called my_func from the database using a non-cascaded drop.
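The non-cascaded drop can be sketched as:

```sql
-- Non-cascaded drop: dependent objects are invalidated, not dropped.
DROP FUNCTION my_func;
```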
The following table lists the parameters you can use when defining your user-defined functions.
Table user-defined functions
● Can have a list of input parameters and must return a table whose type is defined in <return_type>.
● Input parameters must be explicitly typed and can have any primitive SQL type or a table type.
Scalar user-defined functions
● Can have a list of input parameters and must return scalar values specified in <return_parameter_list>.
● Input parameters must be explicitly typed and can have any primitive SQL type.
● Using a table as an input is not allowed.
Due to the design of late materialization, the implicit SELECT statements used within a procedure (or an
anonymous block) are executed after the procedure is finished, and scalar user-defined functions (SUDF) are
evaluated at fetch time of the SELECT statement. To avoid unexpected results for statements that are out of
the statement snapshot order within a procedure or a SUDF, implicit result sets are now materialized if
the SUDF references a persistent table.
When a function is created, information about the function can be found in the database catalog. You can use
this information for debugging purposes. The functions observable in the system views vary according to the
privileges that a user has been granted. The following visibility rules apply:
● CATALOG READ or DATA ADMIN – All functions in the system can be viewed.
● SCHEMA OWNER, or EXECUTE – Only functions that the user owns, or has execute privileges on, will be
shown.
6.2.6.1 SYS.FUNCTIONS
Structure
6.2.6.2 SYS.FUNCTION_PARAMETERS
Structure
6.2.6.3 FUNCTION_PARAMETER_COLUMNS
FUNCTION_PARAMETER_COLUMNS provides information about the columns used in table types which
appear as function parameters. The information is provided for all table types in use, in-place types and
externally defined types.
In the signature you can define default values for input parameters by using the DEFAULT keyword:
The usage of the default value will be illustrated in the next example. Therefore the following tables are needed:
The function in the example generates a FULLNAME from the given input table and delimiter, using default
values for both input parameters:
END;
For the tabular input parameter INTAB, the default table NAMES is defined, and for the scalar input parameter
DELIMITER, ',' is defined as the default.
Querying the function FULLNAME while using the default values is done as follows:
FULLNAME
--------
DOE,JOHN
When a different table, for example MYNAMES, is passed instead, the result shows that MYNAMES was used:
FULLNAME
--------
DOE,ALICE
In a scalar function, default values can also be used, as shown in the next example:
Calling that function while relying on the default value of the delimiter parameter looks as follows:
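A scalar function with a default parameter might be sketched like this (names are illustrative, not taken from the original sample):

```sql
CREATE FUNCTION JOIN_NAMES (
    IN firstname NVARCHAR(20),
    IN lastname  NVARCHAR(20),
    IN delimiter NVARCHAR(10) DEFAULT ','
)
RETURNS fullname NVARCHAR(50) AS
BEGIN
    fullname = :lastname || :delimiter || :firstname;
END;

-- The default delimiter ',' is used when the argument is omitted:
SELECT JOIN_NAMES('JOHN', 'DOE') AS fullname FROM DUMMY;
```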
Note
Default values are not supported for output parameters.
Related Information
SQLScript allows a table function to be embedded inside an SQL query without the creation of any additional
metadata. An SQL query can accept an SQL FUNCTION block as a table that embeds imperative
SQLScript logic inside a single query.
Syntax
Description
It is possible to create a one-time SQLScript function that embeds imperative SQLScript logic inside an SQL
query. Previously, it was necessary to create an SQLScript function as a metadata object and then consume it inside a
single query. Similarly to the anonymous procedure block DO BEGIN…END, the SQL FUNCTION RETURNS…
BEGIN…END block supports this kind of one-time table function.
Example
Sample Code
-- input parameter
select a from
sql function (in a int => 1)
returns table (a int)
begin
return select :a as a from dummy;
end;
Limitations
If the SQL FUNCTION clause is nested inside another SQLScript object, most of the SQLScript system
variables are not available unless they are defined as input parameters.
● ROWCOUNT is not shared between the caller object and the SQL FUNCTION but it can still show the
selected ROWCOUNT from the SELECT statement itself.
● SQL_ERROR_CODE and SQL_ERROR_MESSAGE are not inherited, although it is possible to define them
explicitly within the SQL FUNCTION
Deterministic scalar user-defined functions always return the same result any time they are called with a
specific set of input values.
When you use such functions, it is not necessary to recalculate the result every time - you can refer to the
cached result. If you want to make a scalar user-defined function explicitly deterministic, you need to use the
optional keyword DETERMINISTIC when you create your function, as demonstrated in the example below. The
lifetime of the cache entry is bound to the query execution (for example, SELECT/DML). After the execution of
the query, the cache is destroyed.
Sample Code
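An illustrative sketch of such a function (not the original sample) could be:

```sql
-- DETERMINISTIC allows the result for a given set of inputs to be
-- cached for the duration of the query execution.
CREATE FUNCTION square (val INT)
RETURNS result INT DETERMINISTIC AS
BEGIN
    result = :val * :val;
END;
```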
Note
In the system view SYS.FUNCTIONS, the column IS_DETERMINISTIC provides information about whether a
function is deterministic or not.
Non-Deterministic Functions
The following non-deterministic functions cannot be specified in deterministic scalar user-defined functions.
They return an error at function creation time.
● nextval/currval of sequence
● current_time/current_timestamp/current_date
● current_utctime/current_utctimestamp/current_utcdate
● rand/rand_secure
● window functions
Procedure Result Cache (PRC) is a server-wide in-memory cache that caches the output arguments of
procedure calls using the input arguments as keys.
Note
Syntax
create procedure add (in a int, in b int, out c int) deterministic as begin
c = :a + :b;
end
Description
You can use the keyword DETERMINISTIC when creating a new procedure, if the following conditions are met:
● The procedure always returns the same output arguments when it is called with the same input arguments,
even if the session and database state is not the same.
● The procedure has no side effects.
You can also create a procedure with the keyword DETERMINISTIC, even if it does not satisfy the above
conditions, by changing the configuration parameters described in the configuration section. Procedures
created with the keyword DETERMINISTIC are described below as "deterministic procedures", regardless of
whether they are logically deterministic or not.
By default, you cannot create a deterministic procedure that contains the following:
You can skip the determinism check when creating deterministic procedures, at your own responsibility. This is useful
when you want to create logically deterministic procedures that may contain non-deterministic statements.
When disabling the check, please be aware that the cache can be shared among users, so if the procedure
results depend on the current user (for example, the procedure security is invoker and there are user-specific
functions or use of tables with analytic privileges), it may not behave as you expect. Disabling the check is not
recommended.
● If a deterministic procedure has side effects, the side effects may or may not be visible when you call the
procedure.
● If a deterministic procedure has implicit result sets, they may or may not be returned when you call the
procedure.
● If a deterministic procedure returns different output arguments for the same input arguments, you may or
may not get the same output arguments when you call the procedure multiple times with the same input
arguments.
Configuration
The configuration parameters below refer to Procedure Result Cache (PRC) under the section "sqlscript".
There are also session variables that can be set for each session and which override the settings above.
● __SQLSCRIPT_ENABLE_DETERMINISTIC_PROCEDURE_CHECK corresponds to enable_deterministic_procedure_check
● __SQLSCRIPT_ENABLE_DETERMINISTIC_PROCEDURE_RESULT_CACHE corresponds to enable_deterministic_procedure_cache
Note
Description
The scope of the cache is the current server (for example, indexserver or cacheserver). If you call the same
deterministic procedure in the same server with the same arguments multiple times, the cached results will be
used except for the first call, unless the cached results are evicted. Since the cache is global in the current
server, the results are shared even among different query plans.
Note
Currently, only scalar parameters are supported for PRC. You can create deterministic procedures having
table parameters, but automatic caching will be disabled for such procedures.
The same keyword, DETERMINISTIC, can be used for both procedures and functions, but currently the
meaning is not the same.
For scalar user-defined functions, a new cache is created for each statement execution and destroyed after
execution. The cache is local to the current statement which has a fixed snapshot of the persistence at a point
in time. Due to this behavior, more things can be considered "deterministic" in deterministic scalar UDFs, such
as reading a table.
Related Information
Syntax
Code Syntax
Description
A library is a set of related variables, procedures and functions. There are two types of libraries: built-in libraries
and user-defined libraries. A built-in library is a system-provided library with special functions. A user-defined
library is a library created by a user. Using libraries has the following advantages:
● A single metadata object is created for multiple procedures and functions. By combining all relevant
procedures and functions into a single metadata object, you reduce metadata management cost. On the
other hand, if one function or a procedure of the library becomes invalid, the whole library becomes invalid.
● The atomicity of the relevant objects is guaranteed because they are managed as a single object.
● It is easy to handle the visibility of a procedure or a function in a library. When an application gets bigger
and complex, developers might want to use some procedures or functions only in their application and not
to open them to application users. A library can solve this requirement easily by using the access modes
PUBLIC and PRIVATE for each library member.
● Constant and non-constant variables are available in a library. You can declare a constant variable for a
frequently used constant value and use the variable name instead of specifying the value each time. A non-
constant value is alive during a session and you can access the value at any time if the session is available.
Note
Any user having the EXECUTE privilege on a library can use that library by means of the USING
statement and can also access its public members.
Limitations
● The usage of library variables is currently limited. For example, it is not possible to use library variables in
the INTO clause of a SELECT INTO statement and in the INTO clause of dynamic SQL. This limitation can
be easily circumvented by using a normal scalar variable as intermediate value.
● It is not possible to call library procedures with hints.
● Since session variables are used for library variables, it is possible (provided you have the necessary privileges)
to read and modify arbitrary library variables of other sessions.
● Variables cannot be declared by using LIKE for specifying the type.
● Non-constant variables cannot have a default value.
● The table type library variable is not supported.
● A library member function cannot be used in queries.
Related Information
Syntax
Code Syntax
Description
Access Mode
Each library member can have a PUBLIC or a PRIVATE access mode. PRIVATE members are not accessible
outside the library, while PUBLIC members can be used freely in procedures and functions.
Example
Sample Code
Setup
do begin
declare idx int = 0;
for idx in 1..200 do
insert into data_table values (:idx);
end for;
end;
Sample Code
Library DDL
public procedure get_data(in size int, out result table(col1 int)) as begin
result = select top :size col1 from data_table;
end;
end;
Sample Code
Result
call myproc(10);
Result:
count(*)
10
call myproc(150);
Result:
count(*)
100
Related Information
LIBRARIES
LIBRARY_MEMBERS
Related Information
Description
Until now it was possible to use library members of user-defined libraries (UDL) only within the scope of other
SQLScript objects like procedures, functions or anonymous blocks. For example, even if you only wanted to run
a single library member procedure, you had to create a procedure or execute the member procedure within an
anonymous block. Wrapping the member access into an anonymous block is simple when there are no
parameters, but it can get more complex, if there are input and output parameters. You can now directly call
library member procedures without the use of additional SQLScript objects.
Syntax
Code Syntax
Library members can be referenced by library name and library member name. If a library alias is set by a
USING statement, the alias can be used instead of the library name.
If an alias is specified, SQLScript first tries to resolve the unqualified library name as a library alias. If the name
is not found in the list of library aliases, then SQLScript will resolve the name with a default schema. However, if
a schema name is specified, the library is always searched for inside the schema and any existing alias is
ignored.
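The resolution rules above can be sketched as follows (schema and library names are hypothetical):

```sql
-- Assumption: an alias mylib has been declared for myschema1.mylib
-- by a USING statement.
CALL mylib:proc();            -- unqualified: resolved via the alias first
CALL myschema2.mylib:proc();  -- schema-qualified: any alias is ignored
```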
Examples
Sample Code
Example Library
Sample Code
Example 1
In this example, the library name in the CALL statement is not fully qualified and there is an alias with the same
name. In that case, mylib is resolved as library mylib and it refers to myschema1.mylib.
Sample Code
Example 2
In this example, the library name in the CALL statement is not fully qualified and there is no alias with the same
name. In that case, mylib is found only in the default schema and refers to myschema2.mylib.
Sample Code
Example 3
In this example, the library name in the CALL statement is mylib and there is an alias with the same name.
However, the library name is fully qualified with the schema name myschema2 and is resolved as
myschema2.mylib.
Limitations
● The WITH option is not supported for the library member CALL statement, for example, CALL MYLIB:PROC() WITH
HINT (...).
● EXPLAIN PLAN is not supported.
● QUERY EXPORT is not supported.
● Built-in library member procedures with variable arguments are not supported.
Library member functions and variables can be used directly in SQL or expressions in SQLScript.
Syntax
The syntax for library table functions, scalar functions and variables accepts a library member reference.
Code Syntax
Behavior
Sample Code
Previously, such statements failed with the following errors; now they succeed and return, for example, [(314), (628)]:
● ERR-00007: feature not supported: using library member function on the outer boundary of SQLScript: CIRCUMFERENCE: line 1 col 8 (at pos 7)
● ERR-00257: sql syntax error: incorrect syntax near "(": line 1 col 40 (at pos 40)
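A statement of the kind shown in the error messages above might look like this sketch (library, member, and table names are assumptions based on the error text):

```sql
-- A public scalar member function used directly in a query:
SELECT mylib:CIRCUMFERENCE(radius) AS circumference FROM circles;
```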
Limitations
Related Information
When creating a SQLScript procedure or function, you can use the OR REPLACE option to change the defined
procedure or function, if it already exists.
Syntax
Behavior
The behavior of this command depends on the existence of the defined procedure or function. If the procedure
or function already exists, it will be modified according to the new definition. If you do not explicitly specify a
property (for example, read only), this property will be set to the default value. Please refer to the example
below. If the procedure or function does not exist yet, the command works like CREATE PROCEDURE or
CREATE FUNCTION.
Compared to using DROP PROCEDURE followed by CREATE PROCEDURE, CREATE OR REPLACE has the
following benefits:
● DROP and CREATE incur object revalidation twice, while CREATE OR REPLACE incurs it only once
● If a user drops a procedure, its privileges are lost, while CREATE OR REPLACE preserves them.
Example
Sample Code
Sample Code
-- new parameter
CREATE OR REPLACE PROCEDURE test1 (IN i int) as
begin
select :i from dummy;
select * from dummy;
end;
call test1(?);
-- default value
CREATE OR REPLACE PROCEDURE test1 (IN i int default 1) as
begin
select :i from dummy;
end;
call test1();
-- table type
create column table tab1 (a INT);
create column table tab2 (a INT);
CREATE OR REPLACE PROCEDURE test1(out ot1 table(a INT), out ot2 table(a INT))
as begin
insert into tab1 values (1);
select * from tab1;
insert into tab2 values (2);
select * from tab2;
insert into tab1 values (1);
insert into tab2 values (2);
ot1 = select * from tab1;
ot2 = select * from tab2;
end;
call test1(?, ?);
-- security
CREATE OR REPLACE PROCEDURE test1(out o table(a int))
sql security invoker as
begin
o = select 5 as a from dummy;
end;
call test1(?);
-- change security
ALTER PROCEDURE test1(out o table(a int))
sql security definer as
begin
o = select 8 as a from dummy;
end;
call test1(?);
-- result view
ALTER PROCEDURE test1(out o table(a int))
reads sql data with result view rv1 as
begin
o = select 0 as A from dummy;
end;
call test1(?);
-- table function
-- scalar function
CREATE OR REPLACE FUNCTION sfunc_param returns a int as
begin
A = 0;
end;
select sfunc_param() from dummy;
All SQLScript statements supported in procedures are also supported in anonymous blocks.
Note
Statements that require DDL AUTOCOMMIT ON, like imports, cannot be used in anonymous blocks. For
more information, see CREATE PROCEDURE [page 21].
Compared to procedures, anonymous blocks have no corresponding object created in the metadata catalog -
they are cached in the SQL Plan Cache.
An anonymous block is defined and executed in a single step by using the following syntax:
DO [(<parameter_clause>)]
BEGIN [SEQUENTIAL EXECUTION]
<body>
END WITH HINT (...)
<body> ::= !! supports the same feature set as the procedure
For more information on <body>, see <procedure_body> in CREATE in the SAP HANA SQL and System
Views Reference on the SAP Help Portal.
Note
The following example illustrates how to call an anonymous block with a parameter clause:
For output parameters, only ? is a valid value and it cannot be omitted; otherwise, the query parameter cannot be
bound. Any scalar expression can be used for a scalar input parameter.
You can also parameterize the scalar parameters if needed. For the example above, it would look
as follows:
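A parameterized invocation might be sketched as follows (parameter names are illustrative):

```sql
-- Both the scalar input and the tabular output are bound as
-- query parameters; the output must be bound with ?.
DO (IN iv INT => ?, OUT ot TABLE (a INT) => ?)
BEGIN
    ot = SELECT :iv AS a FROM DUMMY;
END;
```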
Contrary to a procedure, an anonymous block has no container-specific properties (for example, language,
security mode, and so on). However, the body of an anonymous block is similar to the procedure body.
Note
It is now possible to use hints for anonymous blocks. However, not all hints that are supported for CALL are
also supported for anonymous blocks (for example, routing hints).
Sample Code
DO BEGIN
DECLARE i INT;
FOR i in 1..5 DO
SELECT * FROM dummy;
END FOR;
END WITH HINT(dev_se_use_llvm)
DO
BEGIN
DECLARE I INTEGER;
CREATE TABLE TAB1 (I INTEGER);
FOR I IN 1..10 DO
INSERT INTO TAB1 VALUES (:I);
END FOR;
END;
This example contains an anonymous block that creates a table and inserts values into that table.
Example 2
DO
BEGIN
T1 = SELECT * FROM TAB;
CALL PROC3(:T1, :T2);
SELECT * FROM :T2;
END
Example 3
Procedure and function definitions may contain delicate or critical information but a user with system
privileges can easily see all definitions from the public system views PROCEDURES, FUNCTIONS or from
traces, even if the procedure or function owner has controlled the authorization rights in order to secure their
objects. If application developers want to protect their intellectual property from any other users, even system
users, they can use SQLScript encryption.
Decryption of an encrypted procedure or function is not supported and cannot be performed even by SAP.
Users who want to use encrypted procedures or functions are responsible for saving the original source
code and providing supportability because there is no way to go back and no supportability tools for that
purpose are available in SAP HANA.
Syntax
Code Syntax
Code Syntax
Code Syntax
Behavior
If a procedure or a function is created by using the WITH ENCRYPTION option, their definition is saved as an
encrypted string that is not human readable. That definition is decrypted only when the procedure or the
function is compiled. The body in the CREATE statement is masked in various traces or monitoring views.
Encrypting a procedure or a function with the ALTER PROCEDURE/FUNCTION statement can be achieved in
the following ways. An ALTER PROCEDURE/FUNCTION statement, accompanying a procedure body, can make
use of the WITH ENCRYPTION option, just like the CREATE PROCEDURE/FUNCTION statement.
If you do not want to repeat the procedure or function body in the ALTER PROCEDURE/FUNCTION statement
and want to encrypt the existing procedure or function, you can use ALTER PROCEDURE/FUNCTION
Note
A new encryption key is generated for each procedure or function and is managed internally.
SQLScript Debugger, PlanViz, traces, monitoring views, and others that can reveal procedure definition are
not available for encrypted procedures or functions.
Additional Considerations
Non-encrypted procedures or functions can be used inside encrypted procedures or functions. However,
encryption of the outer call does not mean that nested calls are also secured. If a nested procedure or
function is not encrypted, its compilation and execution details are available in monitoring views and traces.
Object Dependency
The object dependency of encrypted procedures or functions is not secured. The purpose of encryption is to
secure the logic of procedures or functions and object dependency cannot reveal how a procedure or a
function works.
There is a large amount of information related to a procedure or a function and hiding all information is hard
and makes problem analysis difficult. Therefore, compilation or execution information, which cannot reveal the
logic of a procedure or a function, can be available to users.
Limitation in Optimization
Some optimizations, which need analysis of the procedure or function definition, are turned off for encrypted
procedures and functions.
Calculation Views
An encrypted procedure cannot be used as a basis for a calculation view. It is recommended to use table
user-defined functions instead.
System Views
FUNCTIONS
SCHEMA_NAME FUNCTION_NAME ... IS_ENCRYPTED DEFINITION
For every public interface that shows procedure or function definitions, such as PROCEDURES or FUNCTIONS,
the definition column displays only the signature of the procedure, if it is encrypted.
Sample Code
Sample Code
Result:
PROCEDURE_NAME DEFINITION
Sample Code
Result:
FUNCTION_NAME DEFINITION
For every monitoring view showing internal queries, the internal statements will also be hidden, if its parent is
an encrypted procedure call. Debugging tools or plan analysis tools are also blocked.
● SQLScript Debugger
● EXPLAIN PLAN FOR Call
● PlanViz
● Statement-related views
● Plan Cache-related views
● M_ACTIVE_PROCEDURES
In these monitoring views, the SQL statement string is replaced with the string <statement from
encrypted procedure <proc_schema>.<proc_name> (<sqlscript_context_id>)>.
Default Behavior
Encrypted procedures or functions cannot be exported if the option ENCRYPTED OBJECT HEADER ONLY is
not applied. When the export target is an encrypted object, or when objects referenced by the export
object include an encrypted object, the export fails with the error FEATURE_NOT_SUPPORTED. However,
when exporting a schema and an encrypted procedure or function in the schema does not have any dependent
objects, the procedure or function will be skipped during the export.
To enable export of any other objects based on an encrypted procedure, the option ENCRYPTED OBJECT
HEADER ONLY is introduced for the EXPORT statement. This option does not export encrypted objects in
encrypted state, but exports the encrypted object as a header-only procedure or function. After an encrypted
procedure or a function has been exported with the HEADER ONLY option, objects based on encrypted objects
will be invalid even after a successful import. You should alter the exported header-only procedure or function
to its original body or a dummy body to make dependent objects valid.
Sample Code
Original Procedure
Sample Code
Export Statement
export all as binary into <path> with encrypted object header only;
Sample Code
Exported create.sql
Each table assignment in a procedure or table user-defined function specifies a transformation of some data by
means of classical relational operators such as selection and projection. The result of the statement is then bound
to a variable which either is used as input by a subsequent statement data transformation or is one of the
output variables of the procedure. In order to describe the data flow of a procedure, statements bind new
variables that are referenced elsewhere in the body of the procedure.
This approach leads to data flows which are free of side effects. The declarative nature of defining business logic
might require some deeper thought when specifying an algorithm, but it gives the SAP HANA database
freedom to optimize the data flow which may result in better performance.
The following example shows a simple procedure implemented in SQLScript. To better illustrate the high-level
concept, we have omitted some details.
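The procedure body described below can be sketched as follows; the table, column, and type names are assumptions reconstructed from the surrounding description:

```sql
CREATE PROCEDURE getOutput (IN cnt INTEGER, IN currency VARCHAR(3),
                            OUT output_pubs tt_publishers,
                            OUT output_year tt_years)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  -- Q1: big publishers, based on the number of books published
  big_pub_ids   = SELECT publisher AS pid FROM books
                  GROUP BY publisher HAVING COUNT(isbn) > :cnt;
  -- Q2: details about these publishers and their books
  big_pub_books = SELECT title, name, publisher, year, price
                  FROM :big_pub_ids, publishers, books
                  WHERE pub_id = pid AND pub_id = publisher
                    AND crcy = :currency;
  -- Q3: aggregation per publisher
  output_pubs   = SELECT publisher, name, SUM(price) AS price,
                         COUNT(title) AS cnt
                  FROM :big_pub_books GROUP BY publisher, name;
  -- Q4: aggregation per year
  output_year   = SELECT year, SUM(price) AS price, COUNT(title) AS cnt
                  FROM :big_pub_books GROUP BY year;
END;
```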
This SQLScript example defines a read-only procedure that has 2 scalar input parameters and 2 output
parameters of type table. The first line contains an SQL query Q1, that identifies big publishers based on the
number of books they have published (using the input parameter cnt). Next, detailed information about these
publishers along with their corresponding books is determined in query Q2. Finally, this information is
aggregated in 2 different ways in queries Q3 (aggregated per publisher) and Q4 (aggregated per year)
respectively. The resulting tables constitute the output tables of the function.
A procedure in SQLScript that only uses declarative constructs can be completely translated into an acyclic
dataflow graph where each node represents a data transformation. The example above could be represented
as the dataflow graph shown in the following image. Similar to SQL queries, the graph is analyzed and
optimized before execution. It is also possible to call a procedure from within another procedure. In terms of
the dataflow graph, this type of nested procedure call can be seen as a sub-graph that consumes intermediate
results and returns its output to the subsequent nodes. For optimization, the sub-graph of the called procedure
is merged with the graph of the calling procedure, and the resulting graph is then optimized. The optimization
applies similar rules as an SQL optimizer uses for its logical optimization (for example filter pushdown). Then
the plan is translated into a physical plan which consists of physical database operations (for example hash
joins). The translation into a physical plan involves further optimizations using a cost model as well as
heuristics.
Syntax
Description
Table parameters that are defined in the signature are either input or output parameters. The parameters can
be typed either by using a table type previously defined with the CREATE TYPE command, or by writing it
directly in the signature without any previously defined table type.
Example
The advantage of a previously defined table type is that it can be reused by other procedures and functions. The
disadvantage is that you must take care of its lifecycle.
The advantage of a table variable structure that you directly define in the signature is that you do not need to
take care of its lifecycle. In this case, the disadvantage is that it cannot be reused.
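Both variants can be sketched as follows; the type, procedure, and column names are illustrative:

```sql
-- Variant 1: a previously defined, reusable table type
CREATE TYPE tt_out AS TABLE (id INT, name VARCHAR(20));

CREATE PROCEDURE proc_typed (OUT result tt_out)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  result = SELECT 1 AS id, 'A' AS name FROM DUMMY;
END;

-- Variant 2: the table structure written directly in the signature
CREATE PROCEDURE proc_inline (OUT result TABLE (id INT, name VARCHAR(20)))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  result = SELECT 1 AS id, 'A' AS name FROM DUMMY;
END;
```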
The any table type parameter is a table parameter whose type is defined during DDL time as a wildcard and is
determined later during query compilation.
Syntax
As a result of the new any table type support, the syntax of table parameters has changed as follows:
Code Syntax
Examples
The following examples illustrate some use cases of the any_table_type parameter for DML and SELECT
statements.
Sample Code
The any_table_type parameter can also be used in other scenarios with different statements.
Sample Code
-- unnest statement
create procedure unst_proc1(in itt table(a int), out ott table(...)) as
begin
tmp = SELECT '1','2','3' as A from :itt;
tmp2 = unnest(ARRAY_AGG(:tmp.a));
ott = select * from :tmp2;
end;
call unst_proc1(ctab1,?);
-- ce functions
create procedure ce_proc1 (out outtab table(...)) as
begin
t = ce_column_table(temptable);
outtab = ce_projection(:t, [b]);
end
call ce_proc1(?);
-- apply filters
CREATE PROCEDURE apply_p1(IN inputtab table(...), IN dynamic_filter_1
VARCHAR(5000)) as
begin
outtab = APPLY_FILTER (:inputtab, :dynamic_filter_1);
select * from :outtab;
end;
The any_table_type parameter can be used in procedures and table UDFs in the SQLScript language and
procedures in the AFL language with some limitations:
● the any_table_type parameter cannot be used within anonymous blocks, other languages or outside the
scope of SQLScript
● any_table_type parameters are supported only as input parameter of table UDFs, but not as return
parameters
● scalar UDFs do not support any_table_type parameters.
● If an output any table type parameter cannot be resolved during procedure creation (for example,
out_any_table = select * from in_any_table), the procedure cannot be called inside SQLScript.
The type of a table variable in the body of a procedure or a table function is either derived from the SQL Query,
or declared explicitly. If the table variable has derived its type from the SQL query, the SQLScript compiler
determines its type from the first assignment of the variable, thus providing a lot of flexibility. One
disadvantage of this approach is that it leads to many type conversions in the background because
sometimes the derived table type does not match the typed table parameters in the signature. This can lead to
additional unnecessary conversions. Another disadvantage is the unnecessary internal statement compilation
to derive the types. To avoid this unnecessary effort, you can declare the type of a table variable explicitly. A
declared table variable is always initialized with empty content.
Signature
Local table variables are declared by using the DECLARE keyword. For the referenced type, you can either use a
previously declared table type, or the type definition TABLE (<column_list_definition>). The next
example illustrates both variants:
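A sketch of both variants (type and column names are illustrative):

```sql
CREATE TYPE tt_type AS TABLE (i INT);

DO BEGIN
  DECLARE t1 tt_type;        -- variant 1: previously declared table type
  DECLARE t2 TABLE (i INT);  -- variant 2: inline type definition
  t1 = SELECT 1 AS i FROM DUMMY;
  t2 = SELECT 2 AS i FROM DUMMY;
  SELECT * FROM :t1 UNION ALL SELECT * FROM :t2;
END;
```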
You can also directly assign a default value to a table variable by using the DEFAULT keyword or ‘=’. All
statements that are supported for a typical table variable assignment are also allowed as default values.
The table variable can also be flagged as read-only by using the CONSTANT keyword. The consequence is that
you can no longer override the variable. Note that if the CONSTANT keyword is used, the table variable
must have a default value; it cannot be NULL.
An alternative way to declare a table variable is to use the LIKE keyword. You can specify the variable type by
using the type of a persistent table, a view, or another table variable.
Note
When you declare a table variable using LIKE <table_name>, all the attributes of the columns (like
unique, default value, and so on) in the referenced table are ignored in the declared variable except the not
null attribute.
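A sketch of the LIKE variants, assuming a persistent table mytab exists:

```sql
DO BEGIN
  DECLARE t1 TABLE LIKE mytab;  -- type taken from a persistent table
  DECLARE t2 TABLE LIKE :t1;    -- type taken from another table variable
  t1 = SELECT * FROM mytab;
  t2 = SELECT * FROM :t1;
  SELECT * FROM :t2;
END;
```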
Description
Local table variables are declared by using the DECLARE keyword. A table variable temp can be referenced by
using :temp. For more information, see Referencing Variables [page 96]. The <sql_identifier> must be
unique among all other scalar variables and table variables in the same code block. However, you can use
names that are identical to the name of another variable in a different code block. Additionally, you can
reference those identifiers only in their local scope.
In each block there are table variables declared with identical names. However, since the last assignment to the
output parameter <outTab> can only have the reference of variable <temp> declared in the same block, the
result is the following:
N
----
1
In this code example there is no explicit table variable declaration, which means that the <temp> variable
is visible among all blocks. For this reason, the result is the following:
N
----
2
For every assignment of the explicitly declared table variable, the derived column names and types on the right-
hand side are checked against the explicitly declared type on the left-hand side.
BEGIN
DECLARE a TABLE (i DECIMAL(2,1), j INTEGER);
IF :num = 4
THEN
a = SELECT i, j FROM tab;
END IF;
END;
The example above returns a warning because the table variable <a> is unassigned if <:num> is not 4. This
behavior can be controlled by the configuration parameter UNINITIALIZED_TABLE_VARIABLE_USAGE.
Besides issuing a warning, it also offers the following options:
                          First SQL query assignment          Table variable declaration in a block
Create new variable       First SQL query assignment          Table variable declaration in a block
Variable scope            Global scope, regardless of the     Available in declared block only.
                          block where it was first declared   Variable hiding is applied.
Unassigned variable       No warning during the compilation   Warning during compilation if it is
check                                                         possible to refer to the unassigned
                                                              table variable. The check is performed
                                                              only if a table variable is used.
You can specify the NOT NULL constraint on columns in table types used in SQLScript. Historically, this was
not allowed by the syntax and existing NOT NULL constraints on tables and table types were ignored when
used as types in SQLScript. Now, NOT NULL constraints are taken into consideration, if specified directly in the
column list of table types. NOT NULL constraints in persistent tables and table types are still ignored by default
for backward compatibility but you can make them valid by changing the configuration, as follows:
If both are set, the session variable takes precedence. Setting it to 'ignore_with_warning' has the same
effect as 'ignore', except that you additionally get a warning whenever the constraint is ignored. With
'respect', the NOT NULL constraints (including primary keys) in tables and table types will be taken into
consideration but that could invalidate existing procedures. Consider the following example:
Table variables are bound by using the equality operator. This operator binds the result of a valid SELECT
statement on the right-hand side to an intermediate variable or an output parameter on the left-hand side.
Statements on the right-hand side can refer to input parameters or intermediate result variables bound by
other statements. Cyclic dependencies that result from the intermediate result assignments or from calling
other functions are not allowed, which means that recursion is not possible.
Bound variables are referenced by their name (for example, <var>). In the variable reference the variable name
is prefixed by <:> such as <:var>. The procedure or table function describes a dataflow graph using its
statements and the variables that connect the statements. The order in which statements are written in a body
can be different from the order in which statements are evaluated. In case a table variable is bound multiple
times, the order of these bindings is consistent with the order they appear in the body. Additionally, statements
are only evaluated if the variables that are bound by the statement are consumed by another subsequent
statement. Consequently, statements whose results are not consumed are removed during optimization.
Example:
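The assignment discussed below, reconstructed from the surrounding description (the column names are assumptions):

```sql
lt_expensive_books = SELECT title, price, crcy
                     FROM :it_books
                     WHERE price > :minPrice AND crcy = :currency;
```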
In this assignment, the variable <lt_expensive_books> is bound. The <:it_books> variable in the FROM
clause refers to an IN parameter of a table type. It would also be possible to consume variables of type table in
the FROM clause which were bound by an earlier statement. <:minPrice> and <:currency> refer to IN
parameters of a scalar type.
Syntax
Syntax Elements
The parameter name definition. PLACEHOLDER is used for placeholder parameters and HINT for hint
parameters.
Description
Using column view parameter binding, it is possible to pass parameters from a procedure or scripted calculation
view to a parameterized column view, for example a hierarchy view, a graphical calculation view, or a scripted
calculation view.
Examples:
The following example assumes that you have a hierarchical column view "H_PROC" and you want to use this
view in a procedure. The procedure should return an extended expression that will be passed via a variable.
CALL "EXTEND_EXPRESSION"('',?);
CALL "EXTEND_EXPRESSION"('subtree("B1")',?);
Description
The MAP_MERGE operator is used to apply each row of the input table to the mapper function and unite all
intermediate result tables. The purpose of the operator is to replace sequential FOR-loops and union patterns,
like in the example below, with a parallel operator.
Sample Code
Note
The mapper procedure is a read-only procedure with only one output that is a tabular output.
Syntax
The first input of the MAP_MERGE operator is the mapper table <table_or_table_variable>. The mapper
table is a table or a table variable on which you want to iterate by rows. In the above example it would be table
variable t.
The second input is the mapper function <mapper_identifier> itself. The mapper function is a function you
want to have evaluated on each row of the mapper table <table_or_table_variable>. Currently, the
MAP_MERGE operator supports only table functions as <mapper_identifier>. This means that in the above
example you need to convert the mapper procedure into a table function.
Example
As an example, let us rewrite the above example to leverage the parallel execution of the MAP_MERGE operator.
We need to transform the procedure into a table function, because MAP_MERGE only supports table functions
as <mapper_identifier>.
Sample Code
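A minimal sketch of the rewritten example (function and column names are assumptions):

```sql
-- The mapper as a table function, since MAP_MERGE does not accept procedures
CREATE FUNCTION mapper_func (IN a INT)
RETURNS TABLE (a INT, b INT)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  RETURN SELECT :a AS a, :a * 2 AS b FROM DUMMY;
END;

DO BEGIN
  t = SELECT 1 AS a FROM DUMMY UNION ALL SELECT 2 FROM DUMMY;
  -- one invocation of mapper_func per row of :t, all results united
  result = MAP_MERGE(:t, mapper_func(:t.a));
  SELECT * FROM :result;
END;
```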
MAP_REDUCE is a programming model introduced by Google that allows easy development of scalable parallel
applications for processing big data on large clusters of commodity machines. The MAP_REDUCE operator is a
specialization of the MAP_MERGE operator.
Syntax
Code Syntax
We take as an example a table containing sentences with their IDs. If you want to count the number of
sentences that contain a certain character and the number of occurrences of each character in the table, you
can use the MAP_REDUCE operator in the following way:
Mapper Function
Sample Code
Mapper Function
Reducer Function
Sample Code
Reducer Function
Sample Code
do begin
declare result table(c varchar, stmt_freq int, total_freq int);
result = MAP_REDUCE(tab, mapper(tab.id, tab.sentence) group by c as X,
reducer(X.c, X));
select * from :result order by c;
end;
1. The mapper TUDF processes each row of the input table and returns a table.
5. The reducer TUDF (or procedure) processes each group and returns a table (or multiple tables).
If you use a read-only procedure as a reducer, you can fetch multiple table outputs from a MAP_REDUCE
operator. To bind the output of MAP_REDUCE operators, you can simply apply the table variable as the
parameter of the reducer specification. For example, if you want to change the reducer in the example above to
a read-only procedure, apply the following code.
do begin
declare result table(c varchar, stmt_freq int, total_freq int);
MAP_REDUCE(tab, mapper(tab.id, tab.sentence) group by c as X,
reducer_procedure(X.c, X, result));
select * from :result order by c;
end;
Sample Code
do begin
declare result table(c varchar, stmt_freq int, total_freq int);
declare extra_arg1, extra_arg2 int;
declare extra_arg3, extra_arg4 table(...);
... more extra args ...
result = MAP_REDUCE(tab, mapper(tab.id,
tab.sentence, :extra_arg1, :extra_arg3, ...) group by c as X,
reducer(X.c, X, :extra_arg2, :extra_arg4,
1+1, ...));
select * from :result order by c;
end;
Note
There is no restriction on the order of input table parameters, input column parameters, extra
parameters and so on. It is also possible to use default parameter values in mapper/reducer TUDFs or
procedures.
Restrictions
● Only Mapper and Reducer are supported (no other Hadoop functionalities like group comparator, key
comparator and so on).
● The alias ID in the mapper output and the ID in the Reducer TUDF (or procedure) parameter must be the
same.
● The Mapper must be a TUDF, not a procedure.
● The Reducer procedure should be a read-only procedure and cannot have scalar output parameters.
Related Information
7.8 Hints
The SQLScript compiler combines statements to optimize code. Hints enable you to block or enforce the
inlining of table variables.
Note
Using a HINT needs to be considered carefully. In some cases, using a HINT could end up being more
expensive.
Block Statement-Inlining
The overall optimization guideline in SQLScript states that dependent statements are combined if possible. For
example, you have two table variable assignments as follows:
There can be situations, however, when the combined statements lead to a non-optimal plan and as a result, to
less-than-optimal performance of the executed statement. In these situations it can help to block the
combination of specific statements. Therefore SAP has introduced a HINT called NO_INLINE. By placing that
HINT at the end of select statement, it blocks the combination (or inlining) of that statement into other
statements. An example of using this follows:
By adding WITH HINT (NO_INLINE) to the table variable tab, you can block the combination of that
statement and ensure that the two statements are executed separately.
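A sketch of such a blocked combination, assuming a table t with an integer column i:

```sql
-- Without the hint, both statements would be combined into one query.
tab  = SELECT i FROM t WHERE i > 10 WITH HINT (NO_INLINE);
tab2 = SELECT i FROM :tab WHERE i > 20;
```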
Enforce Statement-Inlining
Using the hint called INLINE helps in situations when you want to combine the statement of a nested
procedure into the outer procedure.
Currently, statements that belong to a nested procedure are not combined into the statements of the calling
procedures. In the following example, you have two procedures defined.
By executing the procedure, ProcCaller, the two table assignments are executed separately. If you want to
have both statements combined, you can do so by using WITH HINT (INLINE) at the statement of the
output table variable. Using this example, it would be written as follows:
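A sketch of the two procedures with the hint applied (assuming a table T with an integer column I):

```sql
CREATE PROCEDURE ProcInner (OUT tab2 TABLE (i INT))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  -- the hint marks this statement for inlining into the caller
  tab2 = SELECT i FROM t WITH HINT (INLINE);
END;

CREATE PROCEDURE ProcCaller (OUT tab TABLE (i INT))
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  CALL ProcInner(tab2);
  tab = SELECT i FROM :tab2 WHERE i > 10;
END;
```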
Now, if the procedure, ProcCaller, is executed, then the statement of table variable tab2 in ProcInner is
combined into the statement of the variable, tab, in the procedure, ProcCaller:
SELECT I FROM (SELECT I FROM T WITH HINT (INLINE)) where I > 10;
The ROUTE_TO hint routes the query to the specified volume ID or service type.
Syntax
Code Syntax
Description
The ROUTE_TO hint can be used with either a volume ID or a service type. If the volume ID is provided, the
statement is routed to the specified volume. If the service type (a string argument that can
have values like "indexserver", "computeserver" and so on) is provided within the hint, the statement can be
routed to all nodes related to this service.
Example
Sample Code
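A sketch of both forms; the table name and volume ID are illustrative:

```sql
-- route the statement to the node hosting volume 2
SELECT * FROM t WITH HINT (ROUTE_TO(2));

-- route the statement to nodes running the given service type
SELECT * FROM t WITH HINT (ROUTE_TO('computeserver'));
```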
This section focuses on imperative language constructs such as loops and conditionals. The use of imperative
logic splits the logic between several data flows.
For more information, see Orchestration Logic [page 12] and Declarative SQLScript Logic [page 88].
Syntax
Syntax Elements
Description
Local variables are declared by using the DECLARE keyword and they can optionally be initialized with their
declaration. By default scalar variables are initialized with NULL. A scalar variable var can be referenced as
described above by using :var.
Tip
If you want to access the value of the variable, use :var in your code. If you want to assign a value to the
variable, use var in your code.
Recommendation
Even though the := operator is still available, SAP recommends that you use only the = operator in defining
scalar variables.
Example
CREATE PROCEDURE proc (OUT z INT) LANGUAGE SQLSCRIPT READS SQL DATA
AS
BEGIN
DECLARE a int;
DECLARE b int = 0;
DECLARE c int DEFAULT 0;
a = 1;
z = :a + :b + :c;
END;
This example shows various ways of making declarations and assignments.
Note
You can assign a scalar UDF to a scalar variable with 1 output or more than 1 output, as depicted in the
following code examples.
The SELECT INTO statement is widely used for assigning a result set to a set of scalar variables. Since the
statement does not accept an empty result set, it is necessary to define exit handlers in case an empty result
set is returned.
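A sketch of such a handler; the table name is an assumption, and SQL error code 1299 ("no data found") is the error raised for an empty result set:

```sql
DO BEGIN
  DECLARE v INT;
  -- the handler catches the "no data found" error raised by SELECT INTO
  DECLARE EXIT HANDLER FOR SQL_ERROR_CODE 1299
    SELECT 'no rows found' FROM DUMMY;
  SELECT i INTO v FROM mytab WHERE i < 0;
  SELECT :v FROM DUMMY;
END;
```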
Syntax
Code Syntax
Description
It is also possible to use a single array element as the result of SELECT INTO and EXEC INTO. The syntax of the
INTO clause was extended as follows:
Sample Code
DO BEGIN
DECLARE A_COPY INT ARRAY;
DECLARE B_COPY VARCHAR(10) ARRAY;
SELECT A, B INTO A_COPY[1], B_COPY[1] DEFAULT -2+1, NULL FROM T1;
SELECT :A_COPY[1], :B_COPY[1] from dummy;
--(A_COPY[1],B_COPY[1]) = (-1,?), use default value
EXEC 'SELECT A FROM T1' INTO A_COPY[1] DEFAULT 2;
SELECT :A_COPY[1], :B_COPY[1] from dummy;
--(A_COPY[1]) = (2), exec into statement with default value
INSERT INTO T1 VALUES (0, 'sample0');
SELECT A, B INTO A_COPY[1], B_COPY[1] DEFAULT 5, NULL FROM T1;
SELECT :A_COPY[1], :B_COPY[1] from dummy;
--(A_COPY[1],B_COPY[1]) = (0,'sample0'), executed as-is
END;
DO BEGIN
DECLARE A_COPY INT;
DECLARE B_COPY VARCHAR(10);
CREATE ROW TABLE T1 (A INT NOT NULL, B VARCHAR(10));
SELECT A, B INTO A_COPY, B_COPY DEFAULT -2+1, NULL FROM T1;
--(A_COPY,B_COPY) = (-1,?), use default value
EXEC 'SELECT A FROM T1' INTO A_COPY DEFAULT 2;
--(A_COPY) = (2), exec into statement with default value
INSERT INTO T1 VALUES (0, 'sample0');
SELECT A, B INTO A_COPY, B_COPY DEFAULT 5, NULL FROM T1;
--(A_COPY,B_COPY) = (0,'sample0'), executed as-is
END;
Related Information
Description
If the SELECT statement returns a 1*1 result set (1 row and 1 column), that result set can be used directly as an
expression.
Examples
Sample Code
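A sketch of a 1x1 result set used as an expression (the table name is an assumption):

```sql
DO BEGIN
  DECLARE n INT;
  -- the scalar subquery returns one row and one column,
  -- so it can be used directly inside an expression
  n = (SELECT COUNT(*) FROM mytab) + 1;
  IF (SELECT COUNT(*) FROM mytab) > 0 THEN
    SELECT :n FROM DUMMY;
  END IF;
END;
```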
If the right-hand side of an assignment contains only a SELECT statement (even with parentheses, for
example: x = (SELECT * FROM tab)), it will always be treated as a table variable assignment. The
workaround is to use SELECT INTO.
do begin
declare n int;
n = (select i from mytab); -- ERR-01310: scalar type is not allowed: N
end;
do begin
declare n int;
select i into n from mytab; -- workaround
end;
Limitations
do begin
declare n auto = (select 10 from dummy) + 1; -- ERR-00007: feature not
supported: subquery in auto type assignment
end;
Table variables are, as the name suggests, variables with a reference to a tabular data structure. The same
applies to tabular parameters, unless specified otherwise.
The index-based cell access allows random access (read and write) to each cell of a table variable.
<table_variable>.<column_name>[<index>]
For example, writing to a certain cell of a table variable is illustrated in the following example. Here we simply
change the value in the second row of column A. Reading from a certain cell of a table variable is done in a
similar way. Note that for the read access, the ‘:’ is needed in front of the table variable.
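Both accesses can be sketched as follows (the table variable and column are illustrative):

```sql
DO BEGIN
  DECLARE val INT;
  tab = SELECT 1 AS a FROM DUMMY UNION ALL SELECT 2 AS a FROM DUMMY;
  tab.a[2] = 5;      -- write: change the value in the second row of column A
  val = :tab.a[2];   -- read: ':' is required in front of the table variable
  SELECT :val FROM DUMMY;
END;
```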
The same rules apply for <index> as for the array index. That means that the <index> can have any value
from 1 to 2^31-1 and that SQL expressions and scalar user-defined functions (scalar UDFs) that return a number
can also be used as an index. Instead of using a constant scalar value, it is also possible to use a scalar
variable of type INTEGER as <index>.
Restrictions:
Apart from the index-based table cell assignment, SQLScript offers additional operations for directly modifying
the content of a table variable, without having to assign the result of a statement to a new table variable. This,
together with not involving the SQL layer, leads to performance improvement. On the other hand, such
operations require data materialization, contrary to the declarative logic.
Note
For all position expressions the valid values are in the interval from 1 to 2^31-1.
You can insert a new data record at a specific position in a table variable with the following syntax:
All existing data records at positions from the given index onwards are moved to the next position. If
the index is greater than the original table size, the records between the inserted record and the original last
record are initialized with NULL values.
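A sketch of the insert operation on a table variable (names are illustrative):

```sql
DO BEGIN
  DECLARE tab TABLE (a INT, b VARCHAR(10));
  :tab.INSERT((1, 'X'));      -- append at the end
  :tab.INSERT((2, 'Y'), 1);   -- insert at position 1; existing rows shift down
  SELECT * FROM :tab;
END;
```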
Sample Code
IF IS_EMPTY(:IT) THEN
RETURN;
END IF;
If you do not specify an index (position), the data record will be appended at the end.
Sample Code
Note
The values for the omitted columns are initialized with NULL values.
You can insert the content of one table variable into another table variable with one single operation without
using SQL.
Code Syntax
:<target_table_var>[.(<column_list>)].INSERT(:<source_table_var>[,
<position>])
If no position is specified, the values will be appended at the end. Position counting starts from 1; NULL and all
values smaller than 1 are invalid. If no column list is specified, all columns of the table are insertion targets.
Sample Code
Usage Example
:tab_a.insert(:tab_b);
:tab_a.(col1, COL2).insert(:tab_b);
:tab_a.INSERT(:tab_b, 5);
:tab_a.("a","b").insert(:tab_b, :index_to_insert);
The mapping of which column of the source table is inserted into which column of the target table is done
according to the column position. The source table must have the same number of columns as the target
table, or as the number of columns in the column list.
If SOURCE_TAB has columns (X, A, B, C) and TARGET_TAB has columns (A, B, C, D),
then :target_tab.insert(:source_tab) will insert X into A, A into B, B into C and C into D.
If another order is desired, the column sequence has to be specified in the column list for the TARGET_TAB. For
example, :TARGET_TAB.(D, A, B, C).insert(:SOURCE_TAB) will insert X into D, A into A, B into B and C
into C.
The types of the columns have to match, otherwise it is not possible to insert data into the column. For
example, a column of type DECIMAL cannot be inserted in an INTEGER column and vice versa.
Sample Code
CALL P(?)
You can modify a data record at a specific position. There are two equivalent syntax options.
Note
Sample Code
Note
You can also set values at a position outside the original table size. Just like with INSERT, the records
between the original last record and the newly inserted records are initialized with NULL values.
:<table_variable>.DELETE([ <index> ])
Sample Code
Sample Code
The provided array expression contains indexes pointing to records which shall be deleted from the table
variable. If the array contains an invalid index (for example, zero), an error occurs.
Sample Code
Note
The UNNEST function combines one or many arrays and/or table variables. The result table includes a row for
each element of the specified array. The result of the UNNEST function needs to be assigned to a table
variable. The syntax is:
For example, the following statements convert the array arr_id of type INTEGER and the array arr_name of
type VARCHAR(10) into a table and assign it to the tabular output parameter rst:
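The conversion described here can be sketched as follows; the array contents are chosen to match the result table shown below:

```sql
DO BEGIN
  DECLARE arr_id INTEGER ARRAY = ARRAY(1, 2);
  DECLARE arr_name VARCHAR(10) ARRAY = ARRAY('name1', 'name2', 'name3');
  -- combine both arrays into a table variable, one row per element
  rst = UNNEST(:arr_id, :arr_name);
  SELECT * FROM :rst;
END;
```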
For multiple arrays, the number of rows will be equal to the largest cardinality among the cardinalities of the
arrays. In the returned table, the cells that are not corresponding to any elements of the arrays are filled with
NULL values. The example above would result in the following tabular output of rst:
:ARR_ID :ARR_NAME
-------------------
1 name1
2 name2
? name3
The returned columns of the table can also be explicitly named by using the AS clause. In the following
example, the column names for :ARR_ID and :ARR_NAME are changed to ID and NAME.
ID NAME
-------------------
1 name1
2 name2
? name3
As an additional option, an ordinal column can be specified by using the WITH ORDINALITY clause.
AMOUNT SEQ
----------------
10 1
20 2
Note
The UNNEST function cannot be referenced directly in a FROM clause of a SELECT statement.
It is also possible to use table variables in the UNNEST function. While for arrays the associated column-
specifier list entry needs to contain a single column name, the associated entry for a table variable must be
either '*' or a projection aliasing list. '*' means that all columns of the input table should be included in the
result. With the projection aliasing list, it is possible to specify a subset of the columns of the input table and to
rename them in order to avoid name conflicts (a result must not contain multiple columns with the same
name).
Sample Code
do begin
t0 = select * from tab0 order by a asc;
t1 = select * from tab0 order by a desc;
lt = unnest(:t0, :t1) as (*, (a as b));
select * from :lt;
end;
do begin
t0 = select * from tab0 order by a asc;
t1 = select * from tab0 order by a desc;
lt = unnest(:t0, :t1) as (*, (a as b, a as c));
select * from :lt;
end;
Note
If there is no column specifier list, the column names for arrays and the ordinality column in the result table
will be generated. A generated name always begins with "COL" and is followed by a number, which refers to
the column index in the result table. For example, if the third column in the result table has a generated
name, it is "COL3". However, if this name is already occupied because the input table variable contains a
column with this name, the index number will be increased to generate an unoccupied column name (if
"COL3" is used, "COL4" is the next candidate). This behavior is similar for the ordinality column. This
column is named "ORDINALITY" (without index), if this name is available and "ORDINALITY" + INDEX
(starting from 1), if "ORDINALITY" is already occupied.
To determine whether a table or table variable is empty, you can use the predicate IS_EMPTY:
You can use IS_EMPTY in conditions like in IF-statements or WHILE-loops. For instance, in the next example
IS_EMPTY is used in an IF-statement:
Note
To get the number of records of a table or a table variable, you can use the operator RECORD_COUNT:
RECORD_COUNT takes <table_name> or <table_variable> as its argument and returns the number of records
as a value of type BIGINT.
You can use RECORD_COUNT in all places where expressions are supported such as IF-statements, loops or
scalar assignments. In the following example it is used in a loop:
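A sketch of RECORD_COUNT used as a loop bound (the table variable and column are illustrative):

```sql
DO BEGIN
  DECLARE i INT;
  DECLARE total INT = 0;
  tab = SELECT 10 AS amount FROM DUMMY UNION ALL SELECT 20 FROM DUMMY;
  -- iterate once per record of the table variable
  FOR i IN 1 .. RECORD_COUNT(:tab) DO
    total = :total + :tab.amount[:i];
  END FOR;
  SELECT :total FROM DUMMY;
END;
```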
Note
This feature offers an efficient way to search by key value pairs in table variables.
Syntax
Description
The size of the column list and the value list must be the same; columns and values are matched by their
position in the list. The <start_position> is optional; the default is 1 (first position), which is equal to
scanning all data.
The search function itself can be used in further expressions, but not directly in SQL statements.
The position of the first matching record is returned (or NULL, if no record matches). This result can be used in
conjunction with other table variable operators (DELETE, UPDATE).
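A sketch of the search function (table variable and column names are illustrative):

```sql
DO BEGIN
  DECLARE pos INT;
  tab = SELECT 'E' AS k, 5 AS n, 'V12' AS v FROM DUMMY;
  -- position of the first record where K = 'E' AND N = 5 (NULL if no match)
  pos = :tab.SEARCH((k, n), ('E', 5));
  -- continue searching from the position after the first match
  pos = :tab.SEARCH((k), ('E'), :pos + 1);
END;
```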
Example
Sample Code
A 1 V11
E 5 V12
B 6 V13
E 7 V14
M 3 V15
A 1 V11
E 5 V12
B 6 V13
E 7 V14
M 3 V15
I 3 X
A 1 V11
E 5 V12
B 6 V13
E 7 V14
I 3 X
You can modify data in SQLScript table variables with SQL DML statements. The following statements are
supported:
● INSERT
● UPDATE
● DELETE
The syntax of the statements is identical to that for manipulating persistent tables. The only difference is that
you need to mark the variables by using a colon.
The DML statements for table variables support the following constraint checks:
● Primary key
● NOT NULL
The constraints can be defined in both the user-defined table type and in the declaration, similarly to the
persistent table definition.
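A minimal sketch of DML statements on a table variable, including constraints in the declaration (names are illustrative):

```sql
DO BEGIN
  DECLARE tab TABLE (i INT PRIMARY KEY, s VARCHAR(10) NOT NULL);
  -- the colon marks the table variable in each DML statement
  INSERT INTO :tab VALUES (1, 'one');
  UPDATE :tab SET s = 'ONE' WHERE i = 1;
  DELETE FROM :tab WHERE i = 1;
  SELECT * FROM :tab;
END;
```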
For implementation reasons, it is not possible to combine DML statements with other table-variable related
statements for the same table variable. If a table variable is manipulated by a DML statement, it can only be
used in SQL statements: that includes queries and sub-calls, if the variable is bound to an input parameter. The
variable cannot be the target of any assign statements and therefore cannot be bound to an output parameter
of a sub-call.
Conversion
If you need to combine DML statements with other types of statements for one data set, you need to use
multiple table variables. It is possible to convert data between a variable used in a DML statement and a
variable not used in a DML statement in both directions.
Note
Both variables are declared in the same way; that is, at declaration time there is no difference between variables used in a DML statement and variables not used in a DML statement. In both directions, the conversion implies a data copy.
Use Cases
You can use DML statements if your scenario relies mainly on SQL statements, especially if you need to use complex SQL logic to manipulate your data, for example:
In other cases, it is recommended to use the SQLScript table variable operators to manipulate table variable data, because they offer better performance, can be combined with other table-variable related statements, and do not impose any restrictions with regard to procedure or function parameter handling.
Note
The primary key check can also be accomplished by using sorted table variables.
Limitations
DML statements on table variables cannot be used in autonomous transactions and parallel execution blocks.
Neither input nor output parameters of procedures or functions can be manipulated with DML statements.
Introduction
Sorted table variables are a special kind of table variable designed to provide efficient access to their data records by means of a defined key. They are suitable for use in imperative algorithms that operate on mass data. The data records of sorted table variables are always sorted by a search key, which is specified in the data type of the variable. When the data is accessed via the SQLScript search operator, an efficient binary search is used where possible.
The search key can be any subset of the table variable columns. The order of the columns in the search key
definition is important: the data records are first sorted by the first search key column, then by the second
search key column and so on.
Note
Position A B C D
1 0 1 10 100
2 2 1 15 200
3 1 2 3 150
4 1 2 5 30
To see how the search key is utilized, check the explanation below about the table variable search operator.
The sorting order is based on the data type of the search key. Because the sorting is relevant only for the SQLScript table variable search operator, it is not guaranteed for all data types that the sorting behaves in exactly the same way as the ORDER BY specification in SQL statements. You also cannot influence the sorting; in particular, you cannot specify an ascending or a descending order.
Primary Key
Sorted table variables also allow primary key specification. The primary key must consist exactly of the search
key columns. The uniqueness of the primary key is checked in every operation on the table variable (table
assignment, insert operator, and so on). If the uniqueness is violated, the corresponding error is thrown.
CREATE TYPE <name> AS TABLE (<column list>) SQLSCRIPT SEARCH KEY(<key list>)
In the second case, the table type must not include any search key definition.
CREATE PROCEDURE <proc> (IN <param> TABLE(<column list>) SEARCH KEY(<key list>))
CREATE PROCEDURE <proc> (IN <param> <table type> SEARCH KEY(<key list>))
In the second case, the table type must not include any search key definition.
The input sorted table variables are re-sorted on call, unless a sorted table variable with a compatible key was
provided (in this case, no re-sorting is necessary).
Input sorted table variables cannot be modified within the procedure or the function.
For outermost calls, the result sets corresponding to output sorted table variables are sorted according to the
search key, using the ORDER BY clause. Thus you can ensure that the output table parameters have a defined
sequence of the data records.
For sub-calls, the sorted outputs can be assigned to any kind of table variable: unsorted, or sorted with another search key (this requires a copy and/or a re-sorting). The usual use case, however, is an assignment to a sorted table variable with the same search key (this requires neither a copy nor a re-sorting).
If you search by an initial part of the key or by the whole key, the binary search can be utilized. If you search by
some additional fields, then first the binary search is applied to narrow down the search interval which is then
scanned sequentially.
:LT.SEARCH((B, A), (1, 2))
  You search by columns B and A. The binary search can be applied and the 2nd data record is found.
:LT.SEARCH((B, C), (1, 15))
  You search by columns B and C. The binary search can be applied only for column B (B = 1), because column A, which would be the next search key column, is not provided. The binary search narrows down the search interval to 1..2, and this interval is scanned sequentially for C = 15, so the 2nd data record is found.
If there is a matching data record, the position of the 1st matching data record is returned. This is the same
behavior as with unsorted table variables.
However, if you search by the complete search key (all search key columns are specified) and there is no matching record, a negative value is returned instead of NULL. The absolute value of the return value indicates the position where a data record with the specified key values would have to be inserted in order to keep the sorting.
:LT.SEARCH(B, 3)
  The full search key was not specified and there is no matching data record. The result is NULL.
:LT.SEARCH((B, A, C), (1, 2, 20))
  The full search key was specified and there is no matching data record. The result is -3, because a data record having B = 1, A = 2, C = 20 would have to be inserted at position 3.
This allows you to insert a missing data record directly at the correct position. Otherwise the insert operator
would have to search for this position once more.
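The pattern can be sketched as follows (the table structure and values are illustrative, not from the original example):

```sql
DO BEGIN
    DECLARE lt TABLE (B INT, A INT, C INT) SEARCH KEY (B, A, C);
    DECLARE pos INT;
    DECLARE ins_pos INT;
    :lt.INSERT((1, 0, 10), 1);
    :lt.INSERT((1, 2, 15), 2);
    pos = :lt.SEARCH((B, A, C), (1, 2, 20));
    IF :pos < 0 THEN
        -- no match: the absolute value of pos is the position
        -- at which the record keeps the sorting intact
        ins_pos = -:pos;
        :lt.INSERT((1, 2, 20), :ins_pos);
    END IF;
END;
```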
Example:
Sample Code
The sorting allows you not only to access a single data record but also to iterate efficiently over data records
with the same key value. Just as with the table variable search operator, you have to use the initial part of the
search key or the whole search key.
Sample Code
A table variable has 3 search key columns and you iterate over data records having a specific key value
combination for the first two search key columns.
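A sketch of such an iteration, assuming an illustrative table type with search key (K1, K2, K3) and illustrative key values:

```sql
DO BEGIN
    DECLARE lt TABLE (K1 INT, K2 INT, K3 INT, VAL NVARCHAR(10))
        SEARCH KEY (K1, K2, K3);
    DECLARE pos INT;
    :lt.INSERT((1, 7, 1, 'a'));
    :lt.INSERT((1, 7, 2, 'b'));
    :lt.INSERT((1, 8, 1, 'c'));
    -- position of the first record with K1 = 1 and K2 = 7
    pos = :lt.SEARCH((K1, K2), (1, 7));
    IF :pos IS NOT NULL THEN
        WHILE :pos <= RECORD_COUNT(:lt)
              AND :lt.K1[:pos] = 1 AND :lt.K2[:pos] = 7 DO
            -- process :lt.VAL[:pos] here
            pos = :pos + 1;
        END WHILE;
    END IF;
END;
```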
For sorted table variables, you can use all available table variable modification operators. However, on every
modification, the system has to ensure that the sorting is not violated. This has the following consequences:
● Insert operator
○ The insert operator without explicit position specification inserts the data record(s) at the correct
positions taking the sorting definition into account.
○ The insert operator with explicit position specification checks if the sorting would be violated. If so, an
error is raised and no data is inserted.
○ When inserting a table variable into a sorted table variable with explicit position specification, the input table variable is not re-sorted; it must comply with the sorting definition.
○ The highest explicitly specified position for insertion is the current table variable size increased by one
(otherwise, empty data records would be created, which may violate the sorting).
● Update operator/Table cell assignment
○ It is not allowed to modify a search key column.
○ It is not allowed to modify non-existing data records (this would lead to the creation of new data records and possibly a sorting violation).
As mentioned above, if a primary key is defined, then its uniqueness is checked as well.
You can use sorted table variables as an assignment target just like unsorted table variables. The data records are always re-sorted according to the search key. If a primary key is defined, the system checks that it is unique. An ORDER BY clause in a query whose result is assigned to a sorted table variable has no effect.
Limitations
● The following data types are not supported for the search key:
○ Spatial data types
○ LOB types
● Output of table functions cannot be defined as sorted table type.
Description
It is possible to declare a variable without specifying its type explicitly and let SQLScript determine the type
automatically. This auto type derivation can be used for scalar variables, tables and arrays.
Syntax
Code Syntax
Note
The existing syntax for definition of scalar and table variables is expanded as follows:
Code Syntax
Code Syntax
Caution
Potential incompatibility
The new feature may introduce some problems with existing procedures or functions, since AUTO is now
interpreted as a keyword with higher precedence than a table or a table type named AUTO. The workaround
for this incompatibility is to use SCHEMA.AUTO or quoted "AUTO" to interpret it as table type.
Sample Code
Example of incompatibility
Sample Code
Workaround
Examples
Sample Code
The derived type is determined by the type of the default value, but it is not always exactly the same as the evaluated type of the default value in the assignment. If the type has a length, the maximum length is used to improve flexibility:
VARCHAR(n)      VARCHAR(MAX_LENGTH)
NVARCHAR(n)     NVARCHAR(MAX_LENGTH)
ALPHANUM(n)     ALPHANUM(MAX_LENGTH)
VARBINARY(n)    VARBINARY(MAX_LENGTH)
DECIMAL(p, s)   DECIMAL
SMALLDECIMAL    DECIMAL
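For illustration, a sketch of auto-derived declarations (the values are arbitrary):

```sql
DO BEGIN
    DECLARE i AUTO = 10;                     -- derived as INT
    DECLARE s AUTO = 'text';                 -- string type with maximum length
    DECLARE t AUTO = SELECT * FROM DUMMY;    -- table variable with DUMMY's row type
    SELECT :i AS i, :s AS s FROM DUMMY;
END;
```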
Auto type can be used for SQLScript scalar and table variables with the following limitations:
Global session variables can be used in SQLScript to share a scalar value between procedures and functions
that are running in the same session. The value of a global session variable is not visible from another session.
To set the value of a global session variable you use the following syntax:
While <key> can only be a constant string or a scalar variable, <value> can be any expression, scalar variable, or function which returns a value that is convertible to a string. Both have a maximum length of 5000.
The next examples illustrate how you can set the value of a session variable in a procedure:
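A minimal sketch (the procedure name, variable name 'MY_VAR', and parameter type are illustrative):

```sql
CREATE PROCEDURE set_my_var (IN new_value NVARCHAR(50))
LANGUAGE SQLSCRIPT AS
BEGIN
    -- sets the global session variable 'MY_VAR' for the current session
    SET 'MY_VAR' = :new_value;
END;
```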
To retrieve the session variable, the function SESSION_CONTEXT (<key>) can be used.
For more information on SESSION_CONTEXT, see SESSION_CONTEXT in the SAP HANA SQL and System
Views Reference on the SAP Help Portal.
For example, the following function retrieves the value of the session variable 'MY_VAR':
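A possible sketch of such a function (the function name is illustrative):

```sql
CREATE FUNCTION get_my_var RETURNS result NVARCHAR(5000)
LANGUAGE SQLSCRIPT AS
BEGIN
    -- reads the session variable; returns NULL if it was never set
    result = SESSION_CONTEXT('MY_VAR');
END;
```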
Note
SET <key> = <value> cannot be used in functions and procedures flagged as READ ONLY (scalar and table functions are implicitly READ ONLY).
Note
The maximum number of session variables can be configured with the configuration parameter
max_session_variables under the section session (min=1, max=5000). The default is 1024.
Note
Session variables are null by default and can be reset to null using UNSET <key>. For more information on
UNSET, see UNSET in the SAP HANA SQL and System Views Reference.
SQLScript supports local variable declaration in a nested block. Local variables are only visible in the scope of the block in which they are defined. It is also possible to define local variables inside LOOP / WHILE / FOR / IF-ELSE control structures.
call nested_block(?)
--> OUT:[2]
From this result you can see that the innermost nested block value of 3 has not been passed to the val variable. Now let's redefine the procedure without the innermost DECLARE statement:
Now when you call this modified procedure the result is:
call nested_block(?)
--> OUT:[3]
From this result you can see that the innermost nested block has used the variable declared in the second level
nested block.
Conditionals
CREATE PROCEDURE nested_block_if(IN inval INT, OUT val INT) LANGUAGE SQLSCRIPT
READS SQL DATA AS
BEGIN
DECLARE a INT = 1;
DECLARE v INT = 0;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
val = :a;
END;
v = 1 /(1-:inval);
IF :a = 1 THEN
DECLARE a INT = 2;
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
While Loop
For Loop
Loop
Note
The example below uses tables and values created in the For Loop example above.
8.6.1 Conditionals
Syntax
IF <bool_expr1>
THEN
<then_stmts1>
[{ELSEIF <bool_expr2>
THEN
<then_stmts2>}...]
[ELSE
<else_stmts3>]
END IF
Syntax Elements
Note
Specifies the comparison value. This can be based on either scalar literals or scalar variables.
Description
The IF statement consists of a Boolean expression <bool_expr1>. If this expression evaluates to true, the
statements <then_stmts1> in the mandatory THEN block are executed. The IF statement ends with END IF.
The remaining parts are optional.
If the Boolean expression <bool_expr1> does not evaluate to true, the ELSE-branch is evaluated. The
statements <else_stmts3> are executed without further checks. No ELSE-branches or ELSEIF-branches are
allowed after an else branch.
Alternatively, when ELSEIF is used instead of ELSE a further Boolean expression <bool_expr2> is evaluated.
If it evaluates to true, the statements <then_stmts2> are executed. In this manner an arbitrary number of
ELSEIF clauses can be added.
This statement can be used to simulate the switch-case statement known from many programming languages.
The predicate x [NOT] BETWEEN lower AND upper can also be used within the expression <bool_expr1>. It
works just like [ NOT ] ( x >= lower AND x <= upper). For more information, see Example 4.
Examples
Example 1
You use the IF statement to implement the functionality of the UPSERT statement in SAP HANA database.
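One possible sketch of the idea (the table and column names are illustrative, not the original example):

```sql
CREATE PROCEDURE upsert_mytab (IN in_key INT, IN in_val NVARCHAR(30))
LANGUAGE SQLSCRIPT AS
BEGIN
    DECLARE found INT;
    SELECT COUNT(*) INTO found FROM mytab WHERE key_col = :in_key;
    IF :found = 0 THEN
        INSERT INTO mytab VALUES (:in_key, :in_val);   -- key not present yet
    ELSE
        UPDATE mytab SET val_col = :in_val WHERE key_col = :in_key;
    END IF;
END;
```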
Example 3
It is also possible to use a scalar UDF in the condition, as shown in the following example.
Example 4
Use of the BETWEEN operator
Related Information
Syntax
WHILE <condition> DO
<proc_stmts>
END WHILE
Syntax Elements
Description
The WHILE loop executes the statements <proc_stmts> in the body of the loop as long as the Boolean
expression at the beginning <condition> of the loop evaluates to true.
The predicate x [NOT] BETWEEN lower AND upper can also be used within the expression of the
<condition>. It works just like [ NOT ] ( x >= lower AND x <= upper). For more information, see
Example 3.
Example 1
You use WHILE to increment the :v_index1 and :v_index2 variables using nested loops.
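A sketch of such nested loops (the loop bounds are illustrative):

```sql
DO BEGIN
    DECLARE v_index1 INT = 0;
    DECLARE v_index2 INT;
    WHILE :v_index1 < 3 DO                 -- outer loop
        v_index2 = 0;
        WHILE :v_index2 < 2 DO             -- inner loop
            v_index2 = :v_index2 + 1;
        END WHILE;
        v_index1 = :v_index1 + 1;
    END WHILE;
END;
```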
Example 2
You can also use scalar UDF for the while condition as follows.
Example 3
Caution
Syntax:
Syntax elements:
REVERSE
Description:
The FOR loop iterates over a range of numeric values and binds the current value to the variable <loop-var> in ascending order. The iteration starts with the value of <start_value> and is incremented by one until <loop-var> is greater than <end_value>.
If <start_value> is larger than <end_value>, the <proc_stmts> in the loop will not be evaluated.
Example 1
You use nested FOR loops to call a procedure that traces the current values of the loop variables appending
them to a table.
Example 2
You can also use scalar UDF in the FOR loop, as shown in the following example.
Syntax:
BREAK
CONTINUE
BREAK
CONTINUE
Specifies that a loop should stop processing the current iteration and immediately start processing the next one.
Description:
Example:
You defined the following loop sequence. If the loop value :x is less than 3, the iteration is skipped. If :x is 5, the loop terminates.
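A sketch matching the described behavior (the loop range is illustrative):

```sql
DO BEGIN
    FOR x IN 0..10 DO
        IF :x < 3 THEN
            CONTINUE;        -- skip iterations with x < 3
        END IF;
        IF :x = 5 THEN
            BREAK;           -- leave the loop when x reaches 5
        END IF;
        -- statements here run for x = 3 and x = 4 only
    END FOR;
END;
```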
Related Information
8.6.5.1 IN Operator
Description
SQLScript supports the use of IN clauses as conditions in IF or WHILE statements. Just like in standard SQL,
the condition can take one of the following forms:
● a list of expressions on the left-hand side and a list of lists of expressions on the right-hand side
● a list of expressions on the left-hand side and a subquery on the right-hand side
In both cases, the number and types of entries in each list of the respective row of the result set on the right-hand side must match the number and types of entries on the left-hand side.
Examples
Sample Code
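A minimal sketch of an IN condition with expression lists (the values are arbitrary):

```sql
DO BEGIN
    DECLARE a INT = 1;
    DECLARE b INT = 2;
    -- expression list on the left, list of expression lists on the right
    IF (:a, :b) IN ((1, 2), (3, 4)) THEN
        SELECT 'match' AS result FROM DUMMY;
    END IF;
END;
```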
Limitations
Floating-point numbers, variables, and expressions can be used but due to the implementation of these data
types, the results of the calculations may be inaccurate. For more information, see the chapter Numeric Data
Types in the SAP HANA SQL and System Views Reference.
SQLScript supports the use of EXISTS clauses as conditions in IF and WHILE statements. Just like in standard
SQL, it evaluates to true if the sub-query returns a non-empty result set, and to false in any other case.
--
--
WHILE :i < 100 AND EXISTS (SELECT * FROM mytab WHERE a = :i) DO
i = :i + 1;
...
END WHILE
--
WHILE NOT EXISTS (SELECT * FROM mytab WHERE a > sfunc(:z).r2) DO
...
END WHILE
The predicate x [NOT] BETWEEN lower AND upper can be used within the expression of the <condition>
of a WHILE loop. It works just like [ NOT ] ( x >= lower AND x <= upper).
Sample Code
Related Information
8.7 Cursors
Cursors are used to fetch single rows from the result set returned by a query. When a cursor is declared, it is
bound to the query. It is possible to parameterize the cursor query.
Syntax:
Syntax elements:
Description:
Cursors can be defined either after the signature of the procedure and before the procedure’s body or at the
beginning of a block with the DECLARE token. The cursor is defined with a name, optionally a list of parameters,
and an SQL SELECT statement. The cursor provides the functionality to iterate through a query result row-by-
row. Updating cursors is not supported.
Note
Avoid using cursors when it is possible to express the same logic with SQL, as cursors cannot be optimized the same way SQL statements can.
Example:
You create a cursor c_cursor1 to iterate over results from a SELECT on the books table. The cursor passes
one parameter v_isbn to the SELECT statement.
Sample Code
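A possible sketch of such a declaration (the procedure name and the books table with its columns are illustrative assumptions):

```sql
CREATE PROCEDURE count_books (IN v_isbn VARCHAR(20), OUT v_count INT)
LANGUAGE SQLSCRIPT READS SQL DATA AS
    -- the cursor is declared before the body and is bound to the query;
    -- the parameter v_isbn parameterizes the SELECT statement
    CURSOR c_cursor1 (v_isbn VARCHAR(20)) FOR
        SELECT isbn, title, price FROM books WHERE isbn = :v_isbn;
BEGIN
    v_count = 0;
    FOR cur_row AS c_cursor1(:v_isbn) DO
        v_count = :v_count + 1;   -- cur_row.title etc. are accessible here
    END FOR;
END;
```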
Syntax:
OPEN <cursor_name>[(<argument_list>)]
Syntax elements:
Specifies one or more arguments to be passed to the select statement of the cursor.
Description:
Evaluates the query bound to a cursor and opens the cursor, so that the result can be retrieved. If the cursor
definition contains parameters, the actual values for each of these parameters should be provided when the
cursor is opened.
This statement prepares the cursor, so that the results for the rows of a query can be fetched.
Example:
You open the cursor c_cursor1 and pass a string '978-3-86894-012-1' as a parameter.
OPEN c_cursor1('978-3-86894-012-1');
Syntax:
CLOSE <cursor_name>
Syntax elements:
Closes a previously opened cursor and releases all associated state and resources. It is important to close all
cursors that were previously opened.
Example:
CLOSE c_cursor1;
Syntax:
Syntax elements:
Specifies the name of the cursor where the result will be obtained.
Specifies the variables where the row result from the cursor will be stored.
Description:
Fetches a single row in the result set of a query and moves the cursor to the next row. It is assumed that the
cursor was declared and opened before. You can use the cursor attributes to check if the cursor points to a
valid row.
Example:
You fetch a row from the cursor c_cursor1 and store the results in the variables shown.
Related Information
A cursor provides a number of methods to examine its current state. For a cursor bound to variable
c_cursor1, the attributes summarized in the table below are available.
Cursor Attributes
Attribute Description
c_cursor1::ROWCOUNT
  Returns the number of rows that the cursor has fetched so far. This value is available after the first FETCH operation; before the first fetch operation, the number is 0.
Example:
The example below shows a complete procedure using the attributes of the cursor c_cursor1 to check if
fetching a set of results is possible.
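A sketch of the open/fetch/close sequence with an attribute check (the books table and procedure name are illustrative assumptions):

```sql
CREATE PROCEDURE first_title (OUT v_title NVARCHAR(100))
LANGUAGE SQLSCRIPT READS SQL DATA AS
    CURSOR c_cursor1 FOR SELECT title FROM books;
BEGIN
    OPEN c_cursor1;
    FETCH c_cursor1 INTO v_title;
    IF c_cursor1::NOTFOUND THEN      -- no row could be fetched
        v_title = 'not found';
    END IF;
    CLOSE c_cursor1;
END;
```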
Related Information
Syntax:
Syntax elements:
Specifies one or more arguments to be passed to the select statement of the cursor.
To access the row result attributes in the body of the loop, you use the displayed syntax.
Description:
Opens a previously declared cursor and iterates over each row in the result set of the query bound to the
cursor. The statements in the body of the procedure are executed for each row in the result set. After the last
row from the cursor has been processed, the loop is exited and the cursor is closed.
Tip
As this loop method takes care of opening and closing cursors, resource leaks can be avoided.
Consequently, this loop is preferred to opening and closing a cursor explicitly and using other loop-variants.
Within the loop body, the attributes of the row that the cursor currently iterates over can be accessed like an
attribute of the cursor. Assuming that <row_var> is a_row and the iterated data contains a column test, then
the value of this column can be accessed using a_row.test.
Example:
The example below demonstrates how to use a FOR-loop to loop over the results from c_cursor1.
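A sketch of the FOR-cursor loop (the books table with a price column is an illustrative assumption):

```sql
CREATE PROCEDURE sum_prices (OUT v_total DECIMAL(10,2))
LANGUAGE SQLSCRIPT READS SQL DATA AS
    CURSOR c_cursor1 FOR SELECT title, price FROM books;
BEGIN
    v_total = 0;
    -- the FOR loop opens the cursor, iterates over all rows, and closes it
    FOR a_row AS c_cursor1 DO
        v_total = :v_total + :a_row.price;
    END FOR;
END;
```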
Related Information
Syntax
Description
When you iterate over each row of a result set, you can use the updatable cursor to change the record to which the cursor is currently pointing. The updatable cursor is a standard SQL feature (ISO/IEC 9075-2:2011).
For more information, see sections 14.8 & 14.13 in the SQL standard documentation (ISO/IEC 9075-2:2011).
Restrictions
● The cursor has to be declared with a SELECT statement having the FOR UPDATE clause in order to prevent
concurrent WRITE on tables (without FOR UPDATE, the cursor is not updatable)
● The updatable cursor may be used only for UPDATE and DELETE operations.
● Using an updatable cursor in a single query instead of SQLScript is prohibited.
Note
Updating the same row multiple times is possible, if several cursors selecting the same table are declared
within a single transaction.
Examples
Sample Code
DO BEGIN
DECLARE CURSOR cur FOR SELECT * FROM employees FOR UPDATE;
FOR r AS cur DO
IF r.employee_id < 10000 THEN
UPDATE employees SET employee_id = employee_id + 10000
WHERE CURRENT OF cur;
ELSE
DELETE FROM employees WHERE CURRENT OF cur;
END IF;
END FOR;
END;
The following example updates or deletes rows in multiple tables (currently, only COLUMN tables are supported) by means of an updatable cursor.
Note
In this case, you have to specify the columns of the tables to be locked by using the FOR UPDATE OF clause within the SELECT statement of the cursor. Keep in mind that DML execution by means of an updatable cursor is allowed only once per row.
Sample Code
DO BEGIN
DECLARE CURSOR cur FOR SELECT employees.employee_name,
departments.department_name
FROM employees, departments WHERE employees.department_id =
departments.department_id
FOR UPDATE OF employees.employee_id, departments.department_id;
FOR r AS cur DO
IF r.department_name = 'Development' THEN
UPDATE employees SET employee_id = employee_id + 10000,
department_id = department_id + 100
WHERE CURRENT OF cur;
UPDATE departments SET department_id = department_id + 100
WHERE CURRENT OF cur;
ELSEIF r.department_name = 'HR' THEN
DELETE FROM employees WHERE CURRENT OF cur;
DELETE FROM departments WHERE CURRENT OF cur;
END IF;
END FOR;
END;
Syntax
Description
You can now control cursor holdability for specific objects directly within SQLScript, instead of using a system configuration as was necessary before.
Expression Description
DECLARE CURSOR cursor_name WITH HOLD FOR …
  Declares a cursor with holdability for both commit and rollback.
DECLARE CURSOR cursor_name WITHOUT HOLD FOR …
  Declares a cursor without holdability for both commit and rollback.
DECLARE CURSOR cursor_name FOR …
  Declares a cursor with holdability for commit and without holdability for rollback.
Controlling the cursor holdability via the cursor declaration takes higher priority than the system configuration.
If a cursor is holdable for commit and not holdable for rollback, it is holdable for rollback after a commit.
A non-holdable cursor is invalidated by transactional operations (commit or rollback), but not closed. Fetch operations on an invalidated cursor return a null value rather than throwing an exception; using an updatable cursor on it, however, does throw an exception.
Example
Sample Code
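A minimal sketch of a holdable cursor surviving a commit (the query is illustrative):

```sql
DO BEGIN
    DECLARE v_a INT;
    DECLARE CURSOR cur WITH HOLD FOR SELECT 1 AS a FROM DUMMY;
    OPEN cur;
    COMMIT;                 -- cur remains usable across the commit
    FETCH cur INTO v_a;
    CLOSE cur;
END;
```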
Restrictions
It is currently not possible to use an updatable cursor while the cursor is holdable on rollback, since DML
operations using an updatable cursor after rollback may cause unexpected results.
Syntax:
Description:
The autonomous transaction is independent from the main procedure. Changes made and committed by an
autonomous transaction can be stored in persistency regardless of commit/rollback of the main procedure
transaction. The end of the autonomous transaction block has an implicit commit.
The examples show how commit and rollback work inside the autonomous transaction block. The first updates (1) are committed, whereas the updates made in step (2) are completely rolled back. The last updates (3) are committed by the implicit commit at the end of the autonomous block.
CREATE PROCEDURE PROC1( IN p INT , OUT outtab TABLE (A INT)) LANGUAGE SQLSCRIPT
AS
BEGIN
DECLARE errCode INT;
DECLARE errMsg VARCHAR(5000);
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN AUTONOMOUS TRANSACTION
errCode= ::SQL_ERROR_CODE;
errMsg= ::SQL_ERROR_MESSAGE ;
INSERT INTO ERR_TABLE (PARAMETER,SQL_ERROR_CODE, SQL_ERROR_MESSAGE)
VALUES ( :p, :errCode, :errMsg);
END;
outtab = SELECT 1/:p as A FROM DUMMY; -- DIVIDE BY ZERO Error if p=0
END
In the example above, an autonomous transaction is used to keep the error code in the ERR_TABLE stored in
persistency.
If the exception handler block were not an autonomous transaction, then every insert would be rolled back
because they were all made in the main transaction. In this case the result of the ERR_TABLE is as shown in the
following example.
P |SQL_ERROR_CODE| SQL_ERROR_MESSAGE
--------------------------------------------
0 | 304 | division by zero undefined: at function /()
The LOG_TABLE table contains 'MESSAGE', even though the inner autonomous transaction rolled back.
Note
You have to be cautious if you access a table both before and inside an autonomous transaction started in a
nested procedure (e.g. TRUNCATE, update the same row), because this can lead to a deadlock situation.
One solution to avoid this is to commit the changes before entering the autonomous transaction in the
nested procedure.
The COMMIT command commits the current transaction; all changes made before the COMMIT command are written to persistence.
The ROLLBACK command rolls back the current transaction and undoes all changes since the last COMMIT.
Example 1:
In this example, the B_TAB table has one row before the PROC1 procedure is executed:

V ID
0 1

After the procedure has been executed, the B_TAB table contains the following row:

V ID
3 1
This means only the first update in the procedure affected the B_TAB table. The second update does not affect
the B_TAB table because it was rolled back.
The following graphic provides more detail about the transactional behavior. With the first COMMIT command,
transaction tx1 is committed and the update on the B_TAB table is written to persistence. As a result of the
COMMIT, a new transaction starts, tx2.
By triggering ROLLBACK, all changes done in transaction tx2 are reverted. In Example 1, the second update is
reverted. Additionally after the rollback is performed, a new transaction starts, tx3.
The transaction boundary is not tied to the procedure block. This means that if a nested procedure contains a
COMMIT/ROLLBACK, then all statements of the top-level procedure are affected.
Example 2:
In Example 2, the PROC1 procedure calls the PROC2 procedure. The COMMIT in PROC2 commits all changes made in the tx1 transaction (see the following graphic). This includes the first update statement in the PROC1 procedure.
Therefore, the ROLLBACK command in PROC1 only affects the previous update statement; all other updates were committed with the tx1 transaction.
Note
● If you used DSQL in the past to execute these commands (for example, EXEC 'COMMIT', EXEC 'ROLLBACK'), SAP recommends that you replace all occurrences with the native commands COMMIT/ROLLBACK because they are more secure.
● The COMMIT/ROLLBACK commands are not supported in Scalar UDF or in Table UDF.
8.9.2 SAVEPOINT
SQLScript now supports transactional savepoints that allow the rollback of a transaction to a defined point.
This includes:
Limitation
Dynamic SQL allows you to construct an SQL statement during the execution time of a procedure. While
dynamic SQL allows you to use variables where they may not be supported in SQLScript and provides more
flexibility when creating SQL statements, it does have some disadvantages at run time:
Note
You should avoid dynamic SQL wherever possible as it may have a negative impact on security or
performance.
8.10.1 EXEC
Syntax:
Description:
EXEC executes the SQL statement <sql-statement> passed in a string argument. EXEC does not return a result set if <sql-statement> is a SELECT statement; you have to use EXECUTE IMMEDIATE for that purpose.
INTO <var_name_list>
<var_name_list> ::= <var_name>[{, <var_name>}...]
<var_name> ::= <identifier> | <identifier> '[' <index> ']'
Sample Code
END;
The EXEC INTO statement does not accept empty result sets, so you need to define exit handlers in case of an
empty result set or use DEFAULT values. The following example illustrates how to use default values with an
EXEC statement:
Sample Code
DO BEGIN
DECLARE A_COPY INT;
DECLARE B_COPY VARCHAR(10);
CREATE ROW TABLE T1 (A INT NOT NULL, B VARCHAR(10));
SELECT A, B INTO A_COPY, B_COPY DEFAULT -2+1, NULL FROM T1;
--(A_COPY,B_COPY) = (-1,?), use default value
EXEC 'SELECT A FROM T1' INTO A_COPY DEFAULT 2;
--(A_COPY) = (2), exec into statement with default value
INSERT INTO T1 VALUES (0, 'sample0');
SELECT A, B INTO A_COPY, B_COPY DEFAULT 5, NULL FROM T1;
--(A_COPY,B_COPY) = (0,'sample0'), executed as-is
END;
It is also possible to use a single array element as the result of EXEC INTO. The following example illustrates the
case.
Sample Code
DO BEGIN
DECLARE A_COPY INT ARRAY;
DECLARE B_COPY VARCHAR(10) ARRAY;
SELECT A, B INTO A_COPY[1], B_COPY[1] DEFAULT -2+1, NULL FROM T1;
SELECT :A_COPY[1], :B_COPY[1] from dummy;
--(A_COPY[1],B_COPY[1]) = (-1,?), use default value
EXEC 'SELECT A FROM T1' INTO A_COPY[1] DEFAULT 2;
SELECT :A_COPY[1], :B_COPY[1] from dummy;
--(A_COPY[1]) = (2), exec into statement with default value
INSERT INTO T1 VALUES (0, 'sample0');
SELECT A, B INTO A_COPY[1], B_COPY[1] DEFAULT 5, NULL FROM T1;
SELECT :A_COPY[1], :B_COPY[1] from dummy;
--(A_COPY[1],B_COPY[1]) = (0,'sample0'), executed as-is
USING <expression_list>
<expression_list>::= <expression> [{ , <expression>} …]
<expression> can be either a simple expression, such as a character, a date, a number, or a scalar variable.
Sample Code
END;
When the suffix READS SQL DATA is attached, the statement is considered read-only. Since it is not possible to check at compile time whether the statement that is about to be executed is read-only, the operation returns a run-time error if the executed statement is not read-only. The read-only declaration has the following
advantages:
● DSQL can be used in a read-only context, for example read-only procedures and table user-defined
functions
● read-only DSQL can be parallelized with other read-only operations thus improving the overall execution
time.
To avoid the repetition of the suffix READS SQL DATA, every DSQL statement inside a read-only procedure or function is automatically considered read-only, regardless of the suffix. However, it is still possible to add the suffix.
Syntax:
Description:
EXECUTE IMMEDIATE executes the SQL statement passed in a string argument. The results of queries executed with EXECUTE IMMEDIATE are appended to the procedure's result iterator.
When the suffix READS SQL DATA is attached, the statement is considered read-only. Since it is not possible to check at compile time whether the statement that is about to be executed is read-only, the operation returns a run-time error if the executed statement is not read-only. The read-only declaration has the following
advantages:
● DSQL can be used in a read-only context, for example read-only procedures and table user-defined
functions
● read-only DSQL can be parallelized with other read-only operations thus improving the overall execution
time.
To avoid the repetition of the suffix READS SQL DATA for every DSQL statement in a read-only procedure or a
function, the DSQL will automatically be considered read-only, regardless of the suffix. However, it is still
possible to add the suffix.
Example:
You use dynamic SQL to delete the contents of the table tab, insert a value and, finally, to retrieve all results in
the table.
Related Information
This feature introduces additional support for parameterized dynamic SQL. It is possible to use scalar variables as well as table variables in USING and INTO clauses, and in CALL-statement parameters with USING and INTO clauses. You can use the INTO and USING clauses to pass in or out scalar or tabular values. With the INTO clause, the result set is not appended to the procedure result iterator.
Description
EXEC executes the SQL statement <sql_statement> passed as a string argument. EXEC does not return a
result set if <sql_statement> is a SELECT statement. You have to use EXECUTE IMMEDIATE for that
purpose.
If the query returns result sets or output parameters, you can assign the values to scalar or table variables with
the INTO clause.
When the SQL statement is a SELECT statement and there are table variables listed in the INTO clause, the
result sets are assigned to the table variables sequentially. If scalar variables are listed in the INTO clause for a
SELECT statement, it works like <select_into_stmt> and assigns the value of each column of the first row
to a scalar variable when a single row is returned from a single result set. When the SQL statement is a CALL
statement, output parameters represented as ':<var_name>' in the SQL statement are assigned to the
variables in the INTO clause that have the same names.
Examples
Sample Code
INTO Example 1
Sample Code
INTO Example 2
Sample Code
INTO Example 3
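As the example bodies are not reproduced here, the following hedged sketch (table mytab and its column col1 are assumptions) illustrates both variants of the INTO clause:

```sql
DO BEGIN
    DECLARE a INT;
    DECLARE t TABLE (col1 INT);
    -- single-row result set assigned to a scalar variable
    EXEC 'SELECT 123 FROM DUMMY' INTO a;
    -- result set assigned to a table variable
    EXEC 'SELECT col1 FROM mytab' INTO t;
    SELECT :a FROM DUMMY;
    SELECT * FROM :t;
END;
```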
Note
You can also bind scalar or table values with the USING clause.
When <sql-statement> uses ':<var_name>' as a parameter, only variable references are allowed in the
USING clause, and variables with the same name are bound to the parameter ':<var_name>'. However, when
<sql-statement> uses '?' as a parameter (unnamed parameter binding), any expression is allowed in the
USING clause and values are mapped to the parameters sequentially. Unnamed parameter binding is supported
only when there are only input parameters.
Sample Code
USING Example 1
DO BEGIN
DECLARE tv TABLE (col1 INT) = SELECT * FROM mytab;
DECLARE a INT = 123;
DECLARE tv2 TABLE (col1 INT);
EXEC 'select col1 + :a as col1 from :tv' INTO tv2 USING :a, :tv;
SELECT * FROM :tv2;
END;
Sample Code
USING Example 2
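The body of this example is not reproduced here; a sketch of unnamed parameter binding with '?' (table mytab is an assumption) could look as follows. Because the parameters are unnamed, arbitrary expressions are allowed in the USING clause:

```sql
DO BEGIN
    DECLARE a INT = 123;
    -- '?' parameters are bound sequentially; expressions such as :a + 1 are allowed
    EXEC 'INSERT INTO mytab VALUES (?, ?)' USING :a, :a + 1;
END;
```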
USING Example 3
DO BEGIN
DECLARE tv TABLE (col1 INT) = SELECT * FROM mytab;
DECLARE a INT = 123;
EXEC 'call myproc(:a, :tv)' USING :a, :tv;
END;
Limitations
The parameter '?' and the variable reference ':<var_name>' cannot be used at the same time in an SQL
statement.
8.10.4 APPLY_FILTER
Syntax
<variable_name> = APPLY_FILTER(<table_or_table_variable>,
<filter_variable_name>);
Syntax Elements
The variable where the result of the APPLY_FILTER function will be stored.
You can use APPLY_FILTER with persistent tables and table variables.
<table_name> :: = <identifier>
Note
The following constructs are not supported in the filter string <filter_variable_name>:
Description
The APPLY_FILTER function applies a dynamic filter to a table or a table variable. In terms of logic, it can be
considered a partially dynamic SQL statement. The advantage of the function is that you can assign its result to
a table variable without blocking SQL inlining.
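A minimal sketch of APPLY_FILTER on a table variable (the table mytab and the filter string are assumptions):

```sql
DO (IN filter NVARCHAR(512) => 'col1 > 10')
BEGIN
    lt = SELECT * FROM mytab;
    -- the filter string is evaluated dynamically against the table variable
    result = APPLY_FILTER(:lt, :filter);
    SELECT * FROM :result;
END;
```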
Caution
The disadvantage of APPLY_FILTER is its missing parametrization capability. Using constant values always
leads to the preparation of a new query plan and, therefore, to different query Plan Cache entries for different
parameter values. This comes along with additional time spent on query preparation and potential cache-flooding
effects in scenarios with fast-changing parameter values. To avoid this, we recommend using EXEC with a
USING clause to make use of a parametrized WHERE clause.
Sample Code
Before:
Sample Code
After:
EXEC 'SELECT * FROM :lt0 WHERE (' || :column || ' = :value)' INTO lt
USING :lt0, :value READS SQL DATA;
Exception handling is a method for handling exception and completion conditions in an SQLScript procedure.
The DECLARE EXIT HANDLER parameter allows you to define an exit handler to process exception conditions
in your procedure or function.
DECLARE EXIT HANDLER FOR SQLEXCEPTION SELECT 'EXCEPTION was thrown' AS ERROR
FROM dummy;
There are two system variables ::SQL_ERROR_CODE and ::SQL_ERROR_MESSAGE that can be used to get the
error code and the error message, as shown in the next example:
CREATE PROCEDURE MYPROC (IN in_var INTEGER, OUT outtab TABLE(I INTEGER) ) AS
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
outtab = SELECT 1/:in_var as I FROM dummy;
END;
::SQL_ERROR_CODE  ::SQL_ERROR_MESSAGE
304               Division by zero undefined: the right-hand value of the division cannot be zero at function /() (please check lines: 6)
Besides defining an exit handler for an arbitrary SQLEXCEPTION, you can also define it for a specific error code
number by using the keyword SQL_ERROR_CODE followed by an SQL error code number.
For example, if only the “division-by-zero” error should be handled by the exception handler, the code looks as
follows:
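A sketch of such a handler, restricted to the division-by-zero error code 304 and kept consistent with the earlier example:

```sql
CREATE PROCEDURE MYPROC (IN in_var INTEGER, OUT outtab TABLE(I INTEGER)) AS
BEGIN
    -- only error code 304 (division by zero) is caught here
    DECLARE EXIT HANDLER FOR SQL_ERROR_CODE 304
        SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
    outtab = SELECT 1/:in_var AS I FROM dummy;
END;
```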
The following error codes are supported in the exit handler. You can use the system view M_ERROR_CODES to
get more information about the error codes.
Type Description
ERR_TX_ROLLBACK_DEADLOCK
ERR_TX_SERIALIZATION
ERR_TX_LOCK_ACQUISITION_FAIL
When catching transactional errors, the transaction still lives inside the EXIT HANDLER. That allows the explicit
use of COMMIT or ROLLBACK.
It is now possible to define an exit handler for the statement FOR UPDATE NOWAIT with the error code 146.
For more information, see Supported Error Codes [page 183].
Instead of using an error code the exit handler can be also defined for a condition.
For more information about declaring a condition, see DECLARE CONDITION [page 177].
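A condition-based handler might be sketched like this, reusing the division-by-zero error code 304 from the earlier examples:

```sql
DO BEGIN
    DECLARE division_by_zero CONDITION FOR SQL_ERROR_CODE 304;
    DECLARE EXIT HANDLER FOR division_by_zero
        SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
    SELECT 1/0 FROM DUMMY; -- raises error 304, caught by the handler
END;
```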
If you want to do more in the exit handler, you have to use a BEGIN…END block, for instance to prepare
additional information and insert the error into a table:
CREATE PROCEDURE MYPROC (IN in_var INTEGER, OUT tab TABLE(I INTEGER) ) AS
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
INSERT INTO LOG_TABLE VALUES (::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE);
END;
tab = SELECT 1/:in_var as I FROM dummy;
END;
Note
In the example above, in case of an unhandled exception the transaction will be rolled back. Thus the new
row in the table LOG_TABLE will be gone as well. To avoid this, you can use an autonomous transaction. For
more information, see Autonomous Transaction [page 159].
Description
The EXIT handler in SQLScript already offers a way to process exception conditions in a procedure or a
function during execution. The CONTINUE handler not only allows you to handle the error but also to continue
with the execution after an exception has been thrown.
Caution
Code Syntax
Behavior
The behavior of the CONTINUE handler for catching and handling exceptions is the same as that of the EXIT
handler with the following exceptions and extensions.
SQLScript execution continues with the statement following the exception-throwing statement right after
catching and handling the exception.
Sample Code
DO BEGIN
DECLARE A INT = 10;
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN -- Catch the exception
SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
END;
A = 1 / 0; -- An exception will be thrown
SELECT :A FROM DUMMY; -- Continue from this statement after handling the exception
END;
In multilayer blocks, SQLScript execution continues with the next statement in the inner-most block after the
exception-throwing statement.
Sample Code
DO BEGIN
DECLARE A INT = 10;
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY; -- Catch the exception
SELECT :A FROM DUMMY;
BEGIN
A = 1 / 0; -- An exception-throwing statement
A = :A + 1; -- Continue from this statement after handling the exception
END;
SELECT :A FROM DUMMY; -- Result: 11
END;
For this reason, implicit or explicit parallel execution is not supported within the scope of a continue handler.
Sample Code
DO BEGIN
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY; -- Catch the exception
BEGIN PARALLEL EXECUTION -- not supported
CALL PROC;
CALL PROC;
CALL PROC;
END;
END;
Sample Code
DO BEGIN
DECLARE A INT = 0;
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
IF A = 1 / 0 THEN -- An error occurs
A = 1;
ELSE
A = 2;
END IF;
SELECT :A FROM DUMMY; -- Continue from here, Result: 0
END;
Sample Code
DO BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY; -- OK
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY; -- Checker error thrown
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
BEGIN
Variable Values
The value of the variable remains as it was before the execution of the statement that returns an exception.
Sample Code
DO BEGIN
DECLARE CONTINUE HANDLER FOR SQL_ERROR_CODE 12346 BEGIN END;
BEGIN
DECLARE CONTINUE HANDLER FOR SQL_ERROR_CODE 12345 BEGIN
SIGNAL SQL_ERROR_CODE 12346;
SELECT ::SQL_ERROR_CODE FROM DUMMY; -- 12346, not 12345
END;
SIGNAL SQL_ERROR_CODE 12345;
END;
END;
DO BEGIN
DECLARE A INT = 10;
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN
SELECT :A FROM DUMMY; -- Result: 10
END;
A = 1 / 0;
SELECT :A FROM DUMMY; -- Result: 10
END;
Declaring a CONDITION variable allows you to name SQL error codes or even to define a user-defined
condition.
These variables can be used in EXIT HANDLER declarations as well as in SIGNAL and RESIGNAL statements.
Note that in SIGNAL and RESIGNAL only user-defined conditions are allowed.
Besides declaring a condition for an already existing SQL error code, you can also declare a user-defined
condition. Either define it with or without a user-defined error code.
If, for example, you need a user-defined condition for an invalid procedure input, you have to declare it as in
the following example:
Optionally, you can also associate a user-defined error code, for example 10000:
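The two declaration variants could look as follows (the condition name invalid_input is taken from the surrounding text):

```sql
-- user-defined condition without an error code
DECLARE invalid_input CONDITION;
-- alternatively: user-defined condition associated with error code 10000
DECLARE invalid_input CONDITION FOR SQL_ERROR_CODE 10000;
```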
Note
Note that user-defined error codes must be within the range of 10000 to 19999.
Signaling and resignaling a user-defined condition is covered in the section SIGNAL and RESIGNAL
[page 178].
The SIGNAL statement is used to explicitly raise a user-defined exception from within your procedure or
function.
The error value returned by the SIGNAL statement is either an SQL_ERROR_CODE, or a user_defined_condition
that was previously defined with DECLARE CONDITION [page 177]. The used error code must be within the
user-defined range of 10000 to 19999.
For example, to signal an SQL_ERROR_CODE 10000, proceed as follows:
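A sketch of such a SIGNAL statement; the message text is an assumption, chosen to match the error output shown below:

```sql
SIGNAL SQL_ERROR_CODE 10000 SET MESSAGE_TEXT = 'Invalid input arguments';
```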
To raise a user-defined condition, for example invalid_input, as declared in the previous section (see DECLARE
CONDITION [page 177]), use the following command:
SIGNAL invalid_input;
In both cases you get the following information in case the user-defined exception is thrown:
[10000]: user-defined error: "SYSTEM"."MY": line 4 col 2 (at pos 96): [10000]
(range 3) user-defined error exception: Invalid input arguments
In the following example, the procedure signals an error in case the input argument of start_date is greater
than the input argument of end_date:
CREATE PROCEDURE check_dates (IN start_date DATE, IN end_date DATE) AS
BEGIN
DECLARE invalid_input CONDITION FOR SQL_ERROR_CODE 10000;
IF :start_date > :end_date THEN
SIGNAL invalid_input SET MESSAGE_TEXT = 'Invalid input arguments';
END IF;
END;
If the procedures are called with invalid input arguments, you receive the following error message:
For more information on how to handle the exception and continue with procedure execution, see Nested Block
Exceptions in Exception Handling Examples [page 180].
The RESIGNAL statement is used to pass on the exception that is handled in the exit handler.
Besides passing on the original exception by simply using RESIGNAL, you can also change some information
before passing it on. Note that the RESIGNAL statement can only be used in an exit handler.
CREATE PROCEDURE MYPROC (IN in_var INTEGER, OUT outtab TABLE(I INTEGER) ) AS
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION
RESIGNAL;
outtab = SELECT 1/:in_var as I FROM dummy;
END;
In case of <in_var> = 0 the raised error would be the original SQL error code and message text.
You can change the error message of an SQL error by using SET MESSAGE_TEXT:
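A hedged sketch of such a handler, with a message text chosen to match the output shown below:

```sql
DECLARE EXIT HANDLER FOR SQLEXCEPTION
    RESIGNAL SET MESSAGE_TEXT = 'for the input parameter in_var = 0 exception was raised';
```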
The original SQL error message will now be replaced by the new one:
[304]: division by zero undefined: [304] "SYSTEM"."MY": line 4 col 10 (at pos
131): [304] (range 3) division by zero undefined exception: for the input
parameter in_var = 0 exception was raised
You can get the original message via the system variable ::SQL_ERROR_MESSAGE. This is useful if you still
want to keep the original message but would like to add additional information:
A general exception can be handled with an exception handler declared at the beginning of the block containing
the statement that signals the exception, either explicitly or implicitly.
You can declare an exception handler that catches exceptions with specific error code numbers.
Exceptions can be declared by using a CONDITION variable. The CONDITION can optionally be specified with an
error code number.
The SIGNAL statement can be used to explicitly raise an exception from within your procedures.
Note
The error code used must be within the user-defined range of 10000 to 19999.
Resignal an Exception
The RESIGNAL statement raises an exception from the action statement in an exception handler. If no error
code is specified, RESIGNAL throws the caught exception.
The following is a list of the error codes supported by the exit handler.
An array is an indexed collection of elements of a single data type. In the following sections, we explore the
various ways to define and use arrays in SQLScript.
<sql_type> ::=
DATE | TIME | TIMESTAMP | SECONDDATE | TINYINT | SMALLINT | INTEGER | BIGINT |
DECIMAL | SMALLDECIMAL | REAL | DOUBLE | VARCHAR | NVARCHAR | VARBINARY | CLOB |
NCLOB | BLOB
Only unbounded arrays with a maximum cardinality of 2^31 are supported. You cannot define a static size for
an array.
You can use the array constructor to directly assign a set of values to the array.
The array constructor returns an array containing elements specified in the list of value expressions. The
following example illustrates an array constructor that contains the numbers 1, 2 and 3:
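A minimal sketch of such a constructor:

```sql
DO BEGIN
    -- array constructor with the constant values 1, 2 and 3
    DECLARE id INTEGER ARRAY = ARRAY(1, 2, 3);
END;
```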
Besides using scalar constants you can also use scalar variables or parameters instead, as shown in the next
example.
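A sketch of the scalar-variable variant:

```sql
DO BEGIN
    DECLARE n INT = 3;
    -- scalar variables can be used in the array constructor
    DECLARE id INTEGER ARRAY = ARRAY(1, 2, :n);
END;
```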
Note
id[2] = 10;
Please note that all unset elements of the array are NULL. In the given example id[1] is then NULL.
Instead of using a constant scalar value it is also possible to use a scalar variable of type INTEGER as
<array_index>. In the next example, variable I of type INTEGER is used as an index.
DECLARE i INT;
DECLARE arr NVARCHAR(15) ARRAY;
for i in 1..10 do
arr[:i] = 'ARRAY_INDEX ' || :i;
end for;
SQL expressions and scalar user-defined functions (scalar UDFs) that return a number can also be used as an
index, for example, a scalar UDF that adds two values and returns the result.
Note
The value of an array element can be accessed with the index <array_index>, where <array_index> can be
any value from 1 to 2^31. The syntax is:
For example, the following copies the value of the second element of array arr to variable var. Since the array
elements are of type NVARCHAR(15) the variable var has to have the same type:
Please note that you have to use ‘:’ before the array variable if you read from the variable.
DO
BEGIN
DECLARE arr TINYINT ARRAY = ARRAY(1,2,3);
DECLARE index_array INTEGER ARRAY = ARRAY(1,2);
DECLARE value TINYINT;
arr[:index_array[1]] = :arr[:index_array[2]];
value = :arr[:index_array[1]];
select :value from dummy;
END;
In the following example the column A of table variable tab is aggregated into array id:
The type of the array needs to have the same type as the column.
Optionally, the ORDER BY clause can be used to determine the order of the elements in the array. If it is not
specified, the order of the array elements is non-deterministic. In the following example, all elements of array id
are sorted in descending order by column B.
Additionally, it is possible to define where NULL values should appear in the result set. By default, NULL
values are returned first for ascending ordering and last for descending ordering. You can override this
behavior using NULLS FIRST or NULLS LAST to explicitly specify NULL value ordering. The next example
shows how the default behavior for descending ordering can be overridden by using NULLS FIRST:
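The aggregation variants described above might be sketched as follows (a table mytab with columns A and B is an assumption):

```sql
DO BEGIN
    DECLARE id INTEGER ARRAY;
    tab = SELECT A, B FROM mytab;
    id = ARRAY_AGG(:tab.A);                             -- unspecified element order
    id = ARRAY_AGG(:tab.A ORDER BY B DESC);             -- ordered by column B, descending
    id = ARRAY_AGG(:tab.A ORDER BY B DESC NULLS FIRST); -- NULL values first
END;
```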
The ARRAY_AGG function does not support value expressions in place of table variables.
The TRIM_ARRAY function removes elements from the end of an array. TRIM_ARRAY returns a new array with a
<trim_quantity> number of elements removed from the end of the array <array_variable>.
TRIM_ARRAY(:<array_variable>, <trim_quantity>)
<array_variable> ::= <identifier>
<trim_quantity> ::= <unsigned_integer>
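A sketch that would produce the result table shown below; UNNEST is used to convert the trimmed array back into a table variable:

```sql
DO BEGIN
    DECLARE arr INTEGER ARRAY = ARRAY(1, 2, 3, 4);
    DECLARE rst TABLE (ID INTEGER);
    arr = TRIM_ARRAY(:arr, 2);    -- removes the last two elements
    rst = UNNEST(:arr) AS (ID);
    SELECT * FROM :rst;
END;
```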
ID
---
1
2
The CARDINALITY function returns the highest index of a set element in the array <array_variable>. It
returns N (>= 0), if the index of the N-th element is the largest among the indices.
CARDINALITY(:<array_variable>)
The result is n=0 because there is no element in the array. In the next example, the cardinality is 20, as the 20th
element is set. This implicitly sets the elements 1 to 19 to NULL:
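Both cases might be sketched as follows:

```sql
DO BEGIN
    DECLARE arr NVARCHAR(10) ARRAY;
    DECLARE n INTEGER;
    n = CARDINALITY(:arr);  -- 0: no element has been set
    arr[20] = 'X';          -- elements 1 to 19 are implicitly NULL
    n = CARDINALITY(:arr);  -- 20: the highest index that has been set
    SELECT :n FROM DUMMY;
END;
```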
The CARDINALITY function can also be used directly anywhere expressions are supported, for example in a
condition:
The CONCAT function concatenates two arrays. It returns the new array that contains a concatenation of
<array_variable_left> and <array_variable_right>. Both || and the CONCAT function can be used
for concatenation:
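A minimal sketch of both notations:

```sql
DO BEGIN
    DECLARE a INTEGER ARRAY = ARRAY(1, 2);
    DECLARE b INTEGER ARRAY = ARRAY(3, 4);
    DECLARE c INTEGER ARRAY;
    c = CONCAT(:a, :b);  -- contains 1, 2, 3, 4
    c = :a || :b;        -- equivalent notation
END;
```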
You can create procedures and functions with array parameters so that array variables or constant arrays can
be passed to them.
Restriction
This feature supports array parameters only for server-side query parameters. It is not possible to use
client-side array interfaces. Array parameters cannot be used in outermost queries or calls; they can only
be used in nested queries or nested calls.
Syntax
Code Syntax
Code Syntax
Sample Code
do begin
declare a int array;
declare b int array = array(3, 4);
call my_l_proc_out(:a, :b);
select :a from dummy;
END;
Sample Code
do begin
declare arr_var int array = array(1, 2, 3, 4);
select my_sudf_arr(:arr_var) x from dummy;
end;
Sample Code
do begin
declare arr_var int array = array(1, 2, 3, 4);
select * from my_tudf_arr(:arr_var);
end;
Note
To improve SQLScript usability, not only constant arrays but also array variables can be used in DML
statements and queries. In addition, it is also possible to use array variables in the SELECT INTO clause.
Sample Code
do begin
declare a int array = array(1, 2, 3);
declare b int array;
insert into tab1 values (1, :a);
select tab1.A into b from tab1;
end;
Note
The system view ELEMENT_TYPES now shows the element data type of the parameter, if it is an array type.
The ELEMENT_TYPES view has the columns SCHEMA_NAME, OBJECT_NAME, ELEMENT_NAME, and
DATA_TYPE_NAME.
Limitations
If your SQLScript procedure needs to execute dynamic SQL statements that are partly derived from
untrusted input (for example, a user interface), there is a danger of an SQL injection attack. The following
functions can be used to prevent it:
Example:
The following values of input parameters can manipulate the dynamic SQL statement in an unintended way:
This cannot happen if you validate and/or process the input values:
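A hedged sketch combining both validation functions (the procedure and table names are hypothetical):

```sql
CREATE PROCEDURE read_table (IN tab_name NVARCHAR(256)) AS
BEGIN
    -- reject input that does not consist of a single identifier token
    IF IS_SQL_INJECTION_SAFE(:tab_name) <> 1 THEN
        SIGNAL SQL_ERROR_CODE 10000 SET MESSAGE_TEXT = 'Possible SQL injection detected';
    END IF;
    -- escape double quotes so the value can only be interpreted as an identifier
    EXECUTE IMMEDIATE 'SELECT * FROM "' || ESCAPE_DOUBLE_QUOTES(:tab_name) || '"';
END;
```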
Syntax IS_SQL_INJECTION_SAFE
IS_SQL_INJECTION_SAFE(<value>[, <max_tokens>])
Syntax Elements
String to be checked.
Description
Checks for possible SQL injection in a parameter which is to be used as a SQL identifier. Returns 1 if no possible
SQL injection is found, otherwise 0.
The following code example shows that the function returns 0 if the number of tokens in the argument is
different from the expected number of a single token (default value).
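The call might look as follows (a sketch; the argument 'tab;le' splits into more than one token, so the expected single token is not matched and the function returns 0):

```sql
SELECT IS_SQL_INJECTION_SAFE('tab;le') AS safe FROM DUMMY;
```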
safe
-------
0
The following code example shows that the function returns 1 if the number of tokens in the argument matches
the expected number of 3 tokens.
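The call might look as follows (a sketch; the hypothetical argument is assumed to tokenize into exactly three tokens, an identifier, a dot, and another identifier):

```sql
SELECT IS_SQL_INJECTION_SAFE('myschema.mytab', 3) AS safe FROM DUMMY;
```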
safe
-------
1
Syntax ESCAPE_SINGLE_QUOTES
ESCAPE_SINGLE_QUOTES(<value>)
Description
Escapes single quotes (apostrophes) in the given string <value>, ensuring a valid SQL string literal is used in
dynamic SQL statements to prevent SQL injections. Returns the input string with escaped single quotes.
Example
The following code example shows how the function escapes a single quote. The one single quote is escaped
with another single quote when passed to the function. The function then escapes the parameter content
Str'ing to Str''ing, which is returned from the SELECT.
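The call might look as follows (a sketch matching the output below):

```sql
SELECT ESCAPE_SINGLE_QUOTES('Str''ing') AS string_literal FROM DUMMY;
```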
string_literal
---------------
Str''ing
ESCAPE_DOUBLE_QUOTES(<value>)
Description
Escapes double quotes in the given string <value>, ensuring a valid SQL identifier is used in dynamic SQL
statements to prevent SQL injections. Returns the input string with escaped double quotes.
Example
The following code example shows that the function escapes the double quotes.
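The call might look as follows (a sketch matching the output below):

```sql
SELECT ESCAPE_DOUBLE_QUOTES('TAB"LE') AS table_name FROM DUMMY;
```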
table_name
--------------
TAB""LE
So far, implicit parallelization has been applied to table variable assignments as well as read-only procedure
calls that are independent from each other. DML statements and read-write procedure calls had to be executed
sequentially. From now on, it is possible to parallelize the execution of independent DML statements and read-
write procedure calls by using parallel execution blocks:
For example, in the following procedure several UPDATE statements on different tables are parallelized:
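Such a procedure might be sketched as follows (the column tables tab1, tab2, tab3 and their column val are assumptions):

```sql
CREATE PROCEDURE update_all AS
BEGIN
    -- the independent UPDATE statements run in parallel
    BEGIN PARALLEL EXECUTION
        UPDATE tab1 SET val = val + 1;
        UPDATE tab2 SET val = val * 2;
        UPDATE tab3 SET val = 0;
    END;
END;
```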
Note
Only DML statements on column store tables are supported within the parallel execution block.
In the next example several records from a table variable are inserted into different tables in parallel.
Sample Code
You can also parallelize several calls to read-write procedures. In the following example, several procedures
performing independent INSERT operations are executed in parallel.
Sample Code
call cproc;
Only the following statements are allowed in read-write procedures, which can be called within a parallel
block:
● DML
● Imperative logic
● Autonomous transaction
● Implicit SELECT and SELECT INTO scalar variable
Description
Before the introduction of SQLScript recursive logic, it was necessary to rewrite any recursive operation into an
operation using iterative logic, if it was supposed to be used within an SQLScript procedure or a function.
SQLScript now supports recursive logic that allows you to write a procedure or a function that calls itself within
its body until the abort condition is met.
Example
Sample Code
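The example body is not reproduced here; a hedged sketch of a recursive procedure computing a factorial (the name and abort condition are illustrative):

```sql
CREATE PROCEDURE factorial (IN n INTEGER, OUT res BIGINT) AS
BEGIN
    DECLARE tmp BIGINT;
    IF :n <= 1 THEN                   -- abort condition ends the recursion
        res = 1;
    ELSE
        CALL factorial(:n - 1, tmp);  -- the procedure calls itself
        res = :n * :tmp;
    END IF;
END;
```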
Limitations
Recommendation
SAP recommends that you use SQL rather than Calculation Engine Plan Operators with SQLScript.
The execution of Calculation Engine Plan Operators is currently bound to processing within the calculation
engine and does not allow the use of alternative execution engines, such as L native execution. As
most Calculation Engine Plan Operators are converted internally and treated as SQL operations, the
conversion requires multiple layers of optimizations. This can be avoided by using SQL directly. Depending on
your system configuration and the version you use, mixing Calculation Engine Plan Operators and SQL can
lead to significant performance penalties compared to a plain SQL implementation.
Calculation engine plan operators encapsulate data-transformation functions and can be used in the definition
of a procedure or a table user-defined function. They constitute an alternative to using SQL statements that is
no longer recommended. Their logic is directly implemented in the calculation engine, which is the execution
environment of SQLScript.
● Data Source Access operators that bind a column table or a column view to a table variable.
● Relational operators that allow a user to bypass the SQL processor during evaluation and to directly
interact with the calculation engine.
● Special extensions that implement functions.
The data source access operators bind the column table or column view of a data source to a table variable for
reference by other built-in operators or statements in a SQLScript procedure.
9.1.1 CE_COLUMN_TABLE
Syntax:
CE_COLUMN_TABLE(<table_name> [<attributes>])
Syntax Elements:
Description:
The CE_COLUMN_TABLE operator provides access to an existing column table. It takes the name of the table
and returns its content bound to a variable. Optionally a list of attribute names can be provided to restrict the
output to the given attributes.
Note that many of the calculation engine operators provide a projection list for restricting the attributes
returned in the output. In the case of relational operators, the attributes may be renamed in the projection list.
The data source access functions provide no renaming of attributes, just a simple projection.
Note
Calculation engine plan operators that reference identifiers must be enclosed with double-quotes and
capitalized, ensuring that the identifier's name is consistent with its internal representation.
If the identifiers have been declared without double-quotes in the CREATE TABLE statement (which is the
normal method), they are internally converted to uppercase letters. Identifiers in calculation engine plan
operators must match the internal representation; that is, they must be uppercase as well.
In contrast, if identifiers have been declared with double-quotes in the CREATE TABLE statement, they are
stored in a case-sensitive manner. Again, the identifiers in operators must match the internal
representation.
9.1.2 CE_JOIN_VIEW
Syntax:
CE_JOIN_VIEW(<column_view_name>[{,<attributes>,}...])
Syntax elements:
Specifies the name of the required columns from the column view.
The CE_JOIN_VIEW operator returns results for an existing join view (also known as an Attribute View). It takes
the name of the join view and an optional list of attributes as parameters.
9.1.3 CE_OLAP_VIEW
Syntax:
CE_OLAP_VIEW(<olap_view_name>, '['<attributes>']')
Syntax elements:
Note
● count("column")
● sum("column")
● min("column")
● max("column")
● use sum("column") / count("column") to compute the average
The CE_OLAP_VIEW operator returns results for an existing OLAP view (also known as an Analytical View). It
takes the name of the OLAP view and an optional list of key figures and dimensions as parameters. The OLAP
cube that is described by the OLAP view is grouped by the given dimensions and the key figures are aggregated
using the default aggregation of the OLAP view.
9.1.4 CE_CALC_VIEW
Syntax:
CE_CALC_VIEW(<calc_view_name>, [<attributes>])
Syntax elements:
Specifies the name of the required attributes from the calculation view.
Description:
The CE_CALC_VIEW operator returns results for an existing calculation view. It takes the name of the
calculation view and optionally a projection list of attribute names to restrict the output to the given attributes.
The calculation engine plan operators presented in this section provide the functionality of relational operators
that are directly executed in the calculation engine. This allows you to exploit the specific semantics of the
calculation engine and to tune the code of a procedure if required.
9.2.1 CE_JOIN
Syntax:
Syntax elements:
Specifies a list of join attributes. Since CE_JOIN requires equal attribute names, one attribute name per pair of
join attributes is sufficient. The list must have at least one element.
Specifies a projection list for the attributes that should be in the resulting table.
Note
If the optional projection list is present, it must contain at least the join attributes.
Description:
The CE_JOIN operator calculates a natural (inner) join of the given pair of tables on a list of join attributes. For
each pair of join attributes, only one attribute will be in the result. Optionally, a projection list of attribute names
can be given to restrict the output to the given attributes. Finally, the plan operator requires each pair of join
attributes to have identical attribute names. In case of join attributes having different names, one of them must
be renamed prior to the join.
9.2.2 CE_LEFT_OUTER_JOIN
Calculate the left outer join. Besides the function name, the syntax is the same as for CE_JOIN.
9.2.3 CE_RIGHT_OUTER_JOIN
Calculate the right outer join. Besides the function name, the syntax is the same as for CE_JOIN.
Note
Syntax:
Syntax elements:
Specifies a list of attributes that should be in the resulting table. The list must have at least one element. The
attributes can be renamed using the SQL keyword AS, and expressions can be evaluated using the CE_CALC
function.
Specifies an optional filter where Boolean expressions are allowed. See CE_CALC [page 225] for the filter
expression syntax.
Description:
Restricts the columns of the table variable <var_table> to those mentioned in the projection list. Optionally,
you can also rename columns, compute expressions, or apply a filter.
With this operator, the <projection_list> is applied first, including column renaming and computation of
expressions. As last step, the filter is applied.
Caution
Be aware that <filter> in CE_PROJECTION can be vulnerable to SQL injection because it behaves like
dynamic SQL. Avoid use cases where the value of <filter> is passed as an argument from outside of the
procedure by the user, for example:
create procedure proc (in filter nvarchar (20), out output ttype)
begin
tablevar = CE_COLUMN_TABLE(TABLE);
output = CE_PROJECTION(:tablevar,
["A", "B"], '"B" = ' || :filter);
end;
It enables the user to pass any expression and to query more than was intended, for example: '02 OR B = 01'.
Syntax:
Syntax elements:
Specifies the expression to be evaluated. Expressions are analyzed using the following grammar:
Where terminals in the grammar are enclosed in single quotes, for example 'token' (denoted with id in the
grammar), they are treated like SQL identifiers. An exception to this is that unquoted identifiers are converted
into lowercase. Numeric constants are written basically in the same way as in the C programming language, and
string constants are enclosed in single quotes, for example, 'a string'. Inside a string, single quotes are escaped
by another single quote.
An example expression valid in this grammar is: "col1" < ("col2" + "col3"). For a full list of expression
functions, see the following table.
Description:
CE_CALC is used inside other relational operators. It evaluates an expression and is usually then bound to a
new column. An important use case is evaluating expressions in the CE_PROJECTION operator. The CE_CALC
function takes two arguments:
Expression Functions
Name Description Syntax
midstr Returns a part of the string starting at arg2, arg3 bytes long. arg2 is counted from 1 (not 0). 2 string midstr(string, int, int)
leftstr Returns arg2 bytes from the left of arg1. If arg1 is shorter than the value of arg2, the complete string will be returned. 1 string leftstr(string, int)
rightstr Returns arg2 bytes from the right of arg1. If arg1 is shorter than the value of arg2, the complete string will be returned. 1 string rightstr(string, int)
instr Returns the position of the first occurrence of the second string within the first string (>= 1), or 0 if the second string is not contained in the first. 1 int instr(string, string)
● trim(s) = ltrim(rtrim(s))
● trim(s1, s2) = ltrim(rtrim(s1, s2), s2)
Mathematical Functions The math functions described here generally operate on floating-point values; their
inputs are automatically converted to double, and the output is also a double. These functions have the same
functionality as in the C programming language.
● double log(double)
● double exp(double)
● double log10(double)
● double sin(double)
● double cos(double)
● double tan(double)
● double asin(double)
● double acos(double)
● double atan(double)
● double sinh(double)
● double cosh(double)
● double floor(double)
● double ceil(double)
Further Functions
1 Due to calendar variations with dates earlier than 1582, the use of the date data type is deprecated; you
should use the daydate data type instead.
Note
date is based on the proleptic Gregorian calendar. daydate is based on the Gregorian calendar which is
also the calendar used by SAP HANA SQL.
2 These Calculation Engine string functions operate on single-byte characters. To use these functions with
multi-byte character strings, see the section Using String Functions with Multi-byte Character Encoding
below. Note that this limitation does not exist for the SQL functions of the SAP HANA database, which support
Unicode-encoded strings natively.
To allow the use of the string functions of the Calculation Engine with multi-byte character encoding, you can
use the charpos and chars functions. An example of this usage for the single-byte character function midstr
follows below:
Related Information
Syntax:
Syntax elements:
Note
Specifies a list of aggregates. For example, [SUM ("A"), MAX("B")] specifies that in the result, column "A"
has to be aggregated using the SQL aggregate SUM and for column B, the maximum value should be given.
● count("column")
● sum("column")
● min("column")
● max("column")
● use sum("column") / count("column") to compute the average
Specifies an optional list of group-by attributes. For instance, ["C"] specifies that the output should be
grouped by column C. Note that the resulting schema has a column named C in which every attribute value
from the input table appears exactly once. If this list is absent the entire input table will be treated as a single
group, and the aggregate function is applied to all tuples of the table.
Specifies the name of the column attribute for the results to be grouped by.
CE_AGGREGATION implicitly defines a projection: All columns that are not in the list of aggregates, or in the
group-by list, are not part of the result.
Description:
The result schema is derived from the list of aggregates, followed by the group-by attributes. The order of the
returned columns is defined by the order of columns defined in these lists. The attribute names are:
● For the aggregates, the default is the name of the attribute that is aggregated.
● For instance, in the example above ([SUM("A"),MAX("B")]), the first column is called A and the second
is B.
● The attributes can be renamed if the default is not appropriate.
● For the group-by attributes, the attribute names are unchanged. They cannot be renamed using
CE_AGGREGATION.
Note
Note that count(*) can be achieved by doing an aggregation on any integer column; if no group-by
attributes are provided, this counts all non-null values.
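Putting these elements together, a CE_AGGREGATION call matching the example above might be sketched as follows. The table variable :input_tab and its columns are assumptions for illustration:

```sql
-- Sketch: aggregate column A with SUM and column B with MAX, grouped by C.
-- :input_tab is an assumed table variable with columns A, B and C.
out = CE_AGGREGATION (:input_tab, [SUM("A"), MAX("B")], ["C"]);
```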
9.2.7 CE_UNION_ALL
Syntax:
Syntax elements:
Description:
The CE_UNION_ALL function is semantically equivalent to the SQL UNION ALL statement. It computes the union
of two tables, which must have identical schemas. The CE_UNION_ALL function preserves duplicates, so the
result is a table containing all rows from both input tables.
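A minimal sketch, assuming two table variables :tab1 and :tab2 with identical schemas:

```sql
-- Union of two table variables, preserving duplicates.
out = CE_UNION_ALL (:tab1, :tab2);
```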
Syntax
Syntax Elements
Specifies a list of attributes that should be in the resulting table. The list must have at least one element. The
attributes can be renamed using the SQL keyword AS.
Description
For each input table variable the specified columns are concatenated. Optionally columns can be renamed. All
input tables must have the same cardinality.
Caution
The vertical union is sensitive to the order of its input. SQL statements and many calculation engine plan
operators may reorder their input or return their result in a different order across executions. This can lead to
unexpected results.
9.3.2 CE_CONVERSION
Syntax:
Specifies the parameters for the conversion. The CE_CONVERSION operator is highly configurable via a list of
key-value pairs. For the exact conversion parameters permissible, see the Conversion parameters table.
Specify the key and value pair for the parameter setting.
Description:
Applies a unit conversion to input table <var_table> and returns the converted values. Result columns can
optionally be renamed. The following syntax depicts valid combinations. Supported keys with their allowed
domain of values are:
Conversion parameters
Key Values Type Mandatory Default Documentation
Syntax:
TRACE(<var_input>)
Syntax elements:
Description:
The TRACE operator is used to debug SQLScript procedures. It traces the tabular data passed as its argument
into a local temporary table and returns its input unmodified. The names of the temporary tables can be
retrieved from the SYS.SQLSCRIPT_TRACE monitoring view.
Example:
out = TRACE(:input);
Caution
This operator should not be used in production code as it will cause significant run-time overhead.
Additionally, the naming conventions used to store the tracing information may change. This operator
should only be used during development for debugging purposes.
When a procedure or a function already exists and you want to create a new procedure that consumes it, you
can use a header in its place to avoid dependency problems.
When creating a procedure, all nested procedures that belong to it must already exist. If procedure P1 calls P2
internally, then P2 must have been created before P1; otherwise, the creation of P1 fails with the error message
"P2 does not exist". With large application logic and no export or delivery unit available, it can be difficult to
determine the order in which the objects need to be created.
To avoid this kind of dependency problem, SAP introduces HEADERS. HEADERS allow you to create a minimum
set of metadata information that contains only the interface of the procedure or function.
AS HEADER ONLY
You create a header for a procedure by using the HEADER ONLY keyword, as in the following example:
With this statement you create a procedure <proc_name> with the given signature <parameter_clause>.
The procedure <proc_name> has no body definition and thus has no dependent base objects. Container
properties (for example, security mode, default_schema, and so on) cannot be defined with the header
definition. These are included in the body definition.
The following statement creates the procedure TEST_PROC with a scalar input INVAR and a tabular output
OUTTAB:
CREATE PROCEDURE TEST_PROC (IN INVAR NVARCHAR(10), OUT OUTTAB TABLE(no INT)) AS
HEADER ONLY
By checking the IS_HEADER_ONLY field in the system view PROCEDURES, you can verify whether only a
header is defined for a procedure.
If you want to check for functions, then you need to look into the system view FUNCTIONS.
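A function header can be sketched analogously to the procedure example. The function name TEST_FUNC and its signature are illustrative assumptions:

```sql
-- Header-only definition of a table function; no body, no dependent base objects.
CREATE FUNCTION TEST_FUNC (IN INVAR NVARCHAR(10))
RETURNS TABLE (no INT) AS HEADER ONLY
```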
Once a header of a procedure or function is defined, other procedures or functions can refer to it in their
procedure body. Procedures containing these headers can be compiled as shown in the following example:
CREATE PROCEDURE OUTERPROC (OUT OUTTAB TABLE (NO INT)) LANGUAGE SQLSCRIPT
AS
BEGIN
DECLARE s INT;
s = 1;
CALL TEST_PROC (:s, outtab);
END;
To change this and to make a valid procedure or function from the header definition, you must replace the
header by the full container definition. Use the ALTER statement to replace the header definition of a
procedure, as follows:
For a function header, the task is similar, as shown in the following example:
For example, if you want to replace the header definition of TEST_PROC that was defined already, then the
ALTER statement might look as follows:
ALTER PROCEDURE TEST_PROC (IN INVAR NVARCHAR(10), OUT OUTTAB TABLE(no INT))
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER READS SQL DATA
AS
BEGIN
DECLARE tvar TABLE (no INT, name nvarchar(10));
tvar = SELECT * FROM TAB WHERE name = :invar;
outtab = SELECT no FROM :tvar;
END
You cannot change the signature with the ALTER statement. If the name of the procedure or the function or the
input and output variables do not match, you will receive an error.
Note
The ALTER PROCEDURE and the ALTER FUNCTION statements are supported only for a procedure or a
function containing a header definition.
SQLScript supports the spatial data type ST_GEOMETRY and SQL spatial functions to access and manipulate
spatial data. In addition, SQLScript also supports the objective style function calls needed for some SQL spatial
functions.
The following example illustrates a small scenario for using spatial data type and function in SQLScript.
The function get_distance calculates the distance between the two given parameters <first> and
<second> of type ST_GEOMETRY by using the spatial function ST_DISTANCE.
The ‘:’ in front of the variable <first> is needed because you are reading from the variable.
The function get_distance itself is called by the procedure nested_call. The procedure returns the
distance and the text representation of the ST_GEOMETRY variable <first>.
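The described function might be sketched as follows, using the objective-style call for ST_DISTANCE; the exact signature is an assumption:

```sql
-- Sketch of the described get_distance function.
CREATE FUNCTION get_distance(IN first ST_GEOMETRY, IN second ST_GEOMETRY)
RETURNS distance DOUBLE AS
BEGIN
  -- ':' is needed because the variable is read; objective-style spatial call.
  distance = :first.ST_DISTANCE(:second);
END;
```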
Out(1) Out(2)
----------------------------------------------------------------------
8,602325267042627 POINT(7 48)
Note that the optional SRID (Spatial Reference Identifier) parameter in SQL spatial functions is mandatory if
the function is used within SQLScript. If you do not specify the SRID, you receive an error as demonstrated with
the function ST_GEOMFROMTEXT in the following example. Here SRID 0 is used to specify the default spatial
reference system.
DO
BEGIN
If you do not use the same SRID for the ST_GEOMETRY variables <line1> and <line2>, then, at the latest, the
UNNEST will return an error, because the values in one column are not allowed to have different SRIDs.
In addition, there is a consistency check for output table variables to ensure that all elements of a spatial
column have the same SRID.
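A consistent-SRID sketch, constructing both geometries with the mandatory SRID argument (the point coordinates are arbitrary):

```sql
DO BEGIN
  -- Both values use SRID 0 (the default spatial reference system),
  -- so they can safely appear in the same column or be compared.
  DECLARE p1 ST_GEOMETRY = ST_GEOMFROMTEXT('POINT(7 48)', 0);
  DECLARE p2 ST_GEOMETRY = ST_GEOMFROMTEXT('POINT(8 49)', 0);
  SELECT :p1.ST_DISTANCE(:p2) FROM DUMMY;
END;
```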
Note
● ST_CLUSTERID
● ST_CLUSTERCENTEROID
● ST_CLUSTERENVELOPE
● ST_CLUSTERCONVEXHULL
● ST_AsSVG
The construction of objects with the NEW keyword is also not supported in SQLScript. Instead you can use
ST_GEOMFROMTEXT('POINT(1 1)', srid).
For more information on SQL spatial functions and their usage, see SAP HANA Spatial Reference available on
the SAP HANA Platform.
System variables are built-in variables in SQLScript that provide you with information about the current
context.
12.1 ::CURRENT_OBJECT_NAME and ::CURRENT_OBJECT_SCHEMA
To identify the name of the currently running procedure or function, you can use the following two system
variables:
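A function returning both variables might be sketched as follows. The name RETURN_NAME and the column lengths are assumptions:

```sql
CREATE FUNCTION RETURN_NAME
RETURNS TABLE (SCHEMA_NAME NVARCHAR(256), NAME NVARCHAR(256)) AS
BEGIN
  -- Both system variables refer to the object currently being executed.
  RETURN SELECT ::CURRENT_OBJECT_SCHEMA AS SCHEMA_NAME,
                ::CURRENT_OBJECT_NAME   AS NAME
         FROM DUMMY;
END;
```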
The result of that function is then the name and the schema name of the function:
SCHEMA_NAME NAME
----------------------------------------
MY_SCHEMA RETURN_NAME
The next example shows that you can also pass the two system variables as arguments to a procedure or
function call.
Note
Note that in anonymous blocks the value of both system variables is NULL.
The two system variables always return the schema name and the name of the procedure or function.
Creating a synonym on top of the procedure or function and calling it through the synonym still returns the
original name, as shown in the next example.
We create a synonym on the RETURN_NAME function from above and will query it with the synonym:
SCHEMA_NAME NAME
------------------------------------------------------
MY_SCHEMA RETURN_NAME
12.2 ::ROWCOUNT
The system variable ::ROWCOUNT stores either the number of updated rows of the previously executed DML,
CALL or CREATE TABLE statement, or the number of rows returned from a SELECT statement. ::ROWCOUNT
values are not accumulated across previously executed statements. When the previous statement does not
return a value, the previous value of ::ROWCOUNT is retained. When ::ROWCOUNT is used right after a
PARALLEL EXECUTION block, the system variable stores only the value of the last statement in the
procedure definition.
Caution
Until SAP HANA 2.0 SPS03, the system variable ::ROWCOUNT was updated only after DML statements.
Starting with SAP HANA 2.0 SPS04, the behavior of ::ROWCOUNT changed: it is now also updated for
SELECT, CALL and CREATE TABLE statements.
● ::ROWCOUNT for a nested CALL statement is an aggregation of the number of updated rows and does not
include the number of rows returned from SELECT statements.
Note
When ::ROWCOUNT is used after a SELECT statement, all rows of the result set must be fetched to
determine the total number of selected rows. When the result of the SELECT statement is assigned to a table
variable or a scalar variable, this has barely any effect on performance. However, for a SELECT statement
that returns a result set, all rows are fetched implicitly, regardless of how many rows are explicitly fetched
from the result set.
The following examples demonstrate how you can use ::ROWCOUNT in a procedure. Assume we have the
following table T:
Now we want to update table T and want to return the number of updated rows:
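Such a procedure could be sketched as follows; the procedure name and the update condition are arbitrary assumptions:

```sql
-- Sketch: read ::ROWCOUNT directly after the UPDATE statement.
CREATE PROCEDURE GET_UPDATED_ROWS (OUT UPDATED_ROWS INT) AS
BEGIN
  UPDATE T SET I = I + 1 WHERE I < 3;  -- hypothetical update condition
  UPDATED_ROWS = ::ROWCOUNT;           -- number of rows updated above
END;
```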
UPDATED_ROWS
-------------------------
2
In the next example, we change the procedure to contain two update statements and again return the row
count at the end:
By calling the procedure, you will see that the number of updated rows is now 1. That is because the last
update statement only updated one row.
UPDATED_ROWS
-------------------------
1
Calling this procedure again, the number of updated rows is now 3:
UPDATED_ROWS
-------------------------
3
Caution
The update of ::ROWCOUNT in SAP HANA 2.0 SPS04 introduces an incompatible behavior change. Please
refer to the following description for the details, workaround and supporting tools.
Since ::ROWCOUNT is now updated after SELECT, CALL and CREATE TABLE statements, the behavior of
existing procedures may change, if the system variable ::ROWCOUNT is not used directly after a DML
statement. Using ::ROWCOUNT directly after the target statement is recommended and can guarantee the
same behavior between different versions.
To detect such cases, new rules were introduced in SQLScript Code Analyzer:
Based on the result from the SQLScript Code Analyzer rule, you can update your procedures according to the
new standard behavior.
The following scenario shows a simple example of the impact of the behavior changes.
Sample Code
do begin
insert into mytab select * from mytab2; -- ::ROWCOUNT = 1
x = select * from mytab; -- ::ROWCOUNT = 1 (retained, SPS03), ::ROWCOUNT = 2 (SPS04)
select ::rowcount from dummy; -- 1 in SPS03, 2 in SPS04
end;
The following list summarizes the behavior per statement type (value of ::ROWCOUNT in SPS03 versus SPS04):

SELECT statement (select * from mytab;)
● SPS03: N/A (retains the previous value)
● SPS04: The number of rows returned from the SELECT statement

Table variable assignment with SELECT statement (tv = select * from mytab;)
● SPS03: N/A (retains the previous value)
● SPS04: The number of rows returned from the SELECT statement

SELECT INTO statement (select i into a from mytab;)
● SPS03: N/A (retains the previous value)
● SPS04: 1 if the statement is executed successfully; retains the previous value otherwise

SELECT INTO statement with default value (select i into a default 2 from mytab;)
● SPS03: N/A (retains the previous value)
● SPS04: 0 if the default values are assigned; 1 if the values are assigned from the SELECT statement; retains the previous value otherwise

SELECT statement in dynamic SQL (exec 'select * from mytab'; execute immediate 'select * from mytab';)
● SPS03: 0
● SPS04: The number of rows returned from the SELECT statement

EXEC INTO with SELECT statement (exec 'select i, j from mytab' into s1, s2; exec 'select * from mytab' into tv;)
● SPS03: 0
● SPS04: EXEC INTO with scalar variables works like the SELECT INTO case; EXEC INTO with a table variable works like a table variable assignment

Nested CALL statement (call proc_nested;)
● SPS03: N/A (retains the previous value)
● SPS04: The number of updated rows

CREATE TABLE statement
● SPS03: N/A (retains the previous value)
● SPS04: The number of updated rows
SQLScript procedures, functions and triggers can return the line number of the current statement
via ::CURRENT_LINE_NUMBER.
Syntax
::CURRENT_LINE_NUMBER
Example
Sample Code
Sample Code
Sample Code
1 do begin
2 declare a int = ::CURRENT_LINE_NUMBER;
3 select :a, ::CURRENT_LINE_NUMBER + 1 from dummy;
4 end;
5 -- Returns [2, 3 + 1]
In some scenarios you may need to let certain processes wait for a while (for example, when executing
repetitive tasks). Implementing such waiting manually may lead to "busy waiting" and to the CPU performing
unnecessary work during the waiting time. To avoid this, SQLScript offers a built-in library
SYS.SQLSCRIPT_SYNC containing the procedures SLEEP_SECONDS and WAKEUP_CONNECTION.
Procedure SLEEP_SECONDS
This procedure puts the current process on hold. It has one input parameter of type DOUBLE which specifies
the waiting time in seconds. The maximum precision is one millisecond (0.001), but the real waiting time may
be slightly longer (about 1-2 ms) than the given time.
Note
● If you pass 0 or NULL to SLEEP_SECONDS, the SQLScript executor does nothing (and no log is written).
● If you pass a negative number, you get an error.
Procedure WAKEUP_CONNECTION
This procedure resumes a waiting process. It has one input parameter of type INTEGER which specifies the ID
of a waiting connection. If this connection is waiting because the procedure SLEEP_SECONDS has been called,
the sleep is terminated and the process continues. If the given connection does not exist or is not waiting
because of SLEEP_SECONDS, an error is raised.
If the user calling WAKEUP_CONNECTION is not a session admin and is different from the user of the waiting
connection, an error is raised as well.
Note
● The waiting process is also terminated, if the session is canceled (with ALTER SYSTEM CANCEL
SESSION or ALTER SYSTEM DISCONNECT SESSION).
● A session admin can wake up any sleeping connection.
Limitations
The library cannot be used in functions (neither in scalar, nor in tabular ones) and in calculation views.
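A minimal usage sketch of the library, following the USING pattern of the other libraries (the alias SYNCLIB and the waiting time are arbitrary):

```sql
DO BEGIN
  USING SQLSCRIPT_SYNC AS SYNCLIB;
  -- Put the current process on hold for half a second
  -- (maximum precision: one millisecond).
  CALL SYNCLIB:SLEEP_SECONDS(0.5);
END;
```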
Examples
Sample Code
Monitor
Sample Code
The SQLSCRIPT_STRING library offers a handy and simple way of manipulating strings. You can split strings
by given delimiters or regular expressions, format or rearrange strings, and convert table variables into
strings.
Syntax
Code Syntax
SPLIT / SPLIT_REGEXPR
The SPLIT(_REGEXPR) function returns multiple variables depending on the given parameters.
SPLIT_TO_TABLE / SPLIT_REGEXPR_TO_TABLE
The SPLIT_TO_TABLE(_REGEXPR) function returns a single-column table of table type (WORD NVARCHAR(5000)).
Sample Code
DO BEGIN
USING SQLSCRIPT_STRING AS LIB;
DECLARE a1, a2, a3 INT;
(a1, a2, a3) = LIB:SPLIT('10, 20, 30', ', '); --(10, 20, 30)
END;
Sample Code
DO BEGIN
USING SQLSCRIPT_STRING AS LIB;
DECLARE first_name, last_name STRING;
DECLARE area_code, first_num, last_num INT;
Sample Code
DO BEGIN
Note
The SPLIT_TO_TABLE function currently does not support implicit table variable declaration.
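With an explicit declaration, as the note requires, SPLIT_TO_TABLE can be sketched as follows (the variable name and input string are arbitrary):

```sql
DO BEGIN
  USING SQLSCRIPT_STRING AS LIB;
  -- Explicit declaration is required; implicit table variable
  -- declaration is not supported for SPLIT_TO_TABLE.
  DECLARE words TABLE (WORD NVARCHAR(5000));
  words = LIB:SPLIT_TO_TABLE('10,20,30', ',');
  SELECT * FROM :words;
END;
```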
FORMAT String
Code Syntax
Type Meaning
'c' Character
'F' Fixed point. Uses NAN for nan and INF for inf in the result.
General format: suppose the number formatted with type 'e' and precision p-1 has exponent exp. If -4 <= exp < p, the number is formatted the same as with type 'f' and the precision is p-1-exp.
Example
Type Example
FORMAT
Returns a single formatted string using a given format string and additional arguments. Two types of
additional arguments are supported: scalar variables and a single array. With the first argument type, only
scalar variables are accepted, and their number and types must match the format string. With the second
argument type, only one array is allowed, and it must have the proper size and element type.
FORMAT_TO_TABLE/FORMAT_TO_ARRAY
Returns a table or an array with N formatted strings using a given table variable. FORMAT STRING is applied
row by row.
Sample Code
DO BEGIN
USING SQLSCRIPT_STRING AS LIB;
DECLARE your_name STRING = LIB:FORMAT('{} {}', 'John', 'Sutherland');
--'John Sutherland'
DECLARE name_age STRING = LIB:FORMAT('{1} {0}', 30, 'Sutherland');
--'Sutherland 30'
DECLARE pi_str STRING = LIB:FORMAT('PI: {:06.2f}', 3.141592653589793);
--'PI: 003.14'
DECLARE ts STRING = LIB:FORMAT('Today is {}', TO_VARCHAR (current_timestamp,
'YYYY/MM/DD')); --'Today is 2017/10/18'
DECLARE scores double ARRAY = ARRAY(1.4, 2.1, 40.3);
DECLARE score_str STRING = LIB:FORMAT('{}-{}-{}', :scores);
--'1.4-2.1-40.3'
END;
tt.first_name[2] = 'Edward';
tt.last_name[2] = 'Stark';
tt.birth_year[2] = 1960;
TABLE_SUMMARY
TABLE_SUMMARY converts a table variable into a single formatted string. It serializes the table into a human-
friendly format, similar to the current result sets in the client. Since the table is serialized as a single string, the
result is fetched during the PROCEDURE execution, not at the client-side fetch time. The parameter
MAX_RECORDS limits the number of rows to be serialized. If the size of the formatted string is larger than
NVARCHAR(8388607), only the limited size of the string is returned.
By means of SQLScript FORMAT functions, the values in the table are formatted as follows:
Sample Code
DO
BEGIN
USING SQLSCRIPT_STRING AS STRING;
USING SQLSCRIPT_PRINT AS PRINT;
T1 = SELECT * FROM SAMPLE1;
Print:PRINT_LINE(STRING:TABLE_SUMMARY(:T1, 3));
END;
Syntax
Code Syntax
Description
The PRINT library makes it possible to print strings or even whole tables. It is especially useful when used
together with the STRING library. The PRINT library procedures produce a server-side result from the
parameters and store it in an internal buffer. All stored strings are printed in the client only after the end of
the PROCEDURE execution. In case of nested execution, the PRINT results are delivered to the client after the
end of the outermost CALL execution. The traditional result-set-based results are not mixed with PRINT
results.
The PRINT library procedures can be executed in parallel. The overall PRINT result is flushed at once, without
writing it on a certain stream for each request. SQLScript ensures the order of PRINT results, based on the
description order in the PROCEDURE body, not on the order of execution.
Note
PRINT_LINE
This library procedure returns a string as a PRINT result. The procedure accepts NVARCHAR values as input,
but most other values are possible as well, as long as implicit conversion is possible (for example, INTEGER to
NVARCHAR). Hence, most non-NVARCHAR values can be used as parameters, since they are supported by
SQLScript implicit conversion. Users can freely apply string manipulation to the input, for example by using
the SQLSCRIPT_STRING library.
PRINT_TABLE
This library procedure takes a table variable and returns a PRINT result. PRINT_TABLE() parses a table variable
into a single string and sends the string to the client. The parameter MAX_RECORDS limits the number of rows
to be printed. PRINT_TABLE() is primarily used together with TABLE_SUMMARY of the STRING library.
Example
Sample Code
DO
BEGIN
USING SQLSCRIPT_PRINT as LIB;
LIB:PRINT_LINE('HELLO WORLD');
LIB:PRINT_LINE('LINE2');
LIB:PRINT_LINE('LINE3');
END;
DO
BEGIN
USING SQLSCRIPT_PRINT as LIB1;
USING SQLSCRIPT_STRING as LIB2;
LIB1:PRINT_LINE('HELLO WORLD');
LIB1:PRINT_LINE('Here is SAMPLE1');
T1 = SELECT * FROM SAMPLE1;
LIB1:PRINT_LINE(LIB2:TABLE_SUMMARY(:T1));
LIB1:PRINT_LINE('Here is SAMPLE2');
T2 = SELECT * FROM SAMPLE2;
LIB1:PRINT_TABLE(:T2);
LIB1:PRINT_LINE('End of PRINT');
END;
SQLSCRIPT_LOGGING supports user level tracing for various types of SQLScript objects including procedures,
table functions and SQLScript libraries.
Interface
Code Syntax
Description
Logging
An SQLScript object with LOG() is called a logging object. A log message can be categorized by its topic.
Procedure Description
LOG (LEVEL, TOPIC, MESSAGE, ...)
A formatted log message is inserted into the output table if there is a configuration that enables the log. The
invoking user must have the SQLSCRIPT LOGGING privilege for the current object. Saving log messages
requires a configuration; otherwise the logging is ignored.
Restriction
Not available inside scalar user-defined functions and
autonomous transaction blocks.
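Inside a logging object, a LOG call might look as follows. The level, topic and message are illustrative assumptions, as is the placement of the USING clause:

```sql
CREATE PROCEDURE P1 LANGUAGE SQLSCRIPT AS
BEGIN
  USING SQLSCRIPT_LOGGING AS LOG_LIB;
  -- Hypothetical call: level 'info', topic 'demo', plain message text.
  CALL LOG_LIB:LOG('info', 'demo', 'procedure started');
END;
```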
Configuration
A configuration is a transient object designed for logging settings. It is not a persistent object and lasts only
until the end of the execution of the outermost statement. All settings for logging are controlled by
configurations. At least one configuration is required to save log messages, and up to 10 configurations can
exist at a time.
SET_LEVEL (CONFIGURATION_NAME, LEVEL)
This is a mandatory configuration setting. The Logging Library writes logs with a higher (less verbose) or
equal level. The levels (from less verbose to more verbose) are: fatal, error, warning, info, debug.
SQLScript Objects
SQLSCRIPT_LOGGING supports procedures, table functions and SQLScript libraries. SQLScript objects need
to be registered to a configuration in order to collect logs from them. Only object-wise configurations are
supported; a member-wise setting for libraries is not available.
Procedure Description
ADD_SQLSCRIPT_OBJECT (CONFIGURATION_NAME, SCHEMA_NAME, OBJECT_NAME)
Opt-in for collecting logs from the object. It requires the SQLSCRIPT LOGGING privilege for the object. Up to
10 objects can be added to a single configuration.
Output Table
Log messages from logging objects are inserted into an output table.
Procedure Description
SET_OUTPUT_TABLE (CONFIGURATION_NAME, SCHEMA_NAME, TABLE_NAME)
Sets which table should be used as an output table. Only a single output table is supported. The table type
must match SQLSCRIPT_LOGGING_TABLE_TYPE. This is a mandatory configuration setting.
Filters
You can focus on specific messages by using filters. The OR operator is applied in case of multiple filter values:
will be evaluated as
Note
SET_FILTER (CONFIGURATION_NAME, TYPE, ...) Sets a filter for logging. Supports an open-ended parameter list for multiple filter values.
ADD_FILTER (CONFIGURATION_NAME, TYPE, ...) Adds filter values to the filter type.
REMOVE_FILTER (CONFIGURATION_NAME, TYPE, ...) Removes filter values from the filter type.
Procedure Description
START_LOGGING (CONFIGURATION_NAME) Starts collecting logs for the given configuration. Throws an
error if the output table or the level is not set.
Configuration Steps
Example
Sample Code
DO BEGIN
using SQLSCRIPT_LOGGING as LIB;
-- conf1
call LIB:CREATE_CONFIGURATION('conf1');
call LIB:ADD_SQLSCRIPT_OBJECT('conf1', current_schema, 'TUDF1');
call LIB:SET_OUTPUT_TABLE('conf1', current_schema, 'T1');
call LIB:SET_LEVEL('conf1', 'debug');
call LIB:START_LOGGING('conf1');
-- conf2
call LIB:CREATE_CONFIGURATION('conf2');
call LIB:ADD_SQLSCRIPT_OBJECT('conf2', current_schema, 'TUDF2');
call LIB:SET_OUTPUT_TABLE('conf2', current_schema, 'T2');
call LIB:SET_LEVEL('conf2', 'debug');
call LIB:START_LOGGING('conf2');
-- all
call LIB:CREATE_CONFIGURATION('conf_all');
call LIB:ADD_SQLSCRIPT_OBJECT('conf_all', current_schema, 'TUDF1');
call LIB:ADD_SQLSCRIPT_OBJECT('conf_all', current_schema, 'TUDF2');
call LIB:SET_OUTPUT_TABLE('conf_all', current_schema, 'T_ALL');
call LIB:SET_LEVEL('conf_all', 'debug');
call LIB:START_LOGGING('conf_all');
END;
DO BEGIN
using SQLSCRIPT_LOGGING as LIB;
call LIB:CREATE_CONFIGURATION('conf1');
call LIB:SET_OUTPUT_TABLE('conf1', current_schema, 'T1');
call LIB:SET_LEVEL('conf1', 'debug');
call LIB:ADD_SQLSCRIPT_OBJECT('conf1', 'SQLSCRIPT_LOGGING_USER_A', 'P1');
call LIB:START_LOGGING('conf1');
call SQLSCRIPT_LOGGING_USER_A.p1;
call LIB:STOP_LOGGING('conf1');
END;
The SQLSCRIPT LOGGING privilege is required to collect logs for an SQLScript object. A logging user can be
different from the procedure owner, and the owner can selectively expose log messages to other users by
using this privilege.
Syntax
Code Syntax
Example
Sample Code
Related Information
SQLSCRIPT_LOGGING:LOG can only write logs to a table with a predefined table type. You can create an output
table using the type SYS.SQLSCRIPT_LOGGING_TABLE_TYPE or the public synonym
SQLSCRIPT_LOGGING_TABLE_TYPE.
Definition
Example
Sample Code
Related Information
All scalar variables used in queries of procedures, functions or anonymous blocks are represented either as
query parameters or as constant values during query compilation. Which option is chosen is a decision of the
optimizer.
Example
The following procedure uses two scalar variables (var1 and var2) in the WHERE-clause of a nested query.
Sample Code
CREATE PROCEDURE PROC (IN var1 INT, IN var2 INT, OUT tab mytab)
AS
BEGIN
tab = SELECT * FROM MYTAB WHERE MYCOL > :var1
OR MYCOL = :var2;
END;
Sample Code
will prepare the nested query of the table variable tab by using query parameters for the scalar parameters:
Sample Code
Before the query is executed, the parameter values will be bound to the query parameters.
Calling the procedure without query parameters and using constant values directly
Sample Code
will lead to the following query string that uses the parameter values directly:
Sample Code
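Under the stated assumptions, the two prepared query strings could look as follows (a sketch; '?' marks a query parameter, and the constant values 1 and 2 are illustrative):

```sql
-- With query parameters (values bound before execution):
SELECT * FROM MYTAB WHERE MYCOL > ? OR MYCOL = ?

-- With constant values compiled directly into the query string:
SELECT * FROM MYTAB WHERE MYCOL > 1 OR MYCOL = 2
```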
A potential disadvantage of query parameters is that the most optimal query plan may not be found, because
optimizations based on the parameter values cannot be performed during compilation time. Using constant
values, on the other hand, always leads to preparing a new query plan and therefore to different query plan
cache entries for the different parameter values. This comes along with additional time spent on query
preparation and potential cache-flooding effects in fast-changing parameter value scenarios.
In order to control the parameterization behavior of scalar parameters explicitly, you can use the functions
BIND_AS_PARAMETER and BIND_AS_VALUE. The decision of the optimizer and the general configuration are
overridden when you use these functions.
Syntax
Using BIND_AS_PARAMETER will always use a query parameter to represent a <scalar_variable> during query
preparation.
Using BIND_AS_VALUE will always use a value to represent a <scalar_variable> during query preparation.
The following example represents the same procedure from above but now using the functions
BIND_AS_PARAMETER and BIND_AS_VALUE instead of referring to the scalar parameters directly:
Sample Code
CREATE PROCEDURE PROC (IN var1 INT, IN var2 INT, OUT tab mytab)
AS
BEGIN
tab = SELECT * FROM MYTAB WHERE MYCOL > BIND_AS_PARAMETER(:var1)
OR MYCOL = BIND_AS_VALUE(:var2);
END;
Sample Code
and bind the values (1 for var1 and 2 for var2), the following query string will be prepared
Sample Code
The same query string will be prepared even if you call this procedure with constant values because the
functions override the decisions of the optimizer.
15.1 M_ACTIVE_PROCEDURES
The view M_ACTIVE_PROCEDURES monitors all internally executed statements starting from a procedure call.
That also includes remotely executed statements.
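A simple monitoring query against the view (the column selection is illustrative):

```sql
-- Inspect internally executed statements of running procedure calls.
SELECT * FROM M_ACTIVE_PROCEDURES;
```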
Note
By default this column shows '-1'. You need to set the following configuration parameters to enable the
statistics:
global.ini: ('resource_tracking', 'enable_tracking') = 'true'
global.ini: ('resource_tracking', 'memory_tracking') = 'true'
Level Description
To prevent flooding the memory with irrelevant data, the number of records is limited. If the record count
exceeds the given threshold, the first record is deleted irrespective of its status. The limit can be adjusted with
the INI parameter execution_monitoring_limit, for example execution_monitoring_limit = 100000.
Limitations:
The default behavior of M_ACTIVE_PROCEDURES is to keep the records of completed internal statements until
the parent procedure is complete. This behavior can be changed with the following two configuration
parameters: NUMBER_OF_CALLS_TO_RETAIN_AFTER_EXECUTION and
RETENTION_PERIOD_FOR_SQLSCRIPT_CONTEXT.
With NUMBER_OF_CALLS_TO_RETAIN_AFTER_EXECUTION, you can specify how many calls are retained after
execution and RETENTION_PERIOD_FOR_SQLSCRIPT_CONTEXT defines how long the result should be kept in
M_ACTIVE_PROCEDURES. The following options are possible:
● Both parameters are set: M_ACTIVE_PROCEDURES keeps the specified numbers of records for the
specified amount of time
● Only NUMBER_OF_CALLS_TO_RETAIN_AFTER_EXECUTION is set: M_ACTIVE_PROCEDURES keeps the
specified number for the default amount of time ( = 3600 seconds)
● Only RETENTION_PERIOD_FOR_SQLSCRIPT_CONTEXT is set: M_ACTIVE_PROCEDURES keeps the default
number of records ( = 100) for the specified amount of time
● Nothing is set: no records are kept.
Note
The Query Export is an enhancement of the EXPORT statement. It allows exporting queries, that is, the
database objects used in a query, together with the query string and parameters. The query can be either
standalone or executed as part of a SQLScript procedure.
Prerequisites
In order to execute the query export as a developer, you need the EXPORT system privilege.
Procedure
With <export_format> you define whether the export should use a BINARY or a CSV format.
Note
Currently the only format supported for the SQLScript query export is CSV. If you choose BINARY, you get a
warning message and the export is performed in CSV.
The server path where the export files are stored is specified as <path>.
For more information about <export_option_list>, see EXPORT in the SAP HANA SQL and System Views
Reference on the SAP Help Portal.
Apart from SELECT statements, you can export the following statement types as well:
With <sqlscript_location_list> you can define, in a comma-separated list, several queries that you want to
export. For each query you have to specify the name of the procedure with <procedure_name> to indicate
where the query is located. <procedure_name> can be omitted if it is the same procedure as the one in
<procedure_call_statement>.
You also need to specify the line information, <line_number>, and the column information, <column_number>.
The line number must correspond to the first line of the statement. If the column number is omitted, all
statements (usually there is just one) on this line are exported. Otherwise the column must match the first
character of the statement.
The line and column information is usually contained in the comments of the queries generated by SQLScript
and can be taken over from there. For example, the monitoring view M_ACTIVE_PROCEDURES or the
statement statistic in PlanViz shows the executed queries together with the comment.
If you want to export both queries of the table variable tabtemp, then the <sqlscript_location> looks as follows:
For the query of table variable temp we also specified the column number because there are two table variable
assignments on one line and we only wanted to have the first query.
To export these queries, the export needs to execute the procedure call that triggers the execution of the
procedure containing the queries. Therefore the procedure call has to be specified as well by using
<procedure_call_statement>:
EXPORT ALL AS CSV INTO '/tmp' ON (proc_one LINE 15), (proc_two LINE 27 COLUMN 4) FOR CALL PROC_ONE (...);
Given the above example, suppose we want to export the query on line 34, but only the snapshots of the 2nd
and 30th loop iterations. The export statement is then the following, considering that PROC_LOOP is a procedure call:
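Assuming the PASS clause accepts a list of iteration numbers (an assumption inferred from the PASS ALL variant and the directory names shown later, which indicate column 10 and passes 2 and 30), the statement could look as follows:

```sql
-- Hypothetical sketch: export only the snapshots of iterations 2 and 30
EXPORT ALL AS CSV INTO '/tmp'
   ON (myschema.proc_loop LINE 34 COLUMN 10 PASS 2, 30)
   FOR CALL PROC_LOOP(...);
```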
If you want to export the snapshots of all iterations you need to use PASS ALL:
EXPORT ALL AS CSV INTO '/tmp' ON (myschema.proc_loop LINE 34 PASS ALL) FOR CALL PROC_LOOP(...);
Overall, the SQLScript Query Export creates one subdirectory for each exported query under the given path
<path>, with the name pattern <schema_name>-<procedure_name>-<line_number>-<column_number>-<pass_number>.
For example, the directories of the first export statement mentioned above would be the following:
|_ /tmp
|_ MYSCHEMA-PROC_LOOP-34-10-2
|_Query.sql
|_index
|_export
|_ MYSCHEMA-PROC_LOOP-34-10-30
|_Query.sql
|_index
|_export
The exported SQLScript query is stored in a file named Query.sql and all related base objects of that query are
stored in the directories index and export, as it is done for a typical catalog export.
You can import the exported objects, including temporary tables and their data, with the IMPORT statement.
For more information about IMPORT, see IMPORT in the SAP HANA SQL and System Views Reference on the
SAP Help Portal.
Note
Query export is not supported on distributed systems. Only single-node systems are supported.
The derived table type of a tabular variable should always match the declared type of the corresponding
variable, both for the type code and for the length or precision/scale information. This is particularly important
for signature variables because they can be considered the contract a caller will follow. The derived type code
will be implicitly converted if this conversion is possible without loss of information (see the SAP HANA SQL
and System Views Reference for additional details on which data type conversions are supported).
If the derived type is larger (for example, BIGINT) than the expected type (for example, INTEGER) this can lead
to errors, as illustrated in the following example.
The procedure PROC_TYPE_MISMATCH has a defined tabular output variable RESULT with a single column of
type VARCHAR with a length of 2. The derived type from the table variable assignment has a single column of
type VARCHAR with a length of 10.
Calling this procedure will work fine as long as the difference in length does not matter; for example, calling this
procedure from any SQL client will not cause any issues. However, using the result for further processing can
lead to an error, as illustrated in the following example:
Declared type "VARCHAR(2)" of attribute "A" not same as assigned type "VARCHAR(10)"
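A minimal sketch of such a procedure, matching the description above (illustrative definition only; the original source code is not shown here):

```sql
-- Declared column type is VARCHAR(2); the assignment derives VARCHAR(10)
CREATE PROCEDURE PROC_TYPE_MISMATCH (OUT result TABLE (a VARCHAR(2))) AS
BEGIN
  result = SELECT 'AAAAAAAAAA' AS a FROM DUMMY; -- derived type VARCHAR(10)
END;
```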
The configuration parameters have three different levels to reveal differences between expected and derived
types if the derived type is larger than the expected type:
● warn: prints a warning in case of a type mismatch (default behavior), for example: general warning: Declared type "VARCHAR(2)" of attribute "A" not same as assigned type "VARCHAR(10)"
● strict: returns an error in case of a potential type error, for example: return type mismatch: Declared type "VARCHAR(2)" of attribute "A" not same as assigned type "VARCHAR(10)"
With the SQLScript debugger you can investigate functional issues. The debugger is available in the SAP
WebIDE for SAP HANA (WebIDE) and in ABAP in Eclipse (ADT Debugger). The following gives you an
overview of the available functionality and the IDE in which each feature is supported. For a detailed description of how
to use the SQLScript debugger, see the documentation of SAP WebIDE for SAP HANA and ABAP in Eclipse
available at the SAP HANA Help Portal.
A conditional breakpoint can be used to stop the debugger at the breakpoint line only when certain conditions
are met. This is especially useful when a breakpoint is set within a loop.
Each breakpoint can have only one condition. The condition expression can contain any SQL function. A
condition can either contain an expression that evaluates to true or false, or contain a single variable or a
complex expression without restrictions on the return type.
When setting a conditional breakpoint, the debugger checks all conditions for potential syntax errors.
At execution time the debugger checks and evaluates the conditions of the conditional breakpoints with the
actual variables and their values. If the value of a variable in a condition is not accessible and the condition
therefore cannot be evaluated, the debugger sends a warning and breaks at the breakpoint anyway.
Note
The debugger also breaks and sends a warning if an expression accesses a variable that is not yet
accessible at this point (NULL value).
Note
For more information on SQL functions, see FUNCTION in the SAP HANA SQL and System Views Reference on
the SAP Help Portal.
15.4.2 Watchpoints
Watchpoints give you the possibility to watch the values of variables or complex expressions and break the
debugger, if certain conditions are met.
For each watchpoint you can define an arbitrary number of conditions. A condition can either contain an
expression that evaluates to true or false, or contain a single variable or a complex expression without restrictions on
the return type.
When setting a watchpoint, the debugger checks all conditions for potential syntax errors.
At execution time the debugger checks and evaluates the conditions of the watchpoints with the actual
variables and their values. A watchpoint is skipped if the value of a variable in a condition is not accessible.
However, if the return type of the condition is wrong, the debugger sends a warning to the user and breaks at
the watchpoint anyway.
If a variable value changes to NULL, the debugger will not break since it cannot evaluate the expression
anymore.
You can activate the Exception Mode to make the debugger break when an error occurs during the execution of
a procedure or a function. User-defined exceptions are also handled.
The debugger stops on the line where the exception is thrown and allows access to the current values of all
local variables, the call stack, and short information about the error. After that, the execution can continue
and you can step into the exception handler or into further exceptions (for example, on a CALL statement).
Save Table allows you to store the result set of a table variable into a persistent table in a predefined schema in
a debugging session.
Syntax
Syntax Elements
<statement_name> ::= <string_literal>
Specifies the name of a specific execution plan in the output table for a given SQL statement.

<explain_plan_entry> ::= <call_statement> | SQL PLAN CACHE ENTRY <plan_id>
<plan_id> ::= <integer_literal>

<plan_id> specifies the identifier of the entry in the SQL plan cache to be explained. Refer to the
M_SQL_PLAN_CACHE monitoring view to find the <plan_id> for the desired cache entry.

<call_statement> specifies the procedure call to explain the plan for. For more information, see the CALL
statement.
Note
The EXPLAIN PLAN [SET STATEMENT_NAME = <statement_name>] FOR SQL PLAN CACHE ENTRY
<plan_id> command can only be run by users with the OPTIMIZER_ADMIN privilege.
Description
EXPLAIN PLAN provides information about the compiled plan of a given procedure. It inserts each piece of
information into a system global temporary table named EXPLAIN_CALL_PLANS. The result is visible only
within the session where the EXPLAIN PLAN call is executed.
EXPLAIN PLAN generates the plan information by using the given SQLScript Engine Plan structure. It traverses
the plan structure and records the information corresponding to each SQLScript Engine operator.
In the case of invoking another procedure inside of a procedure, EXPLAIN PLAN inserts the results of the
invoked procedure (callee) under the invoke operator (caller), although the actual invoked procedure is a sub-
plan which is not located under the invoke operator.
Another case is the else operator. EXPLAIN PLAN generates a dummy else operator to represent alternative
operators in the condition operator.
Example
You can retrieve the result by selecting from the table EXPLAIN_CALL_PLANS.
For EXPLAIN PLAN FOR <select query>, the HDB client deletes the temporary table automatically, but this is
not yet supported for EXPLAIN PLAN FOR CALL. To delete rows in the table, execute a DELETE statement on the
EXPLAIN_CALL_PLANS table or close the current session.
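Putting these steps together, a session could look like the following sketch (the procedure name and the STATEMENT_NAME filter column are illustrative assumptions):

```sql
EXPLAIN PLAN SET STATEMENT_NAME = 'call_plan_1' FOR CALL my_proc(...);
SELECT * FROM EXPLAIN_CALL_PLANS WHERE STATEMENT_NAME = 'call_plan_1';
-- Clean up manually, since the client does not delete these rows for CALL
DELETE FROM EXPLAIN_CALL_PLANS WHERE STATEMENT_NAME = 'call_plan_1';
```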
Note
Client integration is not available yet. You need to use the SQL statement above to retrieve the plan
information.
Syntax
Description
To improve supportability, SQLScript now provides more detailed information on the SQLScript Table User-
Defined Function (TUDF) native operator in EXPLAIN PLAN.
TUDF is automatically unfolded when applicable. If unfolding is blocked, the cause is listed in EXPLAIN PLAN.
This feature automatically applies to EXPLAIN PLAN FOR select statements under the following conditions:
If the two conditions are met, an SQL PLAN is automatically generated along with an SQLScript Engine Plan of
the TUDF.
Behavior
EXPLAIN PLAN for SQLScript TUDF native operator provides the following compiled plans:
EXPLAIN_PLAN_TABLE: The OPERATOR_PROPERTIES field
● lists the detailed reasons why the SQLScript TUDF is not unfolded (see the table below)
● contains a comma-separated list of objects used within the TUDF

EXPLAIN_CALL_PLANS: The internal SQLScript plan of the outermost TUDF is explained. It is automatically
generated along with EXPLAIN_PLAN_TABLE with the same STATEMENT_NAME.
● NOT UNFOLDED BECAUSE FUNCTION BODY CANNOT BE SIMPLIFIED TO A SINGLE STATEMENT: Multiple statements in the TUDF body cannot be simplified into a single statement.
● NOT UNFOLDED DUE TO ANY TABLE: The TUDF uses the ANY TABLE type.
● NOT UNFOLDED DUE TO BINARY TYPE PARAMETER: The TUDF has a binary type as its parameter.
● NOT UNFOLDED DUE TO DEV_NO_SQLSCRIPT_SCENARIO HINT: The caller of the TUDF disables unfolding with the DEV_NO_PREPARE_SQLSCRIPT_SCENARIO hint.
● NOT UNFOLDED DUE TO IMPERATIVE LOGICS: The TUDF has imperative logic, including SQLScript IF, FOR, WHILE, or LOOP statements.
● NOT UNFOLDED DUE TO INTERNAL SQLSCRIPT OPERATOR: TUDF unfolding is blocked by an internal SQLScript operator.
● NOT UNFOLDED DUE TO INPUT PARAMETER TYPE MISMATCH: The type of the input argument does not match the defined type of the TUDF input parameter.
● NOT UNFOLDED DUE TO JSON OR SYSTEM FUNCTION: The TUDF uses a JSON or system function.
● NOT UNFOLDED DUE TO NATIVE SQLSCRIPT OPERATOR: The TUDF has a SQLScript native operator that does not have an appropriate SQL counterpart.
● NOT UNFOLDED DUE TO NO CALCULATION VIEW UNFOLDING: The caller of the TUDF disables Calculation View unfolding.
● NOT UNFOLDED DUE TO PRIMARY KEY CHECK: The TUDF has a primary key check.
● NOT UNFOLDED DUE TO RANGE RESTRICTION: A table with RANGE RESTRICTION is used within the TUDF.
● NOT UNFOLDED DUE TO SEQUENCE OBJECT: A SEQUENCE variable is used within the TUDF.
● NOT UNFOLDED DUE TO SEQUENTIAL EXECUTION: The TUDF is executed with the SEQUENTIAL EXECUTION clause.
● NOT UNFOLDED DUE TO SPATIAL TYPE PARAMETER: The TUDF has a spatial type as its parameter.
● NOT UNFOLDED DUE TO TIME TRAVEL OPTION: The TUDF uses a history table or the time travel option is used.
● NOT UNFOLDED DUE TO WITH HINT: The TUDF uses a WITH HINT clause that cannot be unfolded.
● NOT UNFOLDED DUE TO WITH PARAMETERS CLAUSE: The TUDF uses a WITH PARAMETERS clause.
Example
Sample Code
Sample Code
Sample Code
DUE TO IMPERATIVE LOGICS, ACCESSED_OBJECT_NAMES: SYS.DUMMY, PUBLIC.DUMMY
Sample Code
Limitations
● EXPLAIN PLAN is generated once per statement. It will not be regenerated regardless of configuration
changes. To regenerate EXPLAIN PLAN, the SQL PLAN CACHE should be cleared via ALTER SYSTEM
CLEAR SQL PLAN CACHE.
● EXPLAIN_CALL_PLANS accumulates execution plans over time. The content is not automatically
deleted.
Description
SAP HANA stores the results of a code coverage session in the M_SQLSCRIPT_CODE_COVERAGE_RESULTS
monitoring view and stores the definitions of objects that were used during a code coverage session in the
M_SQLSCRIPT_CODE_COVERAGE_OBJECT_DEFINITIONS monitoring view.
Syntax
Syntax Elements
<token_id>: specifies the token that the code coverage applies to.
<user_id>: specifies the database user ID that the code coverage applies to.
<application_user_id>: specifies the ID of the application user that the code coverage applies to.
<session_id>: specifies the ID of the session that the code coverage applies to.
After starting code coverage, you can select from the monitoring views at any time and query any column you
are interested in. However, the full content of a code coverage run is visible only after the query triggered in the
second session (the one being covered) finishes (as described in the second example below).
The content of the monitoring views is overwritten each time you stop a SQLScript code coverage session and
start a new one. Since the data is temporary, copy or export the content from these views to retain the data
recorded by a SQLScript code coverage session before executing ALTER SYSTEM STOP SQLSCRIPT CODE
COVERAGE.
You must have at least two connections for code coverage. In the first session you execute the code on which
you run code coverage, and in the second session you start the code coverage for a specific connection ID to
record the coverage.
Caution
You must have the EXECUTE, DEBUG, and ATTACH_DEBUGGER privileges to perform code coverage.
SAP HANA requires two sessions to perform the code coverage. The examples below use session A to execute
the code on which you run code coverage, and session B starts the code coverage for a specific connection ID
to record the coverage.
3. In session B, start code coverage by using the connection ID of the user who is executing the code in
session A (this example uses a connection ID of 203247):
CALL dummy_proc();
5. From session B, view the code coverage by querying the M_SQLSCRIPT_CODE_COVERAGE_RESULTS and
M_SQLSCRIPT_CODE_COVERAGE_OBJECT_DEFINITIONS monitoring views
If required, store the contents of the monitoring views for future reference (this can be a regular or a local
temporary table):
6. From session B, disable the code coverage (this also clears the existing code coverage):
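The session B commands in the steps above can be sketched as follows. The STOP form is quoted earlier in this section; the START form is an assumption modeled on it and should be verified for your release:

```sql
-- Session B: record coverage for the connection executing the code (ID 203247)
ALTER SYSTEM START SQLSCRIPT CODE COVERAGE FOR SESSION 203247;
-- ... session A executes CALL dummy_proc(); ...
-- Copy the temporary results before stopping
CREATE TABLE coverage_copy AS (SELECT * FROM M_SQLSCRIPT_CODE_COVERAGE_RESULTS);
-- Disable coverage (this also clears the existing code coverage)
ALTER SYSTEM STOP SQLSCRIPT CODE COVERAGE;
```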
The SQLScript Code Analyzer consists of two built-in procedures that scan CREATE FUNCTION and CREATE
PROCEDURE statements and search for patterns indicating problems in code quality, security or performance.
Interface
The view SQLSCRIPT_ANALYZER_RULES listing the available rules is defined in the following way:
RULE_NAMESPACE VARCHAR(16)
RULE_NAME VARCHAR(64)
CATEGORY VARCHAR(16)
SHORT_DESCRIPTION VARCHAR(256)
LONG_DESCRIPTION NVARCHAR(5000)
RECOMMENDATION NVARCHAR(5000)
Procedure ANALYZE_SQLSCRIPT_DEFINITION
The procedure ANALYZE_SQLSCRIPT_DEFINITION can be used to analyze the source code of a single
procedure or a single function that has not been created yet. If the source code references objects that do not
yet exist, the procedure or function cannot be analyzed.
Sample Code
) AS BUILTIN
Parameter RULES: Rules to be used for the analysis. Available rules can be retrieved from the view
SQLSCRIPT_ANALYZER_RULES.
Procedure ANALYZE_SQLSCRIPT_OBJECTS
The procedure ANALYZE_SQLSCRIPT_OBJECTS can be used to analyze the source code of multiple already
existing procedures or functions.
Sample Code
Parameter RULES: Rules that should be used for the analysis. Available rules can be retrieved from the view
SQLSCRIPT_ANALYZER_RULES.
Parameter OBJECT_DEFINITIONS: Contains the names and definitions of all objects that were analyzed,
including those without any findings.
Rules
UNNECESSARY_VARIABLE
For each variable, the rule tests whether it is used by any output parameter of the procedure or whether it
influences the outcome of the procedure. Statements relevant for the outcome can be DML statements,
implicit result sets, or conditions of control statements.
UNUSED_VARIABLE_VALUE
If a value assigned to a variable is not used in any other statement, the assignment can be removed. In the case
of default assignments in DECLARE statements, this means the default value is never used.
UNCHECKED_SQL_INJECTION_SAFETY
Parameters of type string should always be checked for SQL injection safety if they are used in dynamic SQL.
This rule checks whether the function is_sql_injection_safe is called for every parameter of that type.
If the condition is more complex (for example, more than one variable is checked in one condition), a warning
will be displayed because it is only possible to check if any execution of the dynamic SQL has passed the SQL
injection check.
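A pattern that satisfies this rule could look like the following sketch (procedure name, error code, and message are illustrative):

```sql
CREATE PROCEDURE exec_on_table (IN tab_name NVARCHAR(256)) AS
BEGIN
  -- Reject unsafe input before it reaches dynamic SQL
  IF is_sql_injection_safe(:tab_name) <> 1 THEN
    SIGNAL SQL_ERROR_CODE 10000 SET MESSAGE_TEXT = 'Invalid table name';
  END IF;
  EXEC 'SELECT * FROM ' || :tab_name;
END;
```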
SINGLE_SPACE_LITERAL
This rule searches for string literals consisting of only one space. If ABAP VARCHAR MODE is used, such string
literals are treated as empty strings. In this case CHAR(32) can be used instead of ' '.
COMMIT_OR_ROLLBACK_IN_DYNAMIC_SQL
This rule detects dynamic SQL that uses the COMMIT or ROLLBACK statements. It is recommended to use
COMMIT and ROLLBACK directly in SQLScript, thus eliminating the need for dynamic SQL.
● It can only check dynamic SQL that uses a constant string (for example, EXEC 'COMMIT';). It cannot detect
dynamic SQL that evaluates any expression (for example, EXEC 'COM' || 'MIT';)
USE_OF_SELECT_IN_SCALAR_UDF
This rule detects and reports SELECT statements in scalar UDFs. SELECT statements in scalar UDFs can affect
performance. If table operations are really needed, procedures or table UDFs should be used instead.
Sample Code
USE_OF_SELECT_IN_SCALAR_UDF
DO BEGIN
tab = SELECT RULE_NAMESPACE, RULE_NAME, category FROM
SQLSCRIPT_ANALYZER_RULES where rule_name = 'USE_OF_SELECT_IN_SCALAR_UDF';
CALL ANALYZE_SQLSCRIPT_DEFINITION('
CREATE FUNCTION f1(a INT) RETURNS b INT AS
BEGIN
DECLARE x INT;
SELECT count(*) into x FROM _sys_repo.active_object;
IF :a > :x THEN
SELECT count(*) INTO b FROM _sys_repo.inactive_object;
ELSE
b = 100;
END IF;
END;', :tab, res);
SELECT * FROM :res;
END;
RULE_NAMESPACE | RULE_NAME | CATEGORY | SHORT_DESCRIPTION | START_POSITION | END_POSITION
USE_OF_UNASSIGNED_SCALAR_VARIABLE
The rule detects variables that are used but were never assigned explicitly. Those variables still have their
default value when used, which might be undefined. It is recommended to assign a default value (which can be
NULL) to be sure that you get the intended value when you read from the variable. If this rule returns a warning
or an error, check in your code whether you have assigned a value to the wrong variable. Always rerun this rule
after changing code, since multiple errors can trigger only a single message and an error may still persist.
For every DECLARE statement this rule returns one of the following:
● <nothing>: if the variable is always assigned before use or not used. Everything is correct.
● Variable <variable> may be unassigned: if there is at least one branch, where the variable is unassigned
when used, even if the variable is assigned in other branches.
● Variable <variable> is used but was never assigned explicitly: if the variable will never have a value assigned
when used.
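A minimal case that triggers the "may be unassigned" finding could look like this sketch (variable name and condition are illustrative):

```sql
DO BEGIN
  DECLARE v INT;                      -- declared without a default value
  IF CURRENT_DATE > '2024-01-01' THEN
    v = 1;                            -- assigned only in this branch
  END IF;
  SELECT :v FROM DUMMY;               -- v may be unassigned here
END;
```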
The rule detects the following DML statements inside loops: INSERT, UPDATE, DELETE, and REPLACE/UPSERT.
Sometimes it is possible to rewrite the loop and use a single DML statement instead to improve performance.
In the following example a table is updated in a loop. This code can be rewritten to update the table with a
single DML statement.
Sample Code
DO BEGIN
tab = select rule_namespace, rule_name, category from
sqlscript_analyzer_rules;
call analyze_sqlscript_definition('
// Optimized version
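The rewrite described above can be illustrated with the following sketch, using a hypothetical table mytab:

```sql
-- Loop version: one UPDATE per iteration
DO BEGIN
  DECLARE i INT;
  FOR i IN 1..100 DO
    UPDATE mytab SET val = val + 1 WHERE id = :i;
  END FOR;
END;

-- Optimized version: a single set-based DML statement
UPDATE mytab SET val = val + 1 WHERE id BETWEEN 1 AND 100;
```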
The rule checks whether Calculation Engine Plan Operators (CE functions) are used. Since they make
optimization more difficult and can lead to performance problems, they should be avoided. For more information
on how to replace them using only plain SQL, see Calculation Engine Plan Operators [page 217].
USE_OF_DYNAMIC_SQL
The rule checks and reports whether dynamic SQL is used within a procedure or a function.
ROW_COUNT_AFTER_SELECT
The rule checks whether the system variable ::ROWCOUNT is used after a SELECT statement.
ROW_COUNT_AFTER_DYNAMIC_SQL
The rule checks whether the system variable ::ROWCOUNT is used after the use of dynamic SQL.
Examples
Sample Code
DO BEGIN
tab = SELECT rule_namespace, rule_name, category FROM
SQLSCRIPT_ANALYZER_RULES; -- selects all rules
CALL ANALYZE_SQLSCRIPT_DEFINITION('
CREATE PROCEDURE UNCHECKED_DYNAMIC_SQL(IN query NVARCHAR(500)) AS
BEGIN
DECLARE query2 NVARCHAR(500) = ''SELECT '' || query || '' from
tab'';
EXEC :query2;
query2 = :query2; --unused variable value
END', :tab, res);
SELECT * FROM :res;
END;
Sample Code
DO BEGIN
tab = SELECT rule_namespace, rule_name, category FROM
SQLSCRIPT_ANALYZER_RULES;
to_scan = SELECT schema_name, procedure_name object_name, definition
FROM sys.procedures
WHERE procedure_type = 'SQLSCRIPT2' AND schema_name
IN('MY_SCHEMA','OTHER_SCHEMA')
ORDER BY procedure_name;
CALL analyze_sqlscript_objects(:to_scan, :tab, objects, findings);
SELECT t1.schema_name, t1.object_name, t2.*, t1.object_definition
FROM :findings t2
JOIN :objects t1
ON t1.object_definition_id = t2.object_definition_id;
END;
Due to the nature of static code analysis, the SQLScript Code Analyzer may produce false positives. To avoid
confusion when analyzing large procedures with many findings, and potentially many false positives, the Code
Analyzer offers a way to manually suppress these false positives.
You can use SQLScript pragmas to define which rules should be suppressed. The pragma name is
AnalyzerSuppress and it must have at least one argument describing which rule should be suppressed.
Sample Code
Related Information
The Code Analyzer has limited support for Continue Handler. The Continue Handler blocks are currently not
analyzed as a normal part of a procedure. Consider the following example:
Sample Code
The Code Analyzer will return a finding that the parameter 'tablename' is used within DSQL, although the
example is safe against injections.
If you look into the following example, you will see that the handler block is analyzed on its own:
In this case the Code Analyzer will not return a finding because the injection handling is performed in the
handler block itself.
Sample Code
In this case it is expected that the Code Analyzer will return a finding stating that the value of 'var2' is
not used. However, currently most checks related to library member variables are not supported, including the
following scenario:
Sample Code
In this case the Code Analyzer does not return a warning stating that 'query1' is used in dynamic SQL
without being checked.
Limitations of UNCHECKED_SQL_INJECTION_SAFETY
Sample Code
The example above returns a finding even though the procedure is injection safe.
If a SQLScript variable is used within a query, the Code Analyzer assumes that it is contained in the result.
Sample Code
In the example above 'query' is not contained in 'some_value' but is considered unsafe. There is no
further analysis whether the output of the query possibly contains (parts of) the SQLScript variable inputs.
2. Nested procedure calls are also not analyzed.
Sample Code
In the example above, the Code Analyzer also returns a finding because it does not analyze the inner procedure
'escape_proc'.
3. There are also limitations for structured types, like array variables, row variables or table variables.
A variable of structured type is considered one unit. It is either affected by an unchecked input completely,
or not at all.
Sample Code
Container Example
In the example above, the Code Analyzer will return a finding because the row variable 'r' is considered
one unit. Because the in parameter 'query' is assigned directly (without escaping) to 'r.a', the variable
'r' as a whole is considered affected by the input variable. Thus every operation that uses any part of 'r'
is assumed to use the unescaped version of 'query'.
Related Information
SQLScript Plan Profiler is a new performance analysis tool designed mainly for stored procedures and
functions. When SQLScript Plan Profiler is enabled, a single tabular result per call statement is generated. The
result table contains the start time, end time, CPU time, wait time, thread ID, and some additional details for
each predefined operation. The predefined operations can be anything that is considered important for
analyzing the engine performance of stored procedures and functions, covering both compilation and
execution time. The tabular results are displayed in the new monitoring view
M_SQLSCRIPT_PLAN_PROFILER_RESULTS in SAP HANA.
There are two ways to start the profiler and to check the results.
ALTER SYSTEM
You can use the ALTER SYSTEM command with the following syntax:
Code Syntax
● START
When the START command is executed, the profiler checks if the exact same filter has already been
applied and if so, the command is ignored. You can check the status of enabled profilers in the monitoring
view M_SQLSCRIPT_PLAN_PROFILERS. Results are available only after the procedure execution has
finished. If you apply a filter by procedure name, only the outermost procedure calls are returned.
Sample Code
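A sketch of the START variants, modeled on the filter forms shown for CLEAR below (the exact filter syntax should be verified for your release):

```sql
ALTER SYSTEM START SQLSCRIPT PLAN PROFILER;                     -- all procedures
ALTER SYSTEM START SQLSCRIPT PLAN PROFILER FOR SESSION 222222;  -- one connection
ALTER SYSTEM START SQLSCRIPT PLAN PROFILER FOR PROCEDURE S1.P1; -- one procedure
```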
● STOP
When the STOP command is executed, the profiler disables all started profilers that are included in
the filter condition (no exact filter match is needed). The STOP command does not affect results that
have already been profiled.
● CLEAR
The CLEAR command is independent of the status of profilers (running or stopped). The CLEAR command
clears profiled results based on the PROCEDURE_CONNECTION_ID, PROCEDURE_SCHEMA_NAME, and
PROCEDURE_NAME in M_SQLSCRIPT_PLAN_PROFILER_RESULTS. If the results are not cleared, the
oldest data will be automatically deleted when the maximum capacity is reached.
ALTER SYSTEM CLEAR SQLSCRIPT PLAN PROFILER FOR SESSION 222222; -- deletes records with PROCEDURE_CONNECTION_ID = 222222
ALTER SYSTEM CLEAR SQLSCRIPT PLAN PROFILER FOR PROCEDURE S1.P1; -- deletes records with PROCEDURE_SCHEMA_NAME = S1 and PROCEDURE_NAME = P1
ALTER SYSTEM CLEAR SQLSCRIPT PLAN PROFILER; -- deletes all records
Note
The <filter> does not check the validity or existence of <session_id> or <procedure_id>.
SQL Hint
You can use the SQL HINT command to start the profiler with the following syntax:
Using an SQL hint is the most convenient way to enable the profiler. In that case, the profiling result is returned
as an additional result set. If the profiler has already been enabled by means of the ALTER SYSTEM command,
the result will also be visible in the monitoring view.
Currently both hint and system commands can be used to enable the SQLScript Plan Profiler for anonymous
blocks.
Sample Code
DO BEGIN
select * from dummy;
END WITH HINT(SQLSCRIPT_PLAN_PROFILER); -- returns additional result set
Sample Code
You can check the status of the profiler by using the following command:
Sample Code
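The status check presumably queries the monitoring view mentioned above:

```sql
SELECT * FROM M_SQLSCRIPT_PLAN_PROFILERS;  -- lists enabled profilers and their filters
```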
Example
Memory Usage
Description
The following columns are used to track the memory usage of each operator (similarly to CPU times and
ACTIVE times):
● USED_MEMORY_SIZE_SELF: Memory used in the operation itself, excluding its children (in bytes)
● USED_MEMORY_SIZE_CUMULATIVE: Total memory used in the operation itself and its children (in bytes)
These columns show the memory usage of each SQL statement, similar to
STATEMENT_EXECUTION_MEMORY_SIZE and STATEMENT_MATERIALIZATION_MEMORY_SIZE in
M_ACTIVE_PROCEDURES. For entries whose memory consumption is not collected or not calculated, the
value displayed is -1.
The following two configurations must be enabled to activate the resource tracking:
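These are the resource tracking switches; a common setup places them in the resource_tracking section of global.ini (section and parameter names should be verified for your system):

```sql
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('resource_tracking', 'enable_tracking') = 'on',
      ('resource_tracking', 'memory_tracking') = 'on'
  WITH RECONFIGURE;
```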
Sample Code
do begin
v1 = select * from small_table with hint(no_inline);
v2 = select * from big_table with hint(no_inline);
select * from :v1 union all select * from :v2;
end with hint(sqlscript_plan_profiler);
OPERATOR | OPERATOR_STRING | OPERATOR_DETAILS | USED_MEMORY_SELF | USED_MEMORY_CUMULATIVE
Do | | | -1 | 4084734
Sequential Op | | | -1 | 4084734
Initial Op | | | -1 | -1
Parallel Op | | | -1 | 4084734
Execute SQL Statement | ... | statement execution memory: <a>, itab size: <b> | 4035899 (<a> + <b>) | 4035899
Execute SQL Statement | ... | statement execution memory: <c>, itab size: <d> | 16067 (<c> + <d>) | 16067
Flow Control Op | | | -1 | -1
Terminal Op | | | -1 | -1
Nested Calls
Description
The following columns provide more detailed information about nested calls:
Example
Sample Code
as begin
end;
as begin
end;
PROCEDURE_SCHEMA_NAME | PROCEDURE_NAME | OPERATOR | OPERATOR_STRING | OPERATOR_SCHEMA_NAME | OPERATOR_PROCEDURE_NAME | OPERATOR_LINE | OPERATOR_COLUMN | OPERATOR_POSITION
SYSTEM | P1 | Compile
SYSTEM | P1 | Initial Op
SYSTEM | P1 | Compile
SYSTEM | P1 | Initial Op
SYSTEM | P1 | Terminal Op
SYSTEM | P1 | Terminal Op
With pragmas, SQLScript offers a new way of providing meta information. Pragmas can be used to annotate
SQLScript code, but they have no function themselves and only affect other statements and declarations.
Pragmas are clearly distinct syntax elements, similar to comments; but while comments provide information to
the reader of the code, pragmas provide information to the compiler and the code analyzer.
Syntax
Code Syntax
Procedure Head
Procedure Body
Note
The keywords pushscope and popscope are not case sensitive. PuShScopE is equal to pushscope and
PUSHSCOPE.
Semantics
While the exact semantics depend on the specific pragma type, there are rules that apply to pragmas in
general. The identifier is case insensitive, which means that pragma and PrAgMa are recognized as the same
pragma. However, pragma arguments are case sensitive.
Pragma scopes affect all declarations or statements between one pushscope and the next popscope with all
the pragmas that are specified in the pushscope.
Sample Code
do begin
@pushscope(@AnalyzerSuppress('SAP.UNNECESSARY_VARIABLE.CONSISTENCY'))
declare a int;
declare b nvarchar(500);
@popscope()
declare c date;
select :c from dummy;
end
In the example above, the declarations for a and b are affected by the pragma AnalyzerSuppress, while the
declaration for c and the SELECT statement are not.
Pragma scopes are independent of the logical structure of the code. This means that irrespective of which
parts of the code are executed, the pragma scopes always affect the same statements and declarations.
Sample Code
@pushscope(@AnalyzerSuppress('SAP.USE_OF_UNASSIGNED_SCALAR_VARIABLE.CONSISTENC
Y'))
if a < b then
In this example, the assignment on line 9 will never be affected by the pragma. The SELECT statement, on the
other hand, will always be affected by the pragma.
When using both pushscopes and single pragmas before declarations or statements, all pushscopes must
precede the first single pragma. It is not allowed to mix pushscopes and single pragmas arbitrarily. For more
information, see the examples in the section Limitations.
Single pragmas affect the next statement or declaration. This includes everything that is contained by the
statement or declaration.
Sample Code
do begin
@AnalyzerSuppress('SAP.UNNECESSARY_VARIABLE.CONSISTENCY')
declare a, b, c int;
@AnalyzerSuppress('SAP.USE_OF_UNASSIGNED_SCALAR_VARIABLE.CONSISTENCY')
a = :b + 1;
end
In this example the single pragma on line 2 will affect the declarations of the three variables a, b and c. The
single pragma on line 4 will affect the assignment and all parts of it. This also includes the expression :b + 1 on
the right hand side.
There is an exception for statements that contain blocks, that is basic blocks, loops and conditionals. The
pragmas that are attached to a basic block, a loop or a conditional will not affect the declarations and
statements within those blocks.
Sample Code
do begin
@AnalyzerSuppress('SAP.UNNECESSARY_VARIABLE.CONSISTENCY')
begin
declare a nvarchar(50);
select * from dummy;
end;
end
In this example, neither the declaration of a nor the SELECT statement is affected by the pragma. Since such
blocks belong to the normal SQLScript code, you can add pragmas or pragma scopes inside them directly.
Available Pragmas
AnalyzerSuppress('NAME_SPACE.RULE_NAME.CATEGORY', ...)
Sample Code
do begin /*allowed*/
@pushScope(@AnalyzerSuppress('SAP.UNUSED_VARIABLE_VALUE.CONSISTENCY'))
@AnalyzerSuppress('SAP.UNNECESSARY_VARIABLE.CONSISTENCY')
declare a, b int = 5;
@popscope()
end
do begin /*allowed*/
@pushScope(@AnalyzerSuppress('SAP.UNUSED_VARIABLE_VALUE.CONSISTENCY'))
declare a int;
@AnalyzerSuppress('SAP.UNNECESSARY_VARIABLE.CONSISTENCY')
declare b int = 5;
@popscope()
end
do begin /*allowed*/
@pushScope(@AnalyzerSuppress('SAP.UNUSED_VARIABLE_VALUE.CONSISTENCY'))
declare a int;
@AnalyzerSuppress('SAP.UNNECESSARY_VARIABLE.CONSISTENCY')
@someOtherPragma()
declare b int = 5;
@popscope()
end
It is not allowed to use pragma scopes within the parameter declaration list and in the declaration list before
the initial begin of a procedure.
Sample Code
-- not allowed
create procedure
wrong_proc(@pushscope(@AnalyzerSuppress('SAP.UNNECESSARY_VARIABLE.CONSISTENCY')) in a int, in b nvarchar @popscope())
as begin
select * from dummy;
end
-- not allowed
create procedure wrong_proc as
@pushscope(@AnalyzerSuppress('SAP.UNNECESSARY_VARIABLE.CONSISTENCY'))
a int;
b nvarchar;
Related Information
The already existing mechanism of using libraries in SQLScript is re-used for the purposes of writing end-user
tests. The language type SQLSCRIPT TEST has been introduced to specify that a library contains end-user
tests. Currently, this language type can be only used for libraries.
Note
To ensure a clear separation between productive and test-only coding, libraries of that language type
cannot be used in any function, procedure or library that does not utilize the language type SQLSCRIPT
TEST.
Within the body of such a test library, you can use some of the SQLScript pragmas to mark a library member
procedure as a test or test-related coding: @Test(), @TestSetup(), @TestTeardown(),
@TestSetupConfig('ConfigName'), @TestTeardownConfig('ConfigName'),
@TestSetupLibrary() as well as @TestTearDownLibrary(). Those pragmas are supported only for
library member procedures and the procedures may not have any parameters.
Note
All of these pragmas are optional and not required by default within an SQLSCRIPT TEST library. However, to
enable a library member procedure to be invoked as an end-user test by the SQLScript Test Framework, at
least the @Test() pragma is required.
Sample Code
@TestClassification('FAST','base')
@TestSetUpConfig('config1')
public procedure SetUpConfig1() as
begin
truncate table tab_test;
insert into tab_test values(1, 'first entry');
insert into tab_test values(2, 'second entry');
insert into tab_test values(3, 'third entry');
end;
@TestSetUpConfig('config2')
public procedure SetUpConfig2() as
begin
truncate table tab_test;
insert into tab_test values(5, 'fifth entry');
insert into tab_test values(6, 'sixth entry');
insert into tab_test values(7, 'seventh entry');
end;
@TestSetUpConfig('config3')
public procedure SetUpConfig3() as
begin
truncate table tab_test;
insert into tab_test values(5, 'some pattern string');
end;
@TestTearDownConfig('config1', 'config2', 'config3')
public procedure TearDownConfig() as
begin
truncate table tab_test;
end;
@TestSetup()
public procedure SetUpTest() as
begin
using sqlscript_test as testing;
declare num_entries int = record_count(tab_test);
testing:expect_ne(0, num_entries);
end;
@TestTeardown()
public procedure TearDownTest() as
begin
select 'whatever' from dummy;
end;
@TestClassification('SLOW')
@Test()
public procedure TestA as
begin
using sqlscript_test as testing;
tab1 = select 'A1' as A from dummy;
tab2 = select 'A2' as A from dummy;
testing:expect_table_eq(:tab1, :tab2);
end;
@Test()
public procedure TestC as
begin
using sqlscript_test as testing;
declare str nclob;
call proc_test(:str);
testing:expect_eq('some replaced string', :str);
end;
END;
To run the example SQLSCRIPT TEST library above, you would also need an object to be tested, for example the
following procedure:
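A minimal sketch of such a procedure, consistent with how TestC uses it, might look as follows. The column layout of tab_test (key INT, value NVARCHAR) and the REPLACE logic are assumptions; the original procedure is not reproduced here.

```sql
-- Hypothetical sketch of the procedure under test, not the original example.
-- Assumes tab_test has columns (key INT, value NVARCHAR(100)).
create procedure proc_test(out str nclob)
reads sql data as
begin
  declare val nvarchar(100);
  select value into val from tab_test where key = 5;
  -- turns 'some pattern string' (inserted by SetUpConfig3)
  -- into 'some replaced string', which TestC expects
  str = replace(:val, 'pattern', 'replaced');
end;
```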
When invoking end-user tests, the SQLScript Test Framework considers the member procedures of the
SQLSCRIPT TEST library that are marked with one of the pragmas mentioned above. It is, however, still
possible to have additional member functions or procedures without any pragmas in such a library. These can
then serve as helpers or be used to factor out common coding.
The order of execution of library member procedures having these pragmas is defined as follows:
1. @TestSetupLibrary()
2. @TestSetupConfig('Config1')
3. @TestSetup()
4. @Test()
5. @TestTeardown()
6. @TestSetup()
7. @Test()
8. @TestTeardown()
9. [...]
10. @TestTeardownConfig('Config1')
11. @TestSetupConfig('Config2')
12. @TestSetup()
13. @Test()
14. @TestTeardown()
15. @TestSetup()
16. @Test()
17. @TestTeardown()
18. [...]
19. @TestTeardownConfig('Config2')
20. [...]
21. @TestTeardownLibrary()
Note
If the execution of a library member procedure having one of the SetUp pragmas fails, the
corresponding TearDown, as well as the tests, will not be executed. With the @TestClassification(…)
pragma, SetUpLibrary, SetUpConfig and Test procedures can be assigned additional tags
that can be used in test filters.
Related Information
The entry point of the end-user test framework in SQLScript is the built-in procedure
SYS.SQLSCRIPT_RUN_TESTS_ON_ORIGINAL_DATA.
Note
As the name of the procedure indicates, the tests are run on the existing data in the system. You need to
pay special attention when writing tests that change or delete objects or data in the system because others
may be influenced by these changes. Tests themselves may also be influenced by other tests running in
parallel on the same system.
Users do not have the EXECUTE privilege for the built-in procedure
SYS.SQLSCRIPT_RUN_TESTS_ON_ORIGINAL_DATA by default. You need to get this privilege granted (for
example, by a SYSTEM user).
To invoke end-user tests in the SQLScript test framework, the following CALL statement has to be executed.
CALL SYS.SQLSCRIPT_RUN_TESTS_ON_ORIGINAL_DATA('<json_string>', ?, ?, ?)
Note
Wildcards can be used to specify values in the JSON string ('*' matches any number of characters, '?'
matches exactly one character).
CALL SYS.SQLSCRIPT_RUN_TESTS_ON_ORIGINAL_DATA('{"schema":"MY_SCHEMA","library":"*"}', ?, ?, ?)
CALL SYS.SQLSCRIPT_RUN_TESTS_ON_ORIGINAL_DATA('{"schema":"MY_SCHEMA","library":"LIB*TEST"}', ?, ?, ?)
CALL SYS.SQLSCRIPT_RUN_TESTS_ON_ORIGINAL_DATA('[{"schema":"MY_SCHEMA","library":"SOME_PREFIX_*"},{"schema":"OTHER_SCHEMA","library":"*_SOME_SUFFIX"}]', ?, ?, ?)
The first call to SYS.SQLSCRIPT_RUN_TESTS_ON_ORIGINAL_DATA runs all tests (in all their configurations)
of all libraries with language type SQLSCRIPT TEST in the schema MY_SCHEMA. The second call does the
same but applies a filter to the libraries to be executed: only SQLSCRIPT TEST libraries whose names start
with 'LIB' and end with 'TEST' are executed by the test framework. The third call executes libraries in the
schema MY_SCHEMA whose names start with 'SOME_PREFIX_', as well as libraries with language type
SQLSCRIPT TEST in the schema OTHER_SCHEMA whose names end with '_SOME_SUFFIX'.
The complete definition of what can be provided in the JSON string of the test plan is described below.
Note
Examples:
Sample Code
[{
"schema":"MY_SCHEMA",
"library":"*"
},
{
"library": "MY_LIB",
"run": [{
"exclude-tests": ["A", "B"],
"configurations": ["config1", "config3"]
},
{
"tests": ["A", "B"],
"exclude-configurations": ["config2"]
}]
},
{
"schema": "MY_SCHEMA",
"library": "*",
"run": [{
"tests": ["*TEST*KERNEL*"],
"exclude-tests": ["DISABLED_*"],
"exclude-configurations": ["*SCALE_OUT*"]
},
{
"configurations": ["*SINGLE_NODE*", "*SCALE_OUT*"],
"exclude-configurations": ["*STRESS_TEST*"]
}]
}]
Behavior
Note
Each entry in <run_spec_list> will cause a separate list of tests and configurations to be added to the
test plan depending on the values of the inner <run_spec_member> entries. In that way some tests as well
as configurations of the same library may be executed repeatedly by the test framework.
Classifications can be specified on multiple levels and the filtering based on classifications also needs to be
performed on multiple levels.
● If a classification specifier of a library member (the classification specified with the pragma) matches a
pattern in the exclude specification, this member and everything it includes will not be executed. For
example, if a SetUpLibrary matches an exclude classification, nothing in this library will be executed. For
a config, it means that no test will be executed in this config. For a test, it just means that this test is
not executed.
● If the classification specifier does not match the exclude specification, the library, the configuration, or the
test is executed.
● If a classification specifier of a library member matches a pattern in the include specification, this member
and everything it includes will be executed, unless an exclude specification matches.
● If the classification specifier does not match the include specification, only the included members that
match the specification will be executed.
● Tests that do not match will not be executed.
Sample Code
If classification 'clas0' is included, everything will be executed. If classification 'clas1' is included, everything in
configuration 'A' will be executed. If classification 'clas2' is included, only 'TESTA' will be executed but in both
configurations - 'A' and 'B'.
If classification 'clas0' is included and 'clas1' excluded, only the configuration 'B' will be executed (with both
tests). If classification 'clas0' is included and 'clas2' is excluded, only 'TESTB' will be executed but in both
configurations - 'A' and 'B'. If classification 'clas1' is included and 'clas2' excluded, only 'TESTB' in configuration
'A' will be executed.
If classification 'clas2' is included and 'clas0' excluded, nothing will be executed. If classification 'clas2' is
included and 'clas1' excluded, only 'TESTA' will be executed and only in configuration 'B'. If classification 'clas1'
is included and 'clas0' excluded, nothing will be executed.
Output
Results
Call Stacks
To check which tests and configurations will be invoked by the test framework for a given JSON test plan
description, the built-in library SYS.SQLSCRIPT_TEST contains two additional procedures.
LIST_TESTS returns every test that would be executed at least once. LIST_CONFIGURATIONS returns every
configuration that would execute at least one test. The result set will not contain any duplicates.
CALL SYS.SQLSCRIPT_TEST:LIST_TESTS('<json_string>', ?)
CALL SYS.SQLSCRIPT_TEST:LIST_CONFIGURATIONS('<json_string>', ?)
Sample Code
Examples
CALL SYS.SQLSCRIPT_TEST:LIST_TESTS('{"schema":"MY_SCHEMA","library":"*"}', ?)
CALL SYS.SQLSCRIPT_TEST:LIST_TESTS('{"schema":"MY_SCHEMA","library":"LIB*TEST"}', ?)
CALL SYS.SQLSCRIPT_TEST:LIST_CONFIGURATIONS('[{"schema":"MY_SCHEMA","library":"SOME_PREFIX_*"},{"schema":"OTHER_SCHEMA","library":"*_SOME_SUFFIX"}]', ?)
Within the SQLSCRIPT TEST libraries, certain procedures of the built-in library SYS.SQLSCRIPT_TEST can be
used to verify results within end-user tests.
Currently, there are several matchers for scalar variables, one matcher for table variables and one that aborts
the execution of the current test. The matchers for scalar variables are:
EXPECT_GE Checks if the first input is greater than or equal to the second input
EXPECT_GT Checks if the first input is greater than the second input
EXPECT_LE Checks if the first input is less than or equal to the second input
EXPECT_LT Checks if the first input is less than the second input
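For illustration, a scalar matcher inside an SQLSCRIPT TEST library member might be used as follows (the procedure name and the use of tab_test are hypothetical):

```sql
@Test()
public procedure TestAtLeastOneRow as
begin
  using sqlscript_test as testing;
  declare num_entries int = record_count(tab_test);
  -- passes if tab_test contains at least one row
  testing:expect_ge(:num_entries, 1);
end;
```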
All scalar matchers, except EXPECT_NULL, take exactly two scalar input arguments. The data types of these
two inputs must be comparable in SQLScript. Most of the data types can be categorized in three classes: string
types, numeric types and date types. While all types within the same class are comparable to each other, it is
not possible to compare date and numeric types. String types can be compared to every other data type but
will be converted to a non-string type prior to the comparison. Whenever two different data types are
compared, at least one of the inputs will be converted. If the conversion fails, this is treated as a normal
execution error rather than a matcher failure.
The table matcher (EXPECT_TABLE_EQ) has three input arguments. Besides the two table variables that
should be compared, there is a third optional input - IGNORE_ORDER. This parameter is TRUE by default and
will compare the table variables without considering the order of rows. For example, row 2 of the first input
might match row 5 of the second input. However, every row will be matched to at most one row in the other
table variable. The two input table variables must have an equal number of columns, and the columns must
have the same names. The data types of the columns have to be comparable as well. If the types of the table
columns are different, one of the columns will be converted before the comparison. Unlike in scalar
comparisons, this will not lead to a run-time error if such a conversion fails. Instead, the row will always be
considered a mismatch. One additional difference to scalar matchers is the handling of NULL values. For scalar
matchers, anything compared to NULL is false (even NULL). The table matcher assumes that NULL is equal to
NULL.
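A sketch of the default IGNORE_ORDER behavior (the test procedure name is hypothetical):

```sql
@Test()
public procedure TestIgnoreOrder as
begin
  using sqlscript_test as testing;
  tab1 = select 1 as id from dummy union all select 2 as id from dummy;
  tab2 = select 2 as id from dummy union all select 1 as id from dummy;
  -- IGNORE_ORDER defaults to TRUE, so the differing row order does not matter
  testing:expect_table_eq(:tab1, :tab2);
end;
```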
The built-in library SQLSCRIPT_TEST also contains a procedure named FAIL. Similarly to a matcher, this
procedure adds an entry to the Details output table of SYS.SQLSCRIPT_RUN_TESTS_ON_ORIGINAL_DATA,
including the error message that was provided as an input argument to FAIL. Afterwards, the procedure
aborts the execution of the current test. Subsequent tests will still be executed.
So far this document has introduced the syntax and semantics of SQLScript. This knowledge is sufficient for
mapping functional requirements to SQLScript procedures. However, besides functional correctness, non-
functional characteristics of a program play an important role for user acceptance. For instance, one of the
most important non-functional characteristics is performance.
The following optimizations all apply to statements in SQLScript. The optimizations presented here cover how
dataflow exploits parallelism in the SAP HANA database.
● Reduce Complexity of SQL Statements: Break up a complex SQL statement into many simpler ones. This
makes a SQLScript procedure easier to comprehend.
● Identify Common Sub-Expressions: If you split a complex query into logical sub queries it can help the
optimizer to identify common sub expressions and to derive more efficient execution plans.
● Multi-Level-Aggregation: In the special case of multi-level aggregations, SQLScript can exploit results at a
finer grouping for computing coarser aggregations and return the different granularities of groups in
distinct table variables. This could save the client the effort of reexamining the query result.
● Reduce Dependencies: As SQLScript is translated into a dataflow graph, and independent paths in this
graph can be executed in parallel, reducing dependencies enables better parallelism, and thus better
performance.
● Avoid Using Cursors: Check if use of cursors can be replaced by (a flow of) SQL statements for better
opportunities for optimization and exploiting parallel execution.
● Avoid Using Dynamic SQL: Executing dynamic SQL is slow because compile-time checks and query
optimization must be done for every invocation of the procedure. A related problem is security, because
constructing SQL statements without proper checks of the variables used may open security vulnerabilities.
Variables in SQLScript enable you to arbitrarily break up a complex SQL statement into many simpler ones.
This makes a SQLScript procedure easier to comprehend.
Writing this query as a single SQL statement requires either the definition of a temporary view (using WITH), or
the multiple repetition of a sub-query. The two statements above break the complex query into two simpler
SQL statements that are linked by table variables. This query is much easier to understand because the names
of the table variables convey the meaning of the query and they also break the complex query into smaller
logical pieces.
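As an illustration of this pattern, a complex query over hypothetical books data could be split into two statements linked by table variables (all table and column names here are assumptions, not the original example):

```sql
-- Hypothetical sketch: a complex query split into two statements
-- linked by table variables.
big_pub_ids = select publisher as pid
              from books
              group by publisher
              having count(isbn) > 100;
big_pub_books = select b.title, b.price
                from :big_pub_ids p, books b
                where b.publisher = p.pid;
```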
The query examined in the previous topic contained common sub-expressions. Such common sub-expressions
might introduce expensive repeated computation that should be avoided.
It is very complicated for query optimizers to detect common sub-expressions in SQL queries. If you break up a
complex query into logical subqueries it can help the optimizer to identify common sub-expressions and to
derive more efficient execution plans. If in doubt, you should employ the EXPLAIN plan facility for SQL
statements to investigate how the SAP HANA database handles a particular statement.
Computing multi-level aggregation can be achieved by using grouping sets. The advantage of this approach is
that multiple levels of grouping can be computed in a single SQL statement.
For example:
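A sketch using a hypothetical sales table (the table and column names are assumptions):

```sql
-- Two levels of grouping computed in a single SQL statement
SELECT product, region, SUM(amount)
  FROM sales
  GROUP BY GROUPING SETS ((product, region), (product));
```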
To retrieve the different levels of aggregation, the client must typically examine the result repeatedly, for
example, by filtering by NULL on the grouping attributes.
In the special case of multi-level aggregations, SQLScript can exploit results at a finer grouping for computing
coarser aggregations and return the different granularities of groups in distinct table variables. This could save
the client the effort of re-examining the query result. Consider the above multi-level aggregation expressed in
SQLScript:
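A sketch, again assuming a hypothetical sales table:

```sql
do begin
  -- the finer grouping is computed once...
  prod_region = select product, region, sum(amount) as amount
                from sales
                group by product, region;
  -- ...and reused to compute the coarser aggregation
  prod_only = select product, sum(amount) as amount
              from :prod_region
              group by product;
  -- each granularity is returned as a distinct result
  select * from :prod_region;
  select * from :prod_only;
end;
```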
One of the most important methods for speeding up processing in the SAP HANA database is through
massively parallelized query execution.
Parallelization is exploited at multiple levels of granularity. For example, the requests of different users can be
processed in parallel, and single relational operators within a query can also be executed on multiple cores in
parallel. It is also possible to execute different statements of a single SQLScript procedure in parallel if these
statements are independent of each other. Remember that SQLScript is translated into a dataflow graph, and
independent paths in this graph can be executed in parallel.
As an SQLScript developer, you can support the database engine in its attempt to parallelize execution by
avoiding unnecessary dependencies between separate SQL statements, and by using declarative constructs if
possible. The former means avoiding variable references, and the latter means avoiding imperative features,
such as cursors.
While the use of cursors is sometimes required, they also imply row-by-row processing. Consequently,
opportunities for optimizations by the SQL engine are missed. You should therefore consider replacing
cursor loops with SQL statements.
Read-Only Access
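A sketch of replacing a read-only cursor loop with a single aggregate statement (the table and column names are hypothetical):

```sql
do begin
  declare total decimal(18,2);
  -- replaces a cursor loop that would add up :amount row by row
  select sum(amount) into total from sales;
end;
```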
Computing this aggregate in the SQL engine may result in parallel execution on multiple CPUs inside the SQL
executor.
Computing this in the SQL engine reduces the calls through the runtime stack of the SAP HANA database. It
also potentially benefits from internal optimizations like buffering and parallel execution.
Like updates and deletes, computing this statement in the SQL engine reduces the calls through the runtime
stack of the SAP HANA database. It also potentially benefits from internal optimizations like buffering and
parallel execution.
Dynamic SQL is a powerful way to express application logic. It allows SQL statements to be constructed at the
execution time of a procedure. However, executing dynamic SQL is slow because compile-time checks and
query optimization must be performed for every invocation of the procedure.
Another related problem is security, because constructing SQL statements without proper checks of the
variables used can create a security vulnerability, like an SQL injection, for example. Using variables in SQL
statements prevents these problems because type checks are performed at compile time and parameters
cannot inject arbitrary SQL code.
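As a sketch (table and variable names hypothetical), contrast a dynamically constructed statement with one that uses the variable directly:

```sql
-- Risky: the value of :region becomes part of the SQL text (injection-prone)
exec 'select * from sales where region = ''' || :region || '''';
-- Safe: the variable is type-checked at compile time and cannot inject SQL
result = select * from sales where region = :region;
```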
The table below summarizes potential use cases for dynamic SQL:
This section contains information about creating applications with SQLScript for SAP HANA.
In this section we briefly summarize the concepts employed by the SAP HANA database for handling
temporary data.
Table Variables are used to conceptually represent tabular data in the data flow of a SQLScript procedure. This
data may or may not be materialized into internal tables during execution. This depends on the optimizations
applied to the SQLScript procedure. Their main use is to structure SQLScript logic.
Temporary Tables are tables that exist within the lifetime of a session. A single connection can have
multiple sessions. In most cases, disconnecting and re-establishing a connection is used to terminate a session.
The schema of global temporary tables is visible for multiple sessions. However, the data stored in this table is
private to each session. In contrast, for local temporary tables neither the schema nor the data is visible
outside the present session. In most aspects, temporary tables behave like regular column tables.
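A sketch of the two variants (table names hypothetical):

```sql
-- Global temporary table: schema visible to all sessions, data private per session
create global temporary table gtt_results (id int, val nvarchar(50));
-- Local temporary table (note the # prefix): neither schema nor data
-- visible outside the current session
create local temporary table #ltt_results (id int, val nvarchar(50));
```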
Persistent Data Structures, such as sequences, are sometimes needed only within a procedure call. However,
sequences are always globally defined and visible (assuming the correct privileges). For temporary usage,
even in the presence of concurrent invocations of a procedure, you can devise a naming scheme that avoids
clashes between such sequences. Such a sequence can then be created using dynamic SQL.
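A sketch of creating such a sequence with dynamic SQL; the naming scheme based on the connection ID is an assumption:

```sql
do begin
  -- suffix chosen so that concurrent invocations do not collide (hypothetical scheme)
  declare suffix nvarchar(20) = current_connection;
  exec 'create sequence tmp_seq_' || :suffix || ' start with 1';
  -- ... use the sequence via dynamic SQL ...
  exec 'drop sequence tmp_seq_' || :suffix;
end;
```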
Ranking can be performed using a Self-Join that counts the number of items that would get the same or lower
rank. This idea is implemented in the sales statistical example below.
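A sketch of the self-join idea with a hypothetical sales_amounts table (not the original sales statistical example):

```sql
-- Each row's rank = number of rows with the same or higher amount
ranking = select s1.product, s1.amount,
                 (select count(*) from sales_amounts s2
                   where s2.amount >= s1.amount) as sales_rank
          from sales_amounts s1;
```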
Related Information
In this document we have discussed the syntax for creating SQLScript procedures and calling them. Besides
the SQL command console for invoking a procedure, calls to SQLScript can also be embedded into client code.
In this section we present examples of how this can be done.
The best way to call SQLScript from ABAP is by means of the AMDP framework. That framework manages the
lifecycle of SQLScript objects and embeds them as ABAP objects (classes). The development, maintenance,
and transport is performed on the ABAP side. A call of an AMDP corresponds to a class method call in ABAP.
The AMDP framework takes care of generating and calling the corresponding database objects.
For more information, see ABAP - Keyword Documentation → ABAP - Reference → Processing External Data →
ABAP Database Accesses → AMDP - ABAP Managed Database Procedures.
Tip
You can call SQLScript from ABAP by using a procedure proxy that can be natively called from ABAP by
using the built-in command CALL DATABASE PROCEDURE. However, it is recommended to use AMDP.
The SQLScript procedure has to be created as usual in the SAP HANA Studio with the HANA Modeler. After
this, a procedure proxy can be created using the ABAP Development Tools for Eclipse. In the procedure proxy,
the type mapping between ABAP and HANA data types can be adjusted. The procedure proxy is transported
as usual with the ABAP transport system, while the HANA procedure may be transported within a delivery unit
as a TLOGO object.
Calling the procedure in ABAP is very simple. The example below shows calling a procedure with two inputs
(one scalar, one table) and one (table) output parameter:
Using the connection clause of the CALL DATABASE PROCEDURE command, it is also possible to call a
database procedure using a secondary database connection. Please consult the ABAP help for detailed
instructions on how to use the CALL DATABASE PROCEDURE command and for the exceptions that may be
raised.
For more information, see ABAP - Keyword Documentation → ABAP - Reference → Processing External Data →
ABAP Database Accesses → ABAP and SAP HANA → Access to Objects in SAP HANA XS → Access to SAP
HANA XSC Objects → Database Procedure Proxies for SQLScript Procedures in XSC → CALL DATABASE
PROCEDURE.
Using ADBC
*&---------------------------------------------------------------------*
*& Report ZRS_NATIVE_SQLSCRIPT_CALL
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
report zrs_native_sqlscript_call.
parameters:
con_name type dbcon-con_name default 'DEFAULT'.
types:
* result table structure
begin of result_t,
key type i,
value type string,
end of result_t.
data:
* ADBC
sqlerr_ref type ref to cx_sql_exception,
con_ref type ref to cl_sql_connection,
stmt_ref type ref to cl_sql_statement,
res_ref type ref to cl_sql_result_set,
* results
result_tab type table of result_t,
row_cnt type i.
start-of-selection.
try.
con_ref = cl_sql_connection=>get_connection( con_name ).
stmt_ref = con_ref->create_statement( ).
*************************************
** Setup test and procedure
*************************************
* Create test table
try.
stmt_ref->execute_ddl( 'DROP TABLE zrs_testproc_tab' ).
catch cx_sql_exception.
endtry.
stmt_ref->execute_ddl(
'CREATE TABLE zrs_testproc_tab( key INT PRIMARY KEY, value NVARCHAR(255) )' ).
stmt_ref->execute_update(
'INSERT INTO zrs_testproc_tab VALUES(1, ''Test value'' )' ).
* Create test procedure with one output parameter
try.
stmt_ref->execute_ddl( 'DROP PROCEDURE zrs_testproc' ).
catch cx_sql_exception.
endtry.
stmt_ref->execute_ddl(
`CREATE PROCEDURE zrs_testproc( OUT t1 zrs_testproc_tab ) ` &&
`READS SQL DATA AS ` &&
`BEGIN ` &&
` t1 = SELECT * FROM zrs_testproc_tab; ` &&
`END`
).
Output:
Related Information
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
…
CallableStatement cSt = null;
String sql = "call SqlScriptDocumentation.getSalesBooks(?,?,?,?)";
ResultSet rs = null;
Given procedure:
using System;
using System.Collections.Generic;
using System.Text;
using System.Data;
using System.Data.Common;
using ADODB;
using System.Data.SqlClient;
namespace NetODBC
{
class Program
{
static void Main(string[] args)
{
try
{
DbConnection conn;
DbProviderFactory _DbProviderFactoryObject;
String connStr = "DRIVER={HDBODBC32};UID=SYSTEM;PWD=<password>;SERVERNODE=<host>:<port>;DATABASE=SYSTEM";
The examples used throughout this manual make use of various predefined code blocks. These code snippets
are presented below.
18.1.1 ins_msg_proc
This code is used in the examples of this reference manual to store outputs, so that you can see the way the
examples work. It simply stores text along with a time stamp of the entry.
Before you can use this procedure, you must create the following table.
To view the contents of the message_box, you select the messages in the table.
For information about the capabilities available for your license and installation scenario, refer to the Feature
Scope Description for SAP HANA.
Hyperlinks
Some links are classified by an icon and/or a mouseover text. These links provide additional information.
About the icons:
● Links with the icon : You are entering a Web site that is not hosted by SAP. By using such links, you agree (unless expressly stated otherwise in your
agreements with SAP) to this:
● The content of the linked-to site is not SAP documentation. You may not infer any product claims against SAP based on this information.
● SAP does not agree or disagree with the content on the linked-to site, nor does SAP warrant the availability and correctness. SAP shall not be liable for any
damages caused by the use of such content unless damages have been caused by SAP's gross negligence or willful misconduct.
● Links with the icon : You are leaving the documentation for that particular SAP product or service and are entering a SAP-hosted Web site. By using such
links, you agree that (unless expressly stated otherwise in your agreements with SAP) you may not infer any product claims against SAP based on this
information.
Example Code
Any software coding and/or code snippets are examples. They are not for productive use. The example code is only intended to better explain and visualize the syntax
and phrasing rules. SAP does not warrant the correctness and completeness of the example code. SAP shall not be liable for errors or damages caused by the use of
example code unless damages have been caused by SAP's gross negligence or willful misconduct.
Gender-Related Language
We try not to use gender-specific word forms and formulations. As appropriate for context and readability, SAP may use masculine word forms to refer to all genders.
SAP and other SAP products and services mentioned herein as well as
their respective logos are trademarks or registered trademarks of SAP
SE (or an SAP affiliate company) in Germany and other countries. All
other product and service names mentioned are the trademarks of their
respective companies.