
www.datasunrise.com

DataSunrise Database Security 9.0

Administration Guide, Linux
DataSunrise Database Security Administration Guide (Linux)

Copyright © 2015-2023, DataSunrise, Inc. All rights reserved.

All brand names and product names mentioned in this document are trademarks, registered trademarks or service
marks of their respective owners.
No part of this document may be copied, reproduced or transmitted in any form or by any means, electronic,
mechanical, photocopying, recording, or otherwise, except as expressly allowed by law or permitted in writing by the
copyright holder.
The information in this document is subject to change without notice and is not warranted to be error-free. If you
find any errors, please report them to us in writing.

Contents

Chapter 1: General Information.......................................................................... 5


Product Description.......................................................................................................................................... 5
Supported Databases and Features................................................................................................................5
DataSunrise Operation Modes.......................................................................................................................10
Sniffer Mode........................................................................................................................................ 10
Proxy Mode......................................................................................................................................... 11
Trailing DB audit logs..........................................................................................................................11
Dynamic SQL Processing.............................................................................................................................. 12
System Requirements.................................................................................................................................... 14
Useful Resources........................................................................................................................................... 15

Chapter 2: Deployment Topologies.................................................................. 17


Installing DataSunrise on a Database Server............................................................................................... 17
Proxy Mode......................................................................................................................................... 17
Sniffer Mode........................................................................................................................................ 18
Installing DataSunrise on a Separate Server................................................................................................ 18
Proxy Mode......................................................................................................................................... 18
Sniffer Mode........................................................................................................................................ 19

Chapter 3: DataSunrise Installation..................................................................20


Prerequisites...................................................................................................................................................20
Configuring IBM Informix ODBC driver..........................................................................................................25
Required Port Numbers................................................................................................................................. 27
DataSunrise Installation................................................................................................................................. 27
Installing DataSunrise from the Repository (Debian)..........................................................................29
Installing DataSunrise from the RPM Repository (RHEL 8, CentOS 8)..............................................30
DataSunrise Installation Folder...................................................................................................................... 30
Starting DataSunrise...................................................................................................................................... 31
Copying DataSunrise Settings to Another DataSunrise Instance..................................................................32
Switching Between Multiple DataSunrise Instances......................................................................................32
Upgrading DataSunrise.................................................................................................................................. 32
Downgrading DataSunrise............................................................................................................................. 33
Restoring Access to the Web Console if the Password is Lost.................................................................... 33
Adding a DataSunrise server on Ubuntu to an Active Directory domain.......................................................34
DataSunrise Removal.................................................................................................................................... 37
Removing DataSunrise Using the Repository (Debian)......................................................................38
Removing DataSunrise Using the RPM Repository (RHEL, CentOS 8).............................................38

Chapter 4: Multi-Server Configuration (High Availability Mode).................... 39


Preparing Databases to be Used as a Dictionary/Audit Storage.................................................................. 40
Preparing a PostgreSQL Database.................................................................................................... 40
Preparing an AWS RDS PostgreSQL Database......................................................................41
Preparing a MySQL Database............................................................................................................ 41
Preparing an MS SQL Server Database............................................................................................ 42
Preparing a MariaDB Database.......................................................................................................... 43
Adding a DataSunrise Server to HA Setup................................................................................................... 43
Reviewing Servers of an Existing HA Configuration..................................................................................... 45
Restoring the Configuration if your local_settings.db is Lost........................................................................ 47
Configuring Existing DataSunrise Installation into HA Configuration.............................................................48

Chapter 5: Always On........................................................................................ 50


Working with Always On Availability Group of SQL Server.......................................................................... 50
Configuring of the Firewall Inside the Azure Cloud for Maintenance of the SaaS SQL Azure.......................50
Configuring a Remote Dictionary on AWS Using the AWS Secrets Manager.............................................. 51

Chapter 6: Shared Configuration (SC Mode)................................................... 52


Deploying DataSunrise in the Shared Configuration..................................................................................... 52

Chapter 7: Frequently Asked Questions..........................................................53



1 General Information

1.1 Product Description


The introductory section of this chapter describes basic features, steps necessary for database protection and
principles of DataSunrise operation.
Protection of databases starts with selecting and configuring the database instance. In the process you also need to
select the protection mode: Sniffer (passive protection) or Proxy (active database protection). You can additionally
restrict access to your database(s) protected by DataSunrise by using two-factor authentication configured in the web user interface.
DataSunrise’s functionality is based on a system of highly customizable and versatile policies (Rules) which control
database protection. You can create rules for the following tools included in DataSunrise:
• DataSunrise Audit. DataSunrise logs all user actions, SQL queries and query results. DataSunrise Data Audit
saves information on database users, user sessions, query code, etc. Data auditing results can be exported to an
external system, such as SIEM.
• DataSunrise Security. DataSunrise analyzes database traffic, detects and blocks unauthorized queries and SQL
injections on-the-fly. Alerts and reports on detected threats can be sent to network administrators or a security
team (officer) via e-mail or instant messengers.
• DataSunrise Dynamic Masking. DataSunrise prevents sensitive data exposure thanks to its data masking tool.
DataSunrise’s Dynamic Masking obfuscates output of sensitive data from a database by replacing it with random
data or real-looking data on-the-fly.
The Static Masking feature replaces real data with a fake copy which enables you to create a fully protected testing
and development environment out of your real production database.
The Table Relations feature can build associations between database columns. As a result, all associated columns
with sensitive data are linked and better organized.
The Data Discovery tool enables you to search for database objects that contain sensitive data and quickly create
Rules for these objects. The search can be done by the Lexicon, column names and data type. In addition, you can
use Lua scripting. NLP (Natural Language Processing) Data Discovery enables you to search for sensitive data across
database columns that contain unstructured data. For example, you can locate an email address in a text. Using the
Table Relations feature you can see all the columns associated with the discovered columns. You can set up a periodic
task for DataSunrise to search for and protect newly added sensitive data.
DataSunrise functionality allows companies to be compliant with national and international sensitive data protection
regulations such as HIPAA, PCI DSS, ISO/IEC 27001, CCPA, GDPR, SOX, KVKK, PIPEDA, APPs, APPI, LGPD,
Nevada Privacy Law, Digital Personal Data Protection Bill, New Zealand's Privacy Act. This is how the
Compliance feature works. Databases are regularly searched for newly added sensitive data. As a result, database
and sensitive data within are constantly protected.
DataSunrise can generate PDF and CSV reports about audit and security events, data discovery, sessions, operation
errors and system events.

1.2 Supported Databases and Features


Supported database types and versions:
• Amazon Aurora MySQL
• Amazon Aurora PostgreSQL
• Amazon DynamoDB
• Amazon Redshift
• Amazon S3 and other S3 protocol compatible file storage services like Minio and Alibaba OSS. Auditing and Data
Masking of CSV, XML, JSON and unstructured files are supported
• Apache Hive 1.0+
• Amazon Athena
• AlloyDB
• Cassandra 3.11.1-3.11.2 (DB servers), 3.4.x (CQL)
• CockroachDB 22.1+
• DocumentDB
• Elasticsearch 5+
• GaussDB(DWS)
• Greenplum 4.2+
• Hydra
• IBM DB2 9.7+. Linux, Windows, UNIX and z/OS are supported
• IBM Db2 Big SQL 5.0+
• Impala 2.x
• IBM Informix 11+
• MS SQL Server 2005+
• MariaDB 5.1+
• Microsoft Azure Synapse Analytics
• MongoDB 3.0+
• MySQL 5.0+ (X Protocol is supported too)
• Neo4j
• IBM Netezza 6.0+
• Oracle Database 9.2+
• Percona Server for MySQL 5.1+
• PostgreSQL 7.4+
• SAP HANA 1.0+
• ScyllaDB 3.0+
• Snowflake Standard, Enterprise, Business Critical
• Sybase Adaptive Server Enterprise 15.7.0+
• Teradata 13+
• TiDB 5.0.0+
• Vertica 7.0+
• YugabyteDB 1.3+
• Google Cloud BigQuery
• Amazon OpenSearch 2.3+
The table below lists the databases supported by DataSunrise and the features available for them. Please note that
proxying of both encrypted and unencrypted traffic is supported for all types of databases.
Supported features. Part 1
DB type, Database Activity Monitoring, DB Audit Trail, Database Security, Dynamic Masking, Static Masking
Amazon Aurora MySQL + + + + +
Amazon Aurora PostgreSQL + + + + +
Amazon DynamoDB + + +
Amazon OpenSearch + + +
Amazon Redshift + + + + +
Amazon S3 + + +
Apache Hive + + + +
Amazon Athena + + +
AlloyDB + + + +
IBM Db2 Big SQL + + + +
Cassandra + + + +
CockroachDB + + +
DocumentDB + + +
Elasticsearch + + +
GaussDB(DWS) + + + + +
GCloud BigQuery +
Greenplum + + + +
Hydra + + + +
IBM DB2 + + + +
Impala + + + +
IBM Informix + + + +
Microsoft SQL Server + + + + +
MariaDB + + + + +
Microsoft Azure Synapse Analytics + + + + +
MongoDB + + + + +
MySQL + + + + +
Neo4j +
IBM Netezza + + + +
Oracle Database + + + + +
Percona Server for MySQL + + + +
PostgreSQL + + + + +
SAP HANA + + + +
ScyllaDB + +
Snowflake + + + +

Sybase + + +
Teradata + + + +
TiDB + + + +
Vertica + + + +
YugabyteDB + + + + +
Supported features. Part 2
DB type, Data Discovery, Authentication Proxy, Kerberos Authentication, Sniffer, Sniffing of encrypted traffic, Dynamic SQL processing
Amazon Aurora MySQL + + +
Amazon Aurora PostgreSQL + + +
Amazon DynamoDB +
Amazon OpenSearch + +
Amazon Redshift + +
Amazon S3 +
Apache Hive + + +
Amazon Athena
AlloyDB +
IBM Db2 Big SQL +
Cassandra + +
CockroachDB + +
DocumentDB + +
Elasticsearch + +
GaussDB(DWS) + + + + +
GCloud BigQuery
Greenplum + + + +
Hydra + + + +
IBM DB2 + +* +
Impala + + +
IBM Informix + +
Microsoft SQL Server + + + + + +
MariaDB + + + +
Microsoft Azure Synapse Analytics + + + + +
MongoDB + +
MySQL + + + + +
Neo4j +
IBM Netezza + + + +
Oracle Database + + +
Percona Server for MySQL + + + +
PostgreSQL + + + + +
SAP HANA + + +
ScyllaDB + +

Snowflake +
Sybase + +
Teradata + +
TiDB + + + +
Vertica + + + +
YugabyteDB + + + +

*Kerberos delegation is not supported

1.3 DataSunrise Operation Modes


DataSunrise can be deployed in one of the following configurations: Sniffer mode, Proxy mode, Trailing DB Audit
Logs.

1.3.1 Sniffer Mode


When deployed in the Sniffer mode, DataSunrise is connected to a SPAN port of a network switch. Thus, it acts as a
traffic analyzer capable of capturing a copy of the database traffic from a mirrored port of the network switch.

Figure 1: Sniffer mode operation scheme.

In this configuration, DataSunrise can be used only for "passive security" ("active security" features such as the database
firewall or masking are not supported in this mode). When deployed in Sniffer mode, DataSunrise can perform
database activity monitoring only, because it can't modify database traffic in this configuration. Running
DataSunrise in Sniffer mode does not require any additional reconfiguring of databases or client applications. Sniffer
mode can be used for data auditing purposes or for running DataSunrise in Learning mode.

Important: database traffic should not be encrypted. Check your database settings as some databases encrypt
traffic by default. If you're operating an SQL Server database, do not use ephemeral ciphers. DataSunrise deployed
in Sniffer mode does not support connections redirected to a random port (like Oracle). All network interfaces (the
main and the one the database is redirected to) should be added to DataSunrise's configuration.

1.3.2 Proxy Mode


When deployed in this configuration, DataSunrise works as an intermediary between a database server and its client
applications. Thus it is able to process all incoming queries before redirecting them to a database server.

Figure 2: Proxy mode operation scheme.

Proxy mode is for "active protection". DataSunrise intercepts SQL queries sent to a protected database by database
users, checks if they comply with existing security policies, and audits, blocks or modifies the incoming queries or
query results if necessary. When running in the Proxy mode, DataSunrise supports its full functionality: database
activity monitoring, database firewall, both dynamic and static data masking are available.

Important: We recommend using DataSunrise in Proxy mode. It provides full protection, and in this mode
DataSunrise supports processing of encrypted traffic and redirected connections (this is essential for SAP HANA, Oracle,
Vertica, MS SQL Server). For example, in SQL Server redirects can occur when working with Azure SQL or an Always On Listener.

1.3.3 Trailing DB audit logs


This deployment scheme can be used to perform auditing of Oracle, Snowflake, Neo4j, PostgreSQL-like, AWS S3, MS
SQL Server, GCloud BigQuery, MongoDB and MySQL-like databases by means of their native auditing tools.

Figure 3: Trailing DB logs operation scheme.

The target database performs auditing using its integrated auditing mechanisms and saves the auditing results in a
dedicated database table or in either a CSV or XML file, depending on the selected configuration. Then DataSunrise
establishes a connection with the database, downloads the audit data from the database and passes it to the Audit
Storage for further analysis.
First and foremost, this configuration is intended to be used for Amazon RDS databases because DataSunrise
doesn't support sniffing on RDS.
This operation mode has two main drawbacks:
• If the database admin has access to the database logs, they can delete them
• Native auditing has a negative impact on database performance.
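
For illustration of the native-auditing side only (these are standard PostgreSQL and pgaudit settings with example values, not DataSunrise-specific requirements), a PostgreSQL server could be configured to write its audit records to CSV log files roughly like this:

# postgresql.conf (illustrative values, adjust to your environment)
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'read, write, ddl'
logging_collector = on
log_destination = 'csvlog'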

1.4 Dynamic SQL Processing


Dynamic SQL processing is auditing, masking and blocking of queries that contain dynamic SQL. We call a query
dynamic if its final text is not known before it is executed in the database. For example, in PostgreSQL, EXECUTE is used
for such queries.

Note: Dynamic SQL processing is available for PostgreSQL, MySQL and MS SQL Server

EXECUTE enables you to execute a query which is contained in a string, variable or is a result of an expression. For
example:

...
EXECUTE "select * from users";
EXECUTE "select * from ” || table_name || where_part;
EXECUTE foo();
...

Here table_name and where_part are variables, foo() is a function that returns a string. The second and third queries
are dynamic ones because we can’t tell what query will be executed in the database.
Let's take a look at the following example:

SELECT run_query();

When executing this subquery, the following function will be called:

CREATE FUNCTION run_query() RETURNS RECORD AS


$$
DECLARE
row RECORD;
result RECORD;
BEGIN
SELECT * FROM queries AS r(id, sql) ORDER BY id DESC LIMIT 1 INTO row;
EXECUTE row.sql into RESULT;
DELETE FROM queries WHERE id = row.id;
RETURN result;
END;
$$ LANGUAGE plpgsql;

This function takes the most recent query from the queries table, executes it and returns its result. DataSunrise can't
know which query will be executed beforehand because the exact query only becomes known when executing the following
subquery:

...
SELECT * FROM queries AS r(id, sql) ORDER BY id DESC LIMIT 1 INTO row;
...
That's why DataSunrise wraps dynamic SQL in a special function, DS_HANDLE_SQL, that does the trick. As a
result, the original function is modified to be the following:

CREATE FUNCTION DSDSNRBYLCBODMJOVNJLFJFH() RETURNS RECORD AS


$$
DECLARE
row RECORD;
result RECORD;
BEGIN
SELECT * FROM queries AS r(id, sql) ORDER BY id DESC LIMIT 1 INTO row;
EXECUTE DS_HANDLE_SQL(row.sql) into RESULT;
DELETE FROM queries WHERE id = row.id;
RETURN result;
END;
$$ LANGUAGE plpgsql;

And the following query:

SELECT run_query();

will be changed to the following one:

SELECT DSDSNRBYLCBODMJOVNJLFJFH();

Inside the DS_HANDLE_SQL function, the database sends the dynamic SQL to DataSunrise's handler. The handler
processes the query and audits, masks or blocks it accordingly. Thus,

...
EXECUTE row.sql into RESULT;
...

executes not the original query contained in the queries table but a modified one.
To enable dynamic SQL processing, when creating a database instance, you need to enable the “Dynamic SQL
processing” option in the Advanced settings. Then you need to select the host and the dynamic SQL handler’s port. This is
the host of the machine DataSunrise is installed on; it should be reachable from your database because the database
connects to this host when processing dynamic SQL.

Important: it's required to provide an external IP address of the SQL handler machine ("127.0.0.1" or "localhost" will
not work).

For processing of dynamic SQL inside functions, you need to enable the “UseMetadataFunctionDDL” parameter in
the Additional parameters and check the “Mask Queries Included in Procedures and Functions” option for masking Rules
or “Process Queries to Tables and Functions through Function Call” for audit and security Rules respectively.
You can also enable dynamic SQL processing in an existing Instance's settings and specify the host and port in the proxy’s
settings.
Note that you need to configure a handler for each proxy and select a free port number.

PostgreSQL
In PostgreSQL, dblink is used for processing of dynamic SQL. It enables sending any SQL queries to another remote
PostgreSQL database.
Thus, the dynamic SQL handler uses a PostgreSQL emulator. The user DB, with the help of dblink, sends dynamic SQL to
our handler. The emulator receives the new connection, performs the handshake and makes the client DB believe that it
sends queries to a real DB. Since it's necessary to pass the session id and operation id (to associate a query sent to the
emulator with the original query), all these parameters are transferred using dblink's connection string:

host=<handler_host> port=<handler_port>
dbname=<session id> user=<operation id> password=<connection id>
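
For reference, a plain dblink call looks like the sketch below (an illustration only, not DataSunrise's internal code; the host, port and id values are placeholders):

-- dblink ships as a standard PostgreSQL extension
CREATE EXTENSION IF NOT EXISTS dblink;
SELECT *
FROM dblink('host=192.0.2.10 port=9001 dbname=101 user=202 password=303',
            'SELECT 1')
     AS t(result integer);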

MySQL
In MySQL, the FEDERATED storage engine extension is used for dynamic SQL processing. It also connects two remote
databases, but in the MySQL case it works as an extended table that is created in one DB while the data is
stored in another DB. To create such a table, it's necessary to provide a connection string to the MySQL DB.
During execution of the first dynamic SQL query, the HANDLE_SQL function and such an extended table are created in
the DSDS***_ENVIRONMENT schema. This table's connection string points at the MySQL emulator. The table
includes the following columns: query, connection_id, session_id, operation_id and action.
First, the function INSERTs all the required parameters. The emulator processes the query, modifies it and changes the
action to block if necessary. After that, the function SELECTs the resulting query and returns it.
In MySQL, for creation and execution of dynamic queries the following pair of statements is used: prepare stmt
from @var and execute stmt. Since the execution of the latter means that a prepared statement already exists in
the database, we modify prepare. As a result, a complete query:

prepare stmt from @var

is replaced with prepare_<stmt_name>(@var).


This procedure's body looks like this:

call HANDLE_SQL(dynamic_sql, connection_id, session_id, operation_id, @ds_sql_<stmt_name>);
prepare <stmt_name> from @ds_sql_<stmt_name>;

<stmt_name> in this case is the name of the statement in the user's query. A separate procedure is created for every
statement name and for every place it is called from. Information about these procedures is stored in PreparedStatementManager.
@ds_sql_<stmt_name> is an output parameter of HANDLE_SQL into which the function puts the modified query.

Important: for dynamic SQL processing in MySQL, the FEDERATED engine should be enabled. To enable it, add the
federated line to the [mysqld] section of the /etc/my.cnf file. Another method: connect to your MySQL/
MariaDB with admin privileges, ensure that the FEDERATED engine is off and enable it with the following queries:

show engines;
install plugin federated soname 'ha_federated.so';
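
The same check and enablement can also be scripted from the shell, for example (a sketch that assumes the mysql command-line client and administrative credentials are available):

# check whether the FEDERATED engine is present and enabled
mysql -u root -p -e "SHOW ENGINES;"
# install the plugin if it is not
mysql -u root -p -e "INSTALL PLUGIN federated SONAME 'ha_federated.so';"
# alternatively, add a line reading 'federated' to the [mysqld] section of /etc/my.cnf
# and restart the MySQL/MariaDB server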

1.5 System Requirements


Before installing DataSunrise, make sure that your server meets the following requirements:
Minimum hardware requirements:
• CPU: 2 cores
• RAM: 4 GB
• Available disk space: 20 GB.
Recommended hardware configuration:

Estimated database traffic volume CPU cores* RAM, GB


Up to 3000 operations/sec 2 8
Up to 8000 operations/sec 4 16
Up to 12000 operations/sec 8 32
Up to 14000 operations/sec 16 64
Up to 17000 operations/sec 40 160

*Xeon E5-2676 v3, 2.4 GHz

Note: the more proxies you open, the higher the RAM consumption you will experience.

Software requirements:
• Operating system: 64-bit Linux (Red Hat Enterprise Linux 7+, Debian 10+, Ubuntu 18.04 LTS+, Amazon Linux 2)
• 64-bit Windows (Windows Server 2019+) with .NET Framework 3.5 installed: https://www.microsoft.com/en-us/download/details.aspx?id=21
• Linux-compatible file system (NFS and SMB file systems are not supported)
• Web browser for accessing the Web Console:

Web Browser Supported version


Mozilla Firefox 100+
Google Chrome 72+
Opera 58+
MS Edge 44.18+
Apple Safari 14+

• To run DataSunrise in Sniffer mode, install the Npcap library: https://nmap.org/npcap/


Note that you might need to install some additional software like database drivers depending on the target
database and operating system you use. For the full list of required components see the Prerequisites section of the
corresponding Admin Guide.

1.6 Useful Resources


Web resources:
• DataSunrise official web site: https://www.datasunrise.com/
• DataSunrise latest version download page: https://www.datasunrise.com/download
• DataSunrise Facebook page: https://www.facebook.com/datasunrise/
• Frequently Asked Questions: https://www.datasunrise.com/documentation/faq/
• Best practices: https://www.datasunrise.com/download-the-datasunrise-security-best-practices/
• Best practices (AWS): https://www.datasunrise.com/documentation/download-the-datasunrise-aws-security-best-practices/
• Best practices (Azure): https://www.datasunrise.com/documentation/download-the-datasunrise-azure-best-practices/
Documents (located in the doc folder within the DataSunrise's installation folder):
• DataSunrise Administration Guide for Linux (DataSunrise_Database_Security_Admin_Guide_Linux.pdf). Describes
installation and post-installation procedures, deployment schemes, includes troubleshooting subsection.
• DataSunrise Administration Guide for Windows (DataSunrise_Database_Security_Admin_Guide_Windows.pdf).
Describes installation and post-installation procedures, deployment schemes, includes troubleshooting
subsection
• DataSunrise User Guide (DataSunrise_Database_Security_User_Guide.pdf). Describes the Web Console's structure,
program management, etc
• Command Line Interface Guide (DataSunrise_Database_Security_CLI_Guide.pdf). Contains the CLI commands
description, use cases, etc
• Release Notes (DataSunrise_Database_Security_Release_Notes.pdf). Describes changes and enhancements made
in the latest DataSunrise version, known bugs and version history
• EULA (DataSunrise_EULA.pdf). Contains End User License Agreement.

2 Deployment Topologies
DataSunrise can be installed either on a database server or on a separate server. In both cases, the software can be
used both in the Sniffer mode and the Proxy mode.

2.1 Installing DataSunrise on a Database Server

Figure 4: Deployment on a DB server

2.1.1 Proxy Mode


To deploy DataSunrise in the Proxy mode, use one of the following methods:

a) Tweaking of database settings


• Configure DataSunrise to use the port which the client applications currently use to connect to the database
• Change the database's port number (because its old port is occupied by DataSunrise now).
• Configure a connection between DataSunrise and the database considering changes made in the previous steps.
The aforementioned steps are not applicable to Teradata and Vertica: both use a default port
that cannot be changed. If you are going to use a DataSunrise proxy for a single database, this still works. But if
there is one more Vertica or Teradata database, the default port cannot be used again because it is already taken:
a proxy to another database should be opened on another port and the database clients should be reconfigured.

Tip: You can use the installation method described above during firewall testing, but some DB clients will still retain
direct access to the DB. Use a system firewall (Windows Firewall or iptables on Linux, for example) to block direct
access to the DB.
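
A minimal iptables sketch of such a lockdown, assuming PostgreSQL on its default port 5432 with DataSunrise proxying on the same host (placeholder values, adapt the port to your database):

# allow local connections (used by the DataSunrise proxy), drop all other direct connections to the DB port
sudo iptables -A INPUT -p tcp --dport 5432 -s 127.0.0.1 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5432 -j DROP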

Important: Many operating systems reserve port numbers less than 1024 for privileged system processes. That’s
why it’s preferable to use port numbers higher than 1024 to establish a proxy connection.
b) Reconfiguring client applications
• Make sure that DataSunrise uses the same port number as the database
• Configure all client applications to connect to DataSunrise, not to the database

2.1.2 Sniffer Mode


It is not required to tweak any client applications or database settings.

2.2 Installing DataSunrise on a Separate Server
2.2.1 Proxy Mode

Figure 5: Proxy mode deployment scheme

To deploy DataSunrise in the Proxy mode, perform the following:


• Configure a connection between DataSunrise and the database.
• Configure all the client applications to connect to the DataSunrise's proxy instead of the database.

Important: Many operating systems reserve port numbers less than 1024 for privileged system processes, so it’s
preferable to use port numbers higher than 1024.

2.2.2 Sniffer Mode

Figure 6: Sniffer mode deployment scheme

To deploy DataSunrise in the Sniffer mode, configure your network switch for transferring mirrored traffic to
DataSunrise (refer to your network switch's user guide for the description of port mirroring procedure).

3 DataSunrise Installation
Note: Before you begin DataSunrise installation process, please select an appropriate deployment option
(subsections Installing DataSunrise on a Database Server on page 17 and Installing DataSunrise on a Separate Server
on page 18) and perform all required preparations. Also make sure that the machine you want to install DataSunrise
on meets the system requirements.

3.1 Prerequisites
General
First and foremost, the Linux version of DataSunrise requires UnixODBC for its operation. For details, refer to
http://www.unixodbc.org/. To install UnixODBC from the repository, execute the following command:
Debian-based OSs:

sudo apt-get install unixodbc

On Ubuntu 20+, you need to install the following libraries: libbsd-dev, libgssapi-krb5-2, libldap-2.4-2, liblzo2-dev, libncurses5.
Execute the following command to do that:

sudo apt install libbsd-dev libgssapi-krb5-2 libldap-2.4-2 liblzo2-dev libncurses5

Red Hat based OSs:

sudo yum install unixODBC

All environment variables should be added to: /etc/datasunrise.conf:

NZ_ODBC_LIB_PATH=/usr/local/nz/lib64/
NZ_ODBC_INI_PATH=/etc/

It is not recommended to modify /opt/datasunrise/start_firewall.sh.
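
For example, the Netezza-related variables above can be appended to that file from the shell (the same pattern is used for ORACLE_HOME later in this chapter), followed by a restart of the DataSunrise service:

sudo bash -c "echo \"NZ_ODBC_LIB_PATH=/usr/local/nz/lib64/\" >> /etc/datasunrise.conf"
sudo bash -c "echo \"NZ_ODBC_INI_PATH=/etc/\" >> /etc/datasunrise.conf"
sudo service datasunrise restart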

Install Java SE Runtime Environment 8 to use Data Discovery, Unstructured Masking and to be able to
generate PDFs with the Report Generator: http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html.
For NLP Data Masking you need to configure a Java Virtual Machine (JVM) for Linux
(detailed information can be found in the User Guide: NLP Data Masking (Unstructured Masking)).

Database-specific
Depending on your target database type, it might be necessary to install some additional 64-bit drivers and
software:
• For Oracle Database, install the Oracle Instant Client. Note that DataSunrise supports Instant Client 11.2+. You
can get a compatible Instant Client here: https://www.datasunrise.com/support-files/oracle-instantclient19.10-basic-19.10.0.0.0-1.x86_64.rpm,
https://www.datasunrise.com/support-files/oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
Having installed the Oracle Instant Client, add its home directory path to the $ORACLE_HOME environment
variable and to the $PATH variable. Or you can add the required path to the /etc/datasunrise.conf file. Example:

bash -c "echo \"ORACLE_HOME=/opt/instantclient_12_1/\" >> /etc/datasunrise.conf"

After that, restart the DataSunrise's system service:

sudo service datasunrise restart

Then create a symbolic link for the required libclntsh.so library: libclntsh.so.12.1 or libclntsh.so.11.3 for example (it
depends on Oracle version):

cd /opt/instantclient_12_1
sudo ln -s libclntsh.so.12.1 libclntsh.so

For run-time linker (the ldconfig command refreshes its configuration) you need to create a corresponding .conf
file:

sudo bash -c "echo /opt/oracle/instantclient_12_1 > /etc/ld.so.conf.d/oracle-instantclient.conf"


sudo ldconfig

• For Netezza, install the dedicated ODBC driver. Download it from the IBM Fix Central website: http://www-933.ibm.com/support/fixcentral/
Note that your IBM ID should be associated with your IBM customer ID with active support and maintenance
contract for the Netezza appliance.
• Unpack the driver's archive:

tar xvzf nz-linuxclient-v7.2.0.0.tar.gz

• Run driver installation:

cd linux64
./unpack

• Add driver configuration to /etc/odbcinst.ini:

[NetezzaSQL]
Driver = /usr/local/nz/lib/libnzsqlodbc3.so
Setup = /usr/local/nz/lib/libnzsqlodbc3.so
APILevel = 1
ConnectFunctions = YYN
Description = Netezza ODBC driver
DriverODBCVer = 03.51
DebugLogging = false
LogPath = /tmp
UnicodeTranslationOption = utf16
CharacterTranslationOption = all
PreFetch = 256
Socket = 16384

• Ensure that the required environment variables are set:

NZ_ODBC_LIB_PATH=/usr/local/nz/lib64/
NZ_ODBC_INI_PATH=/etc/

• Ensure that DataSunrise's script exports the required environment variables (/opt/datasunrise/start_firewall.sh):
Variables export:

export NZ_ODBC_INI_PATH
export NZ_ODBC_LIB_PATH

Adding to LD_LIBRARY_PATH

LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./:$ORACLE_HOME:$ORACLE_HOME/lib:$NZ_ODBC_LIB_PATH; export
LD_LIBRARY_PATH

Note: if you get the following error, it means that you set the NZ_ODBC_INI_PATH incorrectly:

Error code 0. [unixODBC][Driver Manager]Can't open lib '/usr/local/nz/lib/libnzsqlodbc3.so' : file not found.

The aforementioned error code means that you need to add /usr/local/nz/lib64/ to LD_LIBRARY_PATH:

LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$NZ_ODBC_LIB_PATH:/usr/local/nz/lib64/

The following error code:

Error code 33. ###################y(.

means that you should check the UnicodeTranslationOption parameter's encoding (should be utf16):

UnicodeTranslationOption=utf16

• For MS SQL Server, you might need to install the ODBC driver: https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-2017
• For Hive, install the Hortonworks ODBC driver: https://hortonworks.com/downloads/.

Note: some Cloudera-issued drivers use UTF-32 encoding by default which makes it impossible to establish
a database connection. To fix this issue, edit the /opt/cloudera/hiveodbc/lib/64/cloudera.hiveodbc.ini file in the
following way:

DriverManagerEncoding=UTF-16;

• For Vertica, install the ODBC client drivers: https://my.vertica.com/download/vertica/client-drivers/. You will need
to log in to your Vertica community account or create one if you don't have it. Having downloaded the required
drivers, log in to the system as root and do the following:
• Create an /opt/vertica/ folder:

mkdir -p /opt/vertica/ # create the folder /opt/vertica/

• Copy and paste the archive with the drivers to the folder:

cp vertica_x.x..xx_odbc_64_linux.tar.gz /opt/vertica/
• Change the directory to /opt/vertica/:

cd /opt/vertica/

• Uncompress the archive:

tar vzxf vertica_x.x..xx_odbc_64_linux.tar.gz

• Open the file which contains ODBC settings:

/etc/odbcinst.ini

• In the odbcinst.ini file, change the directory of the Vertica ODBC driver as shown below. Note that
ErrorMessagesPath shown here is the default one since Vertica error messages are located in /opt/vertica/en-
US (ODBCMessages.xml and VerticaMessages.xml files). If your location of logs is different, change the path to
the actual one.

[Vertica]
Description = Vertica_ODBC_Driver
Driver = /opt/vertica/lib64/libverticaodbc.so
ErrorMessagesPath=/opt/vertica/en-US

• If you want to set some specific settings for your Vertica client, create an /etc/vertica.ini file and paste
the code shown below into it. Note that ErrorMessagesPath should be the same as in your odbcinst.ini.
ErrorMessagesPath shown here is the default one since Vertica error messages are located in /opt/vertica/en-
US (ODBCMessages.xml and VerticaMessages.xml files). If your location of logs is different, change the path to
the actual one.

[Driver]
DriverManagerEncoding=UTF-16
ErrorMessagesPath=/opt/vertica/en-US
LogLevel=4
LogPath=/tmp

• For Amazon Redshift, install the Redshift ODBC driver: http://docs.aws.amazon.com/redshift/latest/mgmt/install-odbc-driver-linux.html
• Download the required 64-bit RPM or DEB driver package and install the required package
• Add the following lines to the /etc/odbcinst.ini file:

[Amazon Redshift (x64)]
Description=Amazon Redshift ODBC Driver (64-bit)
Driver=/opt/amazon/redshiftodbc/lib/64/libamazonredshiftodbc64.so

Note: we strongly suggest using ODBC driver version 1.X because the newest version 2.0.0.1 is not stable.

• For IBM DB2, install IBM Data Server Client Package:


• Download the drivers from the official website. You will need to log in to your IBM account or create a new
account: http://www-01.ibm.com/support/docview.wss?rs=4020&uid=swg27016878
• Unzip the driver archive:

tar -xvf ibm_data_server_driver_for_odbc_cli_linuxx64_v11.1.tar.gz


• Open the odbcinst.ini file with a text editor and add the path to the libdb2o.so file as shown in the example
below. Note that this file is located in the /etc/ directory.

[DB2]
Description=DB2 Driver
Driver=/home/user/clidriver/lib/libdb2o.so
FileUsage=1
DontDLClose=1

• For SAP Hana, install the Hana driver:


• Download and install the Hana client. Refer to the following link: https://help.sap.com/viewer/e7e79e15f5284474b965872bf0fa3d63/2.0.02/en-US/d41dee64bb57101490ffc61557863c06.html
• Run the following command using the Linux Terminal:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/sap/hdbclient/

• Add the following lines to the odbc.ini file

[HDBODBC]
Driver64=/usr/sap/hdbclient/libodbcHDB.so
DriverUnicodeType=1

• Add the following lines to the odbcinst.ini file:

[HDBODBC]
Description = ODBC for SAP HANA
Driver64 = /usr/sap/hdbclient/libodbcHDB.so
FileUsage = 1

Important: on some Linux systems (Red Hat for example), you might need to install the libaio package:

sudo yum install libaio

or

sudo apt-get install libaio1 libaio-dev

• For Impala, install the ODBC driver: https://www.cloudera.com/documentation/other/connectors/impala-odbc/latest/Cloudera-ODBC-Driver-for-Impala-Install-Guide.pdf

Note: some Cloudera-issued drivers use UTF-32 encoding by default which makes it impossible to establish a
database connection. To fix this issue, edit the /opt/cloudera/impalaodbc/lib/64/cloudera.impalaodbc.ini in the
following way:

DriverManagerEncoding=UTF-16

• For Teradata, install the ODBC driver as described below. Note that ODBC drivers v. 17/XX and higher may cause
problems when working with Teradata 13.
• Download the driver: https://downloads.teradata.com/download/connectivity/odbc-driver/linux
• Unpack the archive's contents:

tar -xf tdodbc1620__ubuntu_indep.16.20.00.68-1.tar.gz

• Install the .deb package:

sudo dpkg -i tdodbc1620/tdodbc1620-16.20.00.68-1.noarch.deb

• Paste /opt/teradata/client/ODBC_64/odbcinst.ini contents to /etc/odbcinst.ini:

cat /opt/teradata/client/ODBC_64/odbcinst.ini | sudo tee -a /etc/odbcinst.ini

• Restart DataSunrise's system service:

sudo service datasunrise restart

3.2 Configuring IBM Informix ODBC driver


You can configure the IBM Informix ODBC driver using the driver installer.
To configure the ODBC driver, do the following:
1. Download the ODBC driver archive. Create a temporary directory to unpack the archive to:

$ mkdir /tmp/ibm.csdk.4.50.FC1.LNX

2. Navigate to the directory where your driver archive is. For example:

$ cd ~/download

Unpack the archive:

$ tar -C /tmp/ibm.csdk.4.50.FC1.LNX -xf ibm.csdk.4.50.FC1.LNX.tar

3. Navigate to the directory the archive was unpacked to and start the installation. First, you can try the installation as a super
user:

$ cd /tmp/ibm.csdk.4.50.FC1.LNX
$ sudo ./installclientsdk

If the driver hasn't been installed, try another method of installation. Start the installation as your Linux user. First,
you need to create driver installation directories and make your user the owner of these directories:

$ sudo mkdir -p /opt/IBM/Informix_Client-SDK


$ sudo chown -R `whoami`:`whoami` /opt/IBM/Informix_Client-SDK
$ ./installclientsdk

Follow the installation wizard prompts. To display help, run the installer with key --help:

$ ./installclientsdk --help
4. Add the following lines to the /etc/odbcinst.ini file (watch the character case and the path to the drivers). Note
that the IBM Informix Client doesn't update the /etc/odbcinst.ini file with the required entry so you need to do it
manually:

[ODBC Drivers]
IBM INFORMIX ODBC DRIVER (64-bit)=Installed

[IBM INFORMIX ODBC DRIVER]


Driver=/opt/IBM/Informix_Client-SDK/lib/cli/iclit09b.so
Setup=/opt/IBM/Informix_Client-SDK/lib/cli/iclit09b.so
APILevel=1
ConnectFunctions=YYY
DriverODBCVer=04.50
FileUsage=0
SQLLevel=1
smProcessPerConnect=Y

Ensure that your IBM Informix client package is installed in the correct directory.
5. Then, before starting DataSunrise's backend, setup ODBC in the following way: add the following line to the /etc/
datasunrise.conf file:

INFORMIXDIR="/opt/IBM/Informix_Client-SDK"

OR
Set up for manual starting of DataSunrise. Set the following environment variables:

export THREADLIB=POSIX
export INFORMIXDIR="/opt/IBM/Informix_Client-SDK"
export PATH="${PATH}:${INFORMIXDIR}/bin"
export LD_LIBRARY_PATH="${INFORMIXDIR}/lib:${INFORMIXDIR}/lib/esql:${LD_LIBRARY_PATH}"

It's worth noting that if the Backend is not started from the console, the aforementioned variables should be
added to some file that is used for setting up environment variables, for example ~/.bashrc.
Having added the environment variables to a file, re-log in or reboot your machine for the variables to be applied.
In Qt Creator, it's enough to add the following environment variables:

INFORMIXDIR=/opt/IBM/Informix_Client-SDK
LD_LIBRARY_PATH=/opt/IBM/Informix_Client-SDK/lib:/opt/IBM/Informix_Client-SDK/lib/esql
THREADLIB=POSIX

Note: if the driver doesn't work, check the driver path and check the /etc/odbcinst.ini file, paying special attention
to paths and character case. After that you can check the driver dependencies with the help of ldd. You should get
something like this:

sndbx-dev)builder@dataarmor:~$ ldd /opt/IBM/Informix_Client-SDK/lib/cli/iclit09b.so


linux-vdso.so.1 => (0x00007ffd63199000)
libifgls.so => /opt/IBM/Informix_Client-SDK/lib/esql/libifgls.so (0x00007f56fb16e000)
libifglx.so => /opt/IBM/Informix_Client-SDK/lib/esql/libifglx.so (0x00007f56faf6c000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f56fad4f000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f56faa46000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f56fa842000)
libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f56fa60a000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f56fa240000)
/lib64/ld-linux-x86-64.so.2 (0x00007f56fb76b000)
(sndbx-dev)builder@dataarmor:~$

If some libraries haven't been found, check the LD_LIBRARY_PATH variable's value and ensure that the required
libraries are installed in your system and you can access them (check rights).
If errors occur anyway, add the following content to the file ~/.profile:

export THREADLIB=POSIX
export INFORMIXDIR="/opt/IBM/Informix_Client-SDK"
export PATH="${PATH}:${INFORMIXDIR}/bin"
export LD_LIBRARY_PATH="${INFORMIXDIR}/lib:${INFORMIXDIR}/lib/esql:${LD_LIBRARY_PATH}"

Then:

sudo reboot

3.3 Required Port Numbers


To operate DataSunrise, you need certain port numbers to be available for usage on your DataSunrise server
(container, bare metal/virtual server). These ports are required for DataSunrise's vital components and the proxy/trailing
connections established with your target databases.
Add the following port numbers to the "allow list" of your network firewall for inbound access (a firewall sketch follows this list):
• 11000/TCP: for access to DataSunrise's Web Console and healthcheck probes (in case DataSunrise setup is put
behind a load balancer of some description)
• 12000/TCP: for local admin access if Kerberos Authentication is enabled for the Web Console
• 11001 + 1: for Interchange Manager ports. 11001 is the main Backend port used to communicate with its worker
processes (Core processes), plus 1 for each Proxy/Trailing/Sniffer configured on your server. In High Availability
mode it is used for communication between the Backends of a cluster. Interchange Manager ports are used for
Core uptime calculation and other minor stats collection
• Proxy port (any TCP): for access to the TCP port configured to handle traffic of a certain database instance. There
are no recommendations on which port to open. The same port can't be used by multiple database interfaces on a
database instance
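
A minimal firewalld sketch of such an allow list (firewalld is an assumption; the interchange range and the 5432 proxy port are illustrative placeholders, adjust them to your configuration):

sudo firewall-cmd --permanent --add-port=11000/tcp        # Web Console and healthcheck probes
sudo firewall-cmd --permanent --add-port=12000/tcp        # local admin access (Kerberos-enabled Web Console)
sudo firewall-cmd --permanent --add-port=11001-11010/tcp  # Interchange Manager: 11001 plus one port per Proxy/Trailing/Sniffer
sudo firewall-cmd --permanent --add-port=5432/tcp         # example proxy port
sudo firewall-cmd --reload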

3.4 DataSunrise Installation


To install DataSunrise, do the following:
1. Use the following command to give the execution permission to the DataSunrise installation file:

sudo chmod +x DataSunrise_Suite_XXX.linux.64bit.run

2. Start the installation by executing the following command:

sudo ./DataSunrise_Suite_XXX.linux.64bit.run install

Note: on some Linux distributions (CentOS 6.8 at least), you might need to specify a temporary folder for
unzipping the installation archive:

sudo DataSunrise_Suite_XXX.linux.64bit.run --target ~/ds

You can use some additional parameters to customize the installation process:

CLI parameter Description


-f Perform a quick installation using the default settings
--config-path Install DataSunrise to a non-default directory. Example:

sudo ./DataSunrise.linux.64bit.run install --config-path /opt/datasunrise/datasunrise

--no-password Don't generate a password for the Web Console at the end of the
installation process (set the password for the Web Console after the
installation)
--extract-only Extract the DataSunrise distribution into the specified folder without
installation (in this case you can start DataSunrise manually from the
installation folder)
--no-start Don't start the DataSunrise service after the installation
--remote-config Configure remote Dictionary
remove Uninstall DataSunrise
repair Repair the DataSunrise installation
update Update DataSunrise (replace the existing binary files, Web Console, CLI and
documentation files with the latest counterparts).
-v Display errors that could occur during the installation

For example:

sudo ./DataSunrise_Suite_XXX.linux.64bit.run install -f --no-password

3. Specify the DataSunrise installation folder in the Target directory line if necessary.

Note: DataSunrise is installed into the /opt/datasunrise folder by default.



Figure 7: Note that DataSunrise generates a password for the Web Console at the end of the installation
process (by default)

3.4.1 Installing DataSunrise from the Repository (Debian)


If you're going to install DataSunrise on Debian operating system, you can do it using the Debian repository. Do the
following:
1. Install the dirmngr package for accessing the OpenPGP keyservers:

sudo apt-get -y install dirmngr

2. Import the public GPG key:

sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com F965B722

3. Add the repository to the sources list:

echo "deb http://repository.datasunrise.com stretch non-free" | sudo tee /etc/apt/sources.list.d/


datasunrise.list

4. Update the system and install the DataSunrise package:

sudo apt update


sudo apt -y install datasunrise

5. Check the DataSunrise installation:

https://<Server_IP>:11000

6. If necessary, replace DataSunrise's SSL certificate with a new one.


7. Set administrator's password because DataSunrise is installed from the repository with no password.

3.4.2 Installing DataSunrise from the RPM Repository (RHEL 8, CentOS 8)

If you're going to install DataSunrise on RHEL 8 or CentOS 8 operating system, you can do it using the RPM
repository. Do the following:
1. Import the public GPG key:

sudo rpm --import https://rpm_repo.datasunrise.com/key.pub

2. Create the /etc/yum.repos.d/DS.repo file:

sudo touch /etc/yum.repos.d/DS.repo

3. Edit /etc/yum.repos.d/DS.repo:

[DS]
name=Datasunrise
baseurl=http://rpm_repo.datasunrise.com/release/RHEL/
enabled=1
gpgcheck=1

4. Install DataSunrise package from the repository:

sudo yum install DataSunrise_Suite.x86_64

5. Check the DataSunrise installation:

https://<Server_IP>:11000

6. If necessary, replace DataSunrise's SSL certificate with a new one.


7. Set administrator's password because DataSunrise is installed from the repository with no password.

3.5 DataSunrise Installation Folder


This subsection describes DataSunrise files and structure of the installation folder.

Figure 8: DataSunrise files and folders

1. DataSunrise folders:

Folder name Description


cmdline Contains the DataSunrise Command Line Interface (CLI) files
doc Contains DataSunrise documentation (User Guide, CLI Guide, Release Notes,
EULA)
gwt Contains the Web Console files
logs Log files (Back end, Core, Web Console logs)

2. DataSunrise files:
File name Description
AppBackendService The system process required for operation of the Web Console and control
of AppFirewallCore
appfirewall.pem SSL certificate for the Web Console
AppFirewallCore Program's Core
audit.db SQLite database file to store audit data (the Audit Storage)
cacert.pem SSL certificate required for online updates
dictionary.db Contains the program settings, DataSunrise-specific objects such as
database profiles, user profiles, rules, etc.
event.db System events logs
libcrypto.so.10 OpenSSL library
libssl.so.10 OpenSSL library
proxy.pem OpenSSL keys and certificates used for proxies by default
standart_application_queries.db Contains the queries used by Oracle SQL Developer (refer to the Query
Groups subsection of the User Guide for more information)
start_firewall.sh The script that starts the datasunrise system service
stop_firewall.sh The script that stops the datasunrise system service

3.6 Starting DataSunrise


DataSunrise needs the datasunrise service running to operate. This service starts the DataSunrise's Back end and
Core automatically.
You can start the DataSunrise service manually by executing the following command via the Linux Terminal:

sudo service datasunrise start

To stop the DataSunrise service, execute the following command:

sudo service datasunrise stop
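
To check whether the service is running, a quick status query can be used (a sketch; on systemd-based distributions systemctl status datasunrise is equivalent):

sudo service datasunrise status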



3.7 Copying DataSunrise Settings to Another DataSunrise Instance

To use your DataSunrise settings for another DataSunrise instance installed on another server, do the following (a command-line sketch follows the list):
1. Stop the DataSunrise's system service.
2. Copy the dictionary.db, event.db and audit.db files from the source DataSunrise installation folder.
3. Install a new DataSunrise instance on the destination server.
4. Stop the DataSunrise's system service on the destination host.
5. Move the dictionary.db, event.db and audit.db files to the new DataSunrise instance's installation folder.
6. Start the DataSunrise's system service on the destination host.
7. Use the DataSunrise's Web Console to confirm the completeness of the migration of all policies and
configurations.
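
A command-line sketch of these steps, assuming default installation paths and SSH access between the servers (user@destination is a placeholder):

# on the source server
sudo service datasunrise stop
scp /opt/datasunrise/dictionary.db /opt/datasunrise/event.db /opt/datasunrise/audit.db user@destination:/tmp/
# on the destination server, after installing DataSunrise there
sudo service datasunrise stop
sudo cp /tmp/dictionary.db /tmp/event.db /tmp/audit.db /opt/datasunrise/
sudo service datasunrise start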

3.8 Switching Between Multiple DataSunrise Instances

DataSunrise supports switching between multiple Web Consoles installed on multiple independent servers or
clusters. To do it, perform the following:
1. Create a file named "local_servers_config.json" with the following content:

{
"list": [
{"clusterName": "cluster1", "clusterURL": "https://localhost:11000/?ds=1"},
{"clusterName": "cluster2", "clusterURL": "https://localhost:11100/?ds=2"}
]
}
2. Place the file into the $AF_HOME folder ("/opt/datasunrise" by default).
3. Select DataSunrise server of interest in the drop-down list located at the top of the screen.

3.9 Upgrading DataSunrise


To update DataSunrise to the latest version, do the following:
Download the latest version of DataSunrise from the official web site and run the installation file using the update
command:

chmod +x ./DataSunrise_Suite_X_X_X.linux.64bit.run
sudo ./DataSunrise_Suite_X_X_X.linux.64bit.run update

Note: During the update process, the installer creates a backup folder (/opt/datasunrise/backup/) where all files
required for installation roll back are retained in case of any issues that would result from the upgrade.
In case the update procedure fails and you need to return to the previous state, all the backup files should be
manually copied to the main DataSunrise directory.
The update procedure may take several minutes. During that period the DataSunrise service may become
unavailable if you're not running multiple DataSunrise instances in High Availability (HA) configuration at that time.
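
A manual roll-back sketch for that case (the backup layout is an assumption; <backup_name> is a placeholder for the backup subfolder you want to restore, and paths assume the default installation directory):

sudo service datasunrise stop
sudo cp -r /opt/datasunrise/backup/<backup_name>/* /opt/datasunrise/
sudo service datasunrise start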

3.10 Downgrading DataSunrise


During the DataSunrise upgrade process, the installer creates a back up folder where all files required to roll back
the installation are retained in case of any issues that would result from the upgrade. To restore a previous version of
DataSunrise, perform the following:
1. Run the DataSunrise installer file with the rollback parameter:

sudo ./DataSunrise.run rollback

You will get a list of all available backups so you can decide the point to which you would like to revert the current
configuration.
2. Choose the correct backup to restore from (select its number). Note that the higher the number is, the newer the
backup is.

3.11 Restoring Access to the Web Console if the Password is Lost

You can't restore a DataSunrise administrator's password if you lost it, but you can set a new one. To change the
admin user password, use Linux Terminal to do the following:
1. Use the cd command to navigate to the DataSunrise installation folder (/opt/datasunrise by default):

cd /opt/datasunrise

2. Define the LD_LIBRARY_PATH environment variable:

export LD_LIBRARY_PATH=/opt/datasunrise

3. Run the AppBackendService file with the set_admin_password parameter. Set a new password as the
parameter's value:

sudo ./AppBackendService set_admin_password=<new_password>

4. Check the access to the Web Console using the administrator account and the updated password.

3.12 Adding a DataSunrise server on Ubuntu to an Active Directory domain

Let’s assume that we have a server, sqlsrv1.HAG.LOCAL with MSSQLSERVER instance running under HAG
\Administrator account. The following instruction describes how to make it work with DataSunrise.
1. First, update the repositories:

sudo apt-get update


sudo apt-get upgrade

2. Install krb5-user, samba and winbind packages:

sudo apt-get install krb5-user samba winbind


sudo apt-get install libpam-krb5 libpam-winbind libnss-winbind

3. Configure DNS: specify the AD domain controller's IP address as the server's DNS server. To do this, use Network
Manager or edit the /etc/resolv.conf file in the following way:

domain <Your domain name>


search <Your domain name>
nameserver <AD Domain controller IP>

Specify your machine's name in the /etc/hostname file. For example:

ubuntu01

Edit the /etc/hosts file: add an entry with the full domain name of your machine and the short host name
resolved to an internal IP address:

127.0.0.1 localhost
127.0.1.1 ubuntu01.mydomain.com ubuntu01

4. Configure time synchronization:


Install ntpd:

sudo apt-get install ntp

Add your time server (typically the AD domain controller) to the /etc/ntp.conf file using the "server" directive:

server <full domain name of your time server>

Restart ntpd:

sudo /etc/init.d/ntp restart

5. Configure Samba:
Edit the /etc/samba/smb.conf file as shown below:

[global]
# The values of these two options must be in upper case. "workgroup" is the domain name without the part
# that follows the dot; "realm" is the full domain name:
workgroup = <MYDOMAIN>
realm = <MYDOMAIN.COM>

# These two options are responsible for AD authorization:


security = ADS
encrypt passwords = true
# Important options:
dns proxy = no
socket options = TCP_NODELAY

# If you don't want Samba to become a domain or workgroup leader or a domain controller,
# use the settings below:
domain master = no
local master = no
preferred master = no
os level = 0
domain logons = no

# Disable printers:
load printers = no
show add printer wizard = no
printcap name = /dev/null
disable spoolss = yes

Then execute the following command:

testparm

You should get output similar to this:

# testparm
Load smb config files from /etc/samba/smb.conf
Processing section "[printers]"
Processing section "[print$]"
Loaded services file OK.
Server role: ROLE_DOMAIN_MEMBER
Press enter to see a dump of your service definitions

6. Configure Winbind:
Add the following lines to /etc/samba/smb.conf → [global]:

# Domain and virtual users comparison using Winbind.

# Identifiers ranges for virtual users and groups:


idmap config * : range = 10000-20000
idmap config * : backend = tdb
# We don't recommend disabling these options:
winbind enum groups = yes
winbind enum users = yes
# Use default domain for user names. When this option is disabled, user names and group names
# will be used with a domain name: DOMAIN\username instead of username
winbind use default domain = yes
# If you want to enable the usage of CLI for the domain users,
# add the following line
template shell = /bin/bash
# For automatic updating of Kerberos ticket by the pam_winbind.so module, add the following:
winbind refresh tickets = yes

Restart Winbind and Samba:

sudo /etc/init.d/winbind stop


sudo /etc/init.d/smbd restart
sudo /etc/init.d/winbind start
Run:

sudo testparm

If you get the "rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)" warning, edit /etc/security/limits.conf:

# Add the following lines to the end of the file:


* - nofile 16384
root - nofile 16384

Reboot your machine. Run testparm. Ensure that Winbind is trusted by AD:

# sudo wbinfo -t
checking the trust secret for domain DCN via RPC calls succeeded

Ensure that Winbind sees AD users and user groups:

sudo wbinfo -u
sudo wbinfo -g

7. Add Winbind as the source of users and user groups:


Edit the passwd and group lines in the /etc/nsswitch.conf file as follows:

passwd: compat winbind


group: compat winbind

Also edit the "hosts" line of /etc/nsswitch.conf in the following way:

hosts: files dns mdns4_minimal [NOTFOUND=return] mdns4

Execute the following commands to ensure that Ubuntu gets the information about users and groups from
Winbind:

sudo getent passwd


sudo getent group
sudo getent passwd 'usernameAD'
sudo getent group 'groupnameAD'

8. Join your machine to the domain:

net ads join -U <user name> -D <DOMAIN NAME>

You should get output similar to this:

# net ads join -U username -D MYDOMAIN


Enter username's password:
Using short domain name -- MYDOMAIN
Joined 'ubuntu01' to dns domain 'mydomain.com'

Check the status:

net ads testjoin


You should get output similar to this:

#net ads testjoin


Join is OK

This means that your machine has been added to the domain.


To browse the domain resources, you can install smbclient:

sudo apt-get install smbclient

Then list the resources with the following command:

smbclient -k -L workstation

9. Configure Kerberos:
Edit the /etc/krb5.conf file by adding your domain name and your domain controller. Note that all names are case-
sensitive (a filled-in example for the sample domain is shown at the end of this step):

[libdefaults]
default_realm = <DOMAIN NAME>
[realms]
<DOMAIN NAME> = {
kdc = <full domain name>
kdc = <full domain name>
admin_server = <domain controller full domain name>
default_domain = <domain name>
}
[domain_realm]
.<domain name> = <DOMAIN NAME>
<domain name> = <DOMAIN NAME>

Check if you can authorize in the domain. Note that the domain name should be in upper case:

kinit <username>@<DOMAIN>

To ensure that you got a Kerberos ticket, use the following command:

klist

To remove all tickets:

kdestroy
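
For reference, here is a hypothetical /etc/krb5.conf filled in for the sample domain mydomain.com used earlier in
this section; dc1.mydomain.com is a made-up domain controller name.

[libdefaults]
default_realm = MYDOMAIN.COM
[realms]
MYDOMAIN.COM = {
kdc = dc1.mydomain.com
admin_server = dc1.mydomain.com
default_domain = mydomain.com
}
[domain_realm]
.mydomain.com = MYDOMAIN.COM
mydomain.com = MYDOMAIN.COM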

3.13 DataSunrise Removal


To uninstall DataSunrise, initiate the program removal by executing the following command via the Linux CLI:

sudo ./DataSunrise_Suite_X_XX.linux.64bit.run remove


3.13.1 Removing DataSunrise Using the Repository (Debian)
If you're going to remove DataSunrise from your Debian-powered machine, you can do it using the Debian
repository. Do the following:
1. To remove DataSunrise from your machine, execute the following command:

sudo apt remove datasunrise

2. To completely remove DataSunrise from your machine and delete its installation folder, execute the following
commands:

sudo apt purge datasunrise


sudo rm -rf /opt/datasunrise

Note: "purge" alone doesn't delete DataSunrise installation folder. This folder contains Dictionary and audit files
you might need. If you don't need the installation folder anymore, execute the -rm command as shown above.

3.13.2 Removing DataSunrise Using the RPM Repository (RHEL, CentOS 8)

If you're going to remove DataSunrise from your RHEL or CentOS 8 operating system, you can do it using the RPM
repository. Do the following:
Execute the following command:

sudo yum remove DataSunrise_Suite.x86_64



4 Multi-Server Configuration (High Availability Mode)

Along with running a single instance of DataSunrise, you can configure multiple servers to implement failover and
scalability. This feature enables you to run multiple DataSunrise instances on separate servers that share a common
configuration (Dictionary). If some of the servers go offline, the remaining servers keep working, guaranteeing
consistent traffic processing without an impact on system availability.
DataSunrise includes the Shared Sessions feature, which enables you to authenticate to the Web Console on one
of the DataSunrise servers while all other servers share the same session. Once a logout or session timeout occurs
on one instance, the session is closed on all other DataSunrise instances. This applies to the Web Console only and
does not apply to proxying of database traffic.
DataSunrise needs access to the Dictionary database to be able to load the software configuration. Thus,
DataSunrise cannot start without a Dictionary. If the Dictionary database becomes unavailable AFTER DataSunrise
has been started, DataSunrise will continue working because the configuration has already been loaded.
In HA mode, DataSunrise can continue working without a connection to a remote Dictionary database. Periodically
(by default, 5 minutes after the last change made to the Dictionary), DataSunrise creates a Dictionary backup and
stores it in the "AF_CONFIG/systemBackupDictionary" folder. Each DataSunrise server creates its own copy of the
shared Dictionary. While DataSunrise is working from a Dictionary backup, no further backups are made. The built-in
SQLite database is used to create Dictionary backups.
If the main Dictionary can't be accessed, DataSunrise's Backend and Core connect to a backup Dictionary and use
it. The Backend and Core access Dictionaries independently, so it's possible that they use the main and a reserve
Dictionary at the same time.

Note: we recommend using a PostgreSQL or Amazon RDS Aurora PostgreSQL database for the Dictionary and Audit
Storage. Remote configuration of DataSunrise is available only for PostgreSQL 8.2 or higher.

The tables below show the performance (transactions per second, TPS) of Amazon Aurora PostgreSQL used as an
Audit Storage. Note that these results were obtained while testing databases on Amazon RDS. Results for non-RDS
databases may differ.

Table: TPS of Amazon Aurora PostgreSQL used as an Audit Storage, by DataSunrise EC2 configuration

Audit Storage / DS EC2    1 x m5.4xlarge    4 x m5.4xlarge    8 x m5.4xlarge
r5.large                  TPS = 7949        TPS = 9863        TPS = 12753
r5.xlarge                 TPS = 10225       TPS = 18251       TPS = 18748
r5.2xlarge                TPS = 12258       TPS = 25011       TPS = 26262
r5.4xlarge                TPS = 13567       TPS = 32212       TPS = 41799
r5.8xlarge                TPS = 13712       TPS = 33559       TPS = 46355
Table: TPS of Amazon Aurora PostgreSQL used as an Audit Storage, DS EC2: 4 x m5.4xlarge

Audit Storage     DS EC2: 4 x m5.4xlarge
r5.4xlarge        TPS = 32212
r6g.4xlarge       TPS = 32409
r5.8xlarge        TPS = 33559
r6g.8xlarge       TPS = 35356
r5.12xlarge       TPS = 36571
r6g.12xlarge      TPS = 39448
r5.16xlarge       TPS = 38018
r6g.16xlarge      TPS = 41710

4.1 Preparing Databases to be Used as a Dictionary/Audit Storage

When deploying DataSunrise in multi-server configuration, a PostgreSQL, MySQL/MariaDB or MS SQL Server
database is used to store the common Dictionary and Audit data. First, you should use a database user with
sufficient privileges (admin for example) to create a database or schema to store that data and then create a user
that can be used for access to that data. In general, such a user should have read/write access to your "Dictionary"
schema or database.
Database type     Supported version
PostgreSQL        9.1-12.1+
MySQL             5.5.3+ for Audit Storage and 5.7.2+ for Dictionary
MS SQL Server     SQL Server 2016 (13.x+)
MariaDB           10.3-10.8+

4.1.1 Preparing a PostgreSQL Database


Note that remote configuration of DataSunrise is available only for PostgreSQL 9.1-12.1 or higher.
1. To prepare your PostgreSQL database to be used as a Dictionary/Audit Storage, create a new database user first.
This user will own the Dictionary/Audit Storage databases and will be used to connect to them (a consolidated
example with sample names follows the steps below):

CREATE USER <User_name> WITH PASSWORD '<Password>';

2. Create databases for the Audit Storage and the Dictionary.

CREATE DATABASE <Dictionary_DB_name> OWNER <User_name>;


CREATE DATABASE <Audit_Storage_DB_name> OWNER <User_name>;

3. Create schemas for the Audit Storage and the Dictionary

CREATE SCHEMA IF NOT EXISTS <Dictionary_schema_name> AUTHORIZATION <User_name>;


CREATE SCHEMA IF NOT EXISTS <Audit_Storage_schema_name> AUTHORIZATION <User_name>;
4. Grant the required privileges to the user:

GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA <Dictionary_schema_name> TO <User_name>;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA <Audit_Storage_schema_name> TO
<User_name>;
GRANT CREATE, USAGE ON SCHEMA <Dictionary_schema_name> TO <User_name>;
GRANT CREATE, USAGE ON SCHEMA <Audit_Storage_schema_name> TO <User_name>;
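
The following is a minimal consolidated sketch of the steps above using made-up names (dsuser, dsdictionary,
dsaudit, ds_dict, ds_audit) and a made-up password; it assumes psql is available and that you can connect with a
privileged account such as postgres.

# Hypothetical example; all names, the host and the password are placeholders
psql -h 127.0.0.1 -U postgres <<'SQL'
CREATE USER dsuser WITH PASSWORD 'StrongPassword1';
CREATE DATABASE dsdictionary OWNER dsuser;
CREATE DATABASE dsaudit OWNER dsuser;
SQL
# Schemas live inside a database, so connect to each database before creating its schema
psql -h 127.0.0.1 -U postgres -d dsdictionary <<'SQL'
CREATE SCHEMA IF NOT EXISTS ds_dict AUTHORIZATION dsuser;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA ds_dict TO dsuser;
GRANT CREATE, USAGE ON SCHEMA ds_dict TO dsuser;
SQL
psql -h 127.0.0.1 -U postgres -d dsaudit <<'SQL'
CREATE SCHEMA IF NOT EXISTS ds_audit AUTHORIZATION dsuser;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA ds_audit TO dsuser;
GRANT CREATE, USAGE ON SCHEMA ds_audit TO dsuser;
SQL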

4.1.1.1 Preparing an AWS RDS PostgreSQL Database


Note that remote configuration of DataSunrise is available only for PostgreSQL 9.1-12.1 or higher.
To prepare your AWS RDS PostgreSQL database to be used as a Dictionary/Audit Storage, create a new database
user first. This user will own the Dictionary/Audit Storage databases and will be used to connect to them. Execute
the following commands to create a user, database and schema and to grant the user the required privileges:

CREATE USER <User_name> WITH PASSWORD '<Password>';


CREATE ROLE <Role> WITH PASSWORD '<Password>';
GRANT <Role> TO <Superuser>;
GRANT <Role> TO <User_name>;
CREATE DATABASE <Dictionary_DB_name> owner=<Role>;
CREATE SCHEMA IF NOT EXISTS <Schema_name> AUTHORIZATION <Role>;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA <Schema> TO <Role>;
GRANT CREATE, USAGE ON SCHEMA <Schema> TO <Role>;
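
Below is a hypothetical sketch of the same commands with made-up names (dsuser, ds_owner, dsdictionary,
ds_dict); the host, master user and passwords are placeholders. Granting the owner role to the RDS master user
first corresponds to the GRANT <Role> TO <Superuser> step above.

# Hypothetical example; host, users, database names and passwords are placeholders
psql -h mydict.xxxxxxxx.us-east-1.rds.amazonaws.com -U rds_master -d postgres <<'SQL'
CREATE USER dsuser WITH PASSWORD 'StrongPassword1';
CREATE ROLE ds_owner WITH PASSWORD 'StrongPassword2';
GRANT ds_owner TO rds_master;
GRANT ds_owner TO dsuser;
CREATE DATABASE dsdictionary OWNER ds_owner;
SQL
psql -h mydict.xxxxxxxx.us-east-1.rds.amazonaws.com -U rds_master -d dsdictionary <<'SQL'
CREATE SCHEMA IF NOT EXISTS ds_dict AUTHORIZATION ds_owner;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA ds_dict TO ds_owner;
GRANT CREATE, USAGE ON SCHEMA ds_dict TO ds_owner;
SQL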

4.1.2 Preparing a MySQL Database


To use a MySQL database (5.5.3+ for Audit Storage and 5.7.2+ for Dictionary; lower versions are not supported) as
the Dictionary and Audit Storage, do the following BEFORE deploying DataSunrise in HA configuration (a
consolidated example follows the steps below):
1. Open the my.cnf file (MySQL configuration file) and add the following lines to the [mysqld] section:

log_bin_trust_function_creators = 1
local_infile = 1
innodb_buffer_pool_size = 2147483648

2. Create a new MySQL user by executing the following command:

CREATE USER <User name> IDENTIFIED BY '<Password>';

3. Create two databases: Audit Storage and Dictionary:

CREATE DATABASE <Audit Storage database name> character set utf8mb4 COLLATE utf8mb4_bin;
CREATE DATABASE <Dictionary database name> character set utf8mb4 COLLATE utf8mb4_bin;

4. Grant the required privileges to the user:

USE <Audit Storage database name>;


GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, CREATE TEMPORARY TABLES, CREATE VIEW, ALTER, DROP,
INDEX, REFERENCES, ALTER ROUTINE, CREATE ROUTINE ON <Audit Storage database name>.* TO <User name>;

USE <Dictionary database name>;


GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, CREATE TEMPORARY TABLES, ALTER, DROP, INDEX,
REFERENCES, ALTER ROUTINE, CREATE ROUTINE ON <Dictionary database name>.* TO <User name>;
grant SYSTEM_VARIABLES_ADMIN on *.* to <User name>;
grant SESSION_VARIABLES_ADMIN on *.* to <User name>;
grant all privileges on <Dictionary database name> . * to <User name> with grant option;
For MySQL 8.0.0+, additionally grant the following privileges:

GRANT SYSTEM_VARIABLES_ADMIN ON *.* TO '<user name>'@'%';


FLUSH PRIVILEGES;

5. Define the LD_LIBRARY_PATH environment variable:

export LD_LIBRARY_PATH=/opt/datasunrise
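
The following is a minimal consolidated sketch of steps 2-4 with made-up names (dsuser, dsaudit, dsdictionary) and
a made-up password; run it with an administrative MySQL account and adjust the host and credentials to your
environment.

# Hypothetical example; user, database names and passwords are placeholders
mysql -h 127.0.0.1 -u root -p <<'SQL'
CREATE USER 'dsuser'@'%' IDENTIFIED BY 'StrongPassword1';
CREATE DATABASE dsaudit CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;
CREATE DATABASE dsdictionary CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, CREATE TEMPORARY TABLES, CREATE VIEW, ALTER, DROP,
  INDEX, REFERENCES, ALTER ROUTINE, CREATE ROUTINE ON dsaudit.* TO 'dsuser'@'%';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, CREATE TEMPORARY TABLES, ALTER, DROP, INDEX,
  REFERENCES, ALTER ROUTINE, CREATE ROUTINE ON dsdictionary.* TO 'dsuser'@'%';
GRANT ALL PRIVILEGES ON dsdictionary.* TO 'dsuser'@'%' WITH GRANT OPTION;
-- On MySQL 8.0+, additionally: GRANT SYSTEM_VARIABLES_ADMIN, SESSION_VARIABLES_ADMIN ON *.* TO 'dsuser'@'%';
FLUSH PRIVILEGES;
SQL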

4.1.3 Preparing an MS SQL Server Database


To use an MS SQL database as the Dictionary and Audit Storage you need at least MS SQL Server 2016 (13.x+). Do
the following BEFORE deploying DataSunrise in HA configuration:
1.
Important: On Linux, if you're using ODBC Driver for SQL Server version 18+, add the MS SQL Server certificate
(or its CA certificate) to the trusted certificates. On Debian/Ubuntu, copy the certificate in PEM format with a .crt
extension to /usr/local/share/ca-certificates/ and then run the following command, which updates /etc/ssl/certs and
ca-certificates.crt:

sudo update-ca-certificates

2.
Note: you can download the script here: https://www.datasunrise.com/doc/mssql_dictionary_audit_storage.sql

Execute the following script (note the comments):

create database dsaudit


go
create database dsdictionary
go
-------------------------------------------- Audit database
use dsaudit
go
-- Creates the login dsuser with password <password>.
CREATE LOGIN dsuser
WITH PASSWORD = '<password>';
GO

-- Creates a database user for the login created above.


CREATE USER dsuser FOR LOGIN dsuser;
GO
EXEC sp_addrolemember N'db_owner', N'dsuser'
GO
-------------------------------------------- Dictionary database
use dsdictionary
go

-- Creates a database user for the login created above.


CREATE USER dsuser FOR LOGIN dsuser;
GO
EXEC sp_addrolemember N'db_owner', N'dsuser'
GO
--------------------------------------------
-- When you need to remove the user, drop it in both databases first and then drop the login:
use dsaudit
go
drop user dsuser
go
use dsdictionary
go
drop user dsuser
go
use master
go
drop login dsuser
go

4.1.4 Preparing a MariaDB Database


To use a MariaDB database as the Audit Storage and Dictionary, do the following BEFORE deploying DataSunrise in
HA configuration:
1. Open the my.cnf file (MariaDB configuration file) and add the following lines to the [mysqld] section:

log_bin_trust_function_creators = 1
local_infile = 1

2. Create a new MariaDB user by executing the following command:

CREATE USER <User_name> IDENTIFIED BY '<Password>';

3. Create two databases: an Audit Storage and a Dictionary:

CREATE DATABASE <Audit_Storage_database_name> character set utf8mb4 COLLATE utf8mb4_bin;


CREATE DATABASE <Dictionary_database_name> character set utf8mb4 COLLATE utf8mb4_bin;

4. Grant the required privileges to the user:

USE <Audit_Storage_database_name>;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, CREATE TEMPORARY TABLES, CREATE VIEW, ALTER, DROP,
INDEX, REFERENCES, ALTER ROUTINE, CREATE ROUTINE ON <Audit_Storage_database_name>.* TO <User_name>;

USE <Dictionary_database_name> ;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, CREATE TEMPORARY TABLES, ALTER, DROP, INDEX,
REFERENCES, TRIGGER, ALTER ROUTINE, CREATE ROUTINE ON <Dictionary_database_name>.* TO <User_name> ;
grant LOCK TABLES on <Dictionary_database_name>.* to <User_name>;
FLUSH PRIVILEGES;

4.2 Adding a DataSunrise Server to HA Setup

A DataSunrise server can be configured during the DataSunrise installation.
1. Start the program installation with the --remote-config parameter.

sudo ./DataSunrise<version>.run install --remote-config

2. At the end of the installation process, specify the database to store the DataSunrise configuration (the Dictionary).
All servers configured to use this database share a common configuration (including the common credentials to
access the Web Console).

Figure 9: Dictionary details

Line              Description
Database type     Type of database to store DataSunrise configuration in (MySQL, PostgreSQL, Vertica, MS SQL, MariaDB)
Host              IP address or name of the host the Dictionary database is installed on
Port              Port number of the Dictionary database
Database Name     Name of the Dictionary database
Login             User name to connect to the database
Password          Password to connect to the database

3. Specify the details of the current DataSunrise server.

Figure 10: Server details

Line              Description
Server Name       Logical name of the DataSunrise instance
Server Hostname   IP address or name of the host the current DataSunrise instance is installed on
Server Port       Port number of the instance's Web Console (11000 by default)

4. Specify the database to store audit data (Audit Storage) similarly to the Dictionary database.
5. After configuring is complete, you can see all the available DataSunrise servers (the instances installed on the
separate servers and sharing common configuration) in the System Settings → Servers subsection.
6. You can also use the following parameters along with the --remote-config parameter:

Parameter                Description
--dictionary-type        Dictionary database type: <mysql | postgresql | vertica | mssql | mariadb>
--dictionary-host        Dictionary database hostname or IP address
--dictionary-port        Dictionary database port number
--dictionary-database    Dictionary database name
--dictionary-login       Database user name to use for connection to the Dictionary
--dictionary-password    User password to use for connection to the Dictionary
--audit-type             Audit Storage database type: <mysql | postgresql | redshift | aurora | mssql | vertica>
--audit-host             Audit Storage database hostname or IP address
--audit-port             Audit Storage port number
--audit-database         Audit Storage database name
--audit-login            Database user name to use for connection to the Audit Storage
--audit-password         Password to use for connection to the Audit Storage
--server-name            DataSunrise server logical name
--server-host            DataSunrise server hostname or IP address
--server-port            DataSunrise server port number

Example:

./DataSunrise_Suite_5_6_0_15792.linux.64bit.run install
--remote-config -v
--dictionary-type postgresql
--dictionary-host 172.16.101.30
--dictionary-port 5432
--dictionary-database datasunrise_db
--dictionary-login ds_admin
--dictionary-password Password1
--audit-type postgresql
--audit-host 172.16.101.30
--audit-port 5432
--audit-database datasunrise_db
--audit-login ds_admin
--audit-password Password1
--server-name datasunrise1
--server-host 172.16.101.247
--server-port 11000

4.3 Reviewing Servers of an Existing HA Configuration

All existing DataSunrise servers are equal, so you can access any server's settings from the Web Console of any
DataSunrise instance:
1. Go to System Settings → Servers. Select the required server in the list and click its name to access the
server's settings.
2. Reconfigure the server, if necessary:

Figure 11: DataSunrise Server settings



Interface element                    Description

Main Settings
Logical Name                         Logical name of the DataSunrise server (instance)
Host                                 IP address of the server the instance is installed on
Backend Port                         DataSunrise Backend port number (used to access the Web Console)
Core Port to Start Numbering from    DataSunrise Core port number
Use HTTPS for Backend Process        Use HTTPS protocol to access the Web Console
Use HTTPS for Core Processes         Use HTTPS protocol to access the Core
Core and Backend Process Manager
Actions → Restart Core               Restart the Core process
Actions → Start Core                 Start the Core process (if stopped)
Actions → Stop Core                  Stop a running Core process
Actions → Restart Backend            Restart a running Backend process
Server Info
License Type                         Type of license activated
License Expiration Date              License expiration date
Version                              Program version
Backend Up Time                      The Backend run duration
Server Time                          Current server time
OS Type                              Type of the server's operating system
OS Version                           Version of the server's operating system
Machine                              Server's hardware info
Node Name                            Server's name (PC name)
Encoding                             Encoding used on the server
Server                               DataSunrise server's logical name

4.4 Restoring the Configuration if your local_settings.db is Lost

When using an HA configuration, if one of the DataSunrise servers needs to be transferred to another machine or
the local_settings.db file is lost, DataSunrise's configuration is reset to the default one. To avoid this, do the
following:
1. Navigate to System Settings → Servers. Select the server you want to restore access to. Note the digit at
the end of the URL in the web browser's address bar (see the example below). This number is the
<SERVER_ID> parameter's value you will be using at a later step; for example, "1" means server number 1.
For example:

https://localhost:11000/v2/settings/servers/server/edit/1
2. Execute the following command:

sudo su datasunrise -s /bin/bash

3. Navigate to the DataSunrise's installation folder:

cd /opt/datasunrise/

4. Define the LD_LIBRARY_PATH environment variable:

export LD_LIBRARY_PATH=/opt/datasunrise

5. Run the following script using the details of your database to specify the location of the database where the
DataSunrise Dictionary is stored (a filled-in example is shown after the notes below):

./AppBackendService
DICTIONARY_TYPE=<type of the Dictionary database. For example "postgresql">
DICTIONARY_HOST=<Dictionary IP address or host name. For example "dstestpg.cpauhz8lxyzp.us-
east-1.rds.amazonaws.com">
DICTIONARY_PORT=<Dictionary port number. For example 5432>
DICTIONARY_DB_NAME=<Dictionary database name. For example "dict_db">
DICTIONARY_LOGIN=<Dictionary database user name. For example "dsuser">
DICTIONARY_PASS=<Dictionary database password>
RESTORE_LOCAL_SETTINGS=<SERVER_ID>

Note: replace <SERVER_ID> with the actual ID of the server you want to restore access to (see step 1). Note that
all the parameter values should be written without quotes.

Important: if you changed your Dictionary location (IP address or host name), you can use the command from
step 5 to create a new local_settings.db file, which is required to work with the new Dictionary.
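
For example, to restore the local settings of the server with ID 1, the command might look like the following
minimal sketch; the connection details (host, database name, user and password) are placeholders for your own
Dictionary values.

cd /opt/datasunrise
export LD_LIBRARY_PATH=/opt/datasunrise
./AppBackendService \
  DICTIONARY_TYPE=postgresql \
  DICTIONARY_HOST=dstestpg.cpauhz8lxyzp.us-east-1.rds.amazonaws.com \
  DICTIONARY_PORT=5432 \
  DICTIONARY_DB_NAME=dict_db \
  DICTIONARY_LOGIN=dsuser \
  DICTIONARY_PASS=StrongPassword1 \
  RESTORE_LOCAL_SETTINGS=1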

4.5 Configuring Existing DataSunrise Installation into HA Configuration

You can configure an existing SQLite-based DataSunrise installation to be deployed in High Availability mode using
the dedicated script.
The script cleans the existing DataSunrise local_settings and replaces the existing Dictionary, server and Audit
Storage settings with the corresponding settings required for the HA configuration. If you need to create multiple
DataSunrise servers with a common configuration, run the script on each server with DataSunrise installed. Note
that you should use the same Dictionary and Audit Storage but different server names and server IP addresses.
To transform your existing standalone configuration into High Availability configuration, do the following:
1. Install DataSunrise either with an installer file or from the repository.
2. Locate the following files in /opt/datasunrise/scripts/setup_ha: ds_remote_config_setting_up.sh script along with
the config.conf file
3. Put the script file and the config.conf file in one directory
4. Fill out the config.conf file according to the table below (a hypothetical filled-in example is shown after this list):

Parameter Description
DICTIONARY_TYPE Common Dictionary database type: mysql / postgresql
DICTIONARY_HOST The endpoint of your Dictionary database
DICTIONARY_PORT Dictionary database port number
DICTIONARY_DB_NAME Dictionary database name
DICTIONARY_SCHEMA Dictionary database schema
DICTIONARY_LOGIN Dictionary database username that should be used for connection to the
Dictionary
DICTIONARY_PASS Dictionary database password that should be used for connection

FIREWALL_SERVER_NAME DataSunrise server name (any)


FIREWALL_SERVER_HOST IP address of your DataSunrise server
FIREWALL_SERVER_CORE_PORT The Core's port number: 11001
FIREWALL_SERVER_BACKEND_PORT The Backend's port number: 11000
FIREWALL_SERVER_BACKEND_HTTPS Use HTTPS for the Backend connection: 1
FIREWALL_SERVER_CORE_HTTPS Use HTTPS for the Core connection: 1

AuditDatabaseType Audit Storage database type. Specify the corresponding value:


• SQLite: 0
• PostgreSQL: 1
• MySQL: 2
• Redshift: 3
• Aurora: 4
• Vertica: 5
• MS SQL Server: 6

AuditDatabaseHost Audit Storage server's hostname or IP address


AuditDatabasePort Audit Storage database's port number
AuditDatabaseName Audit Storage database's name
AuditDatabaseSchema Audit Storage database's schema
AuditLogin Audit Storage database's login (username)
AuditPassword Audit Storage database's password

SET_ADMIN_PASSWORD Password used to log in to the DataSunrise's Web Console

5. Use the Linux Terminal to execute the following commands:

sudo chmod +x ds_remote_config_setting_up.sh


sudo ./ds_remote_config_setting_up.sh

6. If necessary, replace DataSunrise's SSL certificate with a new one.
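
For reference, here is a hypothetical filled-in config.conf for step 4. It assumes the file uses plain KEY=value lines
and a PostgreSQL Dictionary/Audit Storage; all host names, credentials and the admin password are placeholders.
Check the config.conf template shipped in /opt/datasunrise/scripts/setup_ha for the exact format expected by the
script.

DICTIONARY_TYPE=postgresql
DICTIONARY_HOST=dict-db.example.internal
DICTIONARY_PORT=5432
DICTIONARY_DB_NAME=dict_db
DICTIONARY_SCHEMA=public
DICTIONARY_LOGIN=dsuser
DICTIONARY_PASS=StrongPassword1
FIREWALL_SERVER_NAME=datasunrise1
FIREWALL_SERVER_HOST=172.16.101.247
FIREWALL_SERVER_CORE_PORT=11001
FIREWALL_SERVER_BACKEND_PORT=11000
FIREWALL_SERVER_BACKEND_HTTPS=1
FIREWALL_SERVER_CORE_HTTPS=1
AuditDatabaseType=1
AuditDatabaseHost=audit-db.example.internal
AuditDatabasePort=5432
AuditDatabaseName=audit_db
AuditDatabaseSchema=public
AuditLogin=dsuser
AuditPassword=StrongPassword1
SET_ADMIN_PASSWORD=AdminPassword1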



5 Always On

5.1 Working with Always On Availability Group of SQL Server

This subsection describes the basic principles of working with SQL Server's Always On availability groups:
1. A client connects and authorizes in a SQL Server database through the DataSunrise proxy.
2. SQL Server sends the client a command to reconnect to a secondary node.
3. DataSunrise intercepts the packet.
• If there is a proxy associated with the secondary node, the connection address is substituted with the address
of the required proxy.
• If there is no proxy associated with the secondary node, a new proxy is created and the client receives the new
proxy address.
• The modified packet is sent to the client.
4. Having received the reconnection command, the client connects to the required proxy.

5.2 Configuring the Firewall Inside the Azure Cloud for Maintenance of the SaaS SQL Azure

Azure SQL and an Always On cluster use the same mechanism for client redirection. When a client connects to the
SaaS SQL Azure from inside the Azure subnet, SQL Azure can redirect the client to service (dynamic) servers to
balance the network load. In that case, right after authorization the server returns to the client the address and
port number of the service the client is to reconnect to.
To be able to control such reconnections, at the moment the server sends the reconnection request, DataSunrise
replaces the address of the service server with the address of the proxy that services it.
As a result, the client reconnects to the DataSunrise proxy and not to the Azure service server, and for each
unique reconnection an entry like this is created in the Event Monitor:

Rewrite route: cd164f04fd1f.tr27.westus1-a.worker.database.windows.net:11082 -> 10.1.0.6:14033

where cd164f04fd1f.tr27.westus1-a.worker.database.windows.net:11082 is the service server's address and


10.1.0.6:14033 is the address of the DataSunrise proxy that maintains this service server.
If DataSunrise is not able to find a proxy in the current instance, two scenarios are possible:
• If the MsSqlRedirectsDisable option is disabled (the default), a proxy (and, if required, an interface) will be
created automatically in the current instance.
• Otherwise, an entry will be added to the Event Monitor:

Redirect: cd164f04fd1f.tr27.westus1-a.worker.database.windows.net:11082
For the SaaS SQL Azure this means that you should add a proxy on this host manually (unless it has been done
already) so that client connections remain controlled by DataSunrise. Otherwise, DataSunrise will lose control over
the client connection. For a cluster with Always On enabled, it is possible to configure redirection to read-only
replicas; that's why, if the redirecting host is already configured in DataSunrise, this host appears in the redirection
notification. In both cases, the redirection notification can be used for diagnostics when administering DataSunrise
with Always On.
To add a proxy to an instance, perform the following:
• Add an interface of the target server (in our case, cd164f04fd1f.tr27.westus1-
a.worker.database.windows.net:11082);
• Add a proxy on this interface. When a wildcard host name (0.0.0.0 or 0:0:0:0:0:0:0:0) is used for such
a proxy, DataSunrise returns to the client the address of an available non-local interface of the DataSunrise
host as the redirect address.

5.3 Configuring a Remote Dictionary on AWS Using the AWS Secrets Manager

When using an HA configuration, you can use a Dictionary located on AWS RDS and store your Dictionary
password in the AWS Secrets Manager:
1. Install DataSunrise on an Amazon EC2 virtual machine.
2. Configure the Secrets Manager to store the password to the Dictionary.
3. Define the LD_LIBRARY_PATH environment variable:

export LD_LIBRARY_PATH=/opt/datasunrise

4. Execute the following script using the details of your database to specify the location of the database where
DataSunrise Dictionary is stored:

./AppBackendService
DICTIONARY_TYPE=<Dictionary database type>
DICTIONARY_HOST=<Dictionary hostname (AWS hostname)>
DICTIONARY_PORT=<Dictionary database port number>
DICTIONARY_DB_NAME=<Dictionary database name>
DICTIONARY_LOGIN=<Dictionary login>
DICTIONARY_AWS_SECRET=<AWS Secret>
FIREWALL_SERVER_HOST=<DataSunrise's hostname>
FIREWALL_SERVER_BACKEND_PORT=<DataSunrise's Backend port number (11000 by default)>
FIREWALL_SERVER_CORE_PORT=<DataSunrise's Core port number (11001 by default)>
FIREWALL_SERVER_NAME=<DataSunrise's server name>
FIREWALL_SERVER_BACKEND_HTTPS=1 FIREWALL_SERVER_CORE_HTTPS=1
6 Shared Configuration (SC Mode)


When deployed in this configuration, DataSunrise instances installed on independent servers protect independent
databases installed on the same servers. Although these DataSunrise instances are installed on separate servers,
they share the same configuration stored in a separate database, so you can control all DataSunrise instances
included in a Shared Configuration from the Web Console of any of the associated DataSunrise servers. This
configuration shares many similarities with High Availability (HA) mode, but in SC mode no load balancers are used
and the DataSunrise instances don't protect the same database.

Figure 12: Shared Configuration deployment scheme

6.1 Deploying DataSunrise in the Shared Configuration

To deploy DataSunrise in the Shared Configuration, you need to do the following.
1. Prepare a database to be used as the Dictionary and Audit Storage (refer to Preparing Databases to be Used as a
Dictionary/Audit Storage on page 40)
2. Install your DataSunrise instances on the corresponding servers (refer to Adding a DataSunrise Server to HA Setup
on page 43)
3. Use the Web Console to create Database Instances for each database you're going to protect. You can see
existing DataSunrise servers in the System Settings → Servers subsection.

7 Frequently Asked Questions


This section describes the most common issues DataSunrise users face.
Q. I’m trying to add a new Oracle database via the Configuration menu, but the connection is failing because
of the “Couldn’t load oci.dll” error.
Probably you installed the 32-bit version of Oracle Database Instant Client or did not set system variables correctly.
You need to install the 64-bit version of Oracle Database Instant Client and add its home directory path to the
%ORACLE_HOME% system variable. Then you need to add the same directory path to the %PATH% system
variable.

Q. I’m trying to run PostgreSQL, but the database connection fails: “[unixODBC] Missing server name, port,
or database name in call to CC_connect.” (error code 201)
Check the ODBC driver availability by executing the following command:

odbcinst -q -d

Locate the odbc.ini file and configure it in the following way:

[postgres_i]
Description = Postgres Database
Driver = PostgreSQL
Database = postgres
Servername = 127.0.0.1
Port = 5432

Check the PostgreSQL connection by executing the following command:

isql postgres_i username password

Q. I’m trying to run DataSunrise but getting the error message: “Data source name not found and no default
driver specified”.
Basically, the data source you are attempting to connect to does not exist on your machine. On Linux and UNIX,
SYSTEM data sources are typically defined in /etc/odbc.ini. USER data sources are defined in ~/.odbc.ini.
You should grant read access to the .ini file that contains the data source. You may need to set the ODBCSYSINI,
ODBCINSTINI or ODBCINI environment variables to pinpoint the odbc.ini and odbcinst.ini files location if it hasn’t
been done before.

Q. I can't create a new Oracle instance on Ubuntu.


Most likely Oracle can’t find the missing libaio.so.1 file. Run the following command to install it on Ubuntu:

sudo apt-get install libaio1

Q. I'm trying to enter the web interface after the program update, but it displays the "Internal System Error"
message.
Most likely, you kept the web interface tab open in your browser while updating the firewall. Log out of the web
interface if necessary and press Ctrl + F5 to refresh the page.

Q. I’m trying to establish a connection between DataSunrise and a MySQL database, but it fails because of
the missing ODBC MySQL driver.
Certain Linux-type operating systems don’t add the MySQL driver parameters to the odbcinst.ini file. You should do
it manually.
If necessary, install the MySQL ODBC driver by running the following commands:
• For Debian and Ubuntu:

sudo apt-get install libmyodbc libodbc1

• For CentOS, Red-Hat and Fedora:

sudo yum install mysql-connector-odbc

Edit the odbcinst.ini file. Run the following command:

sudo nano /etc/odbcinst.ini

Paste the following code into odbcinst.ini and save the file:

[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libodbcmyS.so
FileUsage = 1

Update the configuration files that control ODBC access to the database servers by running the following command:

sudo odbcinst -i -d -f /etc/odbcinst.ini

Q. I’m getting the “Could not find libodbc.so.2 (unixODBC is required)” error while trying to install
DataSunrise on Ubuntu 14.04. UnixODBC is installed.
Continue the program installation.
Check if the libcrypto.so.10 and libssl.so.10 files are available in the program installation folder (/opt/datasunrise/ by
default) by executing the following command:

ll /opt/datasunrise/

Find the libodbc.so.1 file location by running the following command:

locate libodbc.so.1

If libcrypto.so.10 and libssl.so.10 are available in the DataSunrise installation folder, execute the following command:

ln /usr/lib/x86_64-linux-gnu/libodbc.so.1 /opt/datasunrise/libodbc.so.2

Note: In this case, /usr/lib/x86_64-linux-gnu/ is the Linux system folder where libodbc.so.1 is located and /opt/
datasunrise/ is the DataSunrise installation folder.

If the libcrypto.so.10 and libssl.so.10 files are not available in the DataSunrise installation folder, execute the
following command:

sudo ln -s /usr/lib/x86_64-linux-gnu/libodbc.so.1 /usr/lib/x86_64-linux-gnu/libodbc.so.2



Note: In this case, /usr/lib/x86_64-linux-gnu/ is the Linux system folder where libodbc.so.1 and libodbc.so.2 are
located.

Q. I’m getting the “Could not find ‘setcap’” error while trying to install DataSunrise on OpenSUSE 42.1.
Install libcap-progs. To do this, execute the following command:

sudo zypper install libcap-progs

Q. When I’m trying to run DataSunrise in the sniffer mode, it displays the message: “Impossible to parse the
SSL connection in the sniffer mode”.
To run the firewall in the sniffer mode, you should disable SSL support in your client application settings (SSL Mode
→ Disable). You can also switch the application’s SSL Mode to “Allow” or “Prefer”, but disable SSL support in the
database server settings first.

Q. I can't update my DataSunrise. I run the latest version of the DataSunrise installer, but the installation
wizard is not able to locate the old DataSunrise installation folder.
Run the DataSunrise installer in the Repair mode. It removes the previous installation and updates your DataSunrise
to the latest version.

Q. When connecting to Aurora DB or MySQL, the ODBC driver stops responding.
Most probably, you're using ODBC driver version 5.3.6, which is known to cause freezes from time to time. Install
MySQL ODBC driver version 5.3.4.

Q. A DataSunrise installation is aborted with the Permission denied error:

[test@HVLB001 ~]$ ./DataSunrise_Database_Security_Suite_XXX.linux.64bit.run


Verifying archive integrity... 100% All good.
Uncompressing SFX installer 100%
./DataSunrise_Database_Security_Suite_XXX.linux.64bit.run: line 495: ./install.sh: Permission denied

This error occurs because on certain OSs the installer cannot unpack the installation archive into the temp folder.
Execute the following command:

sudo ./DataSunrise_Database_Security_Suite_XXX.linux.64bit.run --target ./temp install

This command creates a temporary folder in the current folder and unpacks the archive into it. After the installation
is finished, delete this temp folder manually.

Q. I forgot the password to the Web Console.


You can set a new administrator password. Use the Terminal to run DataSunrise's AppBackendService
file with the set_admin_password parameter, for example: sudo ./AppBackendService
set_admin_password=new_password. To apply the new password, restart the DATASUNRISE system service. Execute
the following command before running the AppBackendService: export LD_LIBRARY_PATH=/opt/datasunrise.

Q. I'm using a MySQL database installed on the same PC DataSunrise is installed on. I'm querying the
database, but data audit doesn't capture the events.
Most probably, DataSunrise proxy is not intercepting the traffic. It can occur if you've configured your database
connection (in your DB profile) to access the database at "localhost" instead of "127.0.0.1". In this case, MySQL
can use the UNIX socket for connection instead of TCP. Specify the full IP address of the database and ensure that
your client application uses the TCP connection. Refer to the following pages, if necessary:
http://serverfault.com/questions/337818/how-to-force-mysql-to-connect-by-tcp-instead-of-a-unix-socket
http://www.computerhope.com/issues/ch001079.htm

Q. I'm using an MS SQL Server database. I'm creating a target database profile, but can't properly configure
the database connection.
In the DB connection details, specify the credentials (the Default login and Password fields) used for SQL Server
authentication and not for Windows authentication. To specify the database server's host (Host field), use the actual
DB server's IP address or host name instead of server's SPN.

Q. I'm updating DataSunrise via the Web Console and getting the following error:

(17279|140037881128832) PCAP - pcapOpen: [ERROR] Can not activate pcap object. Pcap error: socket:
Operation not permitted

Also, I'm having a problem with the sniffer:

(17279|140037881128832) DS_31012E: The task of the sniffer cannot be initialized.

These problems occur because DataSunrise cannot install the required "file capabilities" (cap_net_raw and
cap_net_admin=eip to be exact). To enable these capabilities, execute the following command:

sudo setcap 'cap_net_raw,cap_net_admin=eip' /opt/datasunrise/AppFirewallCore

Q. I'm getting the following warning: "The free disk space limit for audit has been reached. The current free
disc space is XXX MB. The disk space limit is 10240 MB".
If you want to decrease the disk space threshold for this warning, navigate to System Settings → Additional and
change the "LogsDiscFreeSpaceLimit" parameter's value from 10240 to 1024 Mb for example.

Q. I'm getting the following notification: "Reached the limit on delayed packets".
This notification is displayed when the sniffer has captured a large amount of traffic in the SSL sessions started
before the DataSunrise service was started. By default, the volume of captured traffic should not exceed 10 Mb (the
pnMsSqlDelayedPacketsLimit parameter).
Sometimes this notification can be displayed if there is a huge load on the pcap driver, so the sniffer captures
too much delayed traffic. In this case, you need to increase the pnMsSqlDelayedPacketsLimit parameter's value.

Q. I've updated DataSunrise and I get the following error: "PROCEDURE dsproc_<version>.initProcedures
does not exist"
DataSunrise now uses a new method of getting metadata. Repeat the steps described in subsection 4.5.4.2.

Q. Can I use my target database as DataSunrise Dictionary and Audit Storage?


This is NOT recommended, as it would consume the resources needed by your applications. Moreover, a
resource-consuming logging policy (for example, Rules with an empty Object Group or Query Type) can create a lot
of "noise", because DataSunrise itself queries the database to store and retrieve data and metadata.
It's recommended to follow a duty-separation principle and host the Audit Storage/Dictionary on a separate
database server. For busy target databases, the best option is to keep the Audit Storage and Dictionary separate, so
that if one of them goes down, the DataSunrise server can withstand the outage even better, in addition to the
built-in resilience measures for both Audit and Dictionary database outages.
