DataSunrise Database Security Administration Guide (Linux)
All brand names and product names mentioned in this document are trademarks, registered trademarks or service
marks of their respective owners.
No part of this document may be copied, reproduced or transmitted in any form or by any means, electronic,
mechanical, photocopying, recording, or otherwise, except as expressly allowed by law or permitted in writing by the
copyright holder.
The information in this document is subject to change without notice and is not warranted to be error-free. If you
find any errors, please report them to us in writing.
1 General Information
In this configuration, DataSunrise can be used only for "passive security" ("active security" features such as the database firewall or masking are not supported in this mode). When deployed in Sniffer mode, DataSunrise can perform database activity monitoring only, because it cannot modify database traffic in this configuration. Running DataSunrise in Sniffer mode does not require any additional reconfiguration of databases or client applications. Sniffer mode can be used for data auditing purposes or for running DataSunrise in Learning mode.
Important: database traffic should not be encrypted. Check your database settings, as some databases encrypt traffic by default. If you're operating an SQL Server database, do not use ephemeral ciphers. DataSunrise deployed in Sniffer mode does not support connections redirected to a random port (as Oracle does). All network interfaces (the main one and the one the database is redirected to) should be added to DataSunrise's configuration.
Proxy mode is for "active protection". DataSunrise intercepts SQL queries sent to a protected database by database users, checks whether they comply with existing security policies, and audits, blocks or modifies the incoming queries or query results if necessary. When running in Proxy mode, DataSunrise supports its full functionality: database activity monitoring, database firewall, and both dynamic and static data masking are available.
Important: We recommend using DataSunrise in Proxy mode. It provides full protection, and in this mode DataSunrise supports processing of encrypted traffic and redirected connections (which is essential for Hana, Oracle, Vertica, and MS SQL). For example, in SQL Server, redirects can occur when working with Azure SQL or an AlwaysOn Listener.
The target database performs auditing using its integrated auditing mechanisms and saves the auditing results in a dedicated database table or in either a CSV or XML file, depending on the selected configuration. DataSunrise then establishes a connection with the database, downloads the audit data and passes it to the Audit Storage for further analysis.
First and foremost, this configuration is intended for Amazon RDS databases because DataSunrise doesn't support sniffing on RDS.
This operation mode has two main drawbacks:
• If the database admin has access to the database logs, they can delete them
• Native auditing has a negative impact on database performance.
Note: Dynamic SQL processing is available for PostgreSQL, MySQL and MS SQL Server.
EXECUTE enables you to execute a query that is contained in a string or variable or is the result of an expression. For example:
...
EXECUTE "select * from users";
EXECUTE "select * from ” || table_name || where_part;
EXECUTE foo();
...
Here table_name and where_part are variables, and foo() is a function that returns a string. The second and third queries are dynamic ones because we can't tell in advance what query will be executed in the database.
Let's take a look at the following example:
SELECT run_query();
This function takes a random query from the queries table, executes it and returns a result. DataSunrise can't know which query will be executed beforehand because the exact query becomes known only when the following subquery is executed:
...
SELECT * FROM queries AS r(id, sql) ORDER BY id DESC LIMIT 1 INTO row;
...
That's why DataSunrise wraps dynamic SQL in a special function, DS_HANDLE_SQL, that does the trick. As a result, the original query is modified as follows:
SELECT run_query();
SELECT DSDSNRBYLCBODMJOVNJLFJFH();
Inside the DS_HANDLE_SQL function, the database sends the dynamic SQL to DataSunrise's handler. The handler processes the query and audits, masks or blocks it accordingly. Thus,
...
EXECUTE row.sql into RESULT;
...
executes not the original query contained in the queries table but a modified one.
To enable dynamic SQL processing, enable the "Dynamic SQL processing" option in the Advanced settings when creating a database Instance. Then select the host and port of the dynamic SQL handler. The host is the address of the machine DataSunrise is installed on; it should be reachable from your database because the database connects to this host when processing dynamic SQL.
Important: you must provide an external IP address of the SQL handler machine ("127.0.0.1" or "localhost" will not work).
For processing of dynamic SQL inside functions, enable the "UseMetadataFunctionDDL" parameter in the Additional Parameters and check "Mask Queries Included in Procedures and Functions" for masking Rules or "Process Queries to Tables and Functions through Function Call" for audit and security Rules respectively.
You can also enable dynamic SQL processing in an existing Instance's settings and specify the host and port in the proxy's settings.
Note that you need to configure a handler for each proxy and select a free port number.
PostgreSQL
In PostgreSQL, dblink is used for processing of dynamic SQL. It enables sending any SQL queries to another remote PostgreSQL database.
Thus, the dynamic SQL handler uses a PostgreSQL emulator. The user database, with the help of dblink, sends the dynamic SQL to our handler. The emulator receives the new connection, performs handshakes and makes the client database believe that it sends queries to a real database. Since it's necessary to pass the session ID and operation ID (to associate a query sent to the emulator with the original query), all these parameters are transferred using dblink's connection string:
host=<handler_host> port=<handler_port>
dbname=<session id> user=<operation id> password=<connection id>
MySQL
In MySQL, the FEDERATED storage engine extension is used for dynamic SQL processing. It also connects two remote databases, but in the MySQL case it is essentially an extended table that is created in one database while the data is stored in another. To create such a table, it's necessary to provide a connection string to the MySQL database.
During execution of the first dynamic SQL query, the HANDLE_SQL function and such an extended table are created in the DSDS***_ENVIRONMENT schema. This table's connection string points at the MySQL emulator. The table includes the following columns: query, connection_id, session_id, operation_id and action.
First, the function INSERTs all the required parameters. The emulator processes the query, modifies it and changes the action to block if necessary. After that, the function SELECTs the resulting query and returns it.
In MySQL, the following pair of statements is used to create and execute dynamic queries: prepare stmt from @var and execute stmt. Since execution of the latter means that a prepared statement already exists in the database, we modify prepare. As a result, a complete query:
<stmt_name> in this case is the name of the statement of the user query. A separate procedure is created for every name and for every addressing. Information about these procedures is stored in PreparedStatementManager.
@ds_sql_<stmt_name> is an output parameter of HANDLE_SQL into which the function puts the modified query.
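For reference, the plain (unmodified) form of this pair of statements in MySQL looks like this (a generic example, not DataSunrise-specific):
SET @var = 'SELECT * FROM users WHERE id = 1';
PREPARE stmt FROM @var;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;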
Important: for dynamic SQL processing in MySQL, the FEDERATED engine should be enabled. To enable it, add the federated string to the [mysqld] section of the /etc/my.cnf file. Another method: connect to your MySQL/MariaDB with admin privileges, ensure that the FEDERATED engine is off and enable it with the following query:
show engines;
install plugin federated soname 'ha_federated.so';
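For the first method (editing /etc/my.cnf), the entry described above looks like this:
[mysqld]
federated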
Note: the more proxies you open, the higher the RAM consumption you will experience.
Software requirements:
• Operating system: 64-bit Linux (Red Hat Enterprise Linux 7+, Debian 10+, Ubuntu 18.04 LTS+, Amazon Linux 2)
• 64-bit Windows (Windows Server 2019+) with .NET Framework 3.5 installed: https://www.microsoft.com/en-us/download/details.aspx?id=21
• Linux-compatible file system (NFS and SMB file systems are not supported)
• Web browser for accessing the Web Console
2 Deployment Topologies
DataSunrise can be installed either on a database server or on a separate server. In both cases, the software can be
used both in the Sniffer mode and the Proxy mode.
Tip: You can use the installation method described above during firewall testing, but some DB clients will still retain
direct access to the DB. Use a system firewall (Windows Firewall or Iptables for Linux for example) to block direct
access to the DB.
Important: Many operating systems reserve port numbers less than 1024 for privileged system processes. That’s
why it’s preferable to use port numbers higher than 1024 to establish a proxy connection.
b) Reconfiguring client applications
• Make sure that DataSunrise uses the same port number as the database
• Configure all client applications to connect to DataSunrise, not to the database
Important: Many operating systems reserve port numbers less than 1024 for privileged system processes, so it’s
preferable to use port numbers higher than 1024.
To deploy DataSunrise in the Sniffer mode, configure your network switch for transferring mirrored traffic to
DataSunrise (refer to your network switch's user guide for the description of port mirroring procedure).
3 DataSunrise Installation
Note: Before you begin the DataSunrise installation process, please select an appropriate deployment option (subsections Installing DataSunrise on a Database Server on page 17 and Installing DataSunrise on a Separate Server on page 18) and perform all required preparations. Also make sure that the machine you want to install DataSunrise on meets the system requirements.
3.1 Prerequisites
General
First and foremost, the Linux version of DataSunrise requires UnixODBC for its operation. For details, refer to http://www.unixodbc.org/. Install UnixODBC from your distribution's repository.
On Ubuntu 20+, you also need to install the following libraries: libbsd-dev, libgssapi-krb5-2, libldap-2.4-2, liblzo2-dev.
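For example, on Debian-based systems both can typically be installed with apt (package names are taken from the list above and may differ between releases):
sudo apt-get update
sudo apt-get install unixodbc libbsd-dev libgssapi-krb5-2 libldap-2.4-2 liblzo2-dev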
Install Java SE Runtime Environment 8 to use Data Discovery and Unstructured Masking and to be able to generate PDFs with the Report Generator: http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html. For NLP Data Masking, you need to configure a Java Virtual Machine (JVM) for Linux (you can find detailed information in the User Guide: NLP Data Masking (Unstructured Masking)).
Database-specific
Depending on your target database type, it might be necessary to install some additional 64-bit drivers and
software:
• For an Oracle database, install the Oracle Instant Client. Note that DataSunrise supports Instant Client 11.2+. You can get a compatible Instant Client here: https://www.datasunrise.com/support-files/oracle-instantclient19.10-basic-19.10.0.0.0-1.x86_64.rpm, https://www.datasunrise.com/support-files/oracle-instantclient12.1-basic-12.1.0.2.0-1.x86_64.rpm
Having installed the Oracle Instant Client, add its home directory path to the $ORACLE_HOME environment variable and to the $PATH variable, or add the required path to the /etc/datasunrise.conf file. Example:
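A minimal sketch for the environment-variable approach (the Instant Client path below is an assumption; substitute your actual directory):
export ORACLE_HOME=/opt/instantclient_12_1
export PATH=$PATH:$ORACLE_HOME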
Then create a symbolic link for the required libclntsh.so library: libclntsh.so.12.1 or libclntsh.so.11.3, for example (it depends on the Oracle version):
cd /opt/instantclient_12_1
sudo ln -s libclntsh.so.12.1 libclntsh.so
For the run-time linker (the ldconfig command refreshes its configuration), you need to create a corresponding .conf file:
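A minimal sketch (the file name and Instant Client path are assumptions; adjust them to your installation):
echo /opt/instantclient_12_1 | sudo tee /etc/ld.so.conf.d/oracle-instantclient.conf
sudo ldconfig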
• For Netezza, install the dedicated ODBC driver. Download it from the IBM Fix Central website: http://www-933.ibm.com/support/fixcentral/
Note that your IBM ID should be associated with your IBM customer ID with an active support and maintenance contract for the Netezza appliance.
• Unpack the driver's archive:
cd linux64
./unpack
[NetezzaSQL]
Driver = /usr/local/nz/lib/libnzsqlodbc3.so
Setup = /usr/local/nz/lib/libnzsqlodbc3.so
APILevel = 1
ConnectFunctions = YYN
Description = Netezza ODBC driver
DriverODBCVer = 03.51
DebugLogging = false
LogPath = /tmp
UnicodeTranslationOption = utf16
CharacterTranslationOption = all
PreFetch = 256
Socket = 16384
NZ_ODBC_LIB_PATH=/usr/local/nz/lib64/
NZ_ODBC_INI_PATH=/etc/
• Ensure that DataSunrise's script exports the required environment variables (/opt/datasunrise/start_firewall.sh):
Variables export:
export NZ_ODBC_INI_PATH
export NZ_ODBC_LIB_PATH
Adding to LD_LIBRARY_PATH:
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./:$ORACLE_HOME:$ORACLE_HOME/lib:$NZ_ODBC_LIB_PATH; export LD_LIBRARY_PATH
Note: if you get Error code 33, it means that you set the NZ_ODBC_INI_PATH incorrectly.
The aforementioned error code means that you need to add /usr/local/nz/lib64/ to LD_LIBRARY_PATH:
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$NZ_ODBC_LIB_PATH:/usr/local/nz/lib64/
Another error message means that you should check the UnicodeTranslationOption parameter's encoding (it should be utf16):
UnicodeTranslationOption=utf16
• For MS SQL Server, you might need to install the ODBC driver: https://docs.microsoft.com/en-us/sql/connect/
odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-2017
• For Hive, install the Hortonworks ODBC driver: https://hortonworks.com/downloads/.
Note: some Cloudera-issued drivers use UTF-32 encoding by default which makes it impossible to establish
a database connection. To fix this issue, edit the /opt/cloudera/hiveodbc/lib/64/cloudera.hiveodbc.ini file in the
following way:
DriverManagerEncoding=UTF-16;
• For Vertica, install the ODBC client drivers: https://my.vertica.com/download/vertica/client-drivers/. You will need to log into your Vertica community account or create one if you don't have one. Having downloaded the required drivers, log in to the system as root and do the following:
• Create an /opt/vertica/ folder:
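For example:
sudo mkdir -p /opt/vertica/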
• Copy and paste the archive with the drivers to the folder:
cp vertica_x.x..xx_odbc_64_linux.tar.gz /opt/vertica/
• Change the directory to /opt/vertica/:
cd /opt/vertica/
• Edit the /etc/odbcinst.ini file:
• In the odbcinst.ini file, change the directory of the Vertica ODBC driver as shown below. Note that
ErrorMessagesPath shown here is the default one since Vertica error messages are located in /opt/vertica/en-
US (ODBCMessages.xml and VerticaMessages.xml files). If your location of logs is different, change the path to
the actual one.
[Vertica]
Description = Vertica_ODBC_Driver
Driver = /opt/vertica/lib64/libverticaodbc.so
ErrorMessagesPath=/opt/vertica/en-US
• If you want to set some specific settings for your Vertica client, create an /etc/vertica.ini file and paste
the code shown below into it. Note that ErrorMessagesPath should be the same as in your odbcinst.ini.
ErrorMessagesPath shown here is the default one since Vertica error messages are located in /opt/vertica/en-
US (ODBCMessages.xml and VerticaMessages.xml files). If your location of logs is different, change the path to
the actual one.
[Driver]
DriverManagerEncoding=UTF-16
ErrorMessagesPath=/opt/vertica/en-US
LogLevel=4
LogPath=/tmp
Note: we strongly suggest using ODBC driver version 1.x because the newer version 2.0.0.1 is not stable.
• For DB2, add the following entry to the /etc/odbcinst.ini file:
[DB2]
Description=DB2 Driver
Driver=/home/user/clidriver/lib/libdb2o.so
FileUsage=1
DontDLClose=1
• For SAP HANA, add the client library path to LD_LIBRARY_PATH and add the following entries to the ODBC configuration files:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/sap/hdbclient/
[HDBODBC]
Driver64=/usr/sap/hdbclient/libodbcHDB.so
DriverUnicodeType=1
[HDBODBC]
Description = ODBC for SAP HANA
Driver64 = /usr/sap/hdbclient/libodbcHDB.so
FileUsage = 1
Important: on some Linux systems (Red Hat for example), you might need to install the libaio package:
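For example, depending on your package manager (standard distribution repositories assumed):
sudo yum install libaio
or
sudo apt-get install libaio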
Note: some Cloudera-issued drivers use UTF-32 encoding by default, which makes it impossible to establish a database connection. To fix this issue, edit the /opt/cloudera/impalaodbc/lib/64/cloudera.impalaodbc.ini file in the following way:
DriverManagerEncoding=UTF-16
• For Teradata, install the ODBC driver as described below. Note that ODBC drivers v. 17/XX and higher may cause
problems when working with Teradata 13.
• Download the driver: https://downloads.teradata.com/download/connectivity/odbc-driver/linux
• Unpack the archive's contents:
$ mkdir /tmp/ibm.csdk.4.50.FC1.LNX
2. Navigate to the directory where your driver archive is. For example:
$ cd ~/download
3. Navigate to the directory the archive was unpacked to and start the installation. First, try installing as a superuser:
$ cd /tmp/ibm.csdk.4.50.FC1.LNX
$ sudo ./installclientsdk
If the driver hasn't been installed, try another method of installation: start the installation as your regular Linux user. First, you need to create the driver installation directories and make your user the owner of these directories:
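A sketch of what this may look like (the /opt/IBM/informix path is an assumption; use the directory you plan to install the client into):
sudo mkdir -p /opt/IBM/informix
sudo chown -R $USER /opt/IBM/informix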
Follow the installation wizard prompts. To display help, run the installer with the --help key:
$ ./installclientsdk --help
4. Add the following lines to the /etc/odbcinst.ini file (watch the character case and the path to the drivers). Note that the IBM Informix Client doesn't update the /etc/odbcinst.ini file with the required entry, so you need to do it manually:
[ODBC Drivers]
IBM INFORMIX ODBC DRIVER (64-bit)=Installed
Ensure that your IBM Informix client package is installed in the correct directory.
5. Then, before starting DataSunrise's backend, set up ODBC in the following way: add the following line to the /etc/datasunrise.conf file:
OR
Set it up for manual starting of DataSunrise. Set the following environment variables:
It's worth noting that if the Backend is not started from the Console, the aforementioned variables should be added to a file that is used for setting up environment variables, for example ~/.bashrc.
Having added the environment variables to a file, relogin or reboot your machine for the variables to be applied.
In QT Creator, it's enough to add the following environment variables:
Note: if the driver doesn't work, check the driver path and check the /etc/odbcinst.ini file, paying special attention to paths and character case. You can then check the driver dependencies with the help of ldd. You should get something like this:
If some libraries haven't been found, check the LD_LIBRARY_PATH variable's value and ensure that the required
libraries are installed in your system and you can access them (check rights).
If errors occur anyway, add the following content to the file ~/.profile:
Then:
sudo reboot
Note: on some Linux distributions (CentOS 6.8 at least), you might need to specify a temporary folder for
unzipping the installation archive:
You can use some additional parameters to customize the installation process:
--no-password      Don't generate a password for the Web Console at the end of the installation process (set the password for the Web Console after the installation)
--extract-only     Extract the DataSunrise distribution into the specified folder without installation (in this case you can start DataSunrise manually from the installation folder)
--no-start         Don't start the DataSunrise service after the installation
--remote-config    Configure remote Dictionary
remove             Uninstall DataSunrise
repair             Repair the DataSunrise installation
update             Update DataSunrise (replace the existing binary files, Web Console, CLI and documentation files with the latest counterparts)
-v                 Display errors that could occur during the installation
For example:
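An illustrative invocation combining some of the options above (the file name follows the placeholder pattern used elsewhere in this guide):
sudo ./DataSunrise_Suite_X_X_X.linux.64bit.run --no-start -v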
3. Specify the DataSunrise installation folder in the Target directory line if necessary.
Figure 7: Note that DataSunrise generates a password for the Web Console at the end of the installation
process (by default)
https://<Server_IP>:11000
3. Edit /etc/yum.repos.d/DS.repo:
[DS]
name=Datasunrise
baseurl=http://rpm_repo.datasunrise.com/release/RHEL/
enabled=1
gpgcheck=1
https://<Server_IP>:11000
1. DataSunrise folders:
2. DataSunrise files:
File name                         Description
AppBackendService                 The system process required for operation of the Web Console and control of AppFirewallCore
appfirewall.pem                   SSL certificate for the Web Console
AppFirewallCore                   Program's Core
audit.db                          SQLite database file to store audit data (the Audit Storage)
cacert.pem                        SSL certificate required for online updates
dictionary.db                     Contains the program settings and DataSunrise-specific objects such as database profiles, user profiles, rules, etc.
event.db                          System events logs
libcrypto.so.10                   OpenSSL library
libssl.so.10                      OpenSSL library
proxy.pem                         OpenSSL keys and certificates used for proxies by default
standart_application_queries.db   Contains the queries used by Oracle SQL Developer (refer to the Query Groups subsection of the User Guide for more information)
start_firewall.sh                 The script that starts the datasunrise system service
stop_firewall.sh                  The script that stops the datasunrise system service
{
"list": [
{"clusterName": "cluster1", "clusterURL": "https://localhost:11000/?ds=1"},
{"clusterName": "cluster2", "clusterURL": "https://localhost:11100/?ds=2"}
]
}
2. Place the file into the $AF_HOME folder ("/opt/datasunrise" by default).
3. Select DataSunrise server of interest in the drop-down list located at the top of the screen.
chmod +x ./DataSunrise_Suite_X_X_X.linux.64bit.run
sudo ./DataSunrise_Suite_X_X_X.linux.64bit.run update
Note: During the update process, the installer creates a backup folder (/opt/datasunrise/backup/) where all the files required for rolling back the installation are retained in case of any issues resulting from the upgrade.
In case the update procedure fails and you need to return to the previous state, all the backup files should be
manually copied to the main DataSunrise directory.
The update procedure may take several minutes. During that period, the DataSunrise service may become unavailable unless you're running multiple DataSunrise instances in a High Availability (HA) configuration.
You will get a list of all available backups so you can decide the point to which you would like to revert the current configuration.
2. Choose the correct backup to restore from (select its number). Note that the higher the number is, the newer the backup is.
cd /opt/datasunrise
export LD_LIBRARY_PATH=/opt/datasunrise
3. Run the AppBackendService file with the set_admin_password parameter. Set a new password as the
parameter's value:
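A sketch of the command (the parameter name follows the step above; replace <new_password> with the password you want to set):
./AppBackendService set_admin_password=<new_password>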
4. Check access to the Web Console using the administrator account and the updated password.
3. Configure DNS: specify the AD domain controller's IP address as the server's DNS server. To do this, use Network Manager or edit the /etc/resolv.conf file in the following way:
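A minimal example of the resulting /etc/resolv.conf (the nameserver address is a placeholder for your domain controller's IP, and the search domain is illustrative):
nameserver 192.168.1.10
search mydomain.com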
ubuntu01
Edit the /etc/hosts file: add an entry with the full domain name of your machine and the short host name
resolved to an internal IP address:
127.0.0.1 localhost
127.0.1.1 ubuntu01.mydomain.com ubuntu01
Restart ntpd:
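For example (the service name ntpd is assumed; on some distributions the service is called ntp):
sudo systemctl restart ntpd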
5. Configure Samba:
Edit the /etc/samba/smb.conf file as shown below:
[global]
# The values for these two options should be in upper case. The workgroup is the domain name without the part after the dot, and the realm is the full domain name:
workgroup = <MYDOMAIN>
realm = <MYDOMAIN.COM>
# If you don't want Samba to become a domain or workgroup leader or a domain controller,
# use the settings below:
domain master = no
local master = no
preferred master = no
os level = 0
domain logons = no
# Disable printers:
load printers = no
show add printer wizard = no
printcap name = /dev/null
disable spoolss = yes
testparm
# testparm
Load smb config files from /etc/samba/smb.conf
Processing section "[printers]"
Processing section "[print$]"
Loaded services file OK.
Server role: ROLE_DOMAIN_MEMBER
Press enter to see a dump of your service definitions
6. Configure Winbind:
Add the following lines to /etc/samba/smb.conf → [global]:
sudo testparm
If you get the "rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)", edit /etc/security/limits.conf:
Reboot your machine. Run testparm. Ensure that Winbind is trusted by AD:
# sudo wbinfo -t
checking the trust secret for domain DCN via RPC calls succeeded
sudo wbinfo -u
sudo wbinfo -g
Execute the following commands to ensure that Ubuntu gets the information about users and groups from
Winbind:
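One common way to check this (shown as a generic example) is the getent utility, which lists users and groups from all configured sources, including Winbind:
getent passwd
getent group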
smbclient -k -L workstation
9. Configure Kerberos:
Edit the /etc/krb5.conf file by adding your domain name and your domain controller. Note that all names are case-sensitive:
[libdefaults]
default_realm = <DOMAIN NAME>
[realms]
<DOMAIN NAME> = {
kdc = <full domain name>
kdc = <full domain name>
admin_server = <domain controller full domain name>
default_domain = <domain name>
}
[domain_realm]
.<domain name> = <DOMAIN NAME>
<domain name> = <DOMAIN NAME>
Check if you can authorize in the domain. Note that the domain name should be in upper case:
kinit <username>@<DOMAIN>
To ensure that you got a Kerberos ticket, use the following command:
klist
To destroy the ticket when it's no longer needed, use:
kdestroy
2. To completely remove DataSunrise from your machine and delete its installation folder, execute the following
commands:
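An illustrative sketch for Debian-based systems (the package name is an assumption; adjust it to how DataSunrise was installed on your machine):
sudo apt-get purge datasunrise
sudo rm -rf /opt/datasunrise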
Note: "purge" alone doesn't delete DataSunrise installation folder. This folder contains Dictionary and audit files
you might need. If you don't need the installation folder anymore, execute the -rm command as shown above.
Note: we recommend using a PostgreSQL or Amazon RDS Aurora PostgreSQL database for the Dictionary and Audit Storage. Remote configuration of DataSunrise is available only for PostgreSQL 8.2 or higher.
In the tables below you can find information on the performance (transactions per second, TPS) of Amazon Aurora PostgreSQL used as an Audit Storage. Note that these results were obtained while testing databases on Amazon RDS. Results for non-RDS databases may differ.
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA <Dictionary_schema_name> TO <User_name>;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA <Audit_Storage_schema_name> TO
<User_name>;
GRANT CREATE, USAGE ON SCHEMA <Dictionary_schema_name> TO <User_name>;
GRANT CREATE, USAGE ON SCHEMA <Audit_Storage_schema_name> TO <User_name>;
log_bin_trust_function_creators = 1
local_infile = 1
innodb_buffer_pool_size = 2147483648
CREATE DATABASE <Audit Storage database name> character set utf8mb4 COLLATE utf8mb4_bin;
CREATE DATABASE <Dictionary database name> character set utf8mb4 COLLATE utf8mb4_bin;
export LD_LIBRARY_PATH=/opt/datasunrise
2.
Note: you can download the script here: https://www.datasunrise.com/doc/mssql_dictionary_audit_storage.sql
log_bin_trust_function_creators = 1
local_infile = 1
USE <Audit_Storage_database_name>;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, CREATE TEMPORARY TABLES, CREATE VIEW, ALTER, DROP,
INDEX, REFERENCES, ALTER ROUTINE, CREATE ROUTINE ON <Audit_Storage_database_name>.* TO <User_name>;
USE <Dictionary_database_name> ;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, CREATE TEMPORARY TABLES, ALTER, DROP, INDEX,
REFERENCES, TRIGGER, ALTER ROUTINE, CREATE ROUTINE ON <Dictionary_database_name>.* TO <User_name> ;
grant LOCK TABLES on <Dictionary_database_name>.* to <User_name>;
FLUSH PRIVILEGES;
2. At the end of the installation process, specify the database to store the DataSunrise configuration (the Dictionary). All servers configured to use this database share a common configuration (including common credentials to access the Web Console).
Line             Description
Database type    Type of database to store the DataSunrise configuration in (MySQL, PostgreSQL, Vertica, MS SQL, MariaDB)
Host             IP address or name of the host the Dictionary database is installed on
Port             Port number of the Dictionary database
Database Name    Name of the Dictionary database
Login            User name to connect to the database
Password         Password to connect to the database
Line               Description
Server Name        Logical name of the DataSunrise instance
Server Hostname    IP address or name of the host the current DataSunrise instance is installed on
Server Port        Port number of the instance's Web Console (11000 by default)
4. Specify the database to store audit data (Audit Storage) similarly to the Dictionary database.
5. After configuring is complete, you can see all the available DataSunrise servers (the instances installed on the
separate servers and sharing common configuration) in the System Settings → Servers subsection.
6. You can also use the following parameters along with the --remote-config parameter:
Parameter                Description
--dictionary-type <mysql | postgresql | vertica | mssql | mariadb>
                         Dictionary database type
--dictionary-host        Dictionary database hostname or IP address
--dictionary-port        Dictionary database port number
--dictionary-database    Dictionary database name
--dictionary-login       Database user name to use for connection to the Dictionary
--dictionary-password    User password to use for connection to the Dictionary
--audit-type <mysql | postgresql | redshift | aurora | mssql | vertica>
                         Audit Storage database type
--audit-host             Audit Storage database hostname or IP address
--audit-port             Audit Storage port number
--audit-database         Audit Storage database name
--audit-login            Database user name to use for connection to the Audit Storage
--audit-password         Password to use for connection to the Audit Storage
--server-name            DataSunrise server logical name
--server-host            DataSunrise server hostname or IP address
--server-port            DataSunrise server port number
Example:
./DataSunrise_Suite_5_6_0_15792.linux.64bit.run install
--remote-config -v
--dictionary-type postgresql
--dictionary-host 172.16.101.30
--dictionary-port 5432
--dictionary-database datasunrise_db
--dictionary-login ds_admin
--dictionary-password Password1
--audit-type postgresql
--audit-host 172.16.101.30
--audit-port 5432
--audit-database datasunrise_db
--audit-login ds_admin
--audit-password Password1
--server-name datasunrise1
--server-host 172.16.101.247
--server-port 11000
https://localhost:11000/v2/settings/servers/server/edit/1
2. Execute the following command:
cd /opt/datasunrise/
export LD_LIBRARY_PATH=/opt/datasunrise
5. Run the following script using the details of your database to specify the location of the database where
DataSunrise Dictionary is stored:
./AppBackendService
DICTIONARY_TYPE=<type of the Dictionary database. For example "postgresql">
DICTIONARY_HOST=<Dictionary IP address or host name. For example "dstestpg.cpauhz8lxyzp.us-east-1.rds.amazonaws.com">
DICTIONARY_PORT=<Dictionary port number. For example 5432>
DICTIONARY_DB_NAME=<Dictionary database name. For example "dict_db">
DICTIONARY_LOGIN=<Dictionary database user name. For example "dsuser">
DICTIONARY_PASS=<Dictionary database password>
RESTORE_LOCAL_SETTINGS=<SERVER_ID>
Note: replace <SERVER_ID> with the actual ID of the server you want to restore access to (see step 1). Note that all the parameters' values should be written without quotes.
Important: if you changed your Dictionary location (IP address or host name), you can use the command from
Step 4 to create a new local_settings.db file which is required to be able to work with a new Dictionary.
Parameter             Description
DICTIONARY_TYPE       Common Dictionary database type: mysql / postgresql
DICTIONARY_HOST       The endpoint of your Dictionary database
DICTIONARY_PORT       Dictionary database port number
DICTIONARY_DB_NAME    Dictionary database name
DICTIONARY_SCHEMA     Dictionary database schema
DICTIONARY_LOGIN      Dictionary database username that should be used for connection to the Dictionary
DICTIONARY_PASS       Dictionary database password that should be used for connection
5 Always On
Redirect: cd164f04fd1f.tr27.westus1-a.worker.database.windows.net:11082
For SaaS SQL Azure, this means that the client should add a proxy on this host manually (unless it is done already) so that client connections are controlled by DataSunrise. Otherwise, DataSunrise will lose control over the client connection. For a cluster with AlwaysOn enabled, it is possible to configure redirection to read-only replicas; that's why, if the redirecting host is already configured in DataSunrise, you will see this host in the redirection notification. In both cases, a notification about redirection can be used for diagnostics when administering DataSunrise/AlwaysOn.
To add a proxy to an instance, perform the following:
• Add an interface of the target server (it's cd164f04fd1f.tr27.westus1-a.worker.database.windows.net:11082 in our case);
• Add a proxy on this interface. When using standard templates for the host name (0.0.0.0 or 0:0:0:0:0:0:0:0) of such a proxy, DataSunrise will return to the client the address of an available non-local interface of the DataSunrise host as the address for the redirect.
export LD_LIBRARY_PATH=/opt/datasunrise
4. Execute the following script using the details of your database to specify the location of the database where
DataSunrise Dictionary is stored:
./AppBackendService
DICTIONARY_TYPE=<Dictionary database type>
DICTIONARY_HOST=<Dictionary hostname (AWS hostname)>
DICTIONARY_PORT=<Dictionary database port number>
DICTIONARY_DB_NAME=<Dictionary database name>
DICTIONARY_LOGIN=<Dictionary login>
DICTIONARY_AWS_SECRET=<AWS Secret>
FIREWALL_SERVER_HOST=<DataSunrise's hostname>
FIREWALL_SERVER_BACKEND_PORT=<DataSunrise's Backend port number (11000 by default)>
FIREWALL_SERVER_CORE_PORT=<DataSunrise's Core port number (11001 by default)>
FIREWALL_SERVER_NAME=<DataSunrise's server name>
FIREWALL_SERVER_BACKEND_HTTPS=1
FIREWALL_SERVER_CORE_HTTPS=1
6 Shared Configuration (SC Mode)
When deployed in this configuration, DataSunrise Instances installed on independent servers protect independent databases installed on the same servers. Although these DataSunrise instances are installed on separate servers, they share the same configuration stored in a separate database, so you can control all DataSunrise instances included in a Shared Configuration using the Web Console on any of the associated DataSunrise servers. This configuration shares many similarities with High Availability (HA) mode, but in SC mode no load balancers are used and the DataSunrise instances don't protect the same database.
Q. I’m trying to run PostgreSQL, but the database connection fails: “[unixODBC] Missing server name, port,
or database name in call to CC_connect.” (error code 201)
Check the ODBC driver availability by executing the following command:
odbcinst -q -d
[postgres_i]
Description = Postgres Database
Driver = PostgreSQL
Database = postgres
Servername = 127.0.0.1
Port = 5432
Q. I’m trying to run DataSunrise but getting the error message: “Data source name not found and no default
driver specified”.
Basically, the data source you are attempting to connect to does not exist on your machine. On Linux and UNIX,
SYSTEM data sources are typically defined in /etc/odbc.ini. USER data sources are defined in ~/.odbc.ini.
You should grant read access to the .ini file that contains the data source. You may need to set the ODBCSYSINI,
ODBCINSTINI or ODBCINI environment variables to pinpoint the odbc.ini and odbcinst.ini files location if it hasn’t
been done before.
Q. I'm trying to enter the web interface after the program update, but it displays the "Internal System Error"
message.
Most likely, you kept the web interface tab open in your browser while updating the firewall. Log out of the web interface if necessary and press Ctrl + F5 to refresh the page.
Q. I’m trying to establish a connection between DataSunrise and a MySQL database, but it fails because of
the missing ODBC MySQL driver.
Certain Linux operating systems don't add the MySQL driver parameters to the odbcinst.ini file, so you should add them manually.
If necessary, install the MySQL ODBC driver by running the following commands:
• For Debian and Ubuntu:
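For example (the package name is an assumption and may differ between releases; older Debian/Ubuntu releases ship the driver as libmyodbc, which matches the library path used below):
sudo apt-get install libmyodbc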
Paste the following code into odbcinst.ini and save the file:
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libodbcmyS.so
FileUsage = 1
Update the configuration files that control ODBC access to the database servers by running the following command:
Q. I’m getting the “Could not find libodbc.so.2 (unixODBC is required)” error while trying to install DataSunrise on Ubuntu 14.04. UnixODBC is installed.
Continue the program installation.
Check if the libcrypto.so.10 and libssl.so.10 files are available in the program installation folder (/opt/datasunrise/ by default) by executing the following command:
ll /opt/datasunrise/
locate libodbc.so.1
If libcrypto.so.10 and libssl.so.10 are available in the DataSunrise installation folder, execute the following command:
ln /usr/lib/x86_64-linux-gnu/libodbc.so.1 /opt/datasunrise/libodbc.so.2
Note: In this case, /usr/lib/x86_64-linux-gnu/ is the Linux system folder where libodbc.so.1 is located and /opt/
datasunrise/ is the DataSunrise installation folder.
If the libcrypto.so.10 and libssl.so.10 files are not available in the DataSunrise installation folder, execute the
following command:
Note: In this case, /usr/lib/x86_64-linux-gnu/ is the Linux system folder where libodbc.so.1 and libodbc.so.2 are
located.
Q. I’m getting the “Could not find ‘setcap’” error while trying to install DataSunrise on OpenSUSE 42.1.
Install libcap-progs. To do this, execute the following command:
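Assuming the standard openSUSE package manager:
sudo zypper install libcap-progs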
Q. When I’m trying to run DataSunrise in the sniffer mode, it displays the message: “Impossible to parse the
SSL connection in the sniffer mode”.
To run the firewall in the sniffer mode, you should disable SSL support in your client application settings (SSL Mode
→ Disable). You can also switch the application’s SSL Mode to “Allow” or “Prefer”, but disable SSL support in the
database server settings first.
Q. I can't update my DataSunrise. I run the latest version of the DataSunrise installer, but the installation
wizard is not able to locate the old DataSunrise installation folder.
Run the DataSunrise installer in the Repair mode. It removes the previous installation and updates your DataSunrise
to the latest version.
Q. When connecting to an Aurora DB (MySQL), the ODBC driver stops responding.
Most probably, you're using ODBC driver version 5.3.6, which is known to cause freezes from time to time. Install MySQL ODBC driver version 5.3.4 instead.
This error occurs because on certain OSs the installer cannot unpack the installation archive into the temp folder.
Execute the following command:
This command creates a temporary folder in the current folder and unpacks the archive into it. After the installation
is finished, delete this temp folder manually.
Q. I'm using a MySQL database installed on the same PC DataSunrise is installed on. I'm querying the
database, but data audit doesn't capture the events.
Most probably, DataSunrise proxy is not intercepting the traffic. It can occur if you've configured your database
connection (in your DB profile) to access the database at "localhost" instead of "127.0.0.1". In this case, MySQL
can use the UNIX socket for connection instead of TCP. Specify the full IP address of the database and ensure that
your client application uses the TCP connection. Refer to the following pages, if necessary: http://serverfault.com/
questions/337818/how-to-force-mysql-to-connect-by-tcp-instead-of-a-unix-socket, http://www.computerhope.com/
issues/ch001079.htm
Q. I'm using an MS SQL Server database. I'm creating a target database profile, but can't properly configure
the database connection.
In the DB connection details, specify the credentials (the Default login and Password fields) used for SQL Server
authentication and not for Windows authentication. To specify the database server's host (Host field), use the actual
DB server's IP address or host name instead of server's SPN.
Q. I'm updating DataSunrise via the Web Console and getting the following error:
(17279|140037881128832) PCAP - pcapOpen: [ERROR] Can not activate pcap object. Pcap error: socket:
Operation not permitted
These problems occur because DataSunrise cannot install the required "file capabilities" (cap_net_raw and
cap_net_admin=eip to be exact). To enable these capabilities, execute the following command:
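A sketch of the command (the target binary path is an assumption; point setcap at the DataSunrise core executable in your installation folder):
sudo setcap 'cap_net_raw,cap_net_admin=eip' /opt/datasunrise/AppFirewallCore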
Q. I'm getting the following warning: "The free disk space limit for audit has been reached. The current free
disc space is XXX MB. The disk space limit is 10240 MB".
If you want to decrease the disk space threshold for this warning, navigate to System Settings → Additional and
change the "LogsDiscFreeSpaceLimit" parameter's value from 10240 to 1024 Mb for example.
Q. I'm getting the following notification: "Reached the limit on delayed packets".
This notification is displayed when the sniffer has captured a large amount of traffic in the SSL sessions started
before the DataSunrise service was started. By default, the volume of captured traffic should not exceed 10 Mb (the
pnMsSqlDelayedPacketsLimit parameter).
Sometimes this notification can be displayed if there is a huge load on the pcap driver, so the sniffer can capture too much delayed traffic. In this case, you need to increase the pnMsSqlDelayedPacketsLimit parameter's value.
Q. I've updated DataSunrise and I get the following error: "PROCEDURE dsproc_<version>.initProcedures does not exist"
DataSunrise now uses a new method of getting metadata. Repeat the steps described in subsection 4.5.4.2.