NXLog User Guide
NXLog Ltd.
2. About NXLog
3. System Architecture
4. Available Modules
Deployment
5. Supported Platforms
7. System Requirements
9.1. Installing
9.2. Upgrading
9.3. Uninstalling
10.1. Installing
10.2. Upgrading
10.3. Uninstalling
11.1. Installing
Configuration
OS Support
Integration
78.2. Collecting and Parsing SCEP Data from Log Files
78.3. Collecting and Parsing SCEP Data from an SQL Database
Troubleshooting
127.12. Increasing the Open File Limit for NXLog Manager Using systemd
Chapter 1. About This Guide
This Guide is designed to give you all the information and skills you need to successfully deploy and configure
NXLog in your organization. The following chapters provide detailed information about NXLog, including
features, architecture, configuration, and integration with other software and devices. An NXLog Enterprise
Edition Reference Manual is included, as well as documentation for the NXLog Manager.
NXLog is available in two versions, the Community Edition and the Enterprise Edition. Features that are unique to
the Enterprise Edition are noted as such, except in the Reference Manual (the Community Edition Reference
Manual is published separately). For more details about the functionality provided by these two NXLog editions,
see the following chapters (in particular, About NXLog and Available Modules).
WARNING: Though most of the content applies to all versions of NXLog Community Edition and NXLog Enterprise Edition, this Guide was written specifically for NXLog Enterprise Edition version 5.0.5876. Some features covered by this book may not be available in earlier versions of NXLog, and earlier versions of NXLog may behave differently than documented here.
WARNING: If you would like to copy/paste configuration content from the Guide, please do so using the HTML format. It is not possible to guarantee appropriate selection behavior with the PDF format.
Chapter 2. About NXLog
Modern IT infrastructure produces large volumes of event logging data. In a single organization, hundreds or
thousands of different devices, applications, and appliances generate event log messages. These messages
require many log processing tasks, including filtering, classification, correlation, forwarding, and storage. In most
organizations these requirements are met with a collection of scripts and programs, each with its custom format
and configuration. NXLog provides a single, high-performance, multi-platform product for solving all of these
tasks and achieving consistent results.
At NXLog’s inception, there were various logging solutions available, but none with the required features. Most
were single-threaded and Syslog-oriented, without native support for Windows. Work on NXLog began with the
goal of building a modern logger with a multi-threaded design, a clear configuration syntax, multi-platform
support, and clean source code. NXLog was born in 2009 as a closed source product heavily used in several
production deployments. The source code of NXLog Community Edition was released in November 2011.
NXLog can process event logs from thousands of different sources with volumes over 100,000 events per second.
It can accept event logs over TCP, TLS/SSL, and UDP; from files and databases; and in Syslog, Windows EventLog,
and JSON formats. NXLog can also perform advanced processing on log messages, such as rewriting, correlating,
alerting, pattern matching, scheduling, and log file rotation. It supports prioritized processing of certain log
messages, and can buffer messages on disk or in memory to work around problems with input latency or
network congestion. After processing, NXLog can store or forward event logs in any of many supported formats.
Inputs, outputs, log formats, and complex processing are implemented with a modular architecture and a
powerful configuration language.
2.1. Features
Multi-platform deployment
Installer packages are provided for multiple platforms, including Linux, Windows, and Android. You can use
NXLog across your entire infrastructure, without resorting to different tools for different platforms.
Security
NXLog provides features throughout the application to maintain the security of your log data and systems.
The core can be configured to run as an unprivileged user, and special privileges (such as binding to ports
below 1024) are accessed through Linux capabilities rather than requiring the application to run as root.
TLS/SSL is supported for encrypted, authenticated communications and to prevent data interception or
alteration during transmission.
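As an illustrative sketch of these options (the host name and certificate path are placeholders), the core can be dropped to an unprivileged user with the global User and Group directives, and logs can be forwarded over TLS with the om_ssl module:

nxlog.conf
User nxlog
Group nxlog

<Output out_ssl>
    Module  om_ssl
    Host    logserver.example.com
    Port    6514
    CAFile  /path/to/ca.pem
</Output>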
Modular architecture
NXLog has a lightweight, modular architecture, providing a reduced memory footprint and increased
flexibility for different uses. The core handles files, events, and sockets, and provides the configuration
language; modules provide the input, output, and processing capabilities. Because modules use a common
API, you can write new modules to extend the features of NXLog.
Message buffering
Log messages can be buffered in memory or on disk. This increases reliability by holding messages in a
temporary cache when a network connectivity issue or dropout occurs. Conditional buffering can be
configured by using the NXLog language to define relevant conditions. For example, UDP messages may
arrive faster than they can be processed, and NXLog can buffer the messages to disk for processing when the
system is under less load. Conditional buffering can be used to explicitly buffer log messages during certain
hours of the day or when the system load is high.
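For example, a disk-based buffer can be placed between input and output with the pm_buffer processor module. This is only a sketch; the sizes (in kilobytes) are illustrative values, not recommendations:

nxlog.conf
<Processor buffer>
    Module     pm_buffer
    Type       Disk
    MaxSize    102400
    WarnLimit  51200
</Processor>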
Prioritized processing
NXLog can be configured to separate high-priority log processing from low-priority log processing, ensuring
that it processes the most important data first. When the system is experiencing high load, NXLog will avoid
dropping important incoming messages. For example, incoming UDP messages can be prioritized to prevent
dropped logs if a high volume of TCP messages overloads the system.
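This scheme can be sketched with the Priority directive in Route blocks, where a lower number means higher priority. The module instance names here are placeholders:

nxlog.conf
<Route udp>
    Priority  1
    Path      in_udp => out
</Route>

<Route tcp>
    Priority  2
    Path      in_tcp => out
</Route>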
Message durability
Built-in flow control ensures that a blocked output does not cause dropped log messages when buffers are
full. In combination with the previously mentioned parallel processing, buffering, and prioritization, the
possibility of message loss is greatly reduced.
Offline processing
Sometimes log messages need to be processed in batches for conversion, filtering, or analysis. NXLog
provides an offline mode in which it processes all input and then exits. Because NXLog does not assume that
the event time and processing time are identical, time-based correlation features can be used even during
offline log processing.
2.2. Enterprise Edition Features
While the NXLog Community Edition provides all the flexibility and performance of the NXLog engine, the NXLog Enterprise Edition adds further modules and core features, as well as regular hot-fixes and updates. The Enterprise Edition provides the following enhancements.
On-the-wire compression
Log data can be transferred in compressed batches with the im_batchcompress and om_batchcompress
input/output modules. This can help in limited bandwidth scenarios.
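A minimal sketch of a compressed sender follows; the receiving host is a placeholder, and the corresponding receiver would use the im_batchcompress module:

nxlog.conf
<Output out_compressed>
    Module  om_batchcompress
    Host    central.example.com
    Port    2514
</Output>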
Remote management
The dedicated xm_admin extension module enables NXLog agents to be managed remotely over a secure
SOAP/JSON SSL connection or to be integrated with existing monitoring and management tools. The
configuration, correlation rules, patterns, and certificates can all be updated remotely from the NXLog
Manager web interface or from scripts. In addition, the NXLog agent and the individual modules can be stopped and started, and log collection statistics can be queried in real time.
Crash recovery
Additional functionality is provided to guarantee a clean recovery in the case of a system crash, ensuring that
no messages are lost or duplicated.
Event correlation
The pm_evcorr processor module can efficiently solve complex event correlation tasks, with capabilities
similar to what the open-source SEC tool provides.
File integrity monitoring
The im_fim module scans files and directories and reports detected changes, on Windows as well as Linux. The im_regmon module provides monitoring of the Windows Registry.
Name resolution
The xm_resolver extension module provides cached DNS lookup functions for translating between IP
addresses and host names. User and group names can also be mapped to/from user and group ids.
Elasticsearch integration
The om_elasticsearch output module allows log data to be loaded directly into an Elasticsearch server without
requiring Logstash.
Redis Support
Redis is often used as an intermediate queue for log data. Two native modules, im_redis and om_redis, are
available to push data to and pull data from Redis servers.
SNMP input
The xm_snmp extension module can be used to parse SNMP traps. The traps can then be handled like regular
log messages: converted to Syslog, stored, forwarded, etc.
HDFS output
The om_webhdfs output module is available to support the Hadoop ecosystem.
Netflow support
The xm_netflow extension module can parse Netflow packets received over UDP. It supports Netflow v1, v5,
v7, v9, and IPFIX.
ZeroMQ support
ZeroMQ is a popular high performance messaging library. The im_zmq and om_zmq modules provide input
and output support for the ZeroMQ protocol.
However, NXLog is not a complete SIEM solution; by itself, it does not provide:
• a graphical interface (or "dashboard") for searching logs and displaying reports,
• vulnerability detection or integration with external threat data,
• automatic analysis and correlation algorithms, or
• pre-configured compliance and retention policies.
NXLog does provide processing features that can be used to set up analysis, correlation, retention, and alerting;
NXLog can be integrated with many other products to provide a complete solution for aggregation, analysis, and
storage of log data.
Chapter 3. System Architecture
3.1. Event Records and Fields
In NXLog, a log message is an event, and the data relating to that event is collectively an event record. When NXLog
processes an event record, it stores the various values in fields. The following sections describe event records and
fields in the context of NXLog processing.
3.1.1. Event Records
• The most common event record is a single line. Thus the default is LineBased for the InputType and
OutputType directives.
• It is also common for an event record to use a single UDP datagram. NXLog can send and receive UDP events
with the im_udp and om_udp modules.
• Some event records are generated using multiple lines. These can be joined into a single event record with
the xm_multiline module.
• Event records may be stored in a database. Each row in the database represents an event. In this case the
im_odbc and om_odbc modules can be used.
• It is common for structured event records to be formatted in CSV, JSON, or XML formats. The xm_csv,
xm_json, and xm_xml modules provide functions and procedures for parsing these.
• NXLog provides a Binary InputType and OutputType for use when compatibility with other logging software
is not required. This format preserves parsed fields and their types.
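For instance, JSON-formatted records in a log file can be parsed into fields with xm_json; the file path here is a placeholder:

nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Input in_json>
    Module  im_file
    File    '/var/log/app.json'
    Exec    parse_json();
</Input>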
In NXLog, each event record consists of the raw event data (in a field named $raw_event) and additional fields
generated during processing and parsing.
3.1.2. Fields
All event log messages contain important data such as user names, IP addresses, and application names.
Traditionally, these logs have been generated as free form text messages prepended by basic metadata like the
time of the event and a severity value.
While this format is easy for humans to read, it is difficult to perform log analysis and filtering on thousands of
free-form logs. In contrast, structured logging provides means for matching messages based on key-value pairs.
With structured logging, an event is represented as a list of key-value pairs. The name of the field is the key and
the field data is the value. NXLog’s core design embraces structured logging. Using various features provided by
NXLog, a message can be parsed into a list of key-value pairs for processing or as part of the message sent to the
destination.
When a message is received by NXLog, it creates an internal representation of the log message using fields. Each
field is typed and represents a particular attribute of the message. These fields pass through the log route, and
are available in each successive module in the chain, until the log message has been sent to its destination.
1. The special $raw_event field contains the raw data received by the input module. Most input and output
modules only transfer $raw_event by default.
2. The core adds a few additional fields by default:
a. $EventReceivedTime (type: datetime) The time when the event is received. The value is not modified if
the field already exists.
b. $SourceModuleName (type: string) The name of the module instance, for input modules. The value is not
modified if the field already exists.
c. $SourceModuleType (type: string) The type of module instance (such as im_file), for input modules.
The value is not modified if the field already exists.
3. The input module may add other fields. For example, the im_udp module adds a $MessageSourceAddress
field.
4. Some input modules, such as im_msvistalog and im_odbc, map fields from the source directly to fields in the
NXLog event record.
5. Parsers such as the parse_syslog() procedure will add more fields.
6. Custom fields can be added by using the NXLog language and an Exec directive.
7. The NXLog language or the pm_pattern module can be used to set fields using regular expressions. See
Extracting Data.
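Steps 6 and 7 can be sketched as follows; the file path, field names, and regular expression are hypothetical examples, not part of any standard configuration:

nxlog.conf
<Input in>
    Module  im_file
    File    '/var/log/app.log'
    <Exec>
        # Add a custom field (step 6)
        $AppName = 'myapp';
        # Set a field from a regular expression capture (step 7)
        if $raw_event =~ /user (\S+)/ $User = $1;
    </Exec>
</Input>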
When the configured output module receives the log message, in most cases it will use the contents of the
$raw_event field only. If the event’s fields have been modified, it is therefore important to update $raw_event
from the other fields. This can be done with the NXLog language, perhaps using a procedure like to_syslog_bsd().
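A sketch of this, regenerating $raw_event in BSD Syslog format just before forwarding (the destination host is a placeholder):

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Output out_syslog>
    Module  om_tcp
    Host    syslog.example.com
    Port    514
    Exec    to_syslog_bsd();
</Output>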
A field is denoted and referenced in the configuration by a preceding dollar sign ($). See the Fields section in the
Reference Manual for more information.
Example 1. Processing a Syslog Message
This example shows a Syslog event and its corresponding fields as processed by NXLog. A few fields are
omitted for brevity.
1. NXLog receives the following Syslog message:
<38>Nov 22 10:30:12 myhost sshd[8459]: Failed password for invalid user linda from 192.168.1.60 port 38176 ssh2↵
2. The raw event data is stored in the $raw_event field when NXLog receives a log message. The NXLog
core and input module add additional fields.
{
  "raw_event": "<38>Nov 22 10:30:12 myhost sshd[8459]: Failed password for invalid user linda from 192.168.1.60 port 38176 ssh2",
  "EventReceivedTime": "2019-11-22 10:30:13",
  "MessageSourceAddress": "192.168.1.1",
3. The xm_syslog parse_syslog() procedure parses the basic format of the Syslog message, reading from
$raw_event by default. This procedure adds a few more fields:
"SyslogFacility": "USER",
"SyslogSeverity": "NOTICE",
"EventTime": "2019-11-22 10:30:12",
"Hostname": "myhost",
"SourceName": "sshd",
"ProcessID": 8459,
"Message": "Failed password for invalid user linda from 192.168.1.60 port 38176 ssh2",
4. Further metadata can be extracted from the free-form $Message field with regular expressions or other
methods; see Extracting Data.
"Status": "failed",
"AuthenticationMethod": "password",
"Reason": "invalid user",
"User": "linda",
"SourceIPAddress": "192.168.1.60",
"SourcePort": 38176,
"Protocol": "ssh2"
}
Files and sockets are added to the core by the various modules, and the core delegates events when necessary.
Modules also dispatch log events to the core, which passes each one to the appropriate module. In this way, the
core can centrally control all events and the order of their execution, making prioritized processing possible. Each
event belonging to the same module instance is executed in sequential order, not concurrently. This ensures that
message order is kept and allows modules to be written without concern for concurrency. Yet because the
modules and routes run concurrently, the global log processing flow remains parallelized.
3.2.1. Modules
A module is a shared library (foo.so or foo.dll) that can be loaded by the NXLog core and provides a particular capability. A
module instance is a configured module that can be used in the configured data flow. For example, the
configuration block for an input module instance begins with <Input instancename>. See the Instance examples
below. A single module can be used in multiple instances. With regard to configuration, a module instance is
often referred to as simply a module.
Input
Functionality for accepting or retrieving log data is provided by input modules. An input module instance is a
source or producer. It accepts log data from a source and produces event records.
Output
Output modules provide functionality for sending log data to a local or remote destination. An output module
instance is a sink, destination, or consumer. It is responsible for consuming event records produced by one or
more input module instances.
Extension
The NXLog language can be extended with extension modules. Extension module instances do not process
log data directly. Instead, they provide features (usually functions and procedures) that can be used from
other parts of the processing pipeline. Many extension module instances require no directives other than the
Module directive.
Example 2. Using an Extension Module
In this example, the xm_syslog module is loaded by the Extension block. This module provides the parse_syslog() procedure, in addition to other functions and procedures. In the following Input instance, the Exec directive calls parse_syslog() to parse the Syslog-formatted event.
nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_file
    File    '/var/log/messages'
    Exec    parse_syslog();
</Input>
Processor
Processor modules offer features for transforming, filtering, or converting log messages. One or more processor module instances can be used in a route between input and output module instances.
NOTE: Many processing functions and procedures are available through the NXLog language and can be accessed through the Exec directive in an Input or Output block without using a separate processor module instance. However, a separate processor module (pm_null, perhaps) will use a separate worker thread, providing additional processing parallelization.
3.2.2. Routes
Most log processing solutions are built around the same concept. The input is read from a source, log messages
are processed, and then log data is written to a destination. In NXLog, this path is called a "route" and is
configured with a Route block.
Routes are made up of one or more inputs, zero or more processors, and one or more outputs.
Example 3. A Simple Route
This route accepts input with the in module and sends it to the out module. This is the simplest functional route.
nxlog.conf
<Route r1>
    Path  in => out
</Route>
Example 4. A Route With a Processor
This route extends the previous example by adding an intermediate processing module proc.
nxlog.conf
<Route r2>
    Path  in => proc => out
</Route>
Example 5. A Route With Multiple Input and Output Modules
This route uses two input modules and two output modules. Input from in1 and in2 will be combined and sent to both out1 and out2.
nxlog.conf
<Route r3>
    Path  in1, in2 => out1, out2
</Route>
Example 6. Branching: Two Routes Using One Input Module
A module can be used by multiple routes simultaneously, as in this example. The in module instance is only declared once, but is used by both routes.
nxlog.conf
<Route r1>
    Path  in => out1
</Route>

<Route r2>
    Path  in => proc => out2
</Route>
Log Queues
Every processor and output module instance has an input log queue for events that have not yet been
processed by that module instance. When the preceding module has processed an event, it is placed in this
queue. Log queues are enabled by default for all processor and output module instances; adjusting log
queue sizes is the preferred way to control buffering behavior.
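For example, a module instance's queue size can be adjusted with the LogqueueSize directive (available in NXLog Enterprise Edition); the host and value shown here are illustrative:

nxlog.conf
<Output out>
    Module        om_tcp
    Host          syslog.example.com
    Port          514
    LogqueueSize  2048
</Output>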
Flow Control
NXLog’s flow control functionality provides automatic, zero-configuration handling of many cases where
buffering would otherwise be required. Flow control takes effect when the following sequence of events
occurs in a route:
1. a processor or output module instance is not able to process log data at the incoming rate,
2. that module instance’s log queue becomes full, and
3. the preceding input or processor module instance has flow control enabled (which is the default).
In this case, flow control will cause the input or processor module instance to suspend processing until the
succeeding module instance is ready to accept more log data.
For more information about these and other buffering features, including log queue persistence, disabling flow
control, read/write buffers, and examples for specific scenarios, see Using Buffers.
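As a sketch, flow control can be disabled on a UDP input so that the input is never suspended. This is a common choice for UDP, since a suspended UDP input would silently lose packets anyway:

nxlog.conf
<Input in_udp>
    Module       im_udp
    Host         0.0.0.0
    Port         514
    FlowControl  FALSE
</Input>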
NXLog supports three major modes of log collection:
• Agent-Based Collection: NXLog runs on the system that is generating the log data.
• Agent-Less Collection: Hosts or devices generate log data and send it over the network to NXLog.
• Offline Log Processing: The nxlog-processor(8) tool performs batch log processing.
NOTE: We recommend agent-based log collection for most use cases. In particular, we recommend this mode if you need strong security and reliability or need to transform log data before it leaves the system on which it was generated.
3.4.1. Agent-Based Collection
Agent-based log collection offers several important advantages over agent-less collection.
• Log data can be collected from more sources. For example, you can collect logs directly from files, instead of
relying on a logging process to send log data across the network.
• NXLog’s processing features are available. You can filter, normalize, and rewrite log data before sending it to
a destination, whether an NXLog instance or a log aggregation system. This includes the ability to send
structured log data, such as JSON and key-value pairs.
• You have full control over the transfer of the log data. Messages can be sent using a variety of protocols,
including over TLS/SSL encrypted connections for security. Log data can be sent in compressed batches and
can be buffered if necessary.
• Log collection in this mode is more reliable. NXLog includes delivery guarantees and flow control systems
which ensure your log data reaches its destination. You can monitor the health of the NXLog agent to verify
its operational integrity.
Although agent-based collection has many compelling advantages, it is not well suited to some use cases.
• Many network and embedded systems, such as routers and firewalls, do not support installing third-party
software. In this case it would not be possible to install the NXLog agent.
• Installing the NXLog agent on each system in a large-scale deployment may not be practical compared to
reading from the existing logging daemon on each system.
3.4.2. Agent-Less Collection
With this mode of log collection, a server or device sends log data to an NXLog instance over the network, using
its native protocols. NXLog collects and processes the information that it receives.
NOTE: We recommend agent-less log collection in cases where agent-based log collection is not feasible, for example from legacy or embedded systems that do not support installing the NXLog agent.
Agent-less log collection has some advantages:
• It is not necessary to install an NXLog agent application on the target system to collect log data from it.
• Generally, a device or system requires only minimal configuration to send log data over the network to an
NXLog instance in its native format.
Agent-less log collection has some disadvantages that should be taken into consideration.
• Agent-less log collection may provide lower performance than agent-based collection. On Windows systems,
the Windows Management Instrumentation process can consume more system resources than the NXLog
agent.
• Reliability is also a potential issue. Since most Syslog log forwarders use UDP to transfer log data, some data
could be lost if the server restarts or becomes unreachable over the network. Unlike agent-based log
collection, you often cannot monitor the health of the logging source.
• Data transfers are less secure when using agent-less collection since most Syslog sources transfer data over
unencrypted UDP.
NXLog supports agent-less collection from sources including:
• BSD Syslog (RFC 3164) and IETF Syslog (RFC 5424) sources (see Collecting and Parsing Syslog)
• Windows EventLog sources (with NXLog Enterprise Edition):
◦ The MSRPC protocol, using the im_msvistalog module (see Remote Collection With im_msvistalog)
◦ Windows Event Forwarding, using the im_wseventing module (see Remote Collection With
im_wseventing)
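A minimal agent-less Syslog receiver can be sketched as follows, listening on UDP port 514 and parsing each incoming datagram:

nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in_syslog>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    Exec    parse_syslog();
</Input>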
3.4.3. Offline Log Processing
The nxlog-processor(8) tool performs batch log processing: it reads all input, processes it, and then exits. Common input sources are files and databases. This tool is useful for log processing tasks such as conversion, filtering, and analysis.
Chapter 4. Available Modules
The following modules are provided with NXLog. Modules which are only available in NXLog Enterprise Edition
are noted. For detailed information about which modules are available for specific platforms, see the Modules by
Platform and Modules by Package sections.
Extension modules:

xm_admin — Remote Management (Enterprise Edition only)
Adds secure remote administration capabilities to NXLog using SOAP or JSON over HTTP/HTTPS.

xm_aixaudit — AIX Auditing (Enterprise Edition only)
Parses AIX audit events that have been written to file.

xm_asl — Apple System Logs (Enterprise Edition only)
Parses events in the Apple System Log (ASL) format.

xm_bsm — Basic Security Module Auditing (Enterprise Edition only)
Supports parsing of events written to file in Sun’s Basic Security Module (BSM) Auditing binary format.

xm_cef — CEF (Enterprise Edition only)
Provides functions for generating and parsing data in the Common Event Format (CEF) used by HP ArcSight™ products.

xm_charconv — Character Set Conversion
Provides functions and procedures to help you convert strings between different character sets (code pages).

xm_exec — External Program Execution
Passes log data through a custom external program for processing, either synchronously or asynchronously.

xm_grok — Grok Patterns (Enterprise Edition only)
Provides support for parsing events with Grok patterns.

xm_kvp — Key-Value Pairs
Provides functions and procedures to parse and generate data that is formatted as key-value pairs.

xm_leef — LEEF (Enterprise Edition only)
Provides functions for parsing and generating data in the Log Event Extended Format (LEEF), which is used by IBM Security QRadar products.

xm_msdns — DNS Server Debug Log Parsing (Enterprise Edition only)
Parses Microsoft Windows DNS Server debug logs.

xm_multiline — Multi-Line Message Parser
Parses log entries that span multiple lines.
xm_netflow — NetFlow (Enterprise Edition only)
Provides a parser for NetFlow payload collected over UDP.

xm_nps — NPS (Enterprise Edition only)
Provides functions and procedures for processing data in NPS Database Format stored in files by Microsoft Radius services.

xm_pattern — Pattern Matcher (Enterprise Edition only)
Applies advanced pattern matching logic to log data, which can give greater performance than normal regular expression statements. Replaces pm_pattern.

xm_resolver — Resolver (Enterprise Edition only)
Resolves key identifiers that appear in log messages into more meaningful equivalents, including IP addresses to host names, and group/user IDs to friendly names.

xm_snmp — SNMP Traps (Enterprise Edition only)
Parses SNMPv1 and SNMPv2c trap messages.

xm_syslog — Syslog
Provides helpers that let you parse and output the BSD Syslog protocol as defined by RFC 3164.

xm_w3c — W3C (Enterprise Edition only)
Parses data in the W3C Extended Log File Format, the BRO format, and Microsoft Exchange Message Tracking logs.
Input modules:

im_acct — BSD/Linux Process Accounting (Enterprise Edition only)
Collects process accounting logs from a Linux or BSD kernel.

im_aixaudit — AIX Auditing (Enterprise Edition only)
Collects AIX audit events directly from the kernel.

im_azure — Azure (Enterprise Edition only)
Collects logs from Microsoft Azure applications.

im_bsm — Basic Security Module Auditing (Enterprise Edition only)
Collects audit events directly from the kernel using Sun’s Basic Security Module (BSM) Auditing API.

im_checkpoint — Check Point OPSEC (Enterprise Edition only)
Provides support for collecting logs remotely from Check Point devices over the OPSEC LEA protocol.
im_dbi — DBI
Collects log data by reading data from an SQL database using the libdbi library.

im_etw — Event Tracing for Windows (ETW) (Enterprise Edition only)
Implements ETW controller and consumer functionality in order to collect events from the ETW system.

im_file — File
Collects log data from a file on the local file system.

im_fim — File Integrity Monitoring (Enterprise Edition only)
Scans files and directories and reports detected changes.

im_http — HTTP/HTTPS (Enterprise Edition only)
Accepts incoming HTTP or HTTPS connections and collects log events from client POST requests.

im_kafka — Apache Kafka (Enterprise Edition only)
Implements a consumer for collecting from a Kafka cluster.

im_kernel — Kernel (Enterprise Edition only for some platforms)
Collects log data from the kernel log buffer.

im_linuxaudit — Linux Audit System (Enterprise Edition only)
Configures and collects events from the Linux Audit System.

im_null — Null
Acts as a dummy log input module, which generates no log data. You can use this for testing purposes.

im_oci — OCI (Enterprise Edition only)
Reads log messages from an Oracle database.

im_odbc — ODBC (Enterprise Edition only)
Uses the ODBC API to read log messages from database tables.

im_perl — Perl (Enterprise Edition only)
Captures event data directly into NXLog using Perl code.

im_pipe — Named Pipes (Enterprise Edition only)
This module can be used to read log messages from named pipes on UNIX-like operating systems.

im_python — Python (Enterprise Edition only)
Captures event data directly into NXLog using Python code.

im_regmon — Windows Registry Monitoring (Enterprise Edition only)
Periodically scans the Windows registry and generates event records if a change in the monitored registry entries is detected.

im_ruby — Ruby (Enterprise Edition only)
Captures event data directly into NXLog using Ruby code.

im_ssl — SSL/TLS
Collects log data over a TCP connection that is secured with Transport Layer Security (TLS) or Secure Sockets Layer (SSL).

im_uds — Unix Domain Socket
Collects log data over a Unix domain socket (typically /dev/log).
19
Module Description
im_winperfcount — Windows Performance Periodically retrieves the values of the specified Windows
Counters (Enterprise Edition only) Performance Counters to create an event record.
im_wseventing — Windows Event Collects EventLog from Windows clients that have Windows Event
Forwarding (Enterprise Edition only) Forwarding configured.
im_zmq — ZeroMQ (Enterprise Edition only) Provides incoming message transport over ZeroMQ, a scalable
high-throughput messaging library.
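
As a minimal sketch of how an input module is declared, the following reads a file with im_file and parses each record as BSD Syslog via xm_syslog (the file path is illustrative):

```
<Extension syslog>
    Module  xm_syslog
</Extension>

<Input messages>
    Module  im_file
    File    "/var/log/messages"
    Exec    parse_syslog_bsd();
</Input>
```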
pm_blocker — Blocker: Blocks log data from progressing through a route; useful for testing, to simulate a blocked route.
pm_filter — Filter: Forwards the log data only if the condition specified in the Filter module configuration evaluates to true. This module has been deprecated; use the NXLog language drop() procedure instead.
pm_hmac — HMAC Message Integrity (Enterprise Edition only): Protects messages with HMAC cryptographic checksums. This module has been deprecated.
pm_hmac_check — HMAC Message Integrity Checker (Enterprise Edition only): Checks HMAC cryptographic checksums on messages. This module has been deprecated.
pm_pattern — Pattern Matcher: Applies advanced pattern matching logic to log data, which can give better performance than regular expression statements in Exec directives. This module has been deprecated; use the xm_pattern module instead.
pm_transformer — Message Format Converter: Provides parsers for various log formats and converts between them. This module has been deprecated; use the xm_syslog, xm_csv, xm_json, and xm_xml modules instead.
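
Since pm_filter is deprecated, conditional filtering is expressed with the drop() procedure in an Exec directive instead. A minimal sketch (the file path and match pattern are illustrative):

```
<Input applog>
    Module  im_file
    File    "/var/log/app.log"
    Exec    if $raw_event =~ /DEBUG/ drop();
</Input>
```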
om_batchcompress — Batched Compression over TCP or SSL (Enterprise Edition only): Provides a compressed network transport for outgoing messages, with optional SSL/TLS encryption. Pairs with the im_batchcompress input module.
om_blocker — Blocker: Blocks log data from being written; useful for testing, to simulate a blocked route.
om_dbi — DBI: Stores log data in an SQL database using the libdbi library.
om_eventdb — EventDB (Enterprise Edition only): Uses libdrizzle to insert log message data into a MySQL database with a special schema.
om_null — Null: Acts as a dummy log output module; the output is not written or sent anywhere. Useful for testing purposes.
om_odbc — ODBC (Enterprise Edition only): Uses the ODBC API to write log messages to database tables.
om_perl — Perl (Enterprise Edition only): Uses Perl code to handle output log messages from NXLog.
om_pipe — Named Pipes (Enterprise Edition only): Sends log messages to named pipes on UNIX-like operating systems.
om_python — Python (Enterprise Edition only): Uses Python code to handle output log messages from NXLog.
om_ruby — Ruby (Enterprise Edition only): Uses Ruby code to handle output log messages from NXLog.
om_ssl — SSL/TLS: Sends log data over a TCP connection secured with Transport Layer Security (TLS) or Secure Sockets Layer (SSL).
om_udpspoof — UDP with IP Spoofing (Enterprise Edition only): Sends log data over a UDP connection, spoofing the source IP address so that packets appear to come from another host.
om_webhdfs — WebHDFS (Enterprise Edition only): Stores log data in Hadoop HDFS using the WebHDFS protocol.
om_zmq — ZeroMQ (Enterprise Edition only): Provides outgoing message transport over ZeroMQ, a scalable high-throughput messaging library.
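
Input and output module instances are connected by a Route block. The following minimal sketch forwards a local file to a remote collector over TLS with om_ssl (the host name, port, and CA file path are placeholders):

```
<Input in>
    Module  im_file
    File    "/var/log/messages"
</Input>

<Output out>
    Module  om_ssl
    Host    logserver.example.com
    Port    6514
    CAFile  /path/to/ca.pem
</Output>

<Route r>
    Path    in => out
</Route>
```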
4.5.1. AIX 7.1
Table 5. Available Modules in nxlog-5.0.5874-1.aix7.1.ppc.rpm
4.5.2. AmazonLinux 2
Table 6. Available Modules in nxlog-5.0.5874_amzn2_aarch64.tar.bz2
nxlog-5.0.5874_amzn2_aarch64.rpm
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-pcap-5.0.5874_amzn2_aarch64.rpm
    Input: im_pcap

nxlog-systemd-5.0.5874_amzn2_aarch64.rpm
    Input: im_systemd

nxlog-wseventing-5.0.5874_amzn2_aarch64.rpm
    Input: im_wseventing
nxlog-5.0.5874_rhel6_x86_64.rpm
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint-5.0.5874_rhel6_x86_64.rpm
    Input: im_checkpoint

nxlog-wseventing-5.0.5874_rhel6_x86_64.rpm
    Input: im_wseventing
nxlog-5.0.5874_rhel7_x86_64.rpm
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint-5.0.5874_rhel7_x86_64.rpm
    Input: im_checkpoint

nxlog-pcap-5.0.5874_rhel7_x86_64.rpm
    Input: im_pcap

nxlog-systemd-5.0.5874_rhel7_x86_64.rpm
    Input: im_systemd

nxlog-wseventing-5.0.5874_rhel7_x86_64.rpm
    Input: im_wseventing
nxlog-5.0.5874_rhel8_x86_64.rpm
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint-5.0.5874_rhel8_x86_64.rpm
    Input: im_checkpoint

nxlog-pcap-5.0.5874_rhel8_x86_64.rpm
    Input: im_pcap

nxlog-systemd-5.0.5874_rhel8_x86_64.rpm
    Input: im_systemd

nxlog-wseventing-5.0.5874_rhel8_x86_64.rpm
    Input: im_wseventing
nxlog-5.0.5874_generic_deb_amd64.deb
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kafka, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_java, om_kafka, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib
nxlog-5.0.5874_debian10_amd64.deb
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_debian10_amd64.deb
    Input: im_checkpoint

nxlog-pcap_5.0.5874_debian10_amd64.deb
    Input: im_pcap

nxlog-systemd_5.0.5874_debian10_amd64.deb
    Input: im_systemd

nxlog-wseventing_5.0.5874_debian10_amd64.deb
    Input: im_wseventing
nxlog-5.0.5874_debian8_amd64.deb
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_debian8_amd64.deb
    Input: im_checkpoint

nxlog-pcap_5.0.5874_debian8_amd64.deb
    Input: im_pcap

nxlog-systemd_5.0.5874_debian8_amd64.deb
    Input: im_systemd

nxlog-wseventing_5.0.5874_debian8_amd64.deb
    Input: im_wseventing
nxlog-5.0.5874_debian9_amd64.deb
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_debian9_amd64.deb
    Input: im_checkpoint

nxlog-pcap_5.0.5874_debian9_amd64.deb
    Input: im_pcap

nxlog-systemd_5.0.5874_debian9_amd64.deb
    Input: im_systemd

nxlog-wseventing_5.0.5874_debian9_amd64.deb
    Input: im_wseventing
4.5.10. FreeBSD 11
Table 14. Available Modules in nxlog-5.0.5874_fbsd_x86_64.tgz
nxlog-5.0.5874_fbsd_x86_64.tgz
    Input:     im_acct, im_azure, im_batchcompress, im_bsm, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kernel, im_mark, im_null, im_pcap, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_bsm, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib
4.5.11. MacOS
Table 15. Available Modules in nxlog-5.0.5874_macos.pkg
nxlog-5.0.5874_macos.pkg
    Input:     im_acct, im_azure, im_batchcompress, im_bsm, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kafka, im_kernel, im_mark, im_null, im_pcap, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_kafka, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_bsm, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib
nxlog-5.0.5874_windows_x64.msi
    Input:     im_azure, im_batchcompress, im_etw, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kafka, im_mark, im_mseventlog, im_msvistalog, im_null, im_odbc, im_perl, im_redis, im_regmon, im_ssl, im_tcp, im_testgen, im_udp, im_winperfcount, im_wseventing
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_kafka, om_null, om_odbc, om_perl, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_perl, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib
nxlog-5.0.5874_nano.zip
    Input:     im_azure, im_batchcompress, im_etw, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kafka, im_mark, im_mseventlog, im_msvistalog, im_null, im_odbc, im_perl, im_redis, im_regmon, im_ssl, im_tcp, im_testgen, im_udp, im_winperfcount, im_wseventing
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_kafka, om_null, om_odbc, om_perl, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: java/jni/libjavanxlog, xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_perl, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib
nxlog-5.0.5874_generic_x86_64.rpm
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_kafka, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_java, om_kafka, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib
4.5.15. SLES 12
Table 19. Available Modules in nxlog-5.0.5874_sles12_x86_64.tar.bz2
nxlog-5.0.5874_sles12_x86_64.rpm
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint-5.0.5874_sles12_x86_64.rpm
    Input: im_checkpoint

nxlog-pcap-5.0.5874_sles12_x86_64.rpm
    Input: im_pcap

nxlog-systemd-5.0.5874_sles12_x86_64.rpm
    Input: im_systemd

nxlog-wseventing-5.0.5874_sles12_x86_64.rpm
    Input: im_wseventing
4.5.16. SLES 15
Table 20. Available Modules in nxlog-5.0.5874_sles15_x86_64.tar.bz2
nxlog-5.0.5874_sles15_x86_64.rpm
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint-5.0.5874_sles15_x86_64.rpm
    Input: im_checkpoint

nxlog-pcap-5.0.5874_sles15_x86_64.rpm
    Input: im_pcap

nxlog-systemd-5.0.5874_sles15_x86_64.rpm
    Input: im_systemd

nxlog-wseventing-5.0.5874_sles15_x86_64.rpm
    Input: im_wseventing
nxlog-5.0.5874_solaris_x86.pkg.gz
    Input:     im_acct, im_azure, im_batchcompress, im_bsm, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_bsm, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib
nxlog-5.0.5874_solaris_sparc.pkg.gz
    Input:     im_acct, im_azure, im_batchcompress, im_bsm, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_java, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds, im_wseventing
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_exec, om_file, om_go, om_http, om_java, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_bsm, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_java, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_xml, xm_zlib
nxlog-5.0.5874_ubuntu16_amd64.deb
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_ubuntu16_amd64.deb
    Input: im_checkpoint

nxlog-pcap_5.0.5874_ubuntu16_amd64.deb
    Input: im_pcap

nxlog-systemd_5.0.5874_ubuntu16_amd64.deb
    Input: im_systemd

nxlog-wseventing_5.0.5874_ubuntu16_amd64.deb
    Input: im_wseventing
nxlog-5.0.5874_ubuntu18_amd64.deb
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_ubuntu18_amd64.deb
    Input: im_checkpoint

nxlog-pcap_5.0.5874_ubuntu18_amd64.deb
    Input: im_pcap

nxlog-systemd_5.0.5874_ubuntu18_amd64.deb
    Input: im_systemd

nxlog-wseventing_5.0.5874_ubuntu18_amd64.deb
    Input: im_wseventing
nxlog-5.0.5874_ubuntu20_amd64.deb
    Input:     im_acct, im_azure, im_batchcompress, im_exec, im_file, im_fim, im_go, im_http, im_internal, im_kernel, im_linuxaudit, im_mark, im_null, im_pipe, im_redis, im_ssl, im_tcp, im_testgen, im_udp, im_uds
    Output:    om_batchcompress, om_blocker, om_elasticsearch, om_eventdb, om_exec, om_file, om_go, om_http, om_null, om_pipe, om_raijin, om_redis, om_ssl, om_tcp, om_udp, om_udpspoof, om_uds, om_webhdfs
    Processor: pm_blocker, pm_buffer, pm_evcorr, pm_filter, pm_hmac, pm_hmac_check, pm_norepeat, pm_null, pm_pattern, pm_transformer, pm_ts
    Extension: xm_admin, xm_aixaudit, xm_asl, xm_cef, xm_charconv, xm_crypto, xm_csv, xm_exec, xm_filelist, xm_fileop, xm_gelf, xm_go, xm_grok, xm_json, xm_kvp, xm_leef, xm_msdns, xm_multiline, xm_netflow, xm_nps, xm_pattern, xm_resolver, xm_rewrite, xm_snmp, xm_soapadmin, xm_syslog, xm_w3c, xm_wtmp, xm_xml, xm_zlib

nxlog-checkpoint_5.0.5874_ubuntu20_amd64.deb
    Input: im_checkpoint

nxlog-pcap_5.0.5874_ubuntu20_amd64.deb
    Input: im_pcap

nxlog-systemd_5.0.5874_ubuntu20_amd64.deb
    Input: im_systemd

nxlog-wseventing_5.0.5874_ubuntu20_amd64.deb
    Input: im_wseventing
im_acct
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_batchcompress
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_exec
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_fim
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_http
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_java
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-java-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-java-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-java-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-java-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-java_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-java_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-java_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-java-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-java-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-java_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-java_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-java_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_linuxaudit
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_null
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_perl
nxlog-perl-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-perl-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-perl-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-perl-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-perl_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-perl_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-perl_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-perl-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-perl-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-perl_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-perl_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-perl_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_redis
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_systemd
nxlog-systemd-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-systemd-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-systemd-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-systemd_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-systemd_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-systemd_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-systemd-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-systemd-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-systemd_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-systemd_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-systemd_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_udp
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
im_wseventing
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-wseventing-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-wseventing-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-wseventing-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-wseventing-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-wseventing_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-wseventing_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-wseventing_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-wseventing-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-wseventing-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-wseventing_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-wseventing_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-wseventing_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
om_blocker
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
om_eventdb
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
om_go
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
om_java
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-java-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-java-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-java-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-java-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-java_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-java_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-java_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-java-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-java-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-java_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-java_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-java_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
om_null
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
om_pipe
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
om_redis
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
om_tcp
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
om_udpspoof
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
om_webhdfs
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_admin
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_asl
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_charconv
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_csv
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_filelist
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_gelf
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_grok
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_json
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_leef
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_multiline
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_nps
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_python
nxlog-python-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-python-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-python-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-python_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-python_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-python_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-python-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-python-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-python_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-python_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-python_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_ruby
nxlog-ruby-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-ruby-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-ruby-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-ruby_5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-ruby_5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-ruby_5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-ruby-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-ruby_5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-ruby_5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-ruby_5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_syslog
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
xm_xml
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
pm_blocker
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
pm_evcorr
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
pm_hmac
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
pm_norepeat
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
pm_pattern
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
pm_ts
nxlog-5.0.5874-1.aix7.1.ppc.rpm (AIX 7.1)
nxlog-5.0.5874_amzn2_aarch64.rpm (AmazonLinux 2)
nxlog-5.0.5874_rhel6_x86_64.rpm (CentOS 6, RHEL 6)
nxlog-5.0.5874_rhel7_x86_64.rpm (CentOS 7, RHEL 7)
nxlog-5.0.5874_rhel8_x86_64.rpm (CentOS 8, RHEL 8)
nxlog-5.0.5874_generic_deb_amd64.deb (DEB Generic)
nxlog-5.0.5874_debian10_amd64.deb (Debian Buster)
nxlog-5.0.5874_debian8_amd64.deb (Debian Jessie)
nxlog-5.0.5874_debian9_amd64.deb (Debian Stretch)
nxlog-5.0.5874_fbsd_x86_64.tgz (FreeBSD 11)
nxlog-5.0.5874_macos.pkg (MacOS)
nxlog-5.0.5874_windows_x64.msi (Microsoft Windows 64bit)
nxlog-5.0.5874_nano.zip (Microsoft Windows Nano)
nxlog-5.0.5874_generic_x86_64.rpm (RPM Generic)
nxlog-5.0.5874_sles12_x86_64.rpm (SLES 12)
nxlog-5.0.5874_sles15_x86_64.rpm (SLES 15)
nxlog-5.0.5874_solaris_x86.pkg.gz (Solaris 10 i386)
nxlog-5.0.5874_solaris_sparc.pkg.gz (Solaris 10 sparc)
nxlog-5.0.5874_ubuntu16_amd64.deb (Ubuntu 16.04)
nxlog-5.0.5874_ubuntu18_amd64.deb (Ubuntu 18.04)
nxlog-5.0.5874_ubuntu20_amd64.deb (Ubuntu 20.04)
Deployment
Chapter 5. Supported Platforms
The following operating systems and architectures are fully supported, except as noted. For more information
about types of log collection that are available for specific platforms, see the corresponding chapter in OS
Support.
NOTE: NXLog also provides generic packages compiled against glibc 2.5 to support RPM-based legacy
distributions such as Red Hat 5.11 and SLES 11 on both 32-bit and 64-bit hardware. The packages are named
nxlog-X.XX.XXXX_generic_glibc2.5_rpm_x86_64.rpm and nxlog-X.XX.XXXX_generic_glibc2.5_rpm_i386.rpm
respectively, and are available in the beta version as well.
NOTE: Under the Technical Support Services Agreement, Linux and BSD binary packages may be provided upon
request for operating systems that have reached their end-of-life date (like RHEL 5), for legacy 32-bit
hardware, or for less common distributions (such as Linux Mint).
Operating System Architectures
Microsoft Windows Server 2012 x86_64
NOTE: While the im_odbc input module is included in the Windows Nano Server package, Microsoft currently
does not provide a reverse forwarder to support the ODBC API.
Docker x86_64
For log sources of the above platforms, see Apple macOS, IBM AIX, and Oracle Solaris.
The following Microsoft Windows operating systems are unsupported due to having reached end-of-life status,
but are known to work with NXLog.
Chapter 6. Product Life Cycle
NXLog Enterprise Edition, NXLog Community Edition, and NXLog Manager all use the versioning scheme X.Y.Z.
• X denotes the MAJOR release version. Long-term support is provided for each major release when
applicable.
• Y denotes the MINOR version. Minor releases provide backward compatible enhancements and features
during the lifetime of a major release.
• Z denotes the REVISION NUMBER. Hot-fix revisions may be released within the same minor version.
Upgrades within the same major version are backward compatible. Features and changes that may not be
backward compatible are added to major releases only.
For supported products, the end-of-life (EOL) date is at least one year after the next major version is released.
NXLog Enterprise Edition 4.x One year after the release of NXLog Enterprise Edition 5.0
NXLog Manager 5.x One year after the release of NXLog Manager 6.0
Chapter 7. System Requirements
In order to function efficiently, each NXLog product requires a certain amount of available system resources on
the host system. The table below provides general guidelines to use when planning an NXLog deployment. Actual
system requirements will vary based on the configuration and event rate; therefore, both minimum and
recommended requirements are listed. Always thoroughly test a deployment to verify that the desired
performance can be achieved with the system resources available.
NOTE: These requirements are in addition to the operating system’s requirements. For systems running both
NXLog Enterprise Edition and NXLog Manager, these requirements should be combined cumulatively with the
NXLog Manager System Requirements.
Minimum Recommended
Processor cores 1 2
Memory/RAM 60 MB 250 MB
Chapter 8. Digital Signature Verification
Security regulations may require organizations to verify the identity of software sources as well as the
integrity of the software obtained from them. To facilitate compliance with such regulations, and to
guarantee the authenticity and integrity of downloaded installer files, NXLog installer packages are
digitally signed.
In some cases, like with RPM packages, a public key is required to verify the digital signature. For this, the Public
PGP Key can be downloaded from NXLog’s public contrib repository.
NOTE: This example uses the generic RPM package. Change the name of the package to match the package used
in your environment.
1. Import the downloaded NXLog public key into the RPM with the following command:
2. Verify the package signature with the imported public key using the following command:
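The two commands themselves are not reproduced in this copy of the text. A sketch of the two steps, assuming the downloaded public key was saved as nxlog-pubkey.asc (the key filename is an assumption) and using the generic RPM name from this guide:

```
# rpm --import nxlog-pubkey.asc
# rpm --checksig nxlog-5.0.5874_generic_x86_64.rpm
```

rpm --checksig (also available as rpm -K) reports the digest and signature status of the package.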
NXLog is displayed as a signer for the installer. The algorithm used for the signature and the timestamp is
also visible.
3. In the Signature list, select NXLog, then click Details to display additional information about the signature.
In the General tab, the signer information and countersignatures are displayed. Click on View Certificate to
display the certificate or select the Advanced tab to display signature details.
1. Double-click the installer package.
2. Click on the padlock icon in the upper-right corner of the installer window to display information about the
certificate.
For valid packages a green tick is displayed, indicating the validity of the certificate.
3. Click on the triangle next to Details to display additional information about the certificate.
Chapter 9. Red Hat Enterprise Linux & CentOS
9.1. Installing
1. Download the appropriate NXLog installation file from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the correct file for the target
platform.
Platform Archive
RHEL 6 or CentOS 6 nxlog-5.0.5876_rhel6_x86_64.tar.bz2
NOTE: The RHEL 6 and RHEL 7 archives above each contain several RPMs (see Packages in a RHEL Archive
below). These RPMs have dependencies on system-provided RPMs. The generic RPM above contains all the
libraries (such as libpcre and libexpat) that are needed by NXLog. The only dependency is libc. However,
some modules are not available (im_checkpoint, for example). The advantage of the generic RPM is that it
can be installed on most RPM-based Linux distributions.
2. Transfer the file to the target server using SFTP or a similar secure method.
3. Log in to the target server and extract the contents of the archive (unless you are using the generic package):
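The extraction command is not shown in this copy of the text; for the RHEL 6 archive named above, it might look like:

```
# tar -xjf nxlog-5.0.5876_rhel6_x86_64.tar.bz2
```

Substitute the archive that matches your platform.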
Package Description
nxlog-5.0.5876_rhel7.x86_64.rpm The main NXLog package
# export NXLOG_USER=nxlog2
# export NXLOG_GROUP=nxlog2
b. If you are installing the nxlog-zmq package, enable the EPEL repository so ZeroMQ dependencies will be
available:
c. Use yum to install the required NXLog packages (or the generic package) and dependencies.
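These commands are not reproduced in this copy of the text. On CentOS 7, a sketch might look like the following (the RPM filename is taken from the package table above; on RHEL, EPEL is enabled through the subscription channels instead of the epel-release package):

```
# yum install epel-release
# yum install ./nxlog-5.0.5876_rhel7.x86_64.rpm
```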
# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK
9.2. Upgrading
To upgrade an NXLog installation to the latest release, use yum as in the installation instructions above.
To replace a trial installation of NXLog Enterprise Edition with a licensed copy of the same version, follow the
installation instructions.
NOTE: The same user and group will be used for the upgrade as was used for the original installation (see
installation step 4 above). Changing to a different user and group during upgrade is not supported.
9.3. Uninstalling
To uninstall NXLog, use yum remove. To remove any packages that were dependencies of NXLog but are not
required by any other packages, include the --setopt=clean_requirements_on_remove=1 option. Verify the
operation before confirming!
NOTE: This procedure may not remove all files that were created while configuring NXLog. Likewise, any
files created as a result of NXLog’s logging operations will not be removed. To find these files, examine
the configuration files that were used with NXLog and check the installation directory (/opt/nxlog).
Chapter 10. Debian & Ubuntu
10.1. Installing
1. Download the appropriate NXLog installation file from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, download the correct file for the target
platform.
Platform Archive
Debian 8 (Jessie) nxlog-5.0.5876_debian8_amd64.tar.bz2
2. Transfer the file to the target server using SFTP or a similar secure method.
3. Log in to the target server and extract the contents of the archive (unless you are using the generic package):
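The extraction command is not shown in this copy of the text; for the Debian 8 archive named above, it might look like:

```
# tar -xjf nxlog-5.0.5876_debian8_amd64.tar.bz2
```

Substitute the archive that matches your platform.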
Package Description
nxlog-5.0.5876_amd64.deb The main NXLog package
# export NXLOG_USER=nxlog2
# export NXLOG_GROUP=nxlog2
b. Use dpkg to install the required NXLog packages (or the generic package, if you are using that).
# dpkg -i nxlog-5.0.5876_amd64.deb
c. If dpkg returned errors about uninstalled dependencies, use apt-get to install them and complete the
NXLog installation.
# apt-get -f install
# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK
8. Check that the NXLog service is running with the service command.
10.2. Upgrading
To upgrade an NXLog installation to the latest release, or to replace a trial installation of NXLog Enterprise Edition
with a licensed copy, use dpkg as explained in the installation instructions above.
# dpkg -i nxlog-5.0.5876_amd64.deb
When upgrading to a licensed copy with additional NXLog trial packages installed, such as
nxlog-trial-python, use dpkg -i --auto-configure.
Make sure to edit this example to include all nxlog-trial packages that are actually installed.
# apt-get -f install
NOTE: The same user and group will be used for the upgrade as was used for the original installation (see
installation step 4a above). Changing to a different user and group during upgrade is not supported.
10.3. Uninstalling
To uninstall NXLog, use apt-get. To remove any unused dependencies (system-wide), include the
--auto-remove option. Verify the operation before confirming!
# apt-get remove '^nxlog*'
NOTE: Use apt-get purge instead to also remove configuration files. But in either case, this procedure
may not remove all files that were created in order to configure NXLog, or that were created as a result
of NXLog’s logging operations. To find these files, consult the configuration files that were used with
NXLog and check the installation directory (/opt/nxlog).
Chapter 11. SUSE Linux Enterprise Server
11.1. Installing
1. Download the appropriate NXLog install archive from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the correct archive for your system.
Platform Archive
SUSE Linux Enterprise Server 11 nxlog-5.0.5876_sles11_x86_64.tar.bz2
NOTE: The SLES 11, SLES 12, and SLES 15 archives above each contain several RPMs (see Packages in an
SLES Archive below). These RPMs have dependencies on system-provided RPMs.
2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Log in to the target server and extract the contents of the archive.
Package Description
nxlog-5.0.5876_sles12.x86_64.rpm The main NXLog package
4. Optional: To change the NXLog user and group for the installation, set the NXLOG_USER and NXLOG_GROUP
environment variables. During installation a new user and a new group will be created based on these
environment variables. They will be used for User and Group directives in nxlog.conf, and for the
ownership of some directories under /opt/nxlog. Specifying an already existing user or group is not
supported. The created user and group will be deleted on NXLog removal.
# export NXLOG_USER=nxlog2
# export NXLOG_GROUP=nxlog2
5. Install the required NXLog packages and their dependencies (this example installs the main NXLog package
only).
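The installation command is not shown in this copy of the text; installing the main package from the extracted archive might look like:

```
# zypper install ./nxlog-5.0.5876_sles12.x86_64.rpm
```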
# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK
9. Check that the NXLog service is running with the systemctl command.
11.2. Upgrading
To update an NXLog installation to the latest release, use zypper as in the installation instructions above.
To replace a trial installation of NXLog Enterprise Edition with a licensed copy of the same version, follow the
installation instructions.
NOTE: The same user and group will be used for the upgrade as was used for the original installation (see
installation step 4 above). Changing to a different user and group during upgrade is not supported.
11.3. Uninstalling
To uninstall NXLog, use zypper remove. To remove any packages that were dependencies of NXLog but are not
required by any other packages, include the --clean-deps option. Verify the operation before confirming!
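A sketch of the removal command described above:

```
# zypper remove --clean-deps nxlog
```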
NOTE: This procedure may not remove all files that were created while configuring NXLog. Likewise, any
files created as a result of NXLog’s logging operations will not be removed. To find these files, examine
the configuration files that were used with NXLog and check the installation directory (/opt/nxlog).
Chapter 12. FreeBSD
12.1. Installing
NXLog is available as a precompiled package for FreeBSD. Follow these steps to install NXLog.
1. Download the appropriate NXLog install archive from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the
nxlog-5.0.5876_fbsd_x86_64.tgz package.
2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Log in to the target server as the root user.
4. Optional: To change the NXLog user and group for the installation, set the NXLOG_USER and NXLOG_GROUP
environment variables. During installation a new user and a new group will be created based on these
environment variables. They will be used for User and Group directives in nxlog.conf, and for the
ownership of some directories under /opt/nxlog. Specifying an already existing user or group is not
supported. The created user and group will be deleted on NXLog removal.
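The installation step itself is not shown in this copy of the text; with the pkg(7) utility it might look like:

```
# pkg add nxlog-5.0.5876_fbsd_x86_64.tgz
```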
The installation path is /opt/nxlog. Configuration files are located in /opt/nxlog/etc. The rc init script is
placed in /etc/rc.d/ on installation. An nxlog user account is created, and NXLog will run under this user
by default.
# vi /opt/nxlog/etc/nxlog.conf
General information about configuring NXLog can be found in Configuration. For more details about
configuring NXLog to collect logs on BSD, see the FreeBSD summary.
# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK
8. To enable NXLog, add the line nxlog_enable="YES" to /etc/rc.conf. Then manage the NXLog service with
the service(8) utility.
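A sketch of these commands (sysrc(8) is one way to add the required line to /etc/rc.conf):

```
# sysrc nxlog_enable="YES"
# service nxlog start
```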
12.2. Upgrading
To upgrade NXLog, first remove the old version and then install the new version.
12.3. Uninstalling
1. Use the pkg(7) utility to uninstall the NXLog package.
The uninstall script will remove NXLog along with the user, group, and files. The pkg utility will not remove
new or modified files.
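A sketch of the pkg(7) invocation described above:

```
# pkg delete nxlog
```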
2. Manually remove the base directory. This will remove any new or modified files left behind by the previous
step.
# rm -rf /opt/nxlog
Chapter 13. OpenBSD
13.1. Installing
NXLog is available as precompiled packages for OpenBSD. Follow these steps to install NXLog.
1. Download the appropriate NXLog install archive from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the correct package for your
system.
Platform Package
OpenBSD 6.0 nxlog-5.0.5876-obsd6_0_x86_64.tgz
2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Log in to the target server as the root user.
4. Optional: To change the NXLog user and group for the installation, set the NXLOG_USER and NXLOG_GROUP
environment variables. During installation a new user and a new group will be created based on these
environment variables. They will be used for User and Group directives in nxlog.conf, and for the
ownership of some directories under /opt/nxlog. Specifying an already existing user or group is not
supported. The created user and group will be deleted on NXLog removal.
# export NXLOG_USER=nxlog2
# export NXLOG_GROUP=nxlog2
5. Install NXLog with the pkg_add(1) utility. The OpenBSD package is currently unsigned; use the -D
unsigned flag to install.
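A sketch of the installation command, using the OpenBSD 6.0 package named above:

```
# pkg_add -D unsigned nxlog-5.0.5876-obsd6_0_x86_64.tgz
```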
The installation prefix is /opt/nxlog. Configuration files are located in /opt/nxlog/etc. The rc init script is
placed in /etc/rc.d on installation.
# vi /opt/nxlog/etc/nxlog.conf
General information about configuring NXLog can be found in Configuration. For more details about
configuring NXLog to collect logs on BSD, see the OpenBSD summary.
# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK
# rcctl enable nxlog
# rcctl start nxlog
nxlog(ok)
# rcctl stop nxlog
nxlog(ok)
# rcctl disable nxlog
You can also use rcctl(8) to check and set the configuration flags.
13.2. Upgrading
To upgrade from a previous NXLog version (whether a licensed copy or trial), use the pkg_add(1) utility. This
example shows an upgrade from version 3.0.1865 to 5.0.5876.
# pkg_add -U nxlog-5.0.5876-obsd6_2_x86_64.tgz
nxlog-3.0.1865-obsd6_2->5.0.5876-obsd6_2: ok
Read shared items: ok
To replace a trial installation of NXLog Enterprise Edition with a licensed copy of the same version, use pkg_add
with the replace flag (-r).
# pkg_add -r nxlog-5.0.5876-obsd6_2_x86_64.tgz
NOTE: The same user and group will be used for the upgrade as was used for the original installation (see
installation step 4 above). Changing to a different user and group during upgrade is not supported.
13.3. Uninstalling
To uninstall NXLog, follow these steps.
# pkg_delete nxlog
nxlog-5.0.5876-obsd6_2: ok
Read shared items: ok
--- -nxlog-5.0.5876-obsd6_2 -------------------
The uninstall script will remove NXLog along with the user, group, and files. The pkg_delete utility will not
remove new files or modified configuration files.
2. Manually remove the base directory. This will remove any new or modified files left behind by the previous
step.
# rm -rf /opt/nxlog
Chapter 14. Microsoft Windows
14.1. Installing
First, download the NXLog MSI file from the NXLog website.
1. Log in to your account, then click My account at the top of the page.
2. Under the Downloads › NXLog Enterprise Edition files tab, choose the correct package for your system.
Platform Package
Microsoft Windows, 32-bit nxlog-5.0.5876_windows_x86.msi
WARNING: Using the 32-bit installer to install NXLog on a 64-bit system is unsupported and not
recommended. To override the installer check and proceed anyway, use the SKIP_X64_CHECK=1 property (for
example, msiexec /i nxlog-5.0.5876_windows_x64.msi /q SKIP_X64_CHECK=1).
• Installing Interactively
• Installing with Msiexec
• Deploying via Group Policy
See also the MSI for NXLog Agent Setup add-on, which provides an example MSI package for bootstrapping
NXLog agents.
NOTE: The service Startup type of newer versions of NXLog is set to Automatic (Delayed Start) instead of
Automatic. To change this option, open the service control manager and alter the Startup type in the
General tab.
4. Start NXLog by opening the Service Manager, finding the nxlog service in the list, and starting it. To run it in
the foreground instead, invoke the nxlog.exe executable with the -f command line argument.
5. Open the NXLog log file (by default, C:\Program Files\nxlog\data\nxlog.log) with Notepad and check
for errors.
NOTE: Some text editors (such as WordPad) use exclusive locking and will refuse to open the log file
while NXLog is running.
To allow Windows to prompt for administrator privileges, but otherwise install unattended, use /qb instead.
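For reference, a fully quiet Msiexec install and its /qb variant might look like the following:

```
msiexec /i nxlog-5.0.5876_windows_x64.msi /q
msiexec /i nxlog-5.0.5876_windows_x64.msi /qb
```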
NOTE: These steps were tested with a Windows Server 2016 domain controller and a Windows 7 client. There
are multiple ways to configure NXLog deployment with Group Policy. The required steps for your network may
vary from those listed below.
c. Provide a name for the group (for example, nxlog). Use the Security group type and Global context (or
the context suitable for your case).
d. Add computers to the group by selecting one or more, clicking Actions › Add to a group…, and entering
the group name (nxlog).
3. Create a network share for distributing the NXLog files.
a. Create a folder in the desired location (for example, C:\nxlog-dist).
b. Set up the folder as a share: right-click, select Properties, open the Sharing tab, and click [ Share… ].
c. Add the group (nxlog) and click [ Share ]. Take note of the share name provided by the wizard; it will
be needed later (for example, \\WINSERV1\nxlog-dist).
d. Copy the required files to the shared folder. If using NXLog Manager, this will include at least three files:
nxlog-5.0.5876_windows_x64.msi, managed.conf, and CA certificate agent-ca.pem. If not using
NXLog Manager, use a custom nxlog.conf instead of managed.conf, omit the CA certificate, and include
any other files required by the configuration.
NOTE: The file managed.conf is located in the C:\Program Files\nxlog\conf\nxlog.d\ directory. Prior to
NXLog version 5, it was named log4ensics.conf and was located in the C:\Program Files\nxlog\conf\
directory.
4. Create a Group Policy Object (GPO) for the NXLog deployment.
a. Open the Group Policy Management console (gpmc.msc).
b. In the console tree, under Domains, right-click on your domain and click Create a GPO in this domain,
and Link it here…; this will create a GPO under the Group Policy Objects folder and link it to the
domain.
c. Name the GPO (for example, nxlog) and click [ OK ].
a. Under Computer Configuration › Policies › Software Settings, right-click Software installation. Click
New › Package… to create a deployment package for NXLog.
b. Browse to the network share and open the nxlog-5.0.5876_windows_x64.msi package. It is important
to use the Uniform Naming Convention (UNC) path (for example, \\WINSERV1\nxlog-dist) so the file
will be accessible by remote computers.
c. Select the Assigned deployment method.
6. Add the required files to the GPO by following these steps for each file.
a. Under Computer Configuration › Preferences › Windows Settings, right-click on Files. Click New ›
File.
b. Select the Replace action in the drop-down.
c. Choose the source file on the network share (for example, \\WINSERV1\nxlog-dist\managed.conf or
\\WINSERV1\nxlog-dist\agent-ca.pem).
d. Type in the destination path for the file (for example, C:\Program
Files\nxlog\conf\nxlog.d\managed.conf or C:\Program Files\nxlog\cert\agent-ca.pem).
e. Check Apply once and do not reapply under the Common tab for files that should only be deployed
once. This is especially important for managed.conf because NXLog Manager will write configuration
changes to that file.
f. Click [ OK ] to create the File in the GPO.
7. After the Group Policy is updated on the clients and NXLog is installed, one more reboot will be required
before the NXLog service starts automatically.
For more information about Group Policy, see the following TechNet and MSDN articles:
14.2. Upgrading
To upgrade NXLog to the latest release, or to replace a trial installation of NXLog Enterprise Edition with a
licensed copy, follow these steps.
1. Run the new MSI installer as described in the Installing section (interactively, with Msiexec, or via Group
Policy). The installer will detect the presence of the previous version and perform the upgrade within the
current installation directory.
NOTE: To upgrade from v3.x, uninstall the previous version before installing the new version (see
Uninstalling). This is necessary to transition from a per-user to a per-machine installation. This check
can be skipped by passing the SKIP_PERUSER_CHECK property (such as msiexec /i
nxlog-5.0.5876_windows_x64.msi /q SKIP_PERUSER_CHECK=1). Note that using SKIP_PERUSER_CHECK is
unsupported and not recommended.
NOTE: If the Services console (services.msc) is running, the installer may request a reboot or display a
permission denied error. Ensure that the Services console is not running before attempting an upgrade.
2. Start the upgraded NXLog service via the Services console (services.msc) or by rebooting the system.
Check the log file (by default, C:\Program Files\nxlog\data\nxlog.log) to verify logging is working as
expected.
NOTE: If you want to downgrade to a previous version of NXLog, you will need to manually uninstall the
current version first. See Uninstalling.
14.3. Uninstalling
NXLog can be uninstalled in several different ways.
• By using msiexec and the original NXLog MSI.
In addition to the above, NXLog provides a method to remove the Windows Registry traces after uninstalling.
WARNING: NXLog v3.x installers will remove log4ensics.conf and nxlog.conf during the uninstallation
process, even if they have been modified. If these files need to be preserved, they should be backed up to
another location before uninstalling NXLog v3.x.
NOTE: This procedure may not remove all files that were created while configuring NXLog. Likewise, any
files created as a result of NXLog’s logging operations will not be removed (except for v3.x installers as
noted above). You may wish to remove the installation directory (by default, C:\Program Files\nxlog) once
the uninstallation process has completed.
1. Open the Group Policy Object (GPO) originally created for installation (see Create a Group Policy Object).
2. For each NXLog version that has been deployed, right-click the package and either:
◦ click All Tasks › Remove…, and choose the Immediately uninstall removal method; or
◦ click Properties, open the Deployment tab, and check Uninstall this application when it falls out of
the scope of management.
NOTE: In this case, NXLog will be uninstalled when the GPO is no longer applied to the computer. An
additional action will be required, such as removing the selected computer(s) from the nxlog group created
in Set up an Active Directory group.
To remove any leftover Windows Registry entries, use the following command:
To complete the procedure, the following files need to be present in the same directory:
• uninstall-x64.bat - The main script.
• The exact version of the MSI installer with which NXLog was installed.
The necessary files can be downloaded from the windows-uninstall directory of NXLog’s public contrib repository.
To start the automatic uninstall and trace removal procedure, use the following command:
The Readme.MD file in the public contrib repository explains details of the script operation.
NOTE: Deployment via Group Policy already provides a way to deploy the configuration files. For this
reason, it may be preferable to configure NXLog via GPO instead of creating a custom MSI as described in
this section.
Chapter 15. Microsoft Nano Server
15.1. Installing
Follow these steps to deploy NXLog on a Nano Server system.
NOTE: Microsoft Nano Server does not support the installation of MSI files. In its place, Microsoft
introduced the APPX format. The sandboxing and isolation imposed by the APPX format was found to be an
unnecessary complication when deploying NXLog; therefore, users are provided with a ZIP file that allows
for manual installation instead.
2. Transfer the NXLog ZIP file to the Microsoft Nano Server. One way to do so is to use WinRM and the
Copy-Item cmdlet. Uncompress the ZIP file at C:\Program Files\nxlog using the Expand-Archive cmdlet as
shown below.
3. To register NXLog as a service, navigate to the installation directory and execute the following.
4. Configure NXLog by editing the C:\Program Files\nxlog\nxlog.conf file. General information about
configuring NXLog can be found in Configuration. For more details about configuring NXLog to collect logs on
Windows, see the Microsoft Windows summary.
NOTE: Because Microsoft Nano Server does not have a native console editor, the configuration file must be
edited on a different system and then transferred to the Nano Server. Alternatively, a third-party editor
could be installed.
NXLog is now installed, registered, and configured. The NXLog service can be started by running
Start-Service nxlog.
15.2. Upgrading
To upgrade NXLog to the latest release, follow these steps.
2. Back up any configuration files that have been altered, such as nxlog.conf, managed.conf, and any
certificates.
3. Either delete the nxlog directory and follow the installation procedure again or use the -Force parameter
when extracting the NXLog ZIP file. There is no need to register the service again.
4. Restore any configuration files and certificates.
5. Start the NXLog service by running Start-Service nxlog.
15.3. Uninstalling
To uninstall NXLog, follow this procedure.
2. Unregister the NXLog service by navigating to the NXLog directory and running .\nxlog.exe -u.
NOTE: The following installation options require altering the Windows Registry. Incorrect modifications
could potentially render the system unusable. Always double-check the commands and ensure it will be
possible to revert to a known working state before altering the registry.
1. Follow the same installation procedure outlined above, but choose a different DestinationPath when
expanding the ZIP file. Also register the NXLog service as shown above.
2. At this point the registry entry for the NXLog service needs to be altered. View the current setting:
Type : 16
Start : 2
ErrorControl : 0
ImagePath : "c:\Program Files\nxlog\nxlog.exe" -c "c:\Program Files\nxlog\nxlog.conf"
DisplayName : nxlog
DependOnService : {eventlog}
ObjectName : LocalSystem
PSPath :
Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\nxlog
PSParentPath :
Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services
PSChildName : nxlog
PSDrive : HKLM
PSProvider : Microsoft.PowerShell.Core\Registry
3. The value of the ImagePath parameter needs to be modified in order to update the location of both the
NXLog executable and the configuration file. For example, if NXLog is installed in C:\nxlog, run the following
command to update the registry key.
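A sketch of this update using PowerShell's Set-ItemProperty cmdlet (the exact nested quoting is an assumption; verify the resulting value with Get-ItemProperty afterwards):

```powershell
# Rewrite ImagePath to point at the C:\nxlog installation
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\nxlog' `
    -Name ImagePath `
    -Value '"C:\nxlog\nxlog.exe" -c "C:\nxlog\nxlog.conf"'
```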
4. The configuration file (nxlog.conf) also needs to be edited to reflect this change to a non-default installation
directory. Make sure define ROOT points to the correct location.
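For example, the relevant lines of the stock Windows configuration, adjusted for the C:\nxlog location (the directive names are those of the default nxlog.conf; treat the exact set of lines as a sketch):

```
define ROOT C:\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
```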
15.4.2. Service Startup Type
The service Startup type of newer versions of NXLog defaults to Automatic (Delayed Start) instead of
Automatic. This is controlled by the DelayedAutostart parameter. To revert to the old behavior, run the
following command.
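A sketch using PowerShell, assuming DelayedAutostart is a value under the service's registry key as described above; setting it to 0 restores plain Automatic startup:

```powershell
# Assumption: clearing DelayedAutostart reverts the service to Automatic
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\nxlog' `
    -Name DelayedAutostart -Value 0
```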
Chapter 16. Apple macOS
16.1. Installing
To install NXLog under macOS, follow the steps below. You will need administrator privileges to complete the
installation process.
1. Download the appropriate NXLog install package from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the correct package for your
system.
Platform Package
macOS 10.14 and earlier (pre-Catalina) nxlog-5.0.5876_macos-precatalina.pkg
2. Optional: To change the NXLog user and group for the installation, create a /tmp/.nxlog file with the
following command. During installation a new user and a new group will be created using the values
specified in this command. They will be used for User and Group directives in nxlog.conf, and for the
ownership of some directories under /opt/nxlog. Specifying an already existing user or group is not
supported. The created user and group will be deleted on NXLog removal.
3. Install the NXLog package. You can do the installation interactively or with the command line installer.
As of version 4.5, the installer is signed with our developer certificate. If you see the following
message with an earlier version, go to System Preferences › Security & Privacy and click [ Open
Anyway ], then follow the instructions shown by the installer.
◦ To install the package using the command line installer, run the following command.
Upon installation, all NXLog files are placed under /opt/nxlog. The launchd(8) script is installed in
/Library/LaunchDaemons/com.nxlog.plist and has the KeepAlive flag set to true (launchd will
automatically restart NXLog). NXLog log files are managed by launchd and can be found in /var/log/.
$ sudo /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK
6. To apply your changes, stop NXLog with the following command. The launchd manager will restart the
daemon and the new configuration will be loaded.
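A sketch of the stop command; the service label com.nxlog is assumed from the plist name above and should be verified with launchctl list:

```shell
# launchd restarts the daemon automatically because KeepAlive is set
sudo launchctl stop com.nxlog
```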
16.2. Upgrading
To upgrade NXLog, follow the installation instructions.
The installation script will not modify the existing configuration files. After the installation has completed, NXLog
will restart automatically.
NOTE: The same user and group will be used for the upgrade that were used for the original installation (see installation step 2 above). Changing to a different user and/or group during upgrade is not supported.
16.3. Uninstalling
To properly uninstall NXLog, follow these steps.
WARNING: This will remove custom configuration files, certificates, and any other files in the listed directories. Save these files to another location first if you do not wish to discard them.
NOTE: Use the -n switch (instead of -y) if you would like to preserve user data.
2. Delete user data if you are sure it will not be needed anymore.
2. Delete the nxlog user and group that were created during installation. If a non-default user/group were used
during installation (see installation step 2 above), remove those instead.
WARNING: This will remove custom configuration files, certificates, and any other files in the listed directories. Save these files to another location first if you do not wish to discard them.
Chapter 17. Docker
17.1. Installing
1. Download the appropriate NXLog install archive from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the nxlog-
5.0.5876_docker.tar.gz archive (which is based on CentOS 7).
2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Log in to the target server and extract the contents of the archive.
Package Description
Dockerfile The main NXLog Docker definition file
4. Configure NXLog. Custom configuration files can be placed in the build directory of the NXLog Docker
version, before the build. Every file ending with .conf will be copied into the Docker image and placed in the
/opt/nxlog/etc directory.
◦ It is also possible to specify the IP address of an NXLog Manager instance at build time. In this case,
NXLog will connect automatically at startup. Before build, the CA certificate file, exported from NXLog
Manager in PEM format and named agent-ca.pem, must be placed in the Docker build directory.
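The build and run steps can be sketched as follows; the image tag and container name (nxlog) are arbitrary choices, not mandated by NXLog:

```shell
# Build the image from the extracted build directory
docker build -t nxlog .
# Start a container from the newly built image
docker run -d --name nxlog nxlog
```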
7. Check that the NXLog container is running with the docker command.
17.2. Upgrading
The upgrade process consists of building a new NXLog Docker image and running a new container instance with the newly built image.
5. Any old containers and images that are no longer needed can be removed with docker rm -v
<containerID> and docker rmi <imageID>, respectively. See Uninstalling below for more information.
17.3. Uninstalling
Uninstalling the NXLog Docker version simply involves removing the running container and the image.
1. Get the container ID of the running NXLog instance and stop the running container.
$ docker rm -v <containerID>
3. Any other remaining containers that are not running can be listed with docker ps -a, and removed.
$ docker images
$ docker rmi <imageID>
Chapter 18. IBM AIX
18.1. Installing
1. Download the appropriate NXLog installer package from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › NXLog Enterprise Edition files tab, choose the nxlog-5.0.5876_aix_ppc.rpm
package.
2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Install the required NXLog package.
a. Optional: To change the NXLog user and group for the installation, set the NXLOG_USER and
NXLOG_GROUP environment variables. During installation a new user and a new group will be created
based on these environment variables. They will be used for the User and Group directives in nxlog.conf,
and for the ownership of some directories under /opt/nxlog. Specifying an already existing user or
group is not supported. The created user and group will be deleted on NXLog removal.
# export NXLOG_USER=nxlog2
# export NXLOG_GROUP=nxlog2
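With the optional variables exported, the package can be installed with rpm; the file name below is the one from step 1 (adjust it to the version you downloaded):

```shell
# Install the package; use -U instead of -i when upgrading
rpm -ivh nxlog-5.0.5876_aix_ppc.rpm
```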
# /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK
# ./init start
18.2. Upgrading
To update an NXLog installation to the latest release, use rpm as in the installation instructions above.
NOTE: The same user and group will be used for the upgrade as were used for the original installation (see installation step 3a above). Changing to a different user and group during upgrade is not supported.
18.3. Uninstalling
To uninstall NXLog use rpm with the -e option.
# rpm -e nxlog
NOTE: This procedure may not remove all files that were created while configuring NXLog. Likewise, any files created as a result of NXLog’s logging operations will not be removed. To find these files, examine the configuration files that were used with NXLog and check the installation directory (/opt/nxlog).
Chapter 19. Oracle Solaris
19.1. Installing
1. Download the appropriate NXLog install archive from the NXLog website.
a. Log in to your account, then click My account at the top of the page.
b. Under the Downloads › My downloads tab, choose the correct archive for your system.
Platform Archive
Solaris 10/11 x86 archive nxlog-5.0.5876_solaris_x86.pkg.gz
2. Use SFTP or a similar secure method to transfer the archive to the target server.
3. Log in to the target server and extract the contents of the archive.
$ gunzip nxlog-5.0.5876_solaris_sparc.pkg.gz
4. Optional: To change the NXLog user and group for the installation, create a
/var/sadm/install/admin/nxlog-user_group file with the following command. During installation a new
user and a new group will be created based on the names specified. They will be used for the User and
Group directives in nxlog.conf, and for the ownership of some directories under /opt/nxlog. Specifying an
already existing user or group is not supported. The created user and group will be deleted on NXLog
removal.
◦ For a quiet install, use an administration file. Place the file (nxlog-adm in this example) in the
/var/sadm/install/admin/ directory.
nxlog-adm
mail=
instance=overwrite
partial=nocheck
runlevel=nocheck
idepend=nocheck
rdepend=nocheck
space=quit
setuid=nocheck
conflict=nocheck
install
action=nocheck
basedir=/opt/nxlog
networktimeout=60
networkretries=3
authentication=quit
keystore=/var/sadm/security
proxy=
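With the admin file in place, the quiet install can be sketched as follows; the package file name is the one from step 3, and the syntax is standard pkgadd (verify the package instance name against your download):

```shell
# Quiet install using the nxlog-adm administration file
sudo pkgadd -a nxlog-adm -d nxlog-5.0.5876_solaris_x86.pkg all
```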
$ sudo /opt/nxlog/bin/nxlog -v
2017-03-17 08:05:06 INFO configuration OK
8. Check that the NXLog service is running with the svcs command.
$ svcs nxlog
online 12:40:37 svc:system/nxlog:default
9. Manage the NXLog service with svcadm (restart the service to load the edited configuration file).
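For example:

```shell
# Restart the service so the edited configuration file is loaded
svcadm restart nxlog
```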
NOTE: To replace a trial installation of NXLog Enterprise Edition with a licensed copy of the same version, follow the same installation instructions (use instance=overwrite as shown).
19.2. Upgrading
19.2.1. Updating to a Minor Release
To update an NXLog installation to the latest minor release, remove the old version and then install the new
version.
1. Before removing the old version, run the backup script from /opt/nxlog/bin/backup. The backup script will create a backup directory in /opt, named according to this format: /opt/nxlog-backup-YYYYMMDD_hhmmss.
$ sudo pkgrm NXnxlog
3. To install the new NXLog release, use pkgadd as in the installation instructions above.
4. After reinstalling NXLog, use the restore script from the latest backup directory to restore data to the new
NXLog installation.
1. Perform steps 1-3 from Updating to a Minor Release. Do not use restore (step 4).
2. Manually migrate the necessary parts of the backup content to the new installation.
As of NXLog version 5.0, the configuration file log4ensics.conf has been renamed to managed.conf and moved to a different location. This file contains NXLog Manager related configuration.
NOTE: The nxlog.conf shipped with v5.0 has NXLog Manager integration disabled by default.
v 4.x                                                   v 5.0
/opt/nxlog-backup-date_time/lib/nxlog/log4ensics.conf   /opt/nxlog/etc/nxlog.d/managed.conf
/opt/nxlog-backup-date_time/nxlog/cert/*                /opt/nxlog/var/lib/nxlog/cert/
19.3. Uninstalling
To uninstall NXLog, use pkgrm. To remove the package files from the client’s file system, include the -A option.
NOTE: This procedure may not remove all files that were created while configuring NXLog. Likewise, any files created as a result of NXLog’s logging operations will not be removed. To find these files, examine the configuration files that were used with NXLog and check the installation directory (/opt/nxlog).
Chapter 20. Hardening NXLog
20.1. Running Under a Non-Root User on Linux
NXLog can be configured to improve security by running as a non-root user. The User and Group global
directives specify the user and group for the NXLog process to run as. On Linux installations, NXLog is configured
by default to run as the nxlog user and nxlog group as shown below.
Running as nxlog:nxlog
1 User nxlog
2 Group nxlog
Some operations require privileges that are normally not available to the nxlog user. In this case, the simplest
solution is to configure NXLog to retain full root privileges by removing the User and Group directives from the
configuration. This is not recommended, however; it is more secure to grant only the required privileges and to
avoid running NXLog as root. See the following sections for more information.
This command sets the CAP_NET_BIND_SERVICE capability for the NXLog executable.
This command sets both the CAP_NET_BIND_SERVICE and the CAP_NET_RAW capabilities.
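These commands can be sketched as follows; the path assumes the default installation directory:

```shell
# Allow nxlog to bind to ports below 1024 without running as root
sudo setcap cap_net_bind_service=+ep /opt/nxlog/bin/nxlog
# Grant CAP_NET_RAW as well, for modules that use raw sockets
sudo setcap 'cap_net_bind_service,cap_net_raw=+ep' /opt/nxlog/bin/nxlog
```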
Verify with this command, or by adding the -v (verify) flag to the setcap command.
# getcap /opt/nxlog/bin/nxlog
Use built-in capability support
NXLog will automatically set the Linux CAP_SYS_ADMIN capability before dropping root privileges.
The process is divided into two parts. First, a base policy is created. Then the policy is deployed and tailored to
the specific requirements of the current NXLog configuration.
nxlog.te
Base policy information; this file defines all the types and rules for a particular domain.
nxlog.fc
File system information; this file defines the security contexts that are applied to files when the policy is
installed.
nxlog.if
Interface information; this file defines the default file context for the system.
nxlog.sh
A helper shell script for compiling and deploying the policy module and fixing the labeling on the system; for
use only on the target system.
nxlog_selinux.spec
A specification file that can be used to generate an RPM package from the policy, useful for deploying the
policy on another system later. This spec file is generated on RPM-based systems only.
2. Start the SELinux Policy Generation Tool from the system launcher.
3. In the first screen, select Standard Init Daemon for the policy type, then click [ Forward ].
4. On the second screen, enter the following details for the application and user role, then click [ Forward ].
Name
A custom name for the role (for example, nxlog)
Executable
The path to the NXLog executable (for example, /opt/nxlog/bin/nxlog)
Init script
The path of the NXLog system init script (for example, /etc/rc.d/init.d/nxlog)
5. On the third screen, enter the TCP and UDP ports used by the NXLog deployment, then click [ Forward ]. If the
ports are unknown or not yet determined, leave these fields blank; they can be customized later.
6. On the fourth screen, select the appropriate application traits for NXLog, then click [ Forward ]. The default
configuration requires only the Interacts with the terminal trait. For collecting Syslog messages or creating
files in /tmp, include the appropriate traits.
7. On the fifth screen, specify all the arbitrary files and directories that the NXLog installation should have
access to, then click [ Forward ]. The default configuration requires only the NXLog system directory,
/opt/nxlog. Include the paths of any custom log files that NXLog needs to access.
8. Additional SELinux configuration values can be set on the sixth screen. None of these are required for NXLog.
Click [ Forward ] to continue.
9. The policy files are generated on the final screen. Click [ Save ] to write the policy to disk.
NOTE: Additional managed directories can be added to the policy by passing the full directory paths, separated by spaces, to the -w parameter (for example, -w /opt/nxlog /var/log).
3. The policy files are generated when the command exits successfully; the policy is written to the current
working directory.
WARNING: When set to permissive mode, SELinux generates alerts rather than actively blocking actions as it does in enforcing mode. Because this reduces system security, it is recommended that this be done in a test environment.
1. Make sure that NXLog is correctly configured with all required functionality.
2. Stop the NXLog service.
3. Transfer the files containing your SELinux base policy to the target system. All the files should be in the same
directory.
4. Apply the SELinux base policy by executing the policy script. This script will compile the policy module, set the
appropriate security flags on the directories specified, and install the policy.
$ sudo ./nxlog.sh
NOTE: You may see the error message libsemanage.add_user: user system_u not in password file. This is caused by a bug in the selinux-policy RPM or selinux-policy-default DEB package and does not affect the policy at all. It has been fixed in later releases.
You may also see the error message InvalidRBACRuleType: a is not a valid RBAC rule type. This comes from a bug in the policycoreutils package. It only affects man page generation, which is not performed in this case. It has also been fixed in later releases.
6. Set SELinux to permissive mode. All events which would have been prevented by SELinux will now be
permitted and logged to /var/log/audit/audit.log (including events not related to NXLog).
$ sudo setenforce 0
7. Start and then stop the NXLog service. Any actions taken by NXLog that are not permitted by the policy will
result in events logged by the Audit system. Run audit2allow -a -l -w to view all policy violations (with
descriptions) since the last policy reload.
If NXLog has been configured to listen on TCP port 1514, but the appropriate rules are not specified in
the current SELinux policy, then various audit events will be generated when the NXLog process
initializes and binds to that port. These events can be viewed from the Audit log file directly, with
ausearch, or with audit2allow (as shown below).
$ sudo audit2allow -a -l -w
type=AVC msg=audit(1524239322.612:473): avc: denied { listen } for pid=5697 comm="nxlog"
lport=1514 scontext=system_u:system_r:nxlog_t:s0 tcontext=system_u:system_r:nxlog_t:s0
tclass=tcp_socket
Was caused by:
Missing type enforcement (TE) allow rule.
You can use audit2allow to generate a loadable module to allow this access.
Additional log messages will be generated for any other file or network action not permitted by the
SELinux policy. These actions would all be denied by SELinux when set to enforcing mode.
8. Use the helper script’s --update option to add additional rules to the policy based on logged policy violations
with the nxlog context. Review the suggested changes and press y to update the policy. If no changes are
required, the script exits with status zero.
$ sudo ./nxlog.sh --update
The script will offer to add any required rules. The following output corresponds to the example in the
previous step.
require {
type nxlog_rw_t;
type nxlog_t;
class capability dac_override;
class tcp_socket { bind create listen setopt };
class file execute;
class capability2 block_suspend;
}
9. Set the SELinux policy to enforcing mode. This can be set permanently in /etc/selinux/config.
$ sudo setenforce 1
NOTE: In enterprise environments managed by Group Policy, the dedicated user account and its permissions must be managed by the domain administrator.
1. Create a new user account. Open the Computer Management console (compmgmt.msc), expand Local
Users and Groups and right-click on Users. Select New User… from the context menu.
2. Enter the svc-nxlog user name, description, and password; enable the Password never expires check box;
and click [Create].
3. Open the Services console (services.msc), right-click the nxlog service, and select Properties.
4. Under the Log On tab, select the This Account radio button, click [Browse…], select the svc-nxlog user
account, and enter the password. Then click [OK]. Windows will warn you that the service must be restarted.
5. Open the Local Security Settings console (secpol.msc), expand Local Policies, then select User Rights
Assignment in the left pane.
7. Click [Add User or Group…] and select the new user. The new user should appear in the list. Click [OK].
8. Also add the new user to the Manage auditing and security log policy.
9. In Windows Explorer, browse to the NXLog installation directory (by default, C:\Program Files
(x86)\nxlog on 64-bit systems), right-click, and select Properties. Under the Security tab, select the new
user from the Group or user names list. Check Allow for the following permissions, and then click [OK].
◦ Modify
◦ Read & Execute
◦ List Folder Contents
◦ Read
◦ Write
10. In the Services console (services.msc), right-click the nxlog service and select Restart.
11. Check the NXLog log files for start-up errors. Successful startup should look like this:
nxlog.log
2016-11-16 16:53:10 INFO nxlog-5.0.5876 started↵
2016-11-16 16:53:10 INFO connecting to 192.168.40.43↵
2016-11-16 16:53:12 INFO successfully connected to 192.168.40.43:1514↵
2016-11-16 16:53:12 INFO successfully connected to agent manager at 192.168.40.43:4041 in SSL
mode↵
On some Windows systems, this procedure may result in the following access denied error
when attempting to access the Windows EventLog:
In this case, wevtutil can be used to set ACLs on the Windows EventLog. For more details, see
the Giving Non Administrators permission to read Event Logs Windows 2003 and Windows 2008
TechNet article.
Chapter 21. Relocating NXLog
While not officially supported, it is possible to relocate NXLog to a different directory than where it was installed
originally. The procedure shown below assumes that NXLog was installed normally, using the system’s package
manager. It is also possible to manually extract the files from the package and perform a manual installation in a
custom directory; this is not covered here, but the basic principles are the same. This procedure has been tested
on GNU/Linux systems and should work on any system that supports run-time search paths.
WARNING: Both relocation and manual installation can result in a non-functional NXLog agent. Furthermore, subsequent update and removal using the system’s package manager may not work correctly. Follow this procedure at your own risk. This is not recommended for inexperienced users.
Move the NXLog directory structure to the new location. Though not required, it is best to keep the original
directory structure. Then proceed to the following sections.
NOTE In the examples that follow, NXLog is being relocated from /opt/nxlog to /opt/nxlog_new.
/etc/init.d/nxlog
BASE=/opt/nxlog_new
pidfile=$BASE/var/run/nxlog/nxlog.pid
nxlog=$BASE/bin/nxlog
conf=$BASE/etc/nxlog.conf
nxlog="$nxlog -c $conf"
On systems that use a hybrid of System V init and systemd, reload the init files by executing the following command.
# systemctl daemon-reload
nxlog.service
[Service]
Type=simple
User=root
Group=root
PIDFile=/opt/nxlog_new/var/run/nxlog/nxlog.pid
ExecStartPre=/opt/nxlog_new/bin/nxlog -v -c /opt/nxlog_new/etc/nxlog.conf
ExecStart=/opt/nxlog_new/bin/nxlog -f -c /opt/nxlog_new/etc/nxlog.conf
ExecStop=/opt/nxlog_new/bin/nxlog -s -c /opt/nxlog_new/etc/nxlog.conf
ExecReload=/opt/nxlog_new/bin/nxlog -r -c /opt/nxlog_new/etc/nxlog.conf
KillMode=process
# systemctl daemon-reload
nxlog.conf
1 define BASE /opt/nxlog_new
2 define CERTDIR %BASE%/var/lib/nxlog/cert
3 define CONFDIR %BASE%/etc/nxlog.d
4 define LOGDIR %BASE%/var/log/nxlog
5 define LOGFILE "%LOGDIR%/nxlog.log"
6
7 SpoolDir %BASE%/var/spool/nxlog
8
9 # default values:
10 PidFile %BASE%/var/run/nxlog/nxlog.pid
11 CacheDir %BASE%/var/spool/nxlog
12 ModuleDir %BASE%/lib/nxlog/modules
NOTE: Depending on the architecture and whether system-supplied libraries are used, NXLog may store the modules under a different directory, such as %BASE%/libexec/nxlog/modules.
# ldd /opt/nxlog_new/bin/nxlog
linux-vdso.so.1 => (0x00007ffc15d36000)
libpcre.so.1 => /opt/nxlog/lib/libpcre.so.1 (0x00007ff7f311e000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007ff7f2f14000)
libcap.so.2 => /lib64/libcap.so.2 (0x00007ff7f2d0f000)
libapr-1.so.0 => /opt/nxlog/lib/libapr-1.so.0 (0x00007ff7f2ad9000)
librt.so.1 => /lib64/librt.so.1 (0x00007ff7f28d0000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007ff7f2699000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ff7f247d000)
libc.so.6 => /lib64/libc.so.6 (0x00007ff7f20bb000)
/lib64/ld-linux-x86-64.so.2 (0x00007ff7f336d000)
libattr.so.1 => /lib64/libattr.so.1 (0x00007ff7f1eb6000)
libfreebl3.so => /lib64/libfreebl3.so (0x00007ff7f1cb3000)
Notice that libpcre and libapr are pointing to the included libraries in /opt/nxlog/lib/. To change the run-
time search path of the binaries, a tool such as chrpath or patchelf can be used.
Depending on the distribution, chrpath may have a limitation on the path length for the -r
<path> | --replace <path> option: "The new path must be shorter or the same length as the
current path."
/opt/nxlog_new/bin/nxlog: RUNPATH=/opt/nxlog/lib
new rpath '/opt/nxlog_new/lib' too large; maximum length 14
If your system has the chrpath limitation documented above, skip to Modifying rpath with patchelf.
nxlog: RPATH=/opt/nxlog/lib:/home/builder/workspace/nxlog3-rpm-generic-amd64/rpmbuild/BUILD/nxlog-
deps/opt/nxlog/lib
nxlog: new RPATH: /opt/nxlog_new/lib
NXLog modules are also linked against statically included libraries. Therefore, if the run-time search path of the
binaries required a change, then the rpath of the modules needs to be updated as well. To change the run-time
search path of all the modules (or binaries) in a directory, use a command like this.
# chrpath -r /opt/nxlog_new/lib *
On success, the command returns with no message. If this is the first time patchelf has been run
after installation, the following warning will be shown:
warning: working around a Linux kernel bug by creating a hole of 1748992 bytes in ‘nxlog’
To confirm the modification of rpath, run ldd again on the binary. The new path should be displayed in the output:
# ldd /opt/nxlog_new/bin/nxlog
linux-vdso.so.1 => (0x00007ffc15d36000)
libpcre.so.1 => /opt/nxlog_new/lib/libpcre.so.1 (0x00007ff7f311e000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007ff7f2f14000)
libcap.so.2 => /lib64/libcap.so.2 (0x00007ff7f2d0f000)
libapr-1.so.0 => /opt/nxlog_new/lib/libapr-1.so.0 (0x00007ff7f2ad9000)
librt.so.1 => /lib64/librt.so.1 (0x00007ff7f28d0000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007ff7f2699000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ff7f247d000)
libc.so.6 => /lib64/libc.so.6 (0x00007ff7f20bb000)
/lib64/ld-linux-x86-64.so.2 (0x00007ff7f336d000)
libattr.so.1 => /lib64/libattr.so.1 (0x00007ff7f1eb6000)
libfreebl3.so => /lib64/libfreebl3.so (0x00007ff7f1cb3000)
NXLog modules are also linked against statically included libraries. Therefore, if the run-time search path of the
binaries required a change, then the rpath of the modules needs to be updated as well. Unlike chrpath, which accepts
a (*) wildcard for all modules (or binaries) in a given directory, patchelf can only be run on a single file. If NXLog
needs to be relocated regularly, or on more than one installation, write a shell script to automate changing the
rpath on multiple files.
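A minimal sketch of such a script, assuming patchelf is installed and the relocated module directory is the one used in this chapter:

```shell
#!/bin/sh
# Update the RUNPATH of every NXLog module in one pass.
NEW_RPATH=/opt/nxlog_new/lib
MODULE_DIR=/opt/nxlog_new/lib/nxlog/modules
for f in "$MODULE_DIR"/*.so; do
    [ -e "$f" ] || continue     # skip when the glob matches nothing
    patchelf --set-rpath "$NEW_RPATH" "$f"
done
```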
Chapter 22. Monitoring and Recovery
Considerable resources continue to be invested in maintaining the quality and reliability of NXLog. However, due
to the complexity of modern software, producing bug-free software is practically impossible. This section
describes potential ways to automatically recover from an NXLog crash. Note that there are other monitoring
solutions besides those presented here which may also be of interest.
While Monit can monitor and react to several conditions, the configuration presented here instructs Monit to
restart NXLog after a crash. To do so, include the following in the Monit configuration. It may be necessary to edit
the paths to match your installation. Then restart Monit.
/etc/monit/monitrc
check process nxlog with pidfile /opt/nxlog/var/run/nxlog/nxlog.pid
start program = "/etc/init.d/nxlog start"
stop program = "/etc/init.d/nxlog stop"
NOTE: On recent Linux distributions employing systemd, the start and stop directives should use systemd calls instead (for example, /bin/systemctl start nxlog).
To simulate an NXLog crash, terminate the nxlog process by issuing the following command (where <PID>
represents the current nxlog process ID).
# kill -9 <PID>
Figure 2. Recovery settings in the SCM
NOTE: Newer versions of NXLog enable automatic recovery during installation. For older versions, automatic recovery can be enabled by manually editing the values under the Recovery tab of the SCM.
To simulate an NXLog crash, execute the following in PowerShell (where <PID> represents the process ID of
NXLog).
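A sketch using the Stop-Process cmdlet (substitute the actual process ID for <PID>):

```powershell
# Forcibly terminate the nxlog process, simulating a crash
Stop-Process -Id <PID> -Force
```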
Configuration
Chapter 23. Configuration Overview
NXLog uses Apache-style configuration files. The configuration file is loaded from its default location, or it can be
explicitly specified with the -c command line argument.
The configuration file consists of blocks and directives. Blocks are similar to XML tags containing multiple
directives. Directive names are not case-sensitive but arguments sometimes are. A directive and its argument
must be specified on the same line. Values spanning multiple lines must have the newline escaped with a
backslash (\). A typical case for this is the Exec directive. Blank lines and lines starting with the hash mark (#) are
ignored. Configuration directives referring to a file or a path can be quoted with double quotes (") or single
quotes ('). This applies to both global and module directives.
The configuration file can be logically divided into three parts: global parameters, module instances, and route
instances.
This configuration exemplifies the logical structure. The global parameters section contains two directives.
The modules section contains both an input and output instance. The route section contains a single route
with a path directing a single input to a single output.
nxlog.conf
1 # Global section
2 User nxlog
3 Group nxlog
4
5 # Modules section
6 <Input in>
7 Module im_null
8 </Input>
9
10 <Output out>
11 Module om_null
12 </Output>
13
14 # Route section
15 <Route r>
16 Path in => out
17 </Route>
The LogFile directive sets a destination file for NXLog internal logs. If this directive is unset, the log file is disabled
and internal NXLog logs are not written to file (unless configured via the im_internal module). See also Rotating
the Internal Log File.
With the User and Group directives set, NXLog will drop root privileges after starting and run under the specified
user and group. These directives are ignored if running on Windows.
After starting, NXLog will change its working directory to the directory specified by the SpoolDir directive. Non-
absolute paths in the configuration will be relative to this directory.
See the Reference Manual for a complete list of available global directives.
23.2. Modules
NXLog will only load modules which are specified in the configuration file and used in an active route. A module
instance is specified according to its corresponding module type (Extension, Input, Processor, or Output).
Each module instance must have a unique name and a Module directive. The following is a skeleton
configuration block for an input module.
nxlog.conf
1 <Input instancename>
2 Module im_module
3 ...
4 </Input>
For more details about module instance names, see Configuration in the Reference Manual.
23.3. Routes
Routes define the flow and processing order of the log messages. Each route instance must have a unique name
and a Path.
This Route instance, named example, takes logs from Input module instances named in1 and in2,
processes the logs with the proc Processor module instance, and sends the resulting logs to both Output
module instances out1 and out2. These named module instances must be defined elsewhere in the
configuration file.
nxlog.conf
1 <Route example>
2 Path in1, in2 => proc => out1, out2
3 </Route>
For more details about route instance names, see Configuration in the Reference Manual.
If no Route block is specified in the configuration, NXLog will automatically generate a route, with all the Input
and Output instances specified in a single path.
Example 12. An Automatic Route Block
NXLog can use a configuration with no Route block, such as the following.
nxlog.conf
1 <Input in1>
2 Module im_null
3 </Input>
4
5 <Input in2>
6 Module im_null
7 </Input>
8
9 <Output out1>
10 Module om_null
11 </Output>
12
13 <Output out2>
14 Module om_null
15 </Output>
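With this configuration, NXLog behaves as if a Route block equivalent to the following had been defined (the route name r is arbitrary):

```
<Route r>
    Path in1, in2 => out1, out2
</Route>
```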
An NXLog define works much like a C preprocessor macro: the value is substituted wherever the macro is used. The NXLog configuration parser replaces all occurrences of the defined name with its value; only after this substitution does the configuration check occur.
Example 13. Using Defines
This example shows the use of two defines: BASEDIR and IGNORE_DEBUG. The first is a simple constant, and its value is used in two File directives. The second is an NXLog language statement, which is used in an Exec directive.
nxlog.conf
1 define BASEDIR /var/log
2 define IGNORE_DEBUG if $raw_event =~ /debug/ drop();
3
4 <Input messages>
5 Module im_file
6 File '%BASEDIR%/messages'
7 </Input>
8
9 <Input proftpd>
10 Module im_file
11 File '%BASEDIR%/proftpd.log'
12 Exec %IGNORE_DEBUG%
13 </Input>
The define directive can be used for statements as shown above, but a define containing multiple statements should be specified as a code block, with curly braces ({}), to produce the expected behavior.
The following example shows an incorrect use of the define directive. After substitution, the drop()
procedure will always be executed; only the warning message will be logged conditionally.
nxlog.conf (incorrect)
1 define ACTION log_warning("dropping message"); drop();
2
3 <Input in>
4 Module im_file
5 File '/var/log/messages'
6 Exec if $raw_event =~ /dropme/ %ACTION%
7 </Input>
To avoid this problem, the action should be defined using a code block.
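A sketch of the corrected define, with both statements wrapped in curly braces so the substitution yields a single statement block:

```
define ACTION { log_warning("dropping message"); drop(); }
```

After substitution, the conditional expands to if $raw_event =~ /dropme/ { log_warning("dropping message"); drop(); }, so both statements execute only when the match succeeds.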
Example 15. Using Environment Variables
This is similar to the previous example using a define, but here the value is fetched from the environment.
nxlog.conf
1 envvar BASEDIR
2
3 <Input in>
4 Module im_file
5 File '%BASEDIR%/messages'
6 </Input>
NOTE: The SpoolDir directive does not take effect until after the configuration has been parsed, so relative paths specified with these directives are relative to the working directory from which NXLog was started. Generally, it is recommended to use absolute paths. If desired, define directives can be used to simulate relative paths (see Using Defines to Include a Configuration File).
With the include directive it is possible to specify a file or set of files to be included in the current NXLog
configuration.
This example includes the contents of the /opt/nxlog/etc/syslog.conf file in the current configuration.
nxlog.conf
1 include /opt/nxlog/etc/syslog.conf
In this example, two define directives are used to include an eventlog.conf configuration file on Windows
by defining parts of the path to this file.
nxlog.conf
1 define ROOT C:\Program Files (x86)\nxlog
2 define CONFDIR %ROOT%\conf
3 include %CONFDIR%\eventlog.conf
The include directive also supports filenames containing the wildcard character (*). For example, multiple .conf
files could be saved in the nxlog.d directory—or some other custom configuration directory—and then
automatically included in the NXLog configuration in ascending alphabetical order along with the nxlog.conf
file.
Each included file might contain a small set of configuration information focused exclusively on a single log source. This essentially establishes a modular design for maintaining larger configurations. One benefit of this modular approach is the ability to add or remove .conf files in such a directory, enabling or disabling specific log sources without ever needing to modify the main nxlog.conf configuration.
This solution could be used to specify OS-specific configuration snippets (like windows2003.conf) or application-
specific snippets (such as syslog.conf).
Including subdirectories inside the configuration directory is not supported, and neither are wildcarded directories.
This example includes all .conf files located under the /opt/nxlog/etc/nxlog.d path.
nxlog.conf
1 include /opt/nxlog/etc/nxlog.d/*.conf
nxlog.conf
define CONFDIR /opt/nxlog/etc/nxlog.d
include %CONFDIR%/*.conf
This example includes all .conf files from the nxlog.d folder on Windows.
nxlog.conf
1 include C:\Program Files\nxlog\conf\nxlog.d\*.conf
nxlog.conf
1 define CONFDIR C:\Program Files\nxlog\conf\nxlog.d
2 include %CONFDIR%\*.conf
With the include_stdout directive, an external command can be used to provide configuration content. There are
many ways this could be used, including fetching, decrypting, and validating a signed configuration from a
remote host, or generating configuration content dynamically.
nxlog.conf
1 include_stdout /opt/nxlog/etc/fetch_conf.sh
Chapter 24. NXLog Language
The NXLog core has a built-in interpreted language. This language can be used to make complex decisions or
build expressions in the NXLog configuration file. Code written in the NXLog language is similar to Perl, which is
commonly used by developers and administrators for log processing tasks. When NXLog starts and reads its
configuration file, directives containing NXLog language code are parsed and compiled into pseudo-code. If a
syntax error is found, NXLog will print the error. This pseudo-code is then evaluated at run-time, as with other
interpreted languages.
The features of the NXLog language are not limited to those in the NXLog core: modules can register functions
and procedures to supplement built-in functions and procedures (see the xm_syslog functions, for example).
NOTE: Due to the simplicity of the language, there is no error handling available to the user except for function return values. If an error occurs during the execution of the NXLog pseudo-code, the error is usually printed in the NXLog logs. If an error occurs during log message processing, it is also possible that the message will be dropped. If sophisticated error handling or more complex processing is required, additional message processing can be implemented in an external script or program via the xm_exec module, in a dedicated NXLog module, or in Perl via the xm_perl module.
Types
All fields and other expressions in the NXLog language are typed.
Expressions
An expression is evaluated to a value at run-time and the value is used in place of the expression. All
expressions have types. Expressions can be used as arguments for some module directives.
Statements
The evaluation of a statement will cause a change in the state of the NXLog engine, the state of a module
instance, or the current event. Statements often contain expressions. Statements are used as arguments for
the Exec module directive, where they are then executed for each event (unless scheduled).
Variables
Variables store data persistently in a module instance, across multiple event records.
Statistical Counters
NXLog provides statistical counters with various algorithms that can be used for real-time analysis.
Example 21. Statements vs. Configurations
While this Guide provides many configuration examples, in some cases only statement examples are given.
Statements must be used with the Exec directive (or Exec block). The following statement example shows
one way to use the parsedate() function.
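For instance, the following statement parses a timestamp from the raw event and assigns the result of parsedate() to the $EventTime field:

```
if $raw_event =~ /^(\w{3} \d{2} \d{2}:\d{2}:\d{2})/
    $EventTime = parsedate($1);
```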
The following configuration example uses the above statement in an Exec block.
nxlog.conf
1 <Input in>
2 Module im_file
3 File '/var/log/app.log'
4 <Exec>
5 if $raw_event =~ /^(\w{3} \d{2} \d{2}:\d{2}:\d{2})/
6 $EventTime = parsedate($1);
7 </Exec>
8 </Input>
24.1. Types
The NXLog language is a typed language. Fields, literals, and other expressions evaluate to values with specific
types. This allows for stricter type-safety syntax checking when parsing the configuration. Note that fields and
some functions can return values with types that can only be determined at run-time.
NOTE: The language provides only simple types. Complex types such as arrays and hashes (associative arrays) are not supported. The language does support the undefined value, similar to Perl. See the xm_perl module if you require more complex types.
A log’s format must be parsed before its individual parts can be used for processing (see Fields). But even after
the message has been parsed into its parts, additional processing may still be required, for example, to prepare a
timestamp for comparison with another timestamp. This is a situation where typing is helpful: by converting all
timestamps to the datetime type they can be easily compared—and converted back to strings later if required—
using the functions and procedures provided. The same applies to other types.
Example 22. Typed Fields in a Syslog Event Record
The following illustrates the four steps NXLog performs with this configuration as it manually processes a
Syslog event record using only regular expressions on the core field $raw_event and the core function
parsedate().
nxlog.conf
1 <Input in>
2 # 1. New event record created
3 Module im_udp
4 Host 0.0.0.0
5 Port 514
6 <Exec>
7 # 2. Timestamp parsed from Syslog header
8 if $raw_event =~ /^(\w{3} \d{2} \d{2}:\d{2}:\d{2})/
9 {
10 # 3. parsedate() function converts from string to datetime
11 $EventTime = parsedate($1);
12 # 4. Datetime fields compared
13 if ( $EventReceivedTime - $EventTime ) > 60000000
14 log_warning('Message delayed more than 1 minute');
15 }
16 </Exec>
17 </Input>
1. NXLog creates a new event record for the incoming log message. The new event record contains the
$raw_event string type field, with the contents of the entire Syslog string.
2. A regular expression is used to parse the timestamp from the event. The captured sub-string is a string
type, not a datetime type.
3. The parsedate() function converts the captured string to a datetime type.
4. Two datetime fields are compared to determine if the message was delayed during delivery. The
datetime type $EventReceivedTime field is added by NXLog to each event when it is received.
For a full list of types, see the Reference Manual Types section. For NXLog language core functions that can be
used to work with types, see Functions. For functions and procedures that can work with types related to a
particular format, see the module corresponding to the required format.
24.2. Expressions
An expression is a language element that is dynamically evaluated to a value at run-time. The value is then used
in place of the expression. Each expression evaluates to a type, but not always to the same type.
The following language elements are expressions: literals, regular expressions, fields, operations, and functions.
Example 23. Using Parentheses (Round Brackets) Around Expressions
There are three statements below, one per line. Each statement contains multiple expressions, with
parentheses added in various ways.
1 if 1 + 1 == (1 + 1) log_info("2");
2 if (1 + 1) == (1 + 1) log_info("2");
3 if ((1 + 1) == (1 + 1)) log_info("2");
This simple statement uses the log_info() procedure with an expression as its argument. In this case the
expression is a literal.
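For example (the message text here is illustrative):

```
log_info('This is a string literal');
```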
Here is a function (also an expression) that is used in the same procedure. It generates an internal event
with the current time when each event is processed.
1 log_info(now());
The File directive of the om_file module supports expressions. This allows the output filename to be set
dynamically for each individual event.
nxlog.conf
1 <Output out>
2 Module om_file
3 File "/var/log/nxlog/out_" + strftime($EventTime, "%Y%m%d")
4 </Output>
24.2.1. Literals
A literal is a simple expression that represents a fixed value. Common literals include booleans, integers, and
strings. The type of literal is detected by the syntax used to declare it.
NOTE: This section demonstrates the use of literals using examples with assignment statements.
Boolean literals can be declared using the constants TRUE or FALSE. Both are case-insensitive.
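For example (the field names here are illustrative):

```
$Success = TRUE;
$Failure = false;   # case-insensitive: false is equivalent to FALSE
```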
Integer literals are declared with an unquoted integer. Negative integers, hexadecimal notation, and binary-prefix modifiers (Kilo, Mega, and Giga) are supported.
Setting Integer Literals
1 $Count = 42;
2 $NegativeCount = -42;
3 $BigCount = 42M;
4 $HexCount = 0x2A;
String literals are declared by quoting characters with single or double quotes. Escape sequences are available
when using double quotes.
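For example (the field names and values here are illustrative):

```
$Path     = '/var/log/app.log';         # single quotes: no escape sequences
$TwoLines = "first line\nsecond line";  # double quotes: \n is an escape sequence
```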
For a list of all available literals, see the Reference Manual Literals section.
NOTE: Examples in this section use only simple patterns. See Extracting Data and other topic-specific sections for more extensive examples.
The event record will be discarded if the $raw_event field matches the regular expression.
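For example, a statement along these lines (the pattern is illustrative):

```
if $raw_event =~ /junk/ drop();
```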
Regular expression matching can also be used for extensive parsing, by capturing sub-strings for field
assignment.
If the $raw_event field contains the regular expression, the two fields will be set to the corresponding
captured sub-strings.
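A sketch of such a statement (the pattern and field names are illustrative):

```
if $raw_event =~ /user (\S+) from (\S+)/
{
    $AccountName = $1;
    $Hostname = $2;
}
```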
Regular expression matching also supports named capturing groups. This can be useful when writing long
regular expressions. Each captured group is automatically added to the event record as a field with the same
name.
Example 28. Named Capturing Groups
This regular expression uses the named groups TestNumber and TestName to add corresponding
$TestNumber and $TestName fields to the event record.
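A statement using named groups might read as follows (the pattern itself is an illustrative reconstruction):

```
if $raw_event =~ /^Test (?<TestNumber>\d+): (?<TestName>.+)$/
    log_info('Test ' + $TestNumber + ' (' + $TestName + ') found');
```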
Regular expression substitution can be used to modify a string. In this case, the regular expression follows the
form s/pattern/replace/. The result of the expression will be assigned to the field to the left of the operator.
The first regular expression match will be removed from the $raw_event field.
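For example, this statement removes the first occurrence of the (illustrative) substring from the field:

```
$raw_event =~ s/ DEBUG//;
```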
Global substitution is supported with the /g modifier. Without the /g modifier, only the first match in the string
will be replaced.
Every whitespace character in the $AlertType field will be replaced with an underscore (_).
1 $AlertType =~ s/\s/_/g;
A statement can be conditionally executed according to the success of a regular expression substitution.
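For example (the pattern and message are illustrative):

```
if $raw_event =~ s/confidential/REDACTED/ log_info('Sensitive data masked');
```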
For more information, see the following sections in the Reference Manual: Regular Expressions, =~, and !~.
24.2.3. Fields
When NXLog receives a log message, it creates an event record for it. An event record is a set of fields (see Fields
for more information). A field is an expression which evaluates to a value with a specific type. Each field has a
name, and in the NXLog language it is represented with the dollar sign ($) prepended to the name of the field,
like Perl’s scalar variables.
Fields are only available in an evaluation context which is triggered by a log message. For example, using a value
of a field in the Exec directive of a Schedule block will result in a run-time error because the scheduled execution
is not triggered by a log message.
Because it is through fields that the NXLog language accesses the contents of an event record, they are
frequently referenced. The following examples show some common ways that fields are used in NXLog
configurations.
Example 32. Assigning a Value to a Field
This statement uses assignment to set the $Department field on log messages.
1 $Department = 'customer-service';
If the $Hostname field does not match, the message will be discarded with the drop() procedure.
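Such a statement might read (the pattern is illustrative):

```
if $Hostname !~ /^webserver/ drop();
```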
This statement will generate an internal event if $SeverityValue integer field is greater than 2 (NXLog
INFO severity). The generated event will include the contents of the $Message field.
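A sketch of such a statement:

```
if $SeverityValue > 2 log_info('Elevated severity event: ' + $Message);
```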
24.2.4. Operations
Like other programming languages and especially Perl, the NXLog language has unary operations, binary
operations, and the conditional ternary operation. These operations are expressions and evaluate to values.
Unary Operations
Unary operations work with a single operand and evaluate to a boolean value.
This statement uses the defined operator to log a message only if the $Hostname field is defined in the
event record.
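For example (the logged text is illustrative):

```
if defined $Hostname log_info('Hostname is ' + $Hostname);
```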
Binary Operations
Binary operations work with two operands and evaluate to a value. The type of the evaluated value depends on the types of the operands. Execution might result in a run-time error if the types of the operands are unknown at compile time and, when executed, they evaluate to types which are incompatible with the binary operation.
This statement uses the == operator to drop the event if the $Hostname field matches.
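For example (the hostname value is illustrative):

```
if $Hostname == 'test-host' drop();
```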
Ternary Operation
The conditional or ternary operation requires three operands. The first is an expression that evaluates to a
boolean. The second is an expression that is evaluated if the first expression is TRUE. The third is an
expression that is evaluated if the first expression is FALSE.
This statement sets the $Important field to TRUE if $SeverityValue is greater than 2, or FALSE
otherwise. The parentheses are optional and have been added here for clarity.
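The statement might read:

```
$Important = ( $SeverityValue > 2 ? TRUE : FALSE );
```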
For a full list of supported operations, see the Reference Manual Operations section.
24.2.5. Functions
A function is an expression which always returns a value, and it cannot be called without using its return value. Functions can be polymorphic: the same function can take different argument types.
Many NXLog language features are provided through functions. As with other types of expressions, and unlike
procedures, a function never modifies the state of the NXLog engine, the state of the module, or the current
event.
See the list of core functions. Modules can provide additional functions for use with the NXLog language.
These statements use the now() function (returning the current time) and the hostname() function
(returning the hostname of the system running NXLog) to set fields.
1 $EventTime = now();
2 $Relay = hostname();
Here, any event with a $Message field over 4096 bytes causes an internal log to be generated.
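One way to express this, using the size() function from the core function list:

```
if size($Message) > 4096 log_info('Message larger than 4096 bytes received');
```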
24.3. Statements
The evaluation of a statement will usually result in a change in the state of the NXLog engine, the state of a
module, or the log message.
Statements are used with the Exec module directive. A statement is terminated by a semicolon (;).
With this input configuration, an internal NXLog log message will be generated for each message received.
nxlog.conf
1 <Input in>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 Exec log_info("Message received on UDP port 514");
6 </Input>
Multiple statements can be specified; these will be evaluated and executed in order. Statements can also be given on multiple lines by using line continuation or by enclosing the statements in an Exec block.
This configuration generates an internal log message and sets the $File field.
nxlog.conf
1 <Input in1>
2 Module im_file
3 File '/var/log/app.log'
4 Exec log_info("App message read from log"); $File = file_name();
5 </Input>
This is the same, but the backslash (\) is used to continue the Exec directive to the next line.
nxlog.conf
1 <Input in2>
2 Module im_file
3 File '/var/log/app.log'
4 Exec log_info("App message read from log"); \
5 $File = file_name();
6 </Input>
The following configuration is functionally equivalent to the previous configuration above. However, by
creating an Exec block, multiple statements can be specified without the need for a backslash (\) line
continuation at the end of each line.
nxlog.conf
1 <Input in3>
2 Module im_file
3 File '/var/log/app.log'
4 <Exec>
5 log_info("App message read from log");
6 $File = file_name();
7 </Exec>
8 </Input>
Statements can also be executed based on a schedule by using the Exec directive of a Schedule block. The Exec
directive is slightly different in this example. Because its execution depends solely on a schedule instead of any
incoming log events, there is no event record that can be associated with it. The $File field assignment in the
example above would be impossible.
Example 41. Using a Statement in a Schedule
nxlog.conf
1 <Input syslog_udp>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 <Schedule>
6 When @hourly
7 Exec log_info("The syslog_udp input module instance is active.");
8 </Schedule>
9 </Input>
24.3.1. Assignment
Each event record is made up of fields, and assignment is the primary way that a value is written to a field in the
NXLog language. The assignment operation is declared with an equal sign (=). This operation loads the value
from the expression evaluated on the right into an event record field on the left.
This input instance uses assignment operations to add two fields to each event record.
nxlog.conf
1 <Input in>
2 Module im_file
3 File '/var/log/messages'
4 <Exec>
5 $Department = 'processing';
6 $Tier = 1;
7 </Exec>
8 </Input>
24.3.2. Block
Statements can be declared inside a block by surrounding them with curly braces ({}). A statement block in the
configuration is parsed as if it were a single statement. Blocks are typically used with conditional statements.
Example 43. Using Statement Blocks
This statement uses a block to execute two statements if the $Message field matches.
nxlog.conf
1 <Input in>
2 Module im_file
3 File '/var/log/messages'
4 <Exec>
5 if $Message =~ /^br0:/
6 {
7 log_warning('br0 interface state changed');
8 $Tag = 'network';
9 }
10 </Exec>
11 </Input>
24.3.3. Procedures
While functions are expressions that evaluate to values, procedures are statements that perform actions. Both
functions and procedures can take arguments. Unlike functions, procedures never return values. Instead, a
procedure modifies its argument, the state of the NXLog engine, the state of a module, or the current event.
Procedures can be polymorphic: the same procedure can take different argument types.
Many NXLog language features are provided through procedures. See the list of available procedures. Modules
can provide additional procedures for use with the NXLog language.
This example uses the parse_syslog() procedure, provided by the xm_syslog module, to parse each Syslog-
formatted event record received via UDP.
nxlog.conf
1 <Input in>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 Exec parse_syslog();
6 </Input>
24.3.4. If-Else
The if or conditional statement allows a statement to be executed based on the boolean value of an expression.
When the boolean is TRUE, the statement is executed. An optional else keyword can be followed by another
statement to be executed if the boolean is FALSE.
Example 45. Using If Statements
This example uses an if statement and the drop() procedure to discard any event that matches the regular
expression.
nxlog.conf
1 <Input in1>
2 Module im_file
3 File '/var/log/messages'
4 Exec if $raw_event =~ /junk/ drop();
5 </Input>
Here, any event not matching the regular expression will be dropped.
nxlog.conf
1 <Input in2>
2 Module im_file
3 File '/var/log/messages'
4 Exec if not ($raw_event =~ /important/) drop();
5 </Input>
Finally, this statement shows more extensive use of the if statement, with an else clause and blocks defined
by curly braces ({}).
nxlog.conf
1 <Input in3>
2 Module im_file
3 File '/var/log/messages'
4 <Exec>
5 if $raw_event =~ /alert/
6 {
7 log_warning('Detected alert message');
8 }
9 else
10 {
11 log_info('Discarding non-alert message');
12 drop();
13 }
14 </Exec>
15 </Input>
24.4. Variables
While NXLog provides fields for storing data during the processing of an event, they are only available for the duration of that event record and cannot be used to store a value across multiple events. For this purpose, module variables can be used. A variable stores a value for the module instance where it is set. It can only be accessed from the same module instance where it was created: a variable with the same name is a different variable when referenced from another module.
Each module variable can be created with an expiry value or an infinite lifetime. If an expiry is used, the variable
will be destroyed automatically when the lifetime expires. This can be used as a garbage collection method or to
reset variable values automatically.
A module variable is referenced by a string value and can store a value of any type. Module variables are
supported by all modules. See the create_var(), delete_var(), set_var(), and get_var() procedures.
Example 46. Using Module Variables
If the number of login failures exceeds 3 within 45 seconds, then an internal log message is generated.
nxlog.conf
1 <Input in>
2 Module im_file
3 File '/var/log/messages'
4 <Exec>
5 if $Message =~ /login failure/
6 {
7 if not defined get_var('login_failures')
8 { # create the variable if it doesn't exist
9 create_var('login_failures', 45);
10 set_var('login_failures', 1);
11 }
12 else
13 { # increase the variable and check if it is over the limit
14 set_var('login_failures', get_var('login_failures') + 1);
15 if get_var('login_failures') >= 3
16 log_warning(">= 3 login failures within 45 seconds");
17 }
18 }
19 </Exec>
20 </Input>
NOTE: The pm_evcorr module is recommended instead for this case. This algorithm does not reliably detect failures because the lifetime of the variable is not affected by set_var(). For example, consider login failures at 0, 44, 46, and 47 seconds. The lifetime of the variable is set when the first failure occurs, causing the variable to be cleared at 45 seconds. The variable is then created with a new expiry at 46 seconds, and only two failures are noticed. Also, this method can only work in real-time because the timing is not based on values available in the log message (although the event time could be stored in another variable).
A statistical counter can be created with the create_stat() procedure call. After it is created, it can be updated with
the add_stat() procedure call. The value of the counter can be read with the get_stat() function call. Note that the
value of the statistical counter is only recalculated during these calls, rather than happening automatically. This
can result in some slight distortion of the calculated value if the add and read operations are infrequent.
A time value can also be specified during creation, updating, and reading. This makes it possible for statistical
counters to be used with offline log processing.
Example 47. Using Statistical Counters
This input configuration uses a Schedule block and a statistical counter with the RATEMAX algorithm to
calculate the maximum rate of events over a 1 hour period. An internal log message is generated if the rate
exceeds 500 events/second at any point during the 1 hour period.
nxlog.conf
1 <Input in>
2 Module im_tcp
3 Host 0.0.0.0
4 Port 1514
5 <Exec>
6 parse_syslog();
7 if defined get_stat('eps') add_stat('eps', 1, $EventReceivedTime);
8 </Exec>
9 <Schedule>
10 Every 1 hour
11 <Exec>
12 create_stat('eps', 'RATEMAX', 1, now(), 3600);
13 if get_stat('eps') > 500
14 log_info('Inbound TCP rate peaked at ' + get_stat('eps')
15 + ' events/second during the last hour');
16 </Exec>
17 </Schedule>
18 </Input>
Chapter 25. Reading and Receiving Logs
This chapter discusses log sources that you may need to use with NXLog, including:
UDP
The im_udp module handles incoming messages over UDP.
This input module instance shows the im_udp module configured with the default options: localhost
only and port 514.
nxlog.conf
1 <Input udp>
2 Module im_udp
3 Host localhost
4 Port 514
5 </Input>
NOTE: The UDP protocol does not guarantee reliable message delivery. It is recommended to use the TCP or SSL transport modules instead if message loss is a concern. Though NXLog was designed to minimize message loss even in the case of UDP, adjusting the kernel buffers may reduce the likelihood of UDP message loss on a system under heavy load. The Priority directive in the Route block can also help.
TCP
The im_tcp module handles incoming messages over TCP. For TLS/SSL, use the im_ssl module.
This input module instance accepts TCP connections from any host on port 1514.
nxlog.conf
1 <Input tcp>
2 Module im_tcp
3 Host 0.0.0.0
4 Port 1514
5 </Input>
SSL/TLS
The im_ssl module handles incoming messages over TCP with SSL/TLS security.
Example 50. Using the im_ssl Module
The following input module instance listens for SSL/TLS encrypted incoming logs on port 6514. The
certificate file paths are specified relative to a previously defined CERTDIR.
nxlog.conf
1 <Input in>
2 Module im_ssl
3 Host 0.0.0.0
4 Port 6514
5 CAFile %CERTDIR%/ca.pem
6 CertFile %CERTDIR%/client-cert.pem
7 CertKeyFile %CERTDIR%/client-key.pem
8 </Input>
Syslog
To receive Syslog over the network, use one of the network modules above, coupled with xm_syslog. Syslog
parsing is not required if you only need to forward or store the messages as they are. See also Accepting
Syslog via UDP, TCP, or TLS.
With this example configuration, NXLog listens for messages on TCP port 1514. The xm_syslog extension
module provides the Syslog_TLS InputType (for octet-framing) and the parse_syslog() procedure for
parsing Syslog messages.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_tcp
7 Host 0.0.0.0
8 Port 1514
9 # "Syslog_TLS" is for octet framing and may be used with TLS/SSL
10 InputType Syslog_TLS
11 Exec parse_syslog();
12 </Input>
Example 52. Using the im_dbi Module
This example uses libdbi and the MySQL driver to read records from the logdb database.
nxlog.conf
1 <Input in>
2 Module im_dbi
3 Driver mysql
4 Option host 127.0.0.1
5 Option username mysql
6 Option password mysql
7 Option dbname logdb
8 SQL SELECT id, facility, severity, hostname, timestamp, application, \
9 message FROM log
10 </Input>
This example uses the im_odbc module to read records from a Microsoft SQL Server database via ODBC.
nxlog.conf
1 <Input in>
2 Module im_odbc
3 ConnectionString DSN=mssql;database=mydb;
4 SQL SELECT RecordNumber as id, DateOccured as EventTime, \
5 data as Message from logtable WHERE RecordNumber > ?
6 </Input>
This example reads from the specified file without performing any additional processing.
nxlog.conf
1 <Input in>
2 Module im_file
3 File "/var/log/messages"
4 </Input>
Example 55. Using the im_uds Module
With this configuration, NXLog will read messages from the /dev/log socket. NXLog’s flow control
feature must be disabled in this case (see the FlowControl directive in the Reference Manual).
nxlog.conf
1 <Input in>
2 Module im_uds
3 UDS /dev/log
4 FlowControl FALSE
5 </Input>
This example uses the tail command to read messages from a file.
NOTE: The im_file module should be used to read log messages from files. This example only demonstrates the use of the im_exec module.
nxlog.conf
1 <Input in>
2 Module im_exec
3 Command /usr/bin/tail
4 Arg -f
5 Arg /var/log/messages
6 </Input>
Chapter 26. Processing Logs
This chapter deals with various tasks that might be required after a log message is received by NXLog.
The following sections provide configuration examples for parsing log formats commonly used by applications.
Field Description
host IP address or hostname of the client
ident RFC 1413 identity of the client (rarely used; logged as - when unavailable)
authuser Username of the user accessing the document (not applicable for public documents)
date Timestamp of the request
request Request line from the client, for example GET /index.html HTTP/1.0
status HTTP status code returned to the client
bytes Size of the object returned to the client, in bytes
Example 57. Parsing the Common Log Format
This configuration uses a regular expression to parse the fields in each record. The parsedate() function is
used to convert the timestamp string into a datetime type for later processing or conversion as required.
nxlog.conf
<Input access_log>
Module im_file
File "/var/log/apache2/access.log"
<Exec>
if $raw_event =~ /(?x)^(\S+)\ \S+\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
\ HTTP\/\d\.\d\"\ (\S+)\ (\S+)/
{
$Hostname = $1;
if $2 != '-' $AccountName = $2;
$EventTime = parsedate($3);
$HTTPMethod = $4;
$HTTPURL = $5;
$HTTPResponseStatus = $6;
if $7 != '-' $FileSize = $7;
}
</Exec>
</Input>
Example 58. Parsing the Combined Log Format
This example is like the previous one, except it parses the two additional fields unique to the Combined Log
Format. An om_file instance is also shown here which has been configured to discard all events not related
to the user john and write the remaining events to a file in JSON format.
nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Input access_log>
    Module  im_file
    File    "/var/log/apache2/access.log"
    <Exec>
        if $raw_event =~ /(?x)^(\S+)\ \S+\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
                         \ HTTP\/\d\.\d\"\ (\S+)\ (\S+)\ \"([^\"]+)\"
                         \ \"([^\"]+)\"/
        {
            $Hostname = $1;
            if $2 != '-' $AccountName = $2;
            $EventTime = parsedate($3);
            $HTTPMethod = $4;
            $HTTPURL = $5;
            $HTTPResponseStatus = $6;
            if $7 != '-' $FileSize = $7;
            if $8 != '-' $HTTPReferer = $8;
            if $9 != '-' $HTTPUserAgent = $9;
        }
    </Exec>
</Input>

<Output out>
    Module  om_file
    File    '/var/log/john_access.log'
    <Exec>
        if not (defined($AccountName) and ($AccountName == 'john')) drop();
        to_json();
    </Exec>
</Output>
For information about using the Common and Combined Log Formats with the Apache HTTP Server, see Apache
HTTP Server.
Example 59. Parsing a Syslog Event With parse_syslog()
This example shows a Syslog event as it is received via UDP and processed by the parse_syslog() procedure.
Syslog Message
<38>Nov 22 10:30:12 myhost sshd[8459]: Failed password for invalid user linda from 192.168.1.60
port 38176 ssh2↵
The following configuration loads the xm_syslog extension module and then uses the Exec directive to
execute the parse_syslog() procedure for each event.
nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    Exec    parse_syslog();
</Input>

<Output out>
    Module  om_null
</Output>
This results in the following fields being added to the event record by parse_syslog().
Field Value
$Message Failed password for invalid user linda from 192.168.1.60 port 38176 ssh2
$SyslogSeverityValue 6
$SyslogSeverity INFO
$SeverityValue 2
$Severity INFO
$SyslogFacilityValue 4
$SyslogFacility AUTH
$Hostname myhost
$SourceName sshd
$ProcessID 8459
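The facility and severity values in the table come from the <38> priority prefix of the message. As a sketch (not NXLog code), the decoding arithmetic is simply a division by 8:

```python
# Decode a Syslog PRI value: facility is the quotient, severity the remainder.
FACILITIES = ["KERN", "USER", "MAIL", "DAEMON", "AUTH", "SYSLOG", "LPR", "NEWS"]
SEVERITIES = ["EMERG", "ALERT", "CRIT", "ERR", "WARNING", "NOTICE", "INFO", "DEBUG"]

def decode_pri(pri):
    facility, severity = divmod(pri, 8)
    return FACILITIES[facility], SEVERITIES[severity]

# <38> decodes to facility 4 (AUTH) and severity 6 (INFO), matching
# $SyslogFacilityValue and $SyslogSeverityValue in the table above.
```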
Example 60. Complex CSV Format Conversion
This example reads from the input file and parses it with the parse_csv() procedure from the csv1 instance
where the field names, types, and order within the record are defined. The $date field is then set to the
current time and the $number field is set to 0 if it is not already defined. Finally, the to_csv() procedure from
the csv2 instance is used to generate output with the additional date field, a different delimiter, and a
different field order.
nxlog.conf
<Extension csv1>
    Module      xm_csv
    Fields      $id, $name, $number
    FieldTypes  integer, string, integer
    Delimiter   ,
</Extension>

<Extension csv2>
    Module     xm_csv
    Fields     $id, $number, $name, $date
    Delimiter  ;
</Extension>

<Input filein>
    Module  im_file
    File    "/tmp/input"
    <Exec>
        csv1->parse_csv();
        $date = now();
        if not defined $number $number = 0;
        csv2->to_csv();
    </Exec>
</Input>

<Output fileout>
    Module  om_file
    File    "/tmp/output"
</Output>
Input Sample
1, "John K.", 42
2, "Joe F.", 43
Output Sample
1;42;"John K.";2011-01-15 23:45:20
2;43;"Joe F.";2011-01-15 23:45:20
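The same reorder-and-reformat step can be sketched outside of NXLog. The field names and the fixed timestamp below are taken from the example; the convert() helper itself is hypothetical.

```python
import csv
from datetime import datetime

def convert(line, now):
    """Mimic csv1->parse_csv() followed by csv2->to_csv() from the example."""
    # parse_csv(): fields are $id, $name, $number, comma-delimited
    row = next(csv.reader([line], skipinitialspace=True))
    record = {"id": int(row[0]), "name": row[1]}
    # if not defined $number $number = 0;
    record["number"] = int(row[2]) if len(row) > 2 and row[2] else 0
    record["date"] = now  # $date = now();
    # to_csv(): output order is $id, $number, $name, $date with ';' delimiter
    return '{};{};"{}";{}'.format(
        record["id"], record["number"], record["name"],
        record["date"].strftime("%Y-%m-%d %H:%M:%S"))
```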
26.1.4. JSON
The xm_json module provides procedures for generating and parsing log data in JSON format.
Example 61. Using the xm_json Module for Parsing JSON
This example reads JSON-formatted data from file with the im_file module. Then the parse_json() procedure
is used to parse the data, setting each JSON field to a field in the event record.
nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/app.json"
    Exec    parse_json();
</Input>
Here, the to_json() procedure is used to write all the event record fields to $raw_event in JSON format. This
is then written to file using the om_file module.
nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Output out>
    Module  om_file
    File    "/var/log/json.log"
    Exec    to_json();
</Output>
Log Sample
#Version: 1.0↵
#Date: 2011-07-01 00:00:00↵
#Fields: date time cs-method cs-uri↵
2011-07-01 00:34:23 GET /foo/bar1.html↵
2011-07-01 12:21:16 GET /foo/bar2.html↵
2011-07-01 12:45:52 GET /foo/bar3.html↵
2011-07-01 12:57:34 GET /foo/bar4.html↵
Example 63. Parsing W3C Format With xm_w3c
This configuration reads the W3C format log file and parses it with the xm_w3c module. The fields in the
event record are converted to JSON and the logs are forwarded via TCP.
nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Extension w3c_parser>
    Module  xm_w3c
</Extension>

<Input w3c>
    Module     im_file
    File       '/var/log/httpd-log'
    InputType  w3c_parser
</Input>

<Output tcp>
    Module  om_tcp
    Host    192.168.12.1
    Port    1514
    Exec    to_json();
</Output>
The W3C format can also be parsed with the xm_csv module if using NXLog Community Edition.
Example 64. Parsing W3C Format With xm_csv
The following configuration reads a W3C file and tokenizes it with the CSV parser. Header lines starting with
a leading hash mark (#) are ignored. The $EventTime field is set from the parsed date and time fields.
NOTE: The fields in the xm_csv module instance below must be updated to correspond with the fields in the W3C file to be parsed.
nxlog.conf
<Extension w3c_parser>
    Module         xm_csv
    Fields         $date, $time, $HTTPMethod, $HTTPURL
    FieldTypes     string, string, string, string
    Delimiter      ' '
    EscapeChar     '"'
    QuoteChar      '"'
    EscapeControl  FALSE
    UndefValue     -
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Input w3c>
    Module  im_file
    File    '/var/log/httpd-log'
    <Exec>
        if $raw_event =~ /^#/ drop();
        else
        {
            w3c_parser->parse_csv();
            $EventTime = parsedate($date + " " + $time);
        }
    </Exec>
</Input>
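The drop-header-then-tokenize logic can be sketched as follows. The four field names match the xm_csv instance above; the function itself is a hypothetical illustration, not NXLog code.

```python
from datetime import datetime

def parse_w3c_line(line):
    """Return an event record for a data line, or None for a '#' header line."""
    if line.startswith("#"):
        return None                  # if $raw_event =~ /^#/ drop();
    date, time, method, url = line.split(" ")
    return {
        # $EventTime = parsedate($date + " " + $time);
        "EventTime": datetime.strptime(date + " " + time, "%Y-%m-%d %H:%M:%S"),
        "HTTPMethod": method,
        "HTTPURL": url,
    }
```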
26.1.6. XML
The xm_xml module can be used for generating and parsing structured data in XML format.
This configuration uses the im_file module to read from file. Then the parse_xml() procedure parses the
XML into fields in the event record.
nxlog.conf
<Extension _xml>
    Module  xm_xml
</Extension>

<Input in>
    Module  im_file
    File    "/var/log/app.xml"
    Exec    parse_xml();
</Input>
Example 66. Using the xm_xml Module for Generating XML
Here, the fields in the event record are used by the to_xml() procedure to generate XML, which is then
written to file by the om_file module.
nxlog.conf
<Extension _xml>
    Module  xm_xml
</Extension>

<Output out>
    Module  om_file
    File    "/var/log/logs.xml"
    Exec    to_xml();
</Output>
26.2. Alerting
NXLog can be configured to generate alerts when specific conditions are met. Here are some ways alerting could
be implemented.
In this example Output, all messages not matching the regular expression are dropped, and remaining
messages are piped to a custom alerter script.
nxlog.conf
<Output out>
    Module   om_exec
    Command  /usr/local/sbin/alerter
    Arg      -
    Exec     if not ($raw_event =~ /alertcondition/) drop();
</Output>
Without the Exec directive above, all messages received by the module would be passed to the alerter
script as defined by the Command directive. The optional Arg directive passes its value to the Command
script.
Example 68. Using xm_exec with an External Alerter
In this example Input, each message matching the regular expression is piped to a new instance of
alerter, which is executed asynchronously (does not block additional processing by the calling module).
nxlog.conf
<Extension _exec>
    Module  xm_exec
</Extension>

<Input in>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    <Exec>
        if $raw_event =~ /alertcondition/
            exec_async("/usr/local/sbin/alerter");
    </Exec>
</Input>
In this example, an email is sent using exec_async() when the regular expression condition is met.
nxlog.conf
<Extension _exec>
    Module  xm_exec
</Extension>

<Input in>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    <Exec>
        if $raw_event =~ /alertcondition/
        {
            exec_async("/bin/sh", "-c", 'echo "' + $Hostname + '\n\nRawEvent:\n' +
                                        $raw_event + '"|/usr/bin/mail ' +
                                        '-a "Content-Type: text/plain; charset=UTF-8" ' +
                                        '-s "ALERT" user@domain.com');
        }
    </Exec>
</Input>
NOTE: DEBUG level events are not generated by the im_internal module.
Example 70. Using log_warning() for Alerting
If a message matches the regular expression, an internal log event is generated with level WARNING.
nxlog.conf
<Input in>
    Module  im_file
    File    "/var/log/app.log"
    Exec    if $raw_event =~ /alertcondition/ log_warning("ALERT");
</Input>
This example shows the default read and write buffers used by NXLog for a simple route. Each buffer is
limited to 65,000 bytes.
nxlog.conf
<Input file>
    Module  im_file
    File    '/tmp/in.log'

    # Set read buffer size, in bytes (default)
    BufferSize  65000
</Input>

<Output tcp>
    Module  om_tcp
    Host    192.168.1.1

    # Set write buffer size, in bytes (default)
    BufferSize  65000
</Output>

<Route r>
    Path  file => tcp
</Route>
26.3.2. Log Queues
Every processor and output module instance has an input log queue for events that have not yet been processed
by that module instance. When the preceding module has processed an event, it is placed in this queue. Because
log queues are enabled by default for all processor and output module instances, they are the preferred way to
adjust buffering behavior.
The size of a module instance’s log queue can be configured with the LogqueueSize directive.
This example shows the default log queue used by NXLog in a simple route. Up to 100 events will be placed
in the queue to be processed by the om_batchcompress instance.
nxlog.conf
<Input eventlog>
    Module  im_msvistalog
</Input>

<Output batch>
    Module  om_batchcompress
    Host    192.168.2.1

    # Set log queue size, in events (default)
    LogqueueSize  100
</Output>

<Route r>
    Path  eventlog => batch
</Route>
By default, log queues are stored in memory. NXLog can be configured to persist log queues to disk with the
PersistLogqueue directive. NXLog will further sync all writes to a disk-based queue with SyncLogqueue. These
directives can be used to prevent data loss in case of interrupted processing, at the expense of reduced performance, and can be set either globally or for a particular module. For more information, see Reliable
Message Delivery.
NOTE: Any events remaining in the log queue will be written to disk when NXLog is stopped, regardless of the value of PersistLogqueue.
Example 73. A Persistent Log Queue
In this example, the om_elasticsearch instance is configured with a persistent and synced log queue. Each
time an event is added to the log queue, the event will be written to disk and synced before processing
continues.
nxlog.conf
<Input acct>
    Module  im_acct
</Input>

<Output elasticsearch>
    Module  om_elasticsearch
    URL     http://192.168.2.2:9200/_bulk

    # Set log queue size, in events (default)
    LogqueueSize  100

    # Use persistent and synced log queue
    PersistLogqueue  TRUE
    SyncLogqueue     TRUE
</Output>

<Route r>
    Path  acct => elasticsearch
</Route>
Flow control takes effect when all of the following are true:
1. a processor or output module instance is not able to process log data at the incoming rate,
2. that module instance’s log queue becomes full, and
3. the input or processor module instance responsible for feeding the log queue has flow control enabled.
In this case, flow control will cause the input or processor module instance to suspend processing until the
succeeding module instance is ready to accept more log data.
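The three conditions above can be pictured as a bounded queue between module instances. This Python sketch (not NXLog code) shows why a full queue forces the producer to pause when flow control is enabled; the queue size mirrors the default LogqueueSize of 100.

```python
import queue

logqueue = queue.Queue(maxsize=100)   # LogqueueSize 100

# The consumer (output module) is blocked, so nothing is taken off the
# queue while the producer keeps adding events.
for event in range(100):
    logqueue.put_nowait(event)

try:
    logqueue.put_nowait("one more")   # the queue is now full
    producer_paused = False
except queue.Full:
    # With flow control enabled, the input module instance is suspended
    # here instead of discarding the event.
    producer_paused = True
```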
Example 74. Flow Control Enabled
This example shows NXLog's default flow control behavior with a basic route. Events are collected from the Windows Event Log with im_msvistalog and forwarded with om_tcp. The om_tcp instance will be blocked if the destination is unreachable or the network cannot handle the events quickly enough.
nxlog.conf
<Input eventlog>
    Module  im_msvistalog

    # Flow control enabled (default)
    FlowControl  TRUE
</Input>

<Output tcp>
    Module  om_tcp
    Host    192.168.1.1
</Output>

<Route r>
    Path  eventlog => tcp
</Route>
The om_tcp instance is unable to connect to the destination host and its log queue is full. Because the
im_msvistalog instance has flow control enabled and the next module in the route is blocked, it has been
paused. No events will be read from the Event Log until the tcp instance becomes unblocked.
Flow control is enabled by default, and can be set globally or for a particular module instance with the
FlowControl directive. Generally, flow control provides automatic, zero-configuration handling of cases where
buffering would otherwise be required. However, there are some situations where flow control should be
disabled and buffering should be explicitly configured as required.
Example 75. Flow Control Disabled
In this example, Linux Audit messages are collected with im_linuxaudit and forwarded with om_http. Flow
control is disabled for im_linuxaudit to prevent processes from being blocked due to an Audit backlog. To
avoid loss of log data in this case, the LogqueueSize directive could be used as shown in Increasing the Log
Queue Size to Protect Against UDP Message Loss.
nxlog.conf
<Input audit>
    Module  im_linuxaudit
    <Rules>
        -D
        -w /etc/passwd -p wa -k passwd
    </Rules>

    # Disable flow control to prevent Audit backlog
    FlowControl  FALSE
</Input>

<Output http>
    Module  om_http
    URL     http://192.168.2.1:8080/
</Output>

<Route r>
    Path  audit => http
</Route>
The om_http instance is unable to forward log data, and its log queue is full. Because it has flow control
disabled, the im_linuxaudit instance remains active and continues to process log data. However, all events
will be discarded until the om_http log queue is no longer full.
NOTE: In a disk-based pm_buffer instance, events are not written to disk unless the log queue of the succeeding module instance is full. For this reason, a disk-based pm_buffer instance does not reduce performance in the way that a persistent log queue does. Additionally, pm_buffer (and other processor modules) should not be used if crash-safe processing is required; see Reliable Message Delivery.
This example shows a route with a large disk-based buffer provided by the pm_buffer module. A warning
message will be generated when the buffer size crosses the threshold specified.
nxlog.conf
<Input udp>
    Module  im_udp
</Input>

<Processor buffer>
    Module  pm_buffer
    Type    Disk

    # 40 MiB buffer
    MaxSize  40960

    # Generate warning message at 20 MiB
    WarnLimit  20480
</Processor>

<Output ssl>
    Module       om_ssl
    Host         10.8.0.2
    CAFile       %CERTDIR%/ca.pem
    CertFile     %CERTDIR%/client-cert.pem
    CertKeyFile  %CERTDIR%/client-key.pem
</Output>

<Route r>
    Path  udp => buffer => ssl
</Route>
• The UDP modules (im_udp, om_udp, and om_udpspoof) can be configured to set the socket buffer size
(SO_RCVBUF or SO_SNDBUF) with the respective SockBufSize directive.
• The external program and scripting support found in some modules (like im_exec, im_perl, im_python,
im_ruby, om_exec, om_perl, om_python, and om_ruby) can be used to implement custom buffering
solutions.
• Some modules (such as om_batchcompress, om_elasticsearch, and om_webhdfs) buffer events internally in
order to forward events in batches.
• The pm_blocker module can be used to programmatically block or unblock the log flow in a route, and in this way control buffering; it can also be used to test buffering.
• The om_blocker module can be used to test buffering behavior by simulating a blocked output.
The following diagram shows all buffers used in a simple im_udp => om_tcp route. The socket buffers are
only applicable to networking modules.
Example 78. Increasing the Log Queue Size to Protect Against UDP Message Loss
In this configuration, log messages are accepted with im_udp and forwarded with om_tcp. The log queue
size of the output module instance is increased to 5000 events to buffer messages in case the output
becomes blocked. To further reduce the risk of data loss, the socket buffer size is increased with the
SockBufSize directive and the route priority is increased with Priority.
nxlog.conf
<Input udp>
    Module  im_udp

    # Raise socket buffer size
    SockBufSize  150000000
</Input>

<Output tcp>
    Module  om_tcp
    Host    192.168.1.1

    # Keep up to 5000 events in the log queue
    LogqueueSize  5000
</Output>

<Route udp_to_tcp>
    Path  udp => tcp

    # Process events in this route first
    Priority  1
</Route>
The output is blocked because the network is not able to handle the log data quickly enough.
Example 79. Using a pm_buffer Instance to Protect Against UDP Message Loss
Instead of raising the size of the log queue, this example uses a memory-based pm_buffer instance to
buffer events when the output becomes blocked. A warning message will be generated if the buffer size
exceeds the specified WarnLimit threshold.
nxlog.conf
<Input udp>
    Module  im_udp

    # Raise socket buffer size
    SockBufSize  150000000
</Input>

<Processor buffer>
    Module  pm_buffer
    Type    Mem

    # 5 MiB buffer
    MaxSize  5120

    # Warn at 2 MiB
    WarnLimit  2048
</Processor>

<Output http>
    Module  om_http
    URL     http://10.8.1.1:8080/
</Output>

<Route udp_to_http>
    Path  udp => buffer => http

    # Process events in this route first
    Priority  1
</Route>
The HTTP destination is unreachable, the http instance log queue is full, and the buffer instance is filling.
With flow control disabled, events will be discarded if the route becomes blocked and the route’s log queues
become full. To reduce the risk of lost log data, the log queue size of a succeeding module instance in the route
can be increased. Alternatively, a pm_buffer instance can be used as in the second UDP example above.
Example 80. Buffering Syslog Messages From /dev/log
This configuration uses the im_uds module to collect Syslog messages from the /dev/log socket, and the
xm_syslog parse_syslog() procedure to parse them.
To prevent the syslog() system call from blocking as a result of the im_uds instance being suspended, the
FlowControl directive is set to FALSE. The LogqueueSize directive raises the log queue limit of the output
instance to 5000 events. The Priority directive indicates that this route’s events should be processed first.
nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input dev_log>
    Module  im_uds
    UDS     /dev/log
    Exec    parse_syslog();

    # This module instance must never be suspended
    FlowControl  FALSE
</Input>

<Output elasticsearch>
    Module  om_elasticsearch
    URL     http://192.168.2.1:9022/_bulk

    # Keep up to 5000 events in the log queue
    LogqueueSize  5000
</Output>

<Route syslog_to_elasticsearch>
    Path  dev_log => elasticsearch

    # Process events in this route first
    Priority  1
</Route>
The Elasticsearch server is unreachable and the log queue is filling. If the log queue becomes full, events
will be discarded.
Example 81. Forwarding From File With Default Buffering
This configuration reads log messages from a file with im_file and forwards them with om_tcp. No extra
buffering is necessary because flow control is enabled.
nxlog.conf
<Input file>
    Module  im_file
    File    '/tmp/in.log'

    # Enable flow control (default)
    FlowControl  TRUE

    # Save file position on exit (default)
    SavePos  TRUE
</Input>

<Output tcp>
    Module  om_tcp
    Host    10.8.0.2
</Output>

<Route r>
    Path  file => tcp
</Route>
The TCP destination is unreachable, and the im_file instance is paused. No messages will be read from the
source file until the om_tcp instance becomes unblocked.
Sometimes, however, there is a risk of the input log file becoming inaccessible while the im_file instance is
suspended (due to log rotation, for example). In this case, the tcp log queue size can be increased (or a
pm_buffer instance added) to buffer more events.
Example 82. Forwarding From File With Additional Buffering
In this example, log messages are read from a file with im_file and forwarded with om_tcp. The om_tcp log
queue size has been increased in order to buffer more events because the source file may be rotated away.
nxlog.conf
<Input file>
    Module  im_file
    File    '/tmp/in.log'
</Input>

<Output tcp>
    Module  om_tcp
    Host    192.168.1.1

    # Keep up to 2000 events in the log queue
    LogqueueSize  2000
</Output>

<Route r>
    Path  file => tcp
</Route>
The TCP destination is unreachable and the om_tcp instance is blocked. The im_file instance will continue to
read from the file (and events will accumulate) until the tcp log queue is full; then it will be paused.
Example 83. Disabling Flow Control to Selectively Discard Events
This example sends UDP input to two outputs, a file and an HTTP destination. If the HTTP transmission is
slower than the rate of incoming UDP packets or the destination is unreachable, flow control would
normally pause the im_udp instance. This would result in dropped UDP packets. In this situation it is better
to selectively drop log messages in the HTTP route than to lose them entirely. This can be accomplished by
simply disabling flow control for the input module instance.
NOTE: This configuration will also continue to send events to the HTTP destination in the unlikely event that the om_file output blocks. In fact, the input will remain active even if both outputs block (though in this particular case, because UDP is lossy, messages will be lost regardless of whether the im_udp instance is suspended).
nxlog.conf
<Input udp>
    Module  im_udp

    # Never pause this instance
    FlowControl  FALSE
</Input>

<Output http>
    Module  om_http
    URL     http://10.0.0.3:8080/

    # Increase the log queue size
    LogqueueSize  2000
</Output>

<Output file>
    Module  om_file
    File    '/tmp/out.log'
</Output>

<Route udp_to_tcp>
    Path  udp => http, file
</Route>
The HTTP destination cannot accept events quickly enough. The om_http instance is blocked and its log
queue is full. New events are not being added to the HTTP output queue but are still being written to the
output file.
In this example, process accounting logs collected by im_acct are both forwarded via TCP and written to file.
A separate route is used for each output. A pm_buffer instance is used in the TCP route, and it is configured
to discard events with drop() if its size goes beyond a certain threshold. Thus, the pm_buffer instance will
never become full and will never cause the im_acct instance to pause—events will always be written to the
output file.
NOTE: Because the im_acct instance has flow control enabled, it will be paused if the om_file output becomes blocked.
nxlog.conf
<Input acct>
    Module  im_acct

    # Flow control enabled (default)
    FlowControl  TRUE
</Input>

<Processor buffer>
    Module     pm_buffer
    Type       Mem
    MaxSize    1000
    WarnLimit  800
    Exec       if buffer_size() >= 80k drop();
</Processor>

<Output tcp>
    Module  om_tcp
    Host    192.168.1.1
</Output>

<Output file>
    Module  om_file
    File    '/tmp/out.log'
</Output>

<Route udp_to_tcp>
    Path  acct => buffer => tcp
</Route>

<Route udp_to_file>
    Path  acct => file
</Route>
The TCP destination is unreachable and the om_tcp log queue is full. Input accounting events will be added
to the buffer until it gets full, then they will be discarded. Input events will also be written to the output file,
regardless of whether the buffer is full.
26.3.10. Scheduled Buffering
While buffering is typically used when a log source becomes unavailable, NXLog can also be configured to buffer
logs programmatically. For this purpose, the pm_blocker module can be added to a route.
This example collects log messages via UDP and forwards them to a remote NXLog agent. However, events
are buffered with pm_buffer during the week and only forwarded on weekends.
• During the week, the pm_blocker instance is blocked and events accumulate in the large on-disk buffer.
• During the weekend, the pm_blocker instance is unblocked and all events, including those that have
accumulated in the buffer, are forwarded.
nxlog.conf (truncated)
<Input udp>
    Module  im_udp
    Host    0.0.0.0
</Input>

<Processor buffer>
    Module  pm_buffer

    # 500 MiB disk buffer
    Type       Disk
    MaxSize    512000
    WarnLimit  409600
</Processor>

<Processor schedule>
    Module  pm_blocker
    <Schedule>
        # Start blocking Monday morning
        When  0 0 * * 1
        Exec  schedule->block(TRUE);
    </Schedule>
    <Schedule>
        # Stop blocking Saturday morning
        When  0 0 * * 6
        Exec  schedule->block(FALSE);
    </Schedule>
</Processor>
[...]
It is currently a weekday and the schedule pm_blocker instance is blocked.
If it is possible to use flow control with the log sources, then it is not necessary to use extra buffering. Instead,
the inputs will be paused and read later when the route is unblocked.
Example 86. Collecting Log Data on a Schedule
This configuration reads events from the Windows Event Log and forwards them to a remote NXLog agent
in compressed batches with om_batchcompress. However, events are only forwarded during the night.
Because the im_msvistalog instance can be paused and events will still be available for collection later, it is
not necessary to configure any extra buffering.
• During the day, the pm_blocker instance is blocked, the output log queue becomes full, and the
eventlog instance is paused.
• During the night, the pm_blocker instance is unblocked. The events in the schedule log queue are
processed, the eventlog instance is resumed, and all pending events are read from the Event Log and
forwarded.
nxlog.conf
<Input eventlog>
    Module  im_msvistalog
</Input>

<Processor schedule>
    Module  pm_blocker
    <Schedule>
        # Start blocking at 7:00
        When  0 7 * * *
        Exec  schedule->block(TRUE);
    </Schedule>
    <Schedule>
        # Stop blocking at 19:00
        When  0 19 * * *
        Exec  schedule->block(FALSE);
    </Schedule>
</Processor>

<Output batch>
    Module  om_batchcompress
    Host    10.3.0.211
</Output>

<Route scheduled_batches>
    Path  eventlog => schedule => batch
</Route>
The current time is within the specified "day" interval and pm_blocker is blocked.
26.4. Character Set Conversion
It is recommended to normalize logs to UTF-8. The xm_charconv module provides character set conversion: the
convert_fields() procedure for converting an entire message (all event fields) and a convert() function for
converting a string.
This configuration shows an example of character set auto-detection. The input file may contain differently
encoded lines, but by invoking the convert_fields() procedure, each message will have the character set
encoding of its fields detected and then converted to UTF-8 as needed.
nxlog.conf
<Extension _charconv>
    Module              xm_charconv
    AutodetectCharsets  utf-8, euc-jp, utf-16, utf-32, iso8859-2
</Extension>

<Input filein>
    Module  im_file
    File    "/tmp/input"
    Exec    convert_fields("auto", "utf-8");
</Input>

<Output fileout>
    Module  om_file
    File    "/tmp/output"
</Output>

<Route r>
    Path  filein => fileout
</Route>
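The auto-detection step can be sketched as trying each candidate charset in order until one decodes cleanly. The function below is a hypothetical illustration of the idea, not the xm_charconv implementation, and uses a subset of the charsets listed above.

```python
def convert_auto(raw, charsets=("utf-8", "euc-jp", "iso8859-2")):
    """Decode raw bytes using the first candidate charset that succeeds,
    yielding a Python string that can be re-encoded as UTF-8."""
    for charset in charsets:
        try:
            return raw.decode(charset)
        except UnicodeDecodeError:
            continue
    raise ValueError("no candidate charset matched")
```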
NOTE: The im_mark module is designed as a means of monitoring the health of the NXLog agent by generating "mark" messages every 30 minutes. The message text and interval are configurable.
Detecting the absence of log messages from a source can be addressed with the combined use of statistical counters and scheduled checks. The input module can update a statistical counter configured to calculate events per hour. In the same input module, a Schedule block checks the value of the statistical counter periodically. When the event rate is zero or drops below a certain limit, an appropriate action can be executed, such as sending an alert email or generating an internal warning message. Note that there are other ways to address this issue, and this method may not be optimal for all situations.
Example 88. Alerting on Absence of Log Messages
The following configuration example creates a statistical counter in the context of the im_tcp module to
calculate the number of events received per hour. The Schedule block within the context of the same
module checks the value of the msgrate statistical counter and generates an internal error message when
no logs have been received within the last hour.
nxlog.conf
<Input in>
    Module  im_tcp
    Port    2345
    <Exec>
        create_stat("msgrate", "RATE", 3600);
        add_stat("msgrate", 1);
    </Exec>
    <Schedule>
        Every  3600 sec
        <Exec>
            create_stat("msgrate", "RATE", 10);
            add_stat("msgrate", 0);
            if defined get_stat("msgrate") and get_stat("msgrate") <= 1
                log_error("No messages received from the source!");
        </Exec>
    </Schedule>
</Input>
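The RATE counter semantics used above can be approximated with a fixed-window counter. This sketch is only an illustration of the idea (NXLog's actual implementation may differ); the injectable clock makes the windowing behavior easy to demonstrate.

```python
import time

class RateCounter:
    """Count events within a fixed time window, resetting when it expires;
    a rough stand-in for create_stat("msgrate", "RATE", interval)."""
    def __init__(self, interval, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self.window_start = clock()
        self.count = 0

    def add(self, n=1):
        self._roll()
        self.count += n

    def get(self):
        self._roll()
        return self.count

    def _roll(self):
        # Start a new window once the current one has expired.
        if self.clock() - self.window_start >= self.interval:
            self.window_start = self.clock()
            self.count = 0
```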
A dedicated NXLog module, pm_evcorr, is available for advanced correlation requirements. It provides features
similar to those of SEC and greatly enhances the correlation capabilities of NXLog.
Example 89. Correlation Rules
The following configuration provides samples for each type of rule: Absence, Pair, Simple, Suppressed, and
Thresholded.
nxlog.conf (truncated)
<Processor evcorr>
    Module     pm_evcorr
    TimeField  EventTime

    <Simple>
        Exec  if $Message =~ /^simple/ $raw_event = "got simple";
    </Simple>

    <Suppressed>
        # Match input event and execute an action list, but ignore the following
        # matching events for the next t seconds.
        Condition  $Message =~ /^suppressed/
        Interval   30
        Exec       $raw_event = "suppressing..";
    </Suppressed>

    <Pair>
        # If TriggerCondition is true, wait Interval seconds for RequiredCondition
        # to be true and then do the Exec. If Interval is 0, there is no window on
        # matching.
        TriggerCondition   $Message =~ /^pair-first/
        RequiredCondition  $Message =~ /^pair-second/
        Interval           30
        Exec               $raw_event = "got pair";
    </Pair>

    <Absence>
        # If TriggerCondition is true, wait Interval seconds for RequiredCondition
[...]
Some log sources (like Windows EventLog collected via im_msvistalog) already contain structured data. In this
case, there is often no additional extraction required; see Message Classification.
Example 90. Parsing With Regular Expressions
Syslog Message
<38>Nov 22 10:30:12 myhost sshd[8459]: Failed password for invalid user linda from 192.168.1.60
port 38176 ssh2↵
With this configuration, the Syslog message shown above is first parsed with parse_syslog(). This results in a
$Message field created in the event record. Then, a regular expression is used to further parse the
$Message field and create additional fields if it matches.
nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    <Exec>
        parse_syslog();
        if $Message =~ /(?x)^Failed\ (\S+)\ for(?:\ invalid\ user)?\ (\S+)\ from
                       \ (\S+)\ port\ \d+\ ssh2$/
        {
            $AuthMethod = $1;
            $AccountName = $2;
            $SourceIPAddress = $3;
        }
    </Exec>
</Input>
Named capture groups are also supported. Each captured group is automatically added to the event record as a field with the same name.
nxlog.conf
<Input in>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    <Exec>
        parse_syslog();
        $Message =~ /(?x)^Failed\ (?<AuthMethod>\S+)\ for(?:\ invalid\ user)?
                    \ (?<AccountName>\S+)\ from\ (?<SourceIPAddress>\S+)\ port
                    \ \d+\ ssh2$/;
    </Exec>
</Input>
Field Value
$AuthMethod password
$AccountName linda
$SourceIPAddress 192.168.1.60
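Python's re module behaves the same way with named groups (using the (?P<name>...) syntax); groupdict() returns exactly the field/value pairs listed above. This is an illustrative sketch, not NXLog code.

```python
import re

SSH_FAILED = re.compile(
    r"^Failed (?P<AuthMethod>\S+) for(?: invalid user)? (?P<AccountName>\S+)"
    r" from (?P<SourceIPAddress>\S+) port \d+ ssh2$"
)

msg = "Failed password for invalid user linda from 192.168.1.60 port 38176 ssh2"
# Each named group becomes a key in the resulting dict, like an event field.
fields = SSH_FAILED.match(msg).groupdict()
```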
26.7.2. Pattern Matching With Grok
The xm_grok module provides parsing for unstructured log messages with Grok patterns.
The examples below demonstrate how to parse Apache messages using Grok patterns.
The above Apache message can be parsed using the Grok pattern below.
Lists of Grok patterns are available in various repositories; see, for example, the logstash-plugins organization on GitHub.
Example 93. Configuring NXLog to Parse Apache Messages
The following configuration reads messages from the apache_entries.log file using the im_file module
and stores the result in the $raw_event field.
The match_grok() function reads patterns from the patterns.txt file and attempts a series of matches on
the $raw_event field. If none of the patterns match, an internal message is logged.
nxlog.conf
<Extension grok>
    Module   xm_grok
    Pattern  patterns.txt
</Extension>

<Input messages>
    Module  im_file
    File    "apache_entries.log"
    <Exec>
        if not ( match_grok($raw_event, "%{ACCESS_LOG}") or
                 match_grok($raw_event, "%{ERROR_LOG}"))
        {
            log_info('Event did not match any pattern');
        }
    </Exec>
</Input>
This example uses the patterns.txt file, which contains all necessary Grok patterns.
patterns.txt
INT (?:[+-]?(?:[0-9]+))
YEAR (?>\d\d){1,2}
MONTH \b(?:[Jj]an(?:uary|uar)?|[Ff]eb(?:ruary|ruar)?|[Mm](?:a|ä)?r(?:ch|z)?|[Aa]pr(?:il)?|[Mm]a(?:y|i)?|[Jj]un(?:e|i)?|[Jj]ul(?:y)?|[Aa]ug(?:ust)?|[Ss]ep(?:tember)?|[Oo](?:c|k)?t(?:ober)?|[Nn]ov(?:ember)?|[Dd]e(?:c|z)(?:ember)?)\b
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)
UNIXPATH (/([\w_%!$@:.,+~-]+|\\.)*)+
GREEDYDATA .*
IP (?<![0-9])(?:(?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5]))(?![0-9])
LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)
TIMESTAMP_ACCESS %{INT}\/%{MONTH}\/%{YEAR}(:%{HOUR}:%{MINUTE}:%{SECOND} %{GREEDYDATA})?
TIMESTAMP_ERROR %{DAY} %{MONTH} %{INT} %{HOUR}:%{MINUTE}:%{SECOND} %{YEAR}
METHOD (GET|POST|PUT|DELETE|HEAD|TRACE|OPTIONS|CONNECT|PATCH){1}
HTTP_VERSION 1.(0|1)
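The `%{NAME}` syntax composes patterns by reference. As a rough illustration of how such references expand into a plain regular expression, here is a Python sketch (not the xm_grok implementation; the MONTH pattern is abbreviated, and YEAR's atomic group `(?>...)` is replaced with a plain group, since Python's `re` only supports atomic groups from 3.11).

```python
import re

# A subset of the patterns above, slightly adapted for Python's re module
PATTERNS = {
    "INT": r"(?:[+-]?(?:[0-9]+))",
    "YEAR": r"(?:\d\d){1,2}",
    "MONTH": r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\b",
    "DAY": r"(?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?"
           r"|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)",
    "HOUR": r"(?:2[0123]|[01]?[0-9])",
    "MINUTE": r"(?:[0-5][0-9])",
    "SECOND": r"(?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)",
}

def expand(pattern):
    """Replace %{NAME} references with their definitions until none remain."""
    while True:
        expanded = re.sub(r"%\{(\w+)\}", lambda m: PATTERNS[m.group(1)], pattern)
        if expanded == pattern:
            return expanded
        pattern = expanded

# TIMESTAMP_ERROR from patterns.txt, expanded into one plain regex
timestamp_error = expand(r"%{DAY} %{MONTH} %{INT} %{HOUR}:%{MINUTE}:%{SECOND} %{YEAR}")
```

The expanded expression matches an Apache error-log timestamp such as `Mon Dec 26 16:15:55 2011`.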
26.7.3. Pattern Matching With pm_pattern
Regular expressions are widely used in pattern matching. Unfortunately, using a large number of regular
expression based patterns does not scale well, because each pattern must be evaluated in turn. The pm_pattern
module implements more efficient pattern matching than regular expressions used in Exec directives.
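One way pm_pattern avoids evaluating every pattern is by grouping patterns behind cheap match fields such as $SourceName, so most patterns are never attempted for a given event. A hedged Python sketch of the idea, with hypothetical patterns:

```python
import re

# Hypothetical pattern database: groups keyed by the exact $SourceName value
# they apply to, so only one group's patterns are tried per event
GROUPS = {
    "sshd": [re.compile(r"^Failed (\S+) for(?: invalid user)? (\S+)")],
    "cron": [re.compile(r"^\((\S+)\) CMD")],
}

def match_event(source_name, message):
    """Try only the patterns registered for this event's source."""
    for pattern in GROUPS.get(source_name, []):
        m = pattern.match(message)
        if m:
            return m.groups()
    return None
```

An event from an unknown source, or one whose message matches no pattern in its group, simply returns no result without touching the other groups.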
Syslog Message
<38>Nov 22 10:30:12 myhost sshd[8459]: Failed password for invalid user linda from 192.168.1.60
port 38176 ssh2↵
With this configuration, the above Syslog message is first parsed with parse_syslog(). This results in a
$Message field created in the event record. Then, the pm_pattern module is used with a pattern XML file to
further parse the record.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_udp
7 Host 0.0.0.0
8 Port 514
9 Exec parse_syslog();
10 </Input>
11
12 <Processor pattern>
13 Module pm_pattern
14 PatternFile /var/lib/nxlog/patterndb.xml
15 </Processor>
16
17 <Output out>
18 Module om_null
19 </Output>
20
21 <Route r>
22 Path in => pattern => out
23 </Route>
The patterns for the pm_pattern module instance above are declared in the following patterndb.xml file.
Pattern Database (patterndb.xml)
<?xml version='1.0' encoding='UTF-8'?>
<patterndb>
<created>2010-01-01 01:02:03</created>
<version>42</version>
<!-- First and only pattern group in this file -->
<group>
<name>ssh</name>
<id>42</id>
<!-- Only try to match this group if $SourceName == "sshd" -->
<matchfield>
<name>SourceName</name>
<type>exact</type>
<value>sshd</value>
</matchfield>
<!-- First and only pattern in this pattern group -->
<pattern>
<id>1</id>
<name>ssh auth failure</name>
<!-- Do regular expression match on $Message field -->
<matchfield>
<name>Message</name>
<type>regexp</type>
<value>^Failed (\S+) for(?: invalid user)? (\S+) from (\S+) port \d+ ssh2</value>
<!-- Set 3 event record fields from captured strings -->
<capturedfield>
<name>AuthMethod</name>
<type>string</type>
</capturedfield>
<capturedfield>
<name>AccountName</name>
<type>string</type>
</capturedfield>
<capturedfield>
<name>SourceIPAddress</name>
<type>string</type>
</capturedfield>
</matchfield>
<!-- Set additional fields if pattern matches -->
<set>
<field>
<name>TaxonomyAction</name>
<value>Authenticate</value>
<type>string</type>
</field>
<field>
<name>TaxonomyStatus</name>
<value>Failure</value>
<type>string</type>
</field>
</set>
</pattern>
</group>
</patterndb>
Field Value
$AuthMethod password
$AccountName linda
$SourceIPAddress 192.168.1.60
$TaxonomyAction Authenticate
$TaxonomyStatus Failure
NXLog Manager provides an interface for writing pattern files, and will also test sample events to aid in
establishing the correct match patterns. The pattern functions can be accessed from the PATTERNS menu in the
page header.
The following instructions explain the steps required for creating the above pattern database with NXLog
Manager.
1. Open PATTERNS › CREATE GROUP. Enter a Name for the new pattern group, and optionally a
Description, in the Properties section. The name is used to refer to the pattern group later. The name
of the above pattern group is ssh.
2. Add a match field by clicking [ Add Field ] in the Match section. Only messages that match will be
further processed by this pattern group. In the above example, there is no reason to attempt any
matches if the $SourceName field does not equal sshd. The above pattern group uses Field
name=SourceName, Match=EXACT, and Value=sshd.
7. The Set section allows fields to be set if the match is successful. Click [ Add Field ] for each field. The
above example sets $TaxonomyStatus to Failure and $TaxonomyAction to Authenticate.
8. The Action section accepts NXLog language statements like those that would be specified in an Exec directive.
Click [ Add action ], type in the statement, and click [ Verify ] to make sure the statement is valid. The
above example does not include any NXLog language statements.
9. The final tabbed section allows test messages to be entered to verify that the match works as expected.
Click the [ + ] to add a test case. To test the above example, add a Value for the Message field: Failed
password for invalid user linda from 192.168.1.60 port 38176 ssh2. Click [ Update Test
Cases ] in the Match section to automatically fill the captured fields. Verify that the fields are set as
expected. Additional test cases can be added to test other events.
10. Save the new pattern. Then click [ Export ] to download the pattern.xml file or use the pattern to
configure a managed agent.
26.7.4. Using the Extracted Fields
The previous sections explore ways that the log message can be parsed and new fields added to the event
record. Once the required data has been extracted and corresponding fields created, there are various ways to
use this new data.
• A field or set of fields can be matched by string or regular expression to trigger alerts, perform filtering, or
further classify the event.
• Fields in the event record can be renamed, modified, or deleted.
• Event correlation can be used to execute statements or suppress messages based on matching events inside
a specified window.
• Some output formats can be used to preserve the full set of fields in the event record (such as JSON and the
NXLog Binary format).
In this example, any line that matches neither of the two regular expressions will be discarded with the
drop() procedure. Only lines that match at least one of the regular expressions will be kept.
nxlog.conf
1 <Input file>
2 Module im_file
3 File "/var/log/myapp/*.log"
4 Exec if not ($raw_event =~ /failed/ or $raw_event =~ /error/) drop();
5 </Input>
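The keep-or-drop logic above can be modeled outside NXLog. A minimal Python sketch of the same predicate:

```python
import re

def keep(raw_event):
    """Mirror the Exec statement above: keep a line only if it matches
    at least one of the two regular expressions; all others are dropped."""
    return bool(re.search(r"failed", raw_event) or re.search(r"error", raw_event))
```

Lines for which `keep()` returns false correspond to those passed to the drop() procedure.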
Example 97. Using drop() with $SourceName and $Message to Isolate Authentication Errors
In this example, events collected from multiple hosts and multiple sources by a centralized log server are
contained in an input file. By defining a list of targeted $SourceName values along with the presence of
certain keywords in the $Message field as criteria for authentication failures, the drop() procedure will
discard all non-matching events.
nxlog.conf
1 define AUTHSOURCES "su", "sudo", "sshd", "unix_chkpwd"
2
3 <Input combined>
4 Module im_file
5 File "tmp/central-logging"
6 <Exec>
7 if not (
8 defined($SourceName)
9 and $SourceName IN (%AUTHSOURCES%)
10 and (
11 $Message =~ /fail/
12 or $Message =~ /error/
13 or $Message =~ /illegal/
14 or $Message =~ /invalid/
15 )
16 ) drop();
17 </Exec>
18 </Input>
Example 98. Using drop() with $SourceName and $EventID to Collect all DNS Events
In this example events are to be collected from all DNS sources. Three of the four sources contain only
DNS-specific events which can be matched by their $SourceName value alone against the defined list, but
the Sysmon source can contain other non-DNS events as well. However, all Sysmon events with an Event ID
of 22 are DNS events. The conditional statement drops all events that do not have a $SourceName in the
defined list as well as those that match the Sysmon $SourceName but do not have a value of 22 for their
$EventID.
nxlog.conf
1 define DNSSOURCES "Microsoft-Windows-DNSServer", \
2 "Microsoft-Windows-DNS-Client", \
3 "systemd-resolved"
4
5 <Input combined>
6 Module im_file
7 File "tmp/central-logging"
8 <Exec>
9 if not (defined($SourceName)
10 and ($SourceName IN (%DNSSOURCES%)
11 or ($SourceName == "Microsoft-Windows-Sysmon"
12 and $EventID == 22)))
13 drop();
14 </Exec>
15 </Input>
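The conditional above can be restated as a predicate over the event record. Here is a hedged Python sketch, treating the record as a dict:

```python
DNS_SOURCES = {"Microsoft-Windows-DNSServer",
               "Microsoft-Windows-DNS-Client",
               "systemd-resolved"}

def is_dns_event(record):
    """Mirror the drop() condition above: keep events from a known DNS
    source, or Sysmon events with Event ID 22 (DNS query)."""
    source = record.get("SourceName")
    if source is None:
        return False
    return (source in DNS_SOURCES
            or (source == "Microsoft-Windows-Sysmon" and record.get("EventID") == 22))
```

Events for which the predicate is false are exactly those the configuration drops.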
Example 99. Filtering During the Output Phase to Create Multiple Event Logs from a Single Input
This example uses the same centralized log server events from the previous examples above as an input to
three outputs. Separate categories based on a single $SourceName are created and written to three
separate files. Each output instance defines a range of values for $EventId, the criteria for the
categorization into two groups: DNS Server Audit or DNS Server Analytical. The conditional statement in the
second instance uses $SeverityValue to keep only those audit events having a value greater than 2
(warnings or errors).
nxlog.conf (truncated)
1 <Input combined>
2 Module im_file
3 File "tmp/central-logging"
4 </Input>
5
6 <Output DNS_Audit>
7 Module om_file
8 File "tmp/DNS-Server-Audit"
9 <Exec>
10 if not (
11 defined($SourceName)
12 and $SourceName == "Microsoft-Windows-DNSServer"
13 and $EventId >= 513
14 and $EventId <= 582
15 ) drop();
16 </Exec>
17 </Output>
18
19 <Output DNS_Audit_Action_Required>
20 Module om_file
21 File "tmp/DNS-Server-Audit-Action-Required"
22 <Exec>
23 if not (
24 defined($SourceName)
25 and $SourceName == "Microsoft-Windows-DNSServer"
26 and $EventId >= 513
27 and $EventId <= 582
28 and $SeverityValue > 2 # Severity higher than INFO
29 [...]
For converting between CSV formats, see Complex CSV Format Conversion.
Example 100. Converting from BSD to IETF Syslog
This configuration receives log messages in the BSD Syslog format over UDP and forwards the logs in the
IETF Syslog format over TCP.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input bsd>
6 Module im_udp
7 Port 514
8 Host 0.0.0.0
9 Exec parse_syslog_bsd(); to_syslog_ietf();
10 </Input>
11
12 <Output ietf>
13 Module om_tcp
14 Host 1.2.3.4
15 Port 1514
16 </Output>
17
18 <Route bsd_to_ietf>
19 Path bsd => ietf
20 </Route>
NXLog supports three main approaches to file rotation. In each case, policies should usually be implemented
using a Schedule block.
• Most policies are implemented within the scope of an om_file module instance, where output files are being
written.
• The im_file module can be configured to rotate log files after they have been fully read.
• Any log file on the system can be rotated under the scope of an xm_fileop module or any other module. This
includes the internal log file (specified by the LogFile directive).
Example 101. Rotating om_file Log Files
Log files written by an om_file module often need to be rotated regularly. This example uses the om_file
file_name() function and xm_fileop file_cycle() procedure to rotate the output file daily, keeping a total of 7
old log files.
nxlog.conf
1 <Extension _fileop>
2 Module xm_fileop
3 </Extension>
4
5 <Output out>
6 Module om_file
7 File '/var/log/out.log'
8 <Schedule>
9 When @daily
10 <Exec>
11 file_cycle(file_name(), 7);
12 reopen();
13 </Exec>
14 </Schedule>
15 </Output>
NXLog will write its own logs to a file specified by the LogFile directive. It is good practice to set up rotation
of this file. This configuration uses the xm_fileop file_size() function. The file_cycle() procedure rotates the file
if it is larger than 5 MB. The file is also rotated weekly. No more than 8 past log files are retained.
nxlog.conf
1 define LOGFILE /opt/nxlog/var/log/nxlog/nxlog.log
2 LogFile %LOGFILE%
3
4 <Extension _fileop>
5 Module xm_fileop
6
7 # Check the log file size every hour and rotate if larger than 5 MB
8 <Schedule>
9 Every 1 hour
10 <Exec>
11 if (file_exists('%LOGFILE%') and file_size('%LOGFILE%') >= 5M)
12 file_cycle('%LOGFILE%', 8);
13 </Exec>
14 </Schedule>
15
16 # Rotate log file every week on Sunday at midnight
17 <Schedule>
18 When @weekly
19 Exec if file_exists('%LOGFILE%') file_cycle('%LOGFILE%', 8);
20 </Schedule>
21 </Extension>
There are many other ways that rotation and retention can be implemented. See the following sections for more
details and examples.
26.10.1. Rotation Policies and Intervals
• The om_file reopen() procedure will cause NXLog to reopen the output file specified by the File directive.
• The rotate_to() procedure can be used to choose a name to rotate the current file to. This procedure will
reopen the output file automatically, so there is no need to use the reopen() procedure.
• The file_cycle() procedure will move the selected file to "file.1". If "file.1" already exists, it will be moved to "
file.2", and so on. If an integer is used as a second argument, it specifies the maximum number of previous
files to keep.
WARNING: If file_cycle() is used on a file that NXLog currently has open under the scope of an
om_file module instance, the reopen() procedure must be used to continue logging to
the file specified by the File directive. Otherwise, events will continue to be logged to
the rotated file ("file.1", for example). (This is not necessary if the rotated file is the
LogFile.)
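The renaming scheme performed by file_cycle() can be modeled in a few lines. This is a rough Python sketch of the behavior described above, not the actual implementation:

```python
import os

def file_cycle(path, max_files):
    """Shift path -> path.1 -> path.2 ..., keeping at most max_files
    previous copies (sketch of xm_fileop's file_cycle())."""
    last = f"{path}.{max_files}"
    if os.path.exists(last):
        os.remove(last)  # the oldest copy falls off the end
    for i in range(max_files - 1, 0, -1):
        src = f"{path}.{i}"
        if os.path.exists(src):
            os.rename(src, f"{path}.{i + 1}")
    if os.path.exists(path):
        os.rename(path, f"{path}.1")
```

After the call, the current file no longer exists under its original name, which is why an om_file instance must reopen() to keep writing to the configured File path.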
This example uses the file_size() function to detect if a file has grown beyond a specified size. If it has, the
file_cycle() procedure is used to rotate it. The file size is checked hourly with the When directive.
nxlog.conf
1 <Extension _fileop>
2 Module xm_fileop
3 </Extension>
4
5 <Output out>
6 Module om_file
7 File '/var/log/out.log'
8 <Schedule>
9 When @hourly
10 <Exec>
11 if file_size(file_name()) >= 1M
12 {
13 file_cycle(file_name());
14 reopen();
15 }
16 </Exec>
17 </Schedule>
18 </Output>
• The Every directive rotates log files according to a specific interval specified in seconds, minutes, days, or
weeks.
• The When directive provides crontab-style scheduling, including extensions like @hourly, @daily, and
@weekly.
Example 104. Using Every and When for Time-Based Rotation
This example shows the use of the Every and When directives. The output file is rotated daily using the
rotate_to() procedure. The name is generated in the YYYY-MM-DD format according to the current server time.
nxlog.conf
1 <Output out>
2 Module om_file
3 File '/var/log/out.log'
4 <Schedule>
5 # This can likewise be used for `@weekly` or `@monthly` time periods.
6 When @daily
7
8 # The following crontab-style is the same as `@daily` above.
9 # When "0 0 * * *"
10
11 # The `Every` directive could also be used in this case.
12 # Every 24 hour
13
14 Exec rotate_to(file_name() + strftime(now(), '_%Y-%m-%d'));
15 </Schedule>
16 </Output>
In this example, logs for each year and month are stored in separate sub-directories as shown below. The
log file is rotated daily.
.../logs/YEAR/MONTH/YYYY-MM-DD.log
This is accomplished with the xm_fileop dir_make() procedure, the core strftime() function, and the om_file
rotate_to() procedure.
nxlog.conf
1 <Extension _fileop>
2 Module xm_fileop
3 </Extension>
4
5 <Output out>
6 define OUT_DIR /srv/logs
7
8 Module om_file
9 File '%OUT_DIR%/out.log'
10 <Schedule>
11 When @daily
12 <Exec>
13 # Create year/month directories if necessary
14 dir_make('%OUT_DIR%/' + strftime(now(), '%Y/%m'));
15
16 # Rotate current file into the correct directory
17 rotate_to('%OUT_DIR%/' + strftime(now(), '%Y/%m/%Y-%m-%d.log'));
18 </Exec>
19 </Schedule>
20 </Output>
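The directory and filename scheme above is pure strftime formatting. A small Python sketch of the same path construction (the `rotated_path` helper is illustrative, not part of NXLog):

```python
from datetime import datetime

def rotated_path(base, when):
    """Build <base>/<year>/<month>/<YYYY-MM-DD>.log, mirroring the
    dir_make()/rotate_to() expressions in the configuration above."""
    return when.strftime(f"{base}/%Y/%m/%Y-%m-%d.log")
```

For example, a rotation on 30 March 2011 under `/srv/logs` yields `/srv/logs/2011/03/2011-03-30.log`.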
26.10.1.3. Using Dynamic Filenames
As an alternative to traditional file rotation, output filenames can be set dynamically, based on each log event
individually. This is possible because the om_file File directive supports expressions.
NOTE: Because dynamic filenames result in events being written to multiple files with semi-arbitrary
names, they are not suitable for scenarios where a server or application expects events to be
written to a particular foo.log. In this case, normal rotation should be used instead.
Often one of now(), $EventReceivedTime, or $EventTime is used for dynamic filenames. Consider the
following points.
• The now() function uses the current server time, not when the event was created or when it was received by
NXLog. If logs are delayed, they will be stored according to the time at which the NXLog output module
instance processes them. This will not work with nxlog-processor(8) (see Offline Log Processing).
• The $EventReceivedTime field timestamp is set by the input module instance when an event is received by
NXLog. This will usually be practically the same as using now(), except in cases where there are processing
delays in the NXLog route (such as when using buffering). This can be used with nxlog-processor(8) if the
$EventReceivedTime field was previously set in the logs.
• The $EventTime field is set from a timestamp in the event, so it will contain the correct value even if the event
was delayed before reaching NXLog. Note that some parsing may be required before this field is available
(for example, the parse_syslog() procedure sets the xm_syslog $EventTime field). Note also that an incorrect
timestamp in an event record can cause the field to be unset or filled incorrectly, resulting in data written to
the wrong file.
This example accepts Syslog-formatted messages via UDP. Each message is parsed by the parse_syslog()
procedure. The $EventTime field is set from the timestamp in the Syslog header. This field is then used by
the expression in the File directive to generate an output filename for the event.
Even if messages received from clients over the network are out of order or delayed, they will still be placed
in the appropriate output files according to the timestamps.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_udp
7 Port 514
8 Host 0.0.0.0
9 Exec parse_syslog();
10 </Input>
11
12 <Output out>
13 Module om_file
14 File '/var/log/nxlog/out_' + strftime($EventTime, '%Y-%m-%d')
15 Exec to_syslog_ietf();
16 </Output>
Example 107. Attribute-Based Dynamic Filenames With om_file
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_udp
7 Host 0.0.0.0
8 Port 514
9 Exec parse_syslog();
10 </Input>
11
12 <Output out>
13 Module om_file
14 File '/tmp/logs_by_host/' + $Hostname
15 </Output>
WARNING: When using OnEOF for rotation, the rotated files must be named (or placed in a directory)
such that they will not be detected as new files and re-read by the module instance.
NOTE: If a logging service keeps a log file open for writing, the xm_exec exec() procedure should be used
to restart the service or otherwise instruct it to re-open the log file.
Example 108. Using im_file OnEOF for Input Files
In this example, files matching /var/log/app/*.log are read with an im_file module instance. When each
file has been fully read, it is rotated. The GraceTimeout directive will prevent NXLog from rotating the file
until after there have been no events for 10 seconds.
The input files are rotated by adding a timestamp suffix to the filename. For example, an input file named
/var/log/app/errors.log would be rotated to /var/log/app/errors.log_20180101T130100. The new
name does not match the wildcard specified by the File directive, so the file is not re-read.
nxlog.conf
1 <Extension _fileop>
2 Module xm_fileop
3 </Extension>
4
5 <Input app_logs_rotated>
6 Module im_file
7 File '/var/log/app/*.log'
8 <OnEOF>
9 <Exec>
10 file_rename(file_name(),
11 file_name() + strftime(now(), '_%Y%m%dT%H%M%S'));
12 </Exec>
13 GraceTimeout 10
14 </OnEOF>
15 </Input>
Example 109. Cycling One Year of Logs With file_cycle()
This example demonstrates the use of the xm_fileop file_cycle() procedure for keeping a total of 12 log files,
one for each month. Log files older than 1 year will be automatically deleted.
This policy creates the following log file structure: /var/log/foo.log for the current month,
/var/log/foo.log.1 for the previous month, and so on, up to the maximum of 12 files.
nxlog.conf
1 <Extension _fileop>
2 Module xm_fileop
3 </Extension>
4
5 <Output out>
6 Module om_file
7 File '/var/log/foo.log'
8 <Schedule>
9 When @monthly
10 <Exec>
11 file_cycle(file_name(), 12);
12 reopen();
13 </Exec>
14 </Schedule>
15 </Output>
Different policies for different events can be implemented in combination with dynamic filenames.
Example 110. Retaining Files According to Severity
This example uses the $Severity field (as set by parse_syslog(), for example) to filter events to separate
files. Then different retention policies are applied according to severity. Here, one week of debug logs, two
weeks of informational logs, and four weeks of higher severity logs are retained.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _fileop>
6 Module xm_fileop
7 </Extension>
8
9 <Input logs_in>
10 Module im_file
11 File "/var/log/messages"
12 Exec parse_syslog();
13 </Input>
14
15 <Output logs_out>
16 define OUT_DIR /opt/nxlog/var/log
17
18 Module om_file
19 File '%OUT_DIR%/' + $Severity + '.log'
20 <Schedule>
21 When @daily
22 <Exec>
23 file_cycle('%OUT_DIR%/DEBUG.log', 7);
24 file_cycle('%OUT_DIR%/INFO.log', 14);
25 file_cycle('%OUT_DIR%/WARNING.log', 28);
26 file_cycle('%OUT_DIR%/ERROR.log', 28);
27 file_cycle('%OUT_DIR%/CRITICAL.log', 28);
28 reopen();
29 </Exec>
30 </Schedule>
31 </Output>
Example 111. Using bzip2 With exec_async()
In this example, the file size of the output file is checked hourly with the om_file file_size() function. If the
size is over the limit, then:
1. a newfile module variable is set to the name the current file will be rotated to,
2. the om_file rotate_to() procedure renames the current output file to the name set in newfile,
3. the module re-opens the original file specified by the File directive and continues logging, and
4. the xm_exec exec_async() procedure calls bzip2 on the rotated-out file (without waiting for the command
to complete).
nxlog.conf (truncated)
1 <Input in>
2 Module im_null
3 </Input>
4
5 <Extension _exec>
6 Module xm_exec
7 </Extension>
8
9 <Extension _fileop>
10 Module xm_fileop
11 </Extension>
12
13 <Output out>
14 Module om_file
15 File '/opt/nxlog/var/log/app.log'
16 <Schedule>
17 When @hourly
18 <Exec>
19 if out->file_size() > 15M
20 {
21 set_var('newfile', file_name() + strftime(now(), '_%Y%m%d%H%M%S'));
22 rotate_to(get_var('newfile'));
23 exec_async('/bin/bzip2', get_var('newfile'));
24 }
25 </Exec>
26 </Schedule>
27 [...]
Example 112. Using file_remove() to Delete Old Files
This example uses file_remove() to remove any files older than 30 days.
nxlog.conf
1 <Input in>
2 Module im_null
3 </Input>
4
5 <Output logs_out>
6 Module om_file
7 File '/var/log/'+ strftime(now(),'%Y%m%d') + '.log'
8 <Schedule>
9 When @daily
10
11 # Delete logs older than 30 days (24x60x60x30)
12 Exec file_remove('/var/log/*.log', now() - 2592000);
13 </Schedule>
14 </Output>
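The age-based cleanup performed by file_remove() can be sketched with standard filesystem calls. A rough Python model of the behavior, not the actual implementation:

```python
import glob
import os
import time

def file_remove(pattern, older_than):
    """Delete files matching the wildcard pattern whose modification time
    is before older_than (seconds since the epoch), mirroring xm_fileop's
    file_remove() as used in the configuration above."""
    for path in glob.glob(pattern):
        if os.path.getmtime(path) < older_than:
            os.remove(path)
```

Calling it with `time.time() - 2592000` as the threshold removes files not modified in the last 30 days, matching the scheduled Exec statement.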
See also Extracting Data, a closely related topic, for more examples of classification.
Example 113. Classifying a Windows Security EventLog Message
This example classifies Windows Security login failure events with Event ID 4625 (controlled by the "Audit
logon events" audit policy setting). If a received event has that ID, it is classified as a failed authentication
attempt and the $AccountName field is set to the value of $TargetUserName.
Field Value
$EventType AUDIT_FAILURE
$EventID 4625
$SourceName Microsoft-Windows-Security-Auditing
$Channel Security
$Category Logon
$TargetUserSid S-1-0-0
$TargetUserName linda
$TargetDomainName WINHOST
$Status 0xc000006d
$FailureReason %%2313
$SubStatus 0xc000006a
$LogonType 2
nxlog.conf
1 <Input in>
2 Module im_msvistalog
3 <Exec>
4 if ($EventID == 4625) and
5 ($SourceName == 'Microsoft-Windows-Security-Auditing')
6 {
7 $TaxonomyAction = 'Authenticate';
8 $TaxonomyStatus = 'Failure';
9 $AccountName = $TargetUserName;
10 }
11 </Exec>
12 </Input>
Field Value
$TaxonomyAction Authenticate
$TaxonomyStatus Failure
$AccountName linda
Example 114. Regular Expression Message Classification
When the contents of the $Message field match the regular expression, the $AccountName and
$AccountID fields are filled with the appropriate values from the referenced captured sub-strings.
Additionally, the value LoginEvent is stored in the $Action field.
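The configuration for this example is not reproduced in this excerpt. As a hedged illustration only, the classification could look like the following Python sketch; the log line and the regular expression are hypothetical assumptions, not the original example's pattern.

```python
import re

# Hypothetical event record and message format -- illustrative only
record = {"Message": "login successful for user linda (id=1001)"}

m = re.search(r"login successful for user (?P<name>\S+) \(id=(?P<uid>\d+)\)",
              record["Message"])
if m:
    # Fill fields from the captured sub-strings and classify the event
    record["AccountName"] = m.group("name")
    record["AccountID"] = m.group("uid")
    record["Action"] = "LoginEvent"
```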
220
Example 115. Classifying With pm_pattern
The above pattern matching rule can be defined in the pm_pattern module's XML format in the following
way, which will accomplish the same result.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input uds>
6 Module im_uds
7 UDS /dev/log
8 Exec parse_syslog_bsd();
9 </Input>
10
11 <Processor pattern>
12 Module pm_pattern
13 PatternFile /var/lib/nxlog/patterndb.xml
14 </Processor>
15
16 <Output file>
17 Module om_file
18 File "/var/log/messages"
19 Exec to_syslog_bsd();
20 </Output>
21
22 <Route uds_to_file>
23 Path uds => pattern => file
24 </Route>
26.12. Parsing Multi-Line Messages
Multi-line messages such as exception logs and stack traces are quite common in logs. Unfortunately these log
messages are often stored in files or forwarded over the network without any encapsulation. In this case, the
newline characters in the messages cannot be correctly parsed by simple line-based parsers, which treat every
line as a separate event. A multi-line message can often be identified by:
• a header in the first line (with timestamp and severity field, for example),
• a closing character sequence marking the end, and
• a fixed line count.
Based on this information, NXLog can be configured to reconstruct the original messages, creating a single event
for each multi-line message.
26.12.1. xm_multiline
NXLog provides xm_multiline for multi-line parsing; this dedicated extension module is the recommended way to
parse multi-line messages. It supports header lines, footer lines, and fixed line counts. Once configured, the
xm_multiline module instance can be used as a parser via the input module’s InputType directive.
This configuration creates a single event record with the matching HeaderLine and all successive lines until
an EndLine is received.
nxlog.conf
1 <Extension multiline_parser>
2 Module xm_multiline
3 HeaderLine "---------------"
4 EndLine "END------------"
5 </Extension>
6
7 <Input in>
8 Module im_file
9 File "/var/log/app-multiline.log"
10 InputType multiline_parser
11 </Input>
It is also possible to use regular expressions with the HeaderLine and EndLine directives.
Example 117. Using Regular Expressions With xm_multiline
Here, a new event record is created beginning with each line that matches the regular expression.
nxlog.conf
1 <Extension tomcat_parser>
2 Module xm_multiline
3 HeaderLine /^\d{4}\-\d{2}\-\d{2} \d{2}\:\d{2}\:\d{2},\d{3} \S+ \[\S+\] \- .*/
4 </Extension>
5
6 <Input log4j>
7 Module im_file
8 File "/var/log/tomcat6/catalina.out"
9 InputType tomcat_parser
10 </Input>
NOTE: Because the EndLine directive is not specified in this configuration, the xm_multiline parser
cannot know that a log message is finished until it receives the HeaderLine of the next
message. The log message is kept in the buffers, waiting to be forwarded, until either a
new log message is read or the im_file module instance's PollInterval has expired. See the
xm_multiline AutoFlush directive.
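The header-based reassembly that xm_multiline performs can be sketched as a simple grouping pass. This Python model is illustrative (the header regex is simplified), not the module's implementation:

```python
import re

HEADER = re.compile(r"^\d{4}-\d{2}-\d{2} ")  # simplified header pattern

def join_multiline(lines):
    """Group physical lines into logical events: each header line starts a
    new event; non-header lines are appended to the current event."""
    events, current = [], None
    for line in lines:
        if HEADER.match(line):
            if current is not None:
                events.append(current)
            current = line
        elif current is not None:  # continuation lines before any header are dropped
            current += "\n" + line
    if current is not None:
        events.append(current)
    return events
```

Note how the final event is only appended after the input ends, which corresponds to the buffering behavior described in the note above.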
Example 118. Parsing Multi-Line Messages with Module Variables
This example saves the matching line and successive lines in the saved variable. When another matching
line is read, an internal log message is generated with the contents of the saved variable.
nxlog.conf
1 <Input log4j>
2 Module im_file
3 File "/var/log/tomcat6/catalina.out"
4 <Exec>
5 if $raw_event =~ /(?x)^\d{4}\-\d{2}\-\d{2}\ \d{2}\:\d{2}\:\d{2},\d{3}\ \S+
6 \ \[\S+\]\ \-\ .*/
7 {
8 if defined(get_var('saved'))
9 {
10 $tmp = $raw_event;
11 $raw_event = get_var('saved');
12 set_var('saved', $tmp);
13 delete($tmp);
14 log_info($raw_event);
15 }
16 else
17 {
18 set_var('saved', $raw_event);
19 drop();
20 }
21 }
22 else
23 {
24 set_var('saved', get_var('saved') + "\n" + $raw_event);
25 drop();
26 }
27 </Exec>
28 </Input>
NOTE: As with the previous example, a log message is kept in the saved variable, and not
forwarded, until a new log message is read.
The poor man’s tool for rate limiting is the sleep() procedure.
Example 119. Rate Limiting With the sleep() Procedure
In the following example, sleep() is invoked with 500 microseconds. This means that the input module will
be able to read at most 2000 messages per second.
nxlog.conf
1 <Input in>
2 Module im_tcp
3 Host 0.0.0.0
4 Port 1514
5 Exec sleep(500);
6 </Input>
This is not very precise, because the module can do additional processing which adds to the total
execution time, but it gets fairly close.
WARNING: It is not recommended to use rate limiting on a route that reads logs over UDP.
The traffic shaping script can be downloaded from the nxlog-public/contrib repository.
The script does not require configuring NXLog, but it needs to be configured to run at startup with tools like
crontab or rc.local.
To configure running the script with crontab, the @reboot task should be added to the /etc/crontab file.
/etc/crontab
1 @reboot /usr/local/sbin/traffic-shaper.sh
To configure running the script with rc.local, the script path should be added to the /etc/rc.local file.
/etc/rc.local
1 /usr/local/sbin/traffic-shaper.sh
The traffic shaper is tied to the destination port at the network level and can shape traffic in accordance with
priorities. For example, high priority can be configured for a database server and low priority for a backup
system.
For more information about Linux traffic control, see the Traffic Control HOWTO section on The Linux
Documentation Project website.
Example 120. Simple Rewrite Statement
This statement, when used in an Exec directive, will apply the replacement directly to the $raw_event field.
In this case, a parsing procedure like parse_syslog() would not be used.
1 if $raw_event =~ /^(aaaa)(replaceME)(.+)/
2 $raw_event = $1 + 'replaceMENT' + $3;
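The same capture-and-replace rewrite can be checked in Python. A minimal sketch of the statement above:

```python
import re

raw_event = "aaaareplaceME rest of the message"

# Keep the first and third captured groups, substitute the middle one
m = re.match(r"^(aaaa)(replaceME)(.+)", raw_event)
if m:
    raw_event = m.group(1) + "replaceMENT" + m.group(3)
```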
This example will convert a timestamp field to a different format. Like the previous example, the goal is to
modify the $raw_event field directly, rather than use other fields and then a procedure like to_json() to
update $raw_event.
The input log format is line-based, with whitespace-separated fields. The first field is a timestamp
expressed as seconds since the epoch.
Input Sample
1301471167.225121 AChBVvgs1dfHjwhG8 141.143.210.102 5353 224.0.0.251 5353 udp dns - - - S0 - -
0 D 1 73 0 0 (empty)↵
In the output module instance Exec directive, the regular expression will match and capture the first field
from the line, and remove it. This captured portion is parsed with the parsedate() function and used to set
the $EventTime field. This field is then prepended to the $raw_event field to replace the previously
removed field.
nxlog.conf
1 <Input in>
2 Module im_file
3 File "conn.log"
4 </Input>
5
6 <Output out>
7 Module om_tcp
8 Host 192.168.0.1
9 Port 1514
10 <Exec>
11 if $raw_event =~ s/^(\S+)//
12 {
13 $EventTime = parsedate($1);
14 $raw_event = strftime($EventTime, 'YYYY-MM-DDThh:mm:ss.sTZ') +
15 $raw_event;
16 }
17 </Exec>
18 </Output>
Output Sample
2011-03-30T00:46:07.225121-07:00 AChBVvgs1dfHjwhG8 141.143.210.102 5353 224.0.0.251 5353 udp
dns - - - S0 - - 0 D 1 73 0 0 (empty)↵
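The timestamp rewrite can be reproduced in Python. A hedged sketch of the same steps: strip the leading epoch field, parse it, and prepend it in a readable format (UTC is used here for determinism, whereas NXLog's strftime() would apply the server's local time zone, as in the -07:00 sample above):

```python
import re
from datetime import datetime, timezone

raw_event = ("1301471167.225121 AChBVvgs1dfHjwhG8 141.143.210.102 5353 "
             "224.0.0.251 5353 udp dns - - - S0 - - 0 D 1 73 0 0 (empty)")

# Capture and remove the leading whitespace-delimited epoch timestamp
m = re.match(r"^(\S+)", raw_event)
ts = datetime.fromtimestamp(float(m.group(1)), tz=timezone.utc)

# Prepend the reformatted timestamp to the remainder of the line
raw_event = ts.strftime("%Y-%m-%dT%H:%M:%S.%f%z") + raw_event[m.end():]
```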
Example 122. Rewrite Using Fields
In this example, each Syslog message is received via UDP and parsed with parse_syslog_bsd(). Then, if the
$Message field matches the regular expression, the $SeverityValue field is modified. Finally, the
to_syslog_bsd() procedure generates $raw_event from the fields.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input udp>
6 Module im_udp
7 Port 514
8 Host 0.0.0.0
9 Exec parse_syslog_bsd();
10 </Input>
11
12 <Output file>
13 Module om_file
14 File "/var/log/logmsg.txt"
15 <Exec>
16 if $Message =~ /error/ $SeverityValue = syslog_severity_value("error");
17 to_syslog_bsd();
18 </Exec>
19 </Output>
20
21 <Route syslog_to_file>
22 Path udp => file
23 </Route>
The simplest way to rename or delete fields is to use the NXLog language and the Exec directive.
This statement uses the rename_field() procedure to rename the $user field to $AccountName.
1 rename_field($user, $AccountName);
This statement uses the delete() procedure to delete the $Serial field.
1 delete($Serial);
Alternatively, the xm_rewrite extension module (available in NXLog Enterprise Edition) can be used to rename or
delete fields.
Example 125. Using xm_rewrite to Whitelist and Rename Fields
This example uses the parse_syslog() procedure to create a set of Syslog fields in the event record. It then
uses the Keep directive to whitelist a set of fields, deleting any field that is not in the list. Finally the Rename
directive is used to rename the $EventTime field to $Timestamp. The resulting event record is converted to
JSON and sent out via TCP.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Extension rewrite>
6 Module xm_rewrite
7 Keep EventTime, Severity, Hostname, SourceName, Message
8 Rename EventTime, Timestamp
9 </Extension>
10
11 <Input in>
12 Module im_file
13 File '/var/log/messages'
14 Exec parse_syslog(); rewrite->process();
15 </Input>
16
17 <Output out>
18 Module om_tcp
19 Host 10.0.0.1
20 Port 1514
21 Exec to_json();
22 </Output>
23
24 <Route r>
25 Path in => out
26 </Route>
Here is an example Extension block that uses the Delete directive to delete all the severity fields. This could
be used to prevent severity-based matching (during later processing) on an event source that does not set
severity values correctly.
nxlog.conf
1 <Extension rewrite>
2 Module xm_rewrite
3 Delete SyslogSeverityValue, SyslogSeverity, SeverityValue, Severity
4 </Extension>
26.15. Timestamps
The NXLog core provides functions for parsing timestamps that return datetime values, along with functions for
generating formatted timestamps from datetime values.
Example 127. Parsing a Timestamp With parsedate()
Consider the following line-based input sample. Each record begins with a timestamp followed by a tab.
Input Sample
2016-10-11T22:14:15.003Z ⇥ machine.example.com ⇥ An account failed to log on.↵
This example configuration uses a regular expression to capture the string up to the first tab. Then the
parsedate() function is used to parse the resulting string and set the $EventTime field to the corresponding
datetime value. This value can be converted to a timestamp string as required in later processing, either
explicitly or as defined by the global DateFormat directive (see Formatting Timestamps).
nxlog.conf
1 <Input in>
2 Module im_file
3 File 'in.log'
4 Exec if $raw_event =~ /^([^\t]+)\t/ $EventTime = parsedate($1);
5 </Input>
TIP The parsedate() function is especially useful if the timestamp format varies within the events being processed. A timestamp of any supported format will be parsed. In this example, the timestamp must be at the beginning of the event and followed by a tab character to be matched by the regular expression.
Sometimes a log source will contain a few events with invalid or unexpected formatting. If parsedate() fails to
parse the input string, it will return an undefined datetime value. This allows the user to configure a fallback
timestamp.
This example statement uses a vague regular expression that may in some cases match an invalid string. If
parsedate() fails to parse the timestamp, it will return an undefined datetime value. In this case, the final
line below will set $EventTime to the current time.
1 if $raw_event =~ /^(\S+)\s+(\S+)/
2 $EventTime = parsedate($1 + " " + $2);
3
4 # Make sure $EventTime is set
5 if not defined($EventTime) $EventTime = now();
For parsing more exotic formats, the strptime() function can be used.
Example 129. Using strptime() to Parse Timestamps
In this input sample, the date and time are two distinct fields delimited by a tab. It also uses a non-standard
single digit format instead of fixed width with double digits.
Input Sample
2011-5-29 ⇥ 0:3:2 GMT ⇥ WINDOWSDC ⇥ An account failed to log on.↵
To parse this, a regular expression can be used to capture the timestamp string. This string is then parsed
with the strptime() function.
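A statement along these lines could do this (a sketch; the capture groups and the strptime() format string are assumptions based on the sample above):

```
# Capture the date and time fields, then parse them with strptime();
# %H/%M/%S also accept the single-digit values used in this sample
if $raw_event =~ /^(\d+-\d+-\d+)\t(\d+:\d+:\d+)/
    $EventTime = strptime($1 + ' ' + $2, '%Y-%m-%d %H:%M:%S');
```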
WARNING Reliably applying timezone offsets is difficult due to complications like daylight saving time (DST) and networking and processing delays. For this reason, it is best to use clock synchronization (such as NTP) and timezone-aware timestamps at the log source when possible.
The simplest solution for incorrect timestamps is to replace them with the time when the event was received by
NXLog. This is a good option for devices with untrusted clocks on the local network that send logs to NXLog in
real-time. The $EventReceivedTime field is automatically added to each event record by NXLog; this field can be
stored alongside the event’s own timestamp (normally $EventTime) if all fields are preserved when the event is
stored/forwarded. Alternatively, this field can be used as the event timestamp as shown below. This would have
the effect of influencing the timestamp used on most outputs, such as with the to_syslog_ietf() procedure.
This configuration accepts Syslog messages via UDP with the im_udp module. Events are parsed with the
parse_syslog() procedure, which adds an EventTime field from the Syslog header timestamp. The
$EventTime value, however, is replaced by the timestamp set by NXLog in the $EventReceivedTime field.
Any later processing that uses the $EventTime field will operate on the updated timestamp. For example, if
the to_syslog_ietf() procedure is used, the resulting IETF Syslog header will contain the
$EventReceivedTime timestamp.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input syslog>
6 Module im_udp
7 <Exec>
8 parse_syslog();
9 $EventTime = $EventReceivedTime;
10 </Exec>
11 </Input>
In some edge cases, a UTC timestamp that does not have the timezone specified is parsed as local time. This can happen if BSD Syslog timestamps are in UTC, or when reading a non-timezone-aware datetime value with im_odbc. In this case, it is necessary to either manually re-parse (see Parsing Timestamps) or apply a corresponding reverse offset.
This statement uses the parsedate() and strftime() functions to apply a reverse offset after an incorrect
local-to-UTC timezone conversion. To reduce the likelihood of an incorrect offset during the daylight saving
time (DST) transition, this should be done in the Input module instance which is collecting the events (see
the warning above).
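Such a statement might look like the following (a sketch; it assumes $EventTime holds a value that was wrongly interpreted as local time, and re-parses its wall-clock components as UTC):

```
# Format the misparsed value back to its original wall-clock string,
# then re-parse it with an explicit UTC designator
$EventTime = parsedate(strftime($EventTime, '%Y-%m-%dT%H:%M:%S') + 'Z');
```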
For the general case of adjusting timestamps, the plus (+) and minus (-) operators can be used to adjust a
timestamp by a specified number of seconds.
WARNING This simple method may not be suitable for correction of a timezone that uses daylight saving time (DST). In that case the required offset may change based on whether DST is in effect.
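For example, a statement like this shifts a timestamp forward by a fixed number of seconds (the one-hour offset is an assumption for illustration):

```
# Add one hour (3600 seconds) to the event timestamp
$EventTime = $EventTime + 3600;
```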
Example 133. Using the Default Timestamp Formatting
Consider an event record with an $EventTime field (as a datetime value) and a $Message field. Note that
the table below shows the $EventTime value as it is stored internally: as microseconds since the epoch.
Field Value
$EventTime 1493425133541851
$Message EXT4-fs (dm-0): mounted filesystem with ordered data mode.
The following output module instance uses the to_json() procedure without specifying the timestamp
format.
nxlog.conf
1 <Output out>
2 Module om_file
3 File 'out.log'
4 Exec to_json();
5 </Output>
The output of the $EventTime field in this case will depend on the DateFormat directive. The default
DateFormat is YYYY-MM-DD hh:mm:ss (local time).
Output Sample
{
"EventTime": "2017-04-29 02:18:53",
"Message": "EXT4-fs (dm-0): mounted filesystem with ordered data mode."
}
NOTE A different timestamp may be used in some cases, depending on the procedure used to convert the field and the output module. The to_syslog_bsd() procedure, for example, will use the $EventTime value to generate an RFC 3164 format timestamp regardless of how the DateFormat directive is set.
Alternatively, the strftime() function can be used to explicitly convert a datetime value to a string with the
required format.
Example 134. Using strftime() to Format Timestamps
Again, consider an event record with an $EventTime field (as a datetime value) and a $Message field. In this
example, the strftime() function is used with a format string (see the strftime(3) manual) to convert
$EventTime to a string in the local time zone. Then the to_json() procedure is used to set the $raw_event
field.
nxlog.conf
1 <Output out>
2 Module om_file
3 File 'out.log'
4 <Exec>
5 $EventTime = strftime($EventTime, '%Y-%m-%dT%H:%M:%S%z');
6 to_json();
7 </Exec>
8 </Output>
Output Sample
{
"EventTime": "2017-04-29T02:18:53+0200",
"Message": "EXT4-fs (dm-0): mounted filesystem with ordered data mode."
}
NXLog Enterprise Edition supports a few additional format strings for formats that the stock C strftime() does not
offer, including formats with fractional seconds and in UTC time. See the Reference Manual strftime()
documentation for the list.
The following statement will convert $EventTime to a timestamp format with fractional seconds and in UTC
(regardless of the current time zone).
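The statement referred to here might look like the following (a sketch; the format string is one of the NXLog Enterprise Edition additions — check the Reference Manual strftime() documentation for the exact list):

```
# Convert to an ISO-style UTC timestamp with fractional seconds
$EventTime = strftime($EventTime, 'YYYY-MM-DDThh:mm:ss.sUTC');
```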
Chapter 27. Forwarding and Storing Logs
This chapter discusses the configuration of NXLog outputs, including:
Syslog
There are two Syslog formats, the older BSD Syslog (RFC 3164) and the newer IETF Syslog (RFC 5424). The
transport protocol in Syslog can be UDP, TCP, or SSL. The xm_syslog module provides procedures for
generating Syslog messages. For more information, see Generating Syslog.
This configuration uses the to_syslog_ietf() procedure to convert the corresponding fields in the event
record to a Syslog message in IETF format. The result is forwarded via TCP by the om_tcp module.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Output out>
6 Module om_tcp
7 Host 192.168.1.1
8 Port 1514
9 Exec to_syslog_ietf();
10 </Output>
Syslog Snare
The Snare agent format is a special format on top of BSD Syslog which is used and understood by several
tools and log analyzer frontends. This format is most useful when forwarding Windows EventLog data in
conjunction with im_mseventlog and/or im_msvistalog. The to_syslog_snare() procedure can construct Syslog
Snare formatted messages. For more information, see Generating Snare.
Example 137. Generating Syslog Snare and Sending via UDP
In this example, the to_syslog_snare() procedure converts the corresponding fields in the event record to
Snare format. The messages are then forwarded via UDP by the om_udp module.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Output out>
6 Module om_udp
7 Host 192.168.1.1
8 Port 514
9 Exec to_syslog_snare();
10 </Output>
With this configuration, NXLog will send the fields in the event record via UDP in GELF format.
nxlog.conf
1 <Extension _gelf>
2 Module xm_gelf
3 </Extension>
4
5 <Output out>
6 Module om_udp
7 Host 127.0.0.1
8 Port 12201
9 OutputType GELF_UDP
10 </Output>
JSON
This is one of the most popular formats for interchanging data between various systems. The xm_json
module provides procedures for generating JSON messages by using data from the event record.
Example 139. Generating JSON and sending via TCP
With this configuration, NXLog will send the fields of the event record via TCP in JSON format.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Output out>
6 Module om_tcp
7 Host 192.168.1.1
8 Port 1514
9 Exec to_json();
10 </Output>
UDP
To send logs as UDP datagrams, use the om_udp module.
WARNING UDP packets can be dropped by the operating system because the protocol does not guarantee reliable message delivery. It is recommended to use TCP or TLS/SSL instead if message loss is a concern.
Example 140. Using the om_udp Module
This example provides configurations to forward data to the specified host via UDP.
The configuration below converts and forwards log messages in Graylog Extended Log Format (GELF).
nxlog.conf
1 <Extension gelf>
2 Module xm_gelf
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/tmp/input"
8 </Input>
9
10 <Output out>
11 Module om_udp
12 Host 192.168.1.1
13 Port 514
14 OutputType GELF_UDP
15 </Output>
The below configuration sample forwards data via UDP in JSON format.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/tmp/input"
8 </Input>
9
10 <Output out>
11 Module om_udp
12 Host 192.168.1.1
13 Port 514
14 Exec to_json();
15 </Output>
TCP
To send logs over TCP, use the om_tcp module.
Example 141. Using the om_tcp Module
In this example, log messages are forwarded to the specified host via TCP.
The configuration below forwards data as Syslog messages in IETF format.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/tmp/input"
8 </Input>
9
10 <Output out>
11 Module om_tcp
12 Host 192.168.1.1
13 Port 1514
14 Exec to_syslog_ietf();
15 </Output>
The below configuration sample forwards the data as-is, without any additional processing.
nxlog.conf
1 <Input in>
2 Module im_file
3 File "/tmp/input"
4 </Input>
5
6 <Output out>
7 Module om_tcp
8 Host 192.168.0.127
9 Port 10500
10 </Output>
SSL/TLS
To send logs over a trusted, secure SSL connection, use the om_ssl module.
Example 142. Using the om_ssl Module
This example provides nearly identical behavior to the TCP example above, but in this case SSL is used
to securely transmit the data.
The configuration below enables forwarding raw data over SSL/TLS using a self-signed certificate.
nxlog.conf
1 <Input in>
2 Module im_file
3 File '/tmp/input'
4 </Input>
5
6 <Output out>
7 Module om_ssl
8 Host 192.168.0.127
9 Port 10500
10 OutputType Binary
11 # Allows using self-signed certificates
12 AllowUntrusted TRUE
13 # Certificate from the peer host
14 CAFile /tmp/peer_cert.pem
15 # Certificate file
16 CertFile /tmp/cert.pem
17 # Keypair file
18 CertKeyFile /tmp/key.pem
19 </Output>
The below configuration sample forwards data over SSL/TLS in JSON format using a trusted CA
certificate.
nxlog.conf
1 <Input in>
2 Module im_file
3 File '/tmp/input'
4 </Input>
5
6 <Extension json>
7 Module xm_json
8 </Extension>
9
10 <Output out>
11 Module om_ssl
12 Host 192.168.0.127
13 Port 10500
14 # Do not allow untrusted (self-signed) certificates
15 AllowUntrusted FALSE
16 # Certificate from the peer host
17 CAFile /tmp/peer_cert.pem
18 # Certificate file
19 CertFile /tmp/cert.pem
20 # Keypair file
21 CertKeyFile /tmp/key.pem
22 Exec to_json();
23 </Output>
HTTP(S)
To send logs over an HTTP or HTTPS connection, use the om_http module.
Example 143. Using the om_http Module
This example provides configurations for forwarding data via HTTP to the specified HTTP address.
Using the below configuration sample, NXLog will send raw data in text form using a POST request for
each log message.
nxlog.conf
1 <Input in>
2 Module im_file
3 File '/tmp/input'
4 </Input>
5
6 <Output out>
7 Module om_http
8 URL http://server:8080/
9 </Output>
The configuration below will forward data in Graylog Extended Log Format (GELF) over HTTPS using a
trusted certificate.
nxlog.conf
1 <Extension gelf>
2 Module xm_gelf
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/tmp/input"
8 </Input>
9
10 <Output out>
11 Module om_http
12 URL https://server:8080/
13 # Do not allow untrusted (self-signed) certificates
14 HTTPSAllowUntrusted FALSE
15 # Certificate from the peer host
16 HTTPSCAFile /tmp/peer_cert.pem
17 # Certificate file
18 HTTPSCertFile /tmp/cert.pem
19 # Keypair file
20 HTTPSCertKeyFile /tmp/key.pem
21 OutputType GELF
22 </Output>
Example 144. Using the om_file Module
This configuration writes log messages to the specified file. No additional processing is performed by
the output module instance.
nxlog.conf
1 <Output out>
2 Module om_file
3 File "/var/log/out.log"
4 </Output>
With this configuration, log messages are written to the specified socket without any additional
processing.
nxlog.conf
1 <Output out>
2 Module om_uds
3 UDS /dev/log
4 </Output>
This configuration uses libdbi and the pgsql driver to insert events into the specified database. The SQL
statement references fields in the event record to be added to the database.
nxlog.conf
1 <Output out>
2 Module om_dbi
3 SQL INSERT INTO log (facility, severity, hostname, timestamp, application, \
4 message) \
5 VALUES ($SyslogFacility, $SyslogSeverity, $Hostname, '$EventTime', \
6 $SourceName, $Message)
7 Driver pgsql
8 Option host 127.0.0.1
9 Option username dbuser
10 Option password secret
11 Option dbname logdb
12 </Output>
Example 147. Using the om_odbc Module
This example inserts events into the database specified by the ODBC connection string. In this case, the
sql_exec() and sql_fetch() functions are used to interact with the database.
nxlog.conf
1 <Output out>
2 Module om_odbc
3 ConnectionString DSN=mysql_ds;username=mysql;password=mysql;database=logdb;
4 <Exec>
5 if ( sql_exec("INSERT INTO log (facility, severity, hostname, timestamp,
6 application, message) VALUES (?, ?, ?, ?, ?, ?)",
7 1, 2, "host", now(), "app", $raw_event) == TRUE )
8 {
9 if ( sql_fetch("SELECT max(id) as id from log") == TRUE )
10 {
11 log_info("ID: " + $id);
12 if ( sql_fetch("SELECT message from log WHERE id=?", $id) == TRUE )
13 log_info($message);
14 }
15 }
16 </Exec>
17 </Output>
This configuration executes the specified command and writes log messages to its standard input.
nxlog.conf
1 <Output out>
2 Module om_exec
3 Command /usr/bin/someprog
4 Arg -
5 </Output>
Chapter 28. Centralized Log Collection
Centralized log collection, log aggregation, or log centralization is the process of sending event log data to a
dedicated server or service for storage, and optionally for search and analytics. Storing logs on a centralized
system offers several benefits over storing the data locally.
• Event data can be accessed even if the originating server is offline, compromised, or decommissioned.
• Data can be analyzed and correlated across more than one system.
• It is more difficult for malicious actors to remove evidence from logs that have already been forwarded.
• Incident investigation and auditing is easier because all event data is collected in a single location.
• Scalable, high-availability, and redundant solutions are easier to implement and maintain, since they can be implemented at the central collection server.
• Compliance with internal and external standards for log data retention only needs to be managed at a single point.
28.1. Architecture
The following diagram depicts an example of centralized log collection architecture. The single, central server
collects logs from other servers, applications, and network devices. After collection, the logs can be forwarded as
required for further analysis or storage.
This chapter is concerned with the left half of the diagram: collecting logs from clients.
In practice, network topology and other requirements may dictate that additional servers, such as relays, be added for log handling. For those cases, functionality beyond what is covered here (such as buffering) may be necessary.
28.2. Collection Modes
In the context of clients generating logs, NXLog supports both agent-based and agent-less log collection, and it is
possible to configure a system to use both in mixed mode. In brief, these modes differ as follows (see the Log
Processing Modes section for more details).
Agent-based log collection requires that an NXLog agent be installed on the client. With a local agent, collection is
much more flexible, providing features such as filtering on the source system to send only the required data,
format conversion, compression, encryption, and delivery reliability, among others. It is generally recommended
that NXLog be deployed as an agent wherever possible.
With agent-based log collection, NXLog agents are installed on both the client and the central server. Here,
the im_batchcompress and om_batchcompress modules are used to transport logs both compressed and
encrypted. These modules preserve all the fields in the event record.
nxlog.conf (Client)
1 <Output batch>
2 Module om_batchcompress
3 Host 192.168.56.101
4 Port 2514
5 UseSSL TRUE
6 CAFile /opt/openssl_rootca/rootCA.pem
7 CertFile /opt/openssl_server/server.crt
8 CertKeyFile /opt/openssl_server/server.key
9 </Output>
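A corresponding server-side sketch might use im_batchcompress to receive the batches (an assumption-based illustration: the directive values, certificate paths, and output file mirror the client configuration and should be checked against the im_batchcompress reference):

```
<Input batch>
    Module       im_batchcompress
    ListenAddr   0.0.0.0
    Port         2514
    CAFile       /opt/openssl_rootca/rootCA.pem
    CertFile     /opt/openssl_server/server.crt
    CertKeyFile  /opt/openssl_server/server.key
</Input>

<Output file>
    Module  om_file
    File    "/var/log/batch.log"
</Output>

<Route r>
    Path    batch => file
</Route>
```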
In agent-less mode, there is no NXLog agent installed on the client. Instead, the client forwards events to the
central server in a native format. On the central server, NXLog accepts and parses the logs received. Often there
is limited control over the log format used, and it may not be possible to implement encryption, compression,
delivery reliability, or other features.
Example 150. Collecting UDP Syslog Logs in Agent-Less Mode
With agent-less collection, NXLog is installed on the central server but not on the client. Clients can be
configured to send UDP Syslog messages to the central server using their native logging functionality. The
im_udp module below could be replaced with im_tcp or im_ssl according to what protocol is supported by
the clients.
WARNING UDP transport does not provide any guarantee of delivery. Network congestion or other issues may result in lost log data.
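A minimal server-side configuration for this scenario could look like the following (a sketch; the output file path is an assumption):

```
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input syslog_udp>
    Module  im_udp
    Host    0.0.0.0
    Port    514
    Exec    parse_syslog();
</Input>

<Output file>
    Module  om_file
    File    "/var/log/remote.log"
</Output>

<Route r>
    Path    syslog_udp => file
</Route>
```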
It is common for logs to be collected using both modes among the various clients, network devices, relays, and
log servers in a network. For example, an NXLog relay may be configured to collect logs from both agents and
agent-less sources and perform filtering and processing before forwarding the data to a central server.
28.3. Requirements
Various logging requirements may dictate particular details about the chosen logging architecture. The following
are important things to consider when deciding how to set up centralized log collection. In some cases, these
requirements can only be met by using agent-based collection.
Reliability
UDP does not guarantee message delivery, and should be avoided if log data loss is not acceptable. TCP (and therefore TLS, which runs over TCP) provides reliable, ordered delivery at the transport layer. Furthermore, with agent-based collection NXLog can provide application-level guaranteed delivery. See Reliable Network Delivery for more information.
Structured data
Correlating data across multiple log sources requires parsing event data into a common set of fields. Event
fields are a core part of NXLog processing, and an NXLog agent can be configured to parse events at any point
along their path to the central server. Often, parsing is done as early as possible (at the source, for agent-
based collection) to simplify later categorization and to reduce processing load on log servers as logs are
received. See Parsing Various Formats and Message Classification.
Encryption
To maintain confidentiality of log data, TLS can be used during transport.
Compression
If bandwidth is a concern, log data compression may be desirable. Most event data is highly compressible,
allowing bandwidth requirements to be reduced significantly. The im_batchcompress and om_batchcompress
modules provide batched, compressed transport of log data between NXLog agents.
Storage format
Normally, data should be converted to, and stored in, a common format when dealing with heterogeneous log sources.
28.4. Data Formats
When using agent-based collection, it is often desirable to convert the data prior to transfer. In this case,
structured data is often sent using one of these formats.
JSON
JSON is easy to generate and parse, and has become a de facto standard for logging as well. It has some limitations, such as the lack of a native datetime type. See the JSON section.
Agent-less collection is restricted to formats supported by the clients. The following are a few common formats,
but many more are supported. See also the OS Support chapters.
Syslog
Using Syslog has become a common practice and many SIEM vendors and products support (or even require)
Syslog. See the Syslog chapter for more details. Syslog contains free form message data that typically needs
to be parsed to extract more information for further analysis. Syslog often uses UDP, TCP, or TLS for
transport.
Snare
The Snare format is commonly used to transport Windows EventLog, with or without Syslog headers.
Chapter 29. Encrypted Transfer
In order to protect log data in transit from being modified or viewed by an attacker, NXLog provides SSL/TLS data
encryption support in many input and output modules. Benefits of using SSL/TLS encrypted log transfer include:
• strong authentication,
• message integrity (assures that the logs are not changed), and
• message confidentiality (assures that the logs cannot be viewed by an unauthorized party).
There are several modules in NXLog Enterprise Edition that support SSL/TLS encryption.
When using SSL/TLS, there are two ways to handle authentication.
• With mutual authentication, both the client and log server agents are authenticated, and certificates/keys must be deployed for both agents. This is the most secure option; it prevents log collection if the client's certificate is untrusted or has expired.
• With server-side authentication only, authentication takes place only via a certificate/key deployed on the
server. On the log server, the im_ssl AllowUntrusted directive (or corresponding directive for im_http or
im_batchcompress) must be set to TRUE. The client is prevented from sending logs to an untrusted server
but the server accepts logs from untrusted clients.
Example 151. Client/Server Encrypted Transfer
With the following configurations, a client reads logs from all log files under the /var/log directory, parses
the events with parse_syslog(), converts to JSON with to_json(), and forwards them over a secure connection
to the central server.
These configurations use mutual authentication: both agents are authenticated and certificates must be
created for both agents.
nxlog.conf (Client)
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input messages>
10 Module im_file
11 File "/var/log/*"
12 Exec parse_syslog();
13 </Input>
14
15 <Output central_ssl>
16 Module om_ssl
17 Host 192.168.56.103
18 Port 516
19 CAFile /opt/ssl/rootCA.pem
20 CertFile /opt/ssl/client.crt
21 CertKeyFile /opt/ssl/client.key
22 KeyPass password
23 Exec to_json();
24 </Output>
The server receives the logs on port 516 and writes them to /var/log/logmsg.log.
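A server-side configuration consistent with that description might look like the following (a sketch; the certificate paths and key password are assumptions mirroring the client configuration):

```
<Input ssl_in>
    Module       im_ssl
    Host         0.0.0.0
    Port         516
    CAFile       /opt/ssl/rootCA.pem
    CertFile     /opt/ssl/server.crt
    CertKeyFile  /opt/ssl/server.key
    KeyPass      password
</Input>

<Output file>
    Module  om_file
    File    "/var/log/logmsg.log"
</Output>

<Route r>
    Path    ssl_in => file
</Route>
```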
29.2. OpenSSL Certificate Creation
NXLog Manager provides various features for creating, deploying, and managing SSL/TLS certificates, and is especially helpful when managing many NXLog agents across an organization. This section, however, provides steps for creating self-signed certificates with OpenSSL, an open source SSL/TLS cryptography toolkit.
1. Generate the private root key for your Certification Authority (CA), then create the self-signed CA certificate from it.
$ openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 730 -out rootCA.pem
b. Generate the certificate signing request to be signed by the CA. When prompted for the Common Name, enter the server's name or IP address.
Chapter 30. Reducing Bandwidth and Data Size
There are several ways that NXLog can be configured to reduce the size of log data. This can help lower
bandwidth requirements during transport, storage requirements for log data storage, and licensing costs for
commercial SIEM systems that charge based on data volume.
The three main strategies for achieving this goal are covered in the following sections:
• Filtering Events by removing unnecessary or duplicate events at the source so that less data needs to be
transported and stored—reducing the data size during all subsequent stages of processing.
• Trimming Events by removing extra content or fields from event records which can reduce the total volume
of log data.
• Compressing During Transport can drastically reduce bandwidth requirements for events being forwarded.
To achieve the best results, it is important to understand how fields work in NXLog and which fields are being
transferred or stored. For example, removing or modifying fields without modifying $raw_event will not reduce
data requirements at all for an output module instance that uses only $raw_event. See Event Records and Fields
for details, as well as the explanation in Compressing During Transport below.
In this example, an NXLog agent is configured to collect Syslog messages from devices on the local network.
Events are parsed with the xm_syslog parse_syslog() procedure, which sets the SeverityValue field. Any event
with a normalized severity lower than 3 (warning) is discarded.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input syslog>
6 Module im_udp
7 Host 0.0.0.0
8 Port 514
9 Exec parse_syslog(); if $SeverityValue < 3 drop();
10 </Input>
Similarly, the pm_norepeat module can be used to detect, count, and discard duplicate events. In their place, pm_norepeat generates a single event with a "last message repeated n times" message.
Example 153. Dropping Duplicate Events
With this configuration, NXLog collects Syslog messages from hosts on the local network with im_udp and
parses them with the xm_syslog parse_syslog() procedure. Events are then routed through a pm_norepeat
module instance, where the $Hostname, $Message, and $SourceName fields are checked to detect duplicate
messages. Last, events are sent to a remote host with om_batchcompress.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input syslog_udp>
6 Module im_udp
7 Host 0.0.0.0
8 Port 514
9 Exec parse_syslog();
10 </Input>
11
12 <Processor norepeat>
13 Module pm_norepeat
14 CheckFields Hostname, Message, SourceName
15 </Processor>
16
17 <Output out>
18 Module om_batchcompress
19 Host 10.2.0.2
20 Port 2514
21 </Output>
22
23 <Route r>
24 Path syslog_udp => norepeat => out
25 </Route>
Example 154. Discarding Extra Fields via Whitelist
This configuration reads from the Windows EventLog with im_msvistalog and uses an xm_rewrite module
instance to discard any fields in the event record that are not included in the whitelist. The xm_rewrite
instance below could be used with multiple sources; for example, the whitelist would also be suitable for
the xm_syslog fields.
NOTE The xm_rewrite module does not remove the $raw_event field.
nxlog.conf
1 <Extension whitelist>
2 Module xm_rewrite
3 Keep AccountName, Channel, EventID, EventReceivedTime, EventTime, Hostname, \
4 Severity, SeverityValue, SourceName
5 </Extension>
6
7 <Input eventlog>
8 Module im_msvistalog
9 <QueryXML>
10 <QueryList>
11 <Query Id='0'>
12 <Select Path='Security'>*[System/Level&lt;=4]</Select>
13 </Query>
14 </QueryList>
15 </QueryXML>
16 Exec whitelist->process();
17 </Input>
In some cases, event messages contain a lot of extra data that is duplicated across multiple events of the same type. One example of this is the "descriptive event data" which has been introduced by Microsoft for the Windows EventLog. By removing this verbose text from common events, event sizes can be reduced significantly while still preserving all the forensic details of the event.
The following configuration collects events from the Application, Security, and System channels. Rules are
included for truncating the messages of Security events with IDs 4688 and 4769.
NOTE In this example, the $Message field is truncated. However, the $raw_event field is not. For most input modules, $raw_event will include the contents of $Message and other fields (see the im_msvistalog $raw_event field). To update the $raw_event field, include a statement for this (see the comment in the configuration example). See also Compressing During Transport below for more details.
Input Sample (Event ID 4769)
A Kerberos service ticket was requested.
Account Information:
Account Name: WINAD$@TEST.COM
Account Domain: TEST.COM
Logon GUID: {55a7f67c-a32c-150a-29f1-7e173ff130a7}
Service Information:
Service Name: WINAD$
Service ID: TEST\WINAD$
Network Information:
Client Address: ::1
Client Port: 0
Additional Information:
Ticket Options: 0x40810000
Ticket Encryption Type: 0x12
Failure Code: 0x0
Transited Services: -
This event is generated every time access is requested to a resource such as a computer or a
Windows service. The service name indicates the resource to which access was requested.
This event can be correlated with Windows logon events by comparing the Logon GUID fields in
each event. The logon event occurs on the machine that was accessed, which is often a
different machine than the domain controller which issued the service ticket.
Ticket options, encryption types, and failure codes are defined in RFC 4120.
nxlog.conf
1 <Input eventlog>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Application">
7 *[System[(Level<=4)]]</Select>
8 <Select Path="Security">
9 *[System[(Level<=4)]]</Select>
10 <Select Path="System">
11 *[System[(Level<=4)]]</Select>
12 </Query>
13 </QueryList>
14 </QueryXML>
15 <Exec>
16 if ($Channel == 'Security') and ($EventID == 4688)
17 $Message =~ s/\s*Token Elevation Type indicates the type of .*$//s;
18 else if ($Channel == 'Security') and ($EventID == 4769)
19 $Message =~ s/\s*This event is generated every time access is .*$//s;
20 # Additional rules can be added here
21 # ...
22 # Optionally, update the $raw_event field
23 #$raw_event = $EventTime + ' ' + $Message;
24 </Exec>
25 </Input>
Output Sample
A Kerberos service ticket was requested.
Account Information:
Account Name: WINAD$@TEST.COM
Account Domain: TEST.COM
Logon GUID: {55a7f67c-a32c-150a-29f1-7e173ff130a7}
Service Information:
Service Name: WINAD$
Service ID: TEST\WINAD$
Network Information:
Client Address: ::1
Client Port: 0
Additional Information:
Ticket Options: 0x40810000
Ticket Encryption Type: 0x12
Failure Code: 0x0
Transited Services: -
The following chart compares the data requirements for the *m_tcp, *m_ssl (with TLSv1.2), and *m_batchcompress
module pairs. It is based on a sample of BSD Syslog records parsed with parse_syslog(). The values shown reflect
the total bi-directional bytes transferred at the packet level. Of course, ratios will vary from this in practice based
on network conditions and the compressibility of the event data.
Note that the om_tcp and om_ssl modules (among others) transfer only the $raw_event field by default, but can
be configured to transfer all fields with OutputType Binary. The om_batchcompress module transfers all fields
in the event record, but it is possible to send only the $raw_event field by first removing the other fields (see
Generating $raw_event and Removing Other Fields below).
Simply configuring the *m_batchcompress modules for the transfer of event data between NXLog agents can
significantly reduce the bandwidth requirements for that part of the log path.
The table below compares sending the same data set using different methods and modules:
Compression method | Modules used   | Event size | Diff vs baseline | Sender CPU usage | Receiver CPU usage | EPS sender | EPS receiver
None               | om_tcp, im_tcp | 112        | 0.00%            | 141              | 215.07             | 83091.8    | 84169.9
Compression ratios show that enabling SSLCompression yields only a minimal improvement in message size.
Batch compression fares much better, because compressing data in larger batches yields better compression
ratios.
With the following configuration, an NXLog agent uses om_batchcompress to send events in compressed
batches to a remote NXLog agent.
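Such a sender configuration could look like the following sketch; the host address, port, and input file are placeholders and should be adjusted for the actual environment:

```
# Read events from a local file (placeholder path)
<Input in>
    Module  im_file
    File    'input.log'
</Input>

# Forward events in compressed batches to the remote agent
<Output out>
    Module  om_batchcompress
    Host    10.2.0.2
    Port    2514
</Output>
```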
TIP The *m_batchcompress modules also support SSL/TLS encryption; see the im_batchcompress and om_batchcompress configuration details.
The remote NXLog agent receives and decompresses the batches with im_batchcompress. All fields in an
event are available to the receiving agent.
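A corresponding receiver sketch, assuming the same port as the sender and a placeholder output file:

```
# Accept and decompress batches from remote agents
<Input in>
    Module      im_batchcompress
    ListenAddr  0.0.0.0
    Port        2514
</Input>

# Store the received messages in a file (placeholder path)
<Output out>
    Module  om_file
    File    'output.log'
</Output>
```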
To further reduce the size of the batches transferred by the *m_batchcompress modules, and if only the
$raw_event field will be needed later in the log path, the extra fields can be removed from the event record prior
to transfer. This can be done with an xm_rewrite instance for multiple fields or with the delete() procedure (see
Renaming and Deleting Fields).
Example 157. Generating $raw_event and Removing Other Fields
In this configuration, events are collected from the Windows EventLog with im_msvistalog, which sets the
$raw_event and many other fields. To reduce the size of the events, only the $raw_event field is retained;
all the other fields in the event record are removed by the xm_rewrite module instance (called by
clean->process()).
NOTE Rather than using the default im_msvistalog $raw_event field, it would also be possible to customize it with something like $raw_event = $EventTime + ' ' + $Message or to_json().
nxlog.conf
1 <Extension clean>
2 Module xm_rewrite
3 Keep raw_event
4 </Extension>
5
6 <Input eventlog>
7 Module im_msvistalog
8 <QueryXML>
9 <QueryList>
10 <Query Id='0'>
11 <Select Path='Security'>*[System/Level<=4]</Select>
12 </Query>
13 </QueryList>
14 </QueryXML>
15 </Input>
16
17 <Output out>
18 Module om_batchcompress
19 Host 10.2.0.2
20 Exec clean->process();
21 </Output>
Alternatively, if the various fields in the event record will be handled later in the log path, the $raw_event field
can be set to an empty string (but see the warning below).
Example 158. Emptying $raw_event and Sending Other Fields
This configuration collects events from the Windows EventLog with im_msvistalog, which writes multiple
fields to the event record. In this case, the $raw_event field contains the same data as other fields. Because
the om_batchcompress module instance will send all the fields in the event record, the $raw_event field
can be emptied.
WARNING Many output modules operate on the $raw_event field only. It should not be set to an empty string unless the output module sends all the event fields (om_batchcompress or a module using the Binary OutputType), and the same holds for all subsequent agents and modules. Otherwise, a module instance will encounter an empty $raw_event. For this reason, the following example is in general not recommended.
nxlog.conf
1 <Input eventlog>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id='1'>
6 <Select Path='Security'>*[System/Level<=4]</Select>
7 </Query>
8 </QueryList>
9 </QueryXML>
10 </Input>
11
12 <Output out>
13 Module om_batchcompress
14 Host 10.2.0.2
15 Exec $raw_event = '';
16 </Output>
Chapter 31. Reliable Message Delivery
Sometimes regulatory compliance or other requirements mandate that the logging infrastructure function in an
ultra-reliable manner. NXLog Enterprise Edition can be configured to mitigate the following reliability risks:
• Log messages are buffered in various places in NXLog, and buffered messages can be lost in the case of a
crash. Persistent module message queues can be enabled so that these messages are stored on disk instead
of in memory. Each log message is removed from the queue only after successful delivery. See the
PersistLogqueue and SyncLogqueue global configuration directives, and the PersistLogqueue and
SyncLogqueue module directives.
WARNING Log message removal from queues in processor modules happens before delivery. This can result in data loss. Do not use processor modules when high-reliability operation is required.
• Input positions (for im_file and other modules) are saved in the cache file, and by default this file is only
saved to disk on shutdown. In case of a crash some events may be duplicated or lost depending on the value
of the ReadFromLast directive. This data can be periodically flushed and synced to disk using the
CacheFlushInterval and CacheSync directives.
In this example, the log queues are synced to disk after each successful delivery. The cache file containing
the current read position is also flushed and synced to disk after each event is read. Note that these
reliability features, when enabled, significantly reduce the processing speed.
nxlog.conf
1 PersistLogqueue TRUE
2 SyncLogqueue TRUE
3 CacheFlushInterval always
4 CacheSync TRUE
5
6 <Input in>
7 Module im_file
8 File 'input.log'
9 </Input>
10
11 <Output out>
12 Module om_tcp
13 Host 10.0.0.1
14 Port 1514
15 </Output>
31.2. Reliable Network Delivery
The TCP protocol provides guaranteed packet delivery via packet level acknowledgment. Unfortunately, if the
receiver closes the TCP connection prematurely while messages are being transmitted, unsent data stored in the
socket buffers will be lost since this is handled by the operating system instead of the application (NXLog). This
can result in message loss and affects im_tcp, om_tcp, im_ssl, and om_ssl. See the diagram in All Buffers in a
Basic Route.
The solution to this unreliability in the TCP protocol is application-level acknowledgment. NXLog provides two
pairs of modules for this purpose.
• NXLog can use the HTTP/HTTPS protocol to provide guaranteed message delivery over the network,
optionally with TLS/SSL. The client (om_http) sends the event in an HTTP POST request. The server (im_http,
only available in NXLog Enterprise Edition) responds with a status code indicating successful message
reception.
In the following configuration example, a client reads logs from a file and transmits the logs over an
SSL-secured HTTP connection.
nxlog.conf (Client/Sending)
1 <Input in>
2 Module im_file
3 File 'input.log'
4 </Input>
5
6 <Output out>
7 Module om_http
8 URL https://10.0.0.1:8080/
9 HTTPSCertFile %CERTDIR%/client-cert.pem
10 HTTPSCertKeyFile %CERTDIR%/client-key.pem
11 HTTPSCAFile %CERTDIR%/ca.pem
12 </Output>
The remote NXLog agent accepts the HTTPS connections and stores the received messages in a file. The
contents of input.log will be replicated in output.log.
nxlog.conf (Server/Receiving)
1 <Input in>
2 Module im_http
3 ListenAddr 0.0.0.0
4 Port 8080
5 HTTPSCertFile %CERTDIR%/server-cert.pem
6 HTTPSCertKeyFile %CERTDIR%/server-key.pem
7 HTTPSCAFile %CERTDIR%/ca.pem
8 </Input>
9
10 <Output out>
11 Module om_file
12 File 'output.log'
13 </Output>
• The om_batchcompress and im_batchcompress modules, available in NXLog Enterprise Edition, also provide
acknowledgment as part of the batchcompress protocol.
Example 161. Batched Log Transfer
With the following configuration, a client reads logs from a file and transmits the logs in compressed
batches to a remote NXLog agent.
nxlog.conf (Client/Sending)
1 <Input in>
2 Module im_file
3 File 'input.log'
4 </Input>
5
6 <Output out>
7 Module om_batchcompress
8 Host 10.0.0.1
9 UseSSL true
10 CertFile %CERTDIR%/client-cert.pem
11 CertKeyFile %CERTDIR%/client-key.pem
12 CAFile %CERTDIR%/ca.pem
13 </Output>
The remote NXLog agent receives and decompresses the message batches and stores the individual
messages in a file. The contents of input.log will be replicated in output.log.
nxlog.conf (Server/Receiving)
1 <Input in>
2 Module im_batchcompress
3 ListenAddr 0.0.0.0
4 CertFile %CERTDIR%/server-cert.pem
5 CertKeyFile %CERTDIR%/server-key.pem
6 CAFile %CERTDIR%/ca.pem
7 </Input>
8
9 <Output out>
10 Module om_file
11 File 'output.log'
12 </Output>
In some cases it may be very important that a log message is not duplicated. For example, a duplicated message
may trigger the same alarm a second time or cause an extra entry in a financial transaction log. NXLog Enterprise
Edition can be configured to prevent duplicate messages from occurring.
The best way to prevent duplicated messages is by using serial numbers, as it is only possible to detect
duplicates at the receiver. The receiver can keep track of what has been received by storing the serial number of
the last message. If a message is received with the same or a lower serial number from the same source, the
message is simply discarded.
• Each module that receives a message directly from an input source or from another module in the route
assigns a field named $__SERIAL__$ with a monotonically increasing serial number. The serial number is
taken from a global generator and is increased after each fetch so that two messages received at two
modules simultaneously will not have the same serial number. The serial number is initialized to the seconds
elapsed since the UNIX epoch when NXLog is started. This way it can provide 1,000,000 serial numbers per
second without problems in case it is stopped and restarted. Otherwise, the value would need to be saved
and synced to disk after each serial number fetch, which would adversely affect performance. When a
module receives a message it checks the value of the field named $__SERIAL__$ against the last saved
value.
• The im_http module keeps the value of the last $__SERIAL__$ for each client. It is only possible to know and
identify the client (om_http sender) in HTTPS mode. The Common Name (CN) in the certificate subject is
used and is assumed to uniquely identify the client.
NOTE The remote IP and port number cannot be used to identify the remote sender because the remote port is assigned dynamically and changes for every connection. Thus if a client sends a message, disconnects, reconnects, and then sends the same message again, it is impossible to know whether this is the same client or another. For this reason it is not possible to protect against message duplication with plain TCP or HTTP when multiple clients connect from the same IP. The im_ssl and im_batchcompress modules do not have certificate subject extraction implemented at this time.
• All other non-network modules use the value of $SourceModuleName which is automatically set to the name
of the module instance generating the log message. This value is assumed to uniquely identify the source.
The value of $SourceModuleName is not overwritten if it already exists. Note that this may present problems
in some complex setups.
• The algorithm is implemented in one procedure call named duplicate_guard(), which can be used in modules
to prevent message duplication. The dropped() function can then be used to test whether the current log
message has been dropped.
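For instance, the two calls can be combined in an Exec block. This is an illustrative sketch only; the log_info() message and the module choice are arbitrary:

```
<Input in>
    Module      im_batchcompress
    ListenAddr  0.0.0.0
    <Exec>
        duplicate_guard();
        # dropped() tests whether duplicate_guard() discarded this event
        if dropped() log_info("Duplicate message discarded");
    </Exec>
</Input>
```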
Example 162. Disallowing Duplicated Messages
The following client and server configuration examples extend the earlier HTTPS example to provide an
ultra-reliable operation where messages cannot be lost locally due to a crash, lost over the network, or
duplicated.
nxlog.conf (Client/Sending)
1 PersistLogqueue TRUE
2 SyncLogqueue TRUE
3 CacheFlushInterval always
4 CacheSync TRUE
5
6 <Input in>
7 Module im_file
8 File 'input.log'
9 </Input>
10
11 <Output out>
12 Module om_http
13 URL https://10.0.0.1:8080/
14 HTTPSCertFile %CERTDIR%/client-cert.pem
15 HTTPSCertKeyFile %CERTDIR%/client-key.pem
16 HTTPSCAFile %CERTDIR%/ca.pem
17 Exec duplicate_guard();
18 </Output>
The server accepts the HTTPS connections and stores the received messages in a file. The contents of
input.log will be replicated in output.log.
nxlog.conf (Server/Receiving)
1 PersistLogqueue TRUE
2 SyncLogqueue TRUE
3 CacheFlushInterval always
4 CacheSync TRUE
5
6 <Input in>
7 Module im_http
8 ListenAddr 0.0.0.0
9 Port 8080
10 HTTPSCertFile %CERTDIR%/server-cert.pem
11 HTTPSCertKeyFile %CERTDIR%/server-key.pem
12 HTTPSCAFile %CERTDIR%/ca.pem
13 Exec duplicate_guard();
14 </Input>
15
16 <Output out>
17 Module om_file
18 File 'output.log'
19 Exec duplicate_guard();
20 </Output>
OS Support
Each of the following chapters lists some of the common log sources that can be collected on the corresponding
platform. See also Supported Platforms.
Chapter 32. IBM AIX
NXLog can collect various types of system logs on the AIX platform. For deployment details, see the supported
AIX platforms, AIX installation, and monitoring.
AIX Audit
The im_aixaudit module natively collects logs generated by the AIX Audit system, without depending on
auditstream or any other process.
This example reads AIX audit logs from the /dev/audit device file.
nxlog.conf
1 <Input in>
2 Module im_aixaudit
3 DeviceFile /dev/audit
4 </Input>
Custom Programs
The im_exec module allows log data to be collected from custom external programs.
NOTE The im_file module should be used to read log messages from files. This example only demonstrates the use of the im_exec module.
nxlog.conf
1 <Input exec>
2 Module im_exec
3 Command /usr/bin/tail
4 Arg -f
5 Arg /var/adm/ras/errlog
6 </Input>
DNS Monitoring
Logs can be collected from BIND 9.
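For example, if BIND 9 is configured to write its query log to a file, the log could be tailed with im_file. The path below is hypothetical and depends on the named logging configuration:

```
# Collect BIND 9 query logs (placeholder path)
<Input bind9>
    Module  im_file
    File    '/var/log/named/query.log'
</Input>
```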
Example 165. Monitoring File Integrity
This example monitors files in the /etc and /srv directories, generating events when files are modified
or deleted. Files ending in .bak are excluded from the watch list.
nxlog.conf
1 <Input fim>
2 Module im_fim
3 File "/etc/*"
4 File "/srv/*"
5 Exclude "*.bak"
6 Digest sha1
7 ScanInterval 3600
8 Recursive TRUE
9 </Input>
Local Syslog
Messages written to /dev/log can be collected with the im_uds module. Events written to file in Syslog
format can be collected with im_file. In both cases, the xm_syslog module can be used to parse the events.
See Collecting and Parsing Syslog for more information.
This example reads Syslog messages from /var/log/messages and parses them with the parse_syslog()
procedure.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/var/log/messages"
8 Exec parse_syslog();
9 </Input>
Log Files
The im_file module can be used to collect events from log files.
This configuration reads messages from the /opt/test/input.log file. No parsing is performed; each
line is available in the $raw_event field.
nxlog.conf
1 <Input in>
2 Module im_file
3 File "/opt/test/input.log"
4 </Input>
Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.
Example 168. Reading Process Accounting Logs
This configuration turns on process accounting (using /tmp/nxlog.acct as the log file) and watches for
messages.
nxlog.conf
1 <Input acct>
2 Module im_acct
3 AcctOn TRUE
4 File "/tmp/nxlog.acct"
5 </Input>
Chapter 33. FreeBSD
NXLog can collect various types of system logs on FreeBSD platforms. For deployment details, see the supported
FreeBSD platforms, FreeBSD installation, and monitoring.
This example reads BSM audit logs from the /dev/auditpipe device file.
nxlog.conf
1 <Input bsm>
2 Module im_bsm
3 DeviceFile /dev/auditpipe
4 </Input>
Custom Programs
The im_exec module allows log data to be collected from custom external programs.
NOTE The im_file module should be used to read log messages from files. This example only demonstrates the use of the im_exec module.
nxlog.conf
1 <Input exec>
2 Module im_exec
3 Command /usr/bin/tail
4 Arg -f
5 Arg /var/log/messages
6 </Input>
DNS Monitoring
Logs can be collected from BIND 9.
Example 171. Monitoring File Integrity
This example monitors files in the /etc and /srv directories, generating events when files are modified
or deleted. Files ending in .bak are excluded from the watch list.
nxlog.conf
1 <Input fim>
2 Module im_fim
3 File "/etc/*"
4 File "/srv/*"
5 Exclude "*.bak"
6 Digest sha1
7 ScanInterval 3600
8 Recursive TRUE
9 </Input>
Kernel
Logs from the kernel can be collected directly with the im_kernel module.
NOTE The system logger may need to be disabled or reconfigured to collect logs with im_kernel. To completely disable syslogd on FreeBSD, run service syslogd onestop and sysrc syslogd_enable=NO.
nxlog.conf
1 <Input kernel>
2 Module im_kernel
3 </Input>
Local Syslog
Messages written to /dev/log can be collected with the im_uds module. Events written to file in Syslog
format can be collected with im_file. In both cases, the xm_syslog module can be used to parse the events.
See the Linux System Logs and Collecting and Parsing Syslog sections for more information.
This example reads Syslog messages from /var/log/messages and parses them with the parse_syslog()
procedure.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/var/log/messages"
8 Exec parse_syslog();
9 </Input>
Log Files
The im_file module can be used to collect events from log files.
This configuration reads messages from the /opt/test/input.log file. No parsing is performed; each
line is available in the $raw_event field.
nxlog.conf
1 <Input in>
2 Module im_file
3 File "/opt/test/input.log"
4 </Input>
Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.
This configuration turns on process accounting (using /var/account/acct as the log file) and watches
for messages.
nxlog.conf
1 <Input acct>
2 Module im_acct
3 AcctOn TRUE
4 File "/var/account/acct"
5 </Input>
Chapter 34. OpenBSD
NXLog can collect various types of system logs on OpenBSD platforms. For deployment details, see the
supported OpenBSD platforms, OpenBSD installation, and monitoring.
This example reads BSM audit logs from the /dev/auditpipe device file.
nxlog.conf
1 <Input bsm>
2 Module im_bsm
3 DeviceFile /dev/auditpipe
4 </Input>
Custom Programs
The im_exec module allows log data to be collected from custom external programs.
NOTE The im_file module should be used to read log messages from files. This example only demonstrates the use of the im_exec module.
nxlog.conf
1 <Input exec>
2 Module im_exec
3 Command /usr/bin/tail
4 Arg -f
5 Arg /var/log/messages
6 </Input>
DNS Monitoring
Logs can be collected from BIND 9.
Example 178. Monitoring File Integrity
This example monitors files in the /etc and /srv directories, generating events when files are modified
or deleted. Files ending in .bak are excluded from the watch list.
nxlog.conf
1 <Input fim>
2 Module im_fim
3 File "/etc/*"
4 File "/srv/*"
5 Exclude "*.bak"
6 Digest sha1
7 ScanInterval 3600
8 Recursive TRUE
9 </Input>
Kernel
Logs from the kernel can be collected directly with the im_kernel module. See Linux System Logs.
NOTE The system logger may need to be disabled or reconfigured to collect logs with im_kernel. To completely disable syslogd on OpenBSD, run rcctl stop syslogd and rcctl disable syslogd.
nxlog.conf
1 <Input kernel>
2 Module im_kernel
3 </Input>
Local Syslog
Messages written to /dev/log can be collected with the im_uds module. Events written to file in Syslog
format can be collected with im_file. In both cases, the xm_syslog module can be used to parse the events.
See the Linux System Logs and Collecting and Parsing Syslog sections for more information.
This example reads Syslog messages from /var/log/messages and parses them with the parse_syslog()
procedure.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/var/log/messages"
8 Exec parse_syslog();
9 </Input>
Log Files
The im_file module can be used to collect events from log files.
This configuration reads messages from the /opt/test/input.log file. No parsing is performed; each
line is available in the $raw_event field.
nxlog.conf
1 <Input in>
2 Module im_file
3 File "/opt/test/input.log"
4 </Input>
Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.
This configuration turns on process accounting (using /var/account/acct as the log file) and watches
for messages.
nxlog.conf
1 <Input acct>
2 Module im_acct
3 AcctOn TRUE
4 File "/var/account/acct"
5 </Input>
Chapter 35. GNU/Linux
NXLog can collect various types of system logs on GNU/Linux platforms. For deployment details, see the
supported Linux platforms and the corresponding installation page for RHEL/CentOS, Debian/Ubuntu, or SLES.
Notes are also available about hardening and monitoring NXLog on Linux.
The perlfcount add-on can be used to collect system information and statistics on Linux platforms.
DNS Monitoring
Logs can be collected from BIND 9 on Linux.
Kernel
The im_kernel module reads logs directly from the kernel log buffer. These logs can be parsed with
xm_syslog. See the Linux System Logs section.
Local Syslog
Messages written to /dev/log can be collected with the im_uds module. Events written to file in Syslog
format can be collected with im_file. In each case, the xm_syslog module can be used to parse the events. See
the Linux System Logs and Collecting and Parsing Syslog sections for more information.
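As a sketch, an im_uds instance reading from /dev/log might be configured as follows (this assumes the system logger is not already bound to the same socket):

```
<Extension _syslog>
    Module  xm_syslog
</Extension>

# Read Syslog messages from the local domain socket
<Input dev_log>
    Module  im_uds
    UDS     /dev/log
    Exec    parse_syslog_bsd();
</Input>
```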
Log Databases
Events can be read from databases with the im_dbi, im_oci, and im_odbc modules.
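As an illustration, an im_odbc instance might poll new rows by record ID. The DSN, table, and column names here are assumptions and must match the actual database schema:

```
# Poll a database table for new log records; the value of the
# "id" field is used as the read position (substituted for "?")
<Input db>
    Module            im_odbc
    ConnectionString  DSN=logdb
    SQL               SELECT RecordID AS id, EventTime, Message FROM logtable WHERE RecordID > ?
</Input>
```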
Log Files
The im_file module can be used to collect events from log files.
Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.
This overlaps with Audit System logging.
Chapter 36. Apple macOS
NXLog can collect various types of system logs on the macOS platform. For deployment details, see the
supported macOS platforms and macOS installation.
This example reads events from input.asl and parses them with the xm_asl parser.
nxlog.conf
1 <Extension asl_parser>
2 Module xm_asl
3 </Extension>
4
5 <Input in>
6 Module im_file
7 # Example: "/var/log/asl/*"
8 File "foo/input.asl"
9 InputType asl_parser
10 Exec delete($EventReceivedTime);
11 </Input>
This configuration reads BSM audit logs directly from the kernel with the im_bsm module.
nxlog.conf
1 Group wheel
2
3 <Input bsm>
4 Module im_bsm
5 DeviceFile /dev/auditpipe
6 </Input>
Example 185. Reading BSM Audit Logs From File
This configuration reads from the BSM audit log files with im_file and parses the events with xm_bsm.
nxlog.conf
1 Group wheel
2
3 <Extension bsm_parser>
4 Module xm_bsm
5 </Extension>
6
7 <Input bsm>
8 Module im_file
9 File '/var/audit/*'
10 InputType bsm_parser
11 </Input>
Custom Programs
The im_exec module allows log data to be collected from custom external programs.
NOTE The im_file module should be used to read log messages from files. This example only demonstrates the use of the im_exec module.
nxlog.conf
1 <Input systemlog>
2 Module im_exec
3 Command /usr/bin/tail
4 Arg -f
5 Arg /var/log/system.log
6 </Input>
This configuration watches for changes to files and directories under /bin and /usr/bin/.
nxlog.conf
1 <Input fim>
2 Module im_fim
3 File "/bin/*"
4 File "/usr/bin/*"
5 ScanInterval 3600
6 Recursive TRUE
7 </Input>
Kernel
Logs from the kernel can be collected directly with the im_kernel module or via the local log file with im_file.
For log collection details, see macOS Kernel.
Local Syslog
Events written to file in Syslog format can be collected with im_file. The xm_syslog module can be used to
parse the events. See the Syslog section for more information.
This configuration file collects system logs from /var/log/system.log. This method does not read
from /dev/klog directly, so it is not necessary to disable syslogd.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/var/log/system.log"
8 Exec parse_syslog();
9 </Input>
Log Files
The im_file module can be used to collect events from log files.
This configuration uses the im_file module to read events from the specified log file.
nxlog.conf
1 <Input in>
2 Module im_file
3 File "/foo/in.log"
4 </Input>
Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.
With this configuration file, NXLog will enable process accounting to the specified file and read events
from it.
nxlog.conf
1 Group wheel
2
3 <Input acct>
4 Module im_acct
5 File '/var/log/acct'
6 AcctOn TRUE
7 </Input>
Chapter 37. Oracle Solaris
NXLog can collect various types of system logs on the Solaris platform. For deployment details, see the
supported Solaris platforms, Solaris installation, and monitoring.
This example configuration reads from files in /var/audit with im_file. The InputType provided by
xm_bsm is used to parse the binary format.
nxlog.conf
1 <Extension bsm_parser>
2 Module xm_bsm
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File '/var/audit/*'
8 InputType bsm_parser
9 </Input>
Custom Programs
The im_exec module allows log data to be collected from custom external programs.
NOTE The im_file module should be used to read log messages from files. This example only demonstrates the use of the im_exec module.
nxlog.conf
1 <Input systemlog>
2 Module im_exec
3 Command /usr/bin/tail
4 Arg -f
5 Arg /var/log/syslog
6 </Input>
DNS Monitoring
Logs can be collected from BIND 9.
Example 193. Monitoring File Integrity
This configuration watches for changes to files and directories under /usr/bin/.
nxlog.conf
1 <Input fim>
2 Module im_fim
3 File "/usr/bin/*"
4 Digest SHA1
5 ScanInterval 3600
6 Recursive TRUE
7 </Input>
Local Syslog
Events written to file in Syslog format can be collected with the im_file module and parsed with the xm_syslog
module. See Collecting and Parsing Syslog for more information.
This example uses the im_file module to read messages from /var/log/messages and the xm_syslog
parse_syslog() procedure to parse them.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/var/log/messages"
8 Exec parse_syslog();
9 </Input>
Log Files
The im_file module can be used to collect events from log files.
This configuration uses the im_file module to read events from the specified log file.
nxlog.conf
1 <Input in>
2 Module im_file
3 File "/foo/input.log"
4 </Input>
Process Accounting
The im_acct module can be used to gather details about which owner (user and group) runs what processes.
Example 196. Reading Process Accounting Logs
With this configuration file, NXLog will enable process accounting to the specified file and read events
from it.
nxlog.conf
1 <Input acct>
2 Module im_acct
3 AcctOn TRUE
4 File '/tmp/nxlog.acct'
5 </Input>
Chapter 38. Microsoft Windows
NXLog can collect various types of system logs on the Windows platform. For deployment details, see the
supported Windows platforms and Windows installation. Notes are also available about hardening and
monitoring NXLog on Windows.
Custom Programs
The im_exec module allows log data to be collected from custom external programs.
DHCP Monitoring
DHCP logging can be set up for Windows DHCP Server using the im_file module by reading DHCP audit logs
directly from CSV files. Alternatively, the im_msvistalog module can be used to collect DHCP Server or Client
event logs from the built-in channels in Windows Event Log.
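A hypothetical im_file instance reading the DHCP Server audit logs directly could look like this; the path shown is a common default and may differ on a given system:

```
# Read the daily DHCP Server audit log files
<Input dhcp>
    Module  im_file
    File    'C:\Windows\System32\dhcp\DhcpSrvLog-*.log'
</Input>
```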
DNS Monitoring
DNS logging can be set up for Windows DNS Server using either ETW tracing or debug logging.
Log Databases
Events can be read from databases with the im_odbc module. Some products write logs to SQL Server
databases; see the Microsoft System Center Operations Manager section for an example.
Log Files
The im_file module can be used to collect events from log files.
Microsoft Exchange
Logs generated by Microsoft Exchange can be used as a source for log collection with many log types
supported.
Microsoft IIS
IIS can be configured to write logs in W3C format, which can be read with im_file and parsed with xm_w3c or
xm_csv. Other formats can be parsed with other methods. See Microsoft IIS.
Microsoft SharePoint
Collect the various types of logs generated by Microsoft SharePoint, parse the ULS into another format, and
send.
Registry Monitoring
The Windows Registry can be monitored for changes; see the im_regmon module. For an example ruleset,
see the regmon-rules add-on.
Snare
Windows Event Log data can be converted to Snare format as needed for some third-party integrations.
Sysmon
Many additional audit events can be generated with the Sysmon utility, including process creation, system
driver loading, network connections, and modification of file creation timestamps. These events are written to
the Event Log. See the Sysmon section for more information.
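For example, Sysmon events can be collected from their dedicated Event Log channel with im_msvistalog; this sketch simply selects all events from that channel:

```
<Input sysmon>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
```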
Windows AppLocker
Collecting event logs from Windows AppLocker is supported by using the im_msvistalog or the other Windows
Event Log modules.
Windows Firewall
Windows Firewall logs can be collected with the im_file module from the Advanced Security log. Alternatively,
the im_msvistalog module can be used to collect Windows Firewall events from Windows Event Log.
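A minimal sketch of the file-based approach (assuming the default Windows Firewall log location, which may be configured differently):

```
<Input firewall_log>
    Module  im_file
    File    'C:\Windows\System32\LogFiles\Firewall\pfirewall.log'
</Input>
```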
Windows PowerShell
PowerShell scripts can be integrated for log processing tasks and configuration generation (for example,
Azure SQL Database); see Using PowerShell Scripts. It is also possible to collect PowerShell activity logs.
Integration
Chapter 39. Amazon Web Services (AWS)
AWS is a subsidiary of Amazon that provides various cloud computing services.
NXLog can be set up to retrieve CloudWatch log streams in either of two ways:
• NXLog can connect to the CloudWatch API using the Boto 3 client and poll for logs at regular intervals. This is
suitable when a short delay in log collection is acceptable.
• AWS Lambda can be set up to push log data to NXLog via HTTP. This method offers low-latency log
collection.
4. Choose Attach existing policies directly and select the CloudWatchLogsReadOnlyAccess policy. Click Next:
Review and then Create user.
5. Save the access keys for this user and click Close.
6. Install and configure Boto 3, the AWS SDK for Python. See the Boto 3 Quickstart and Credentials
documentation for more details.
7. Edit the region_name and group_name variables in the cloudwatch.py script, as necessary.
Example 197. Using an Amazon CloudWatch Add-On
This example NXLog configuration uses im_python to execute the CloudWatch add-on script. The xm_json
parse_json() procedure is then used to parse the JSON log data into fields.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input py>
6 Module im_python
7 PythonCode cloudwatch.py
8 Exec parse_json();
9 </Input>
cloudwatch.py (truncated)
import nxlog, boto3, json, time

class LogReader:
    def __init__(self, time_interval):
        client = boto3.client('logs', region_name='eu-central-1')
        self.lines = ""
        all_streams = []
        group_name = '<ENTER GROUP NAME HERE>'
1. In the AWS web interface, go to Services › Lambda and click the Create function button.
4. Under Function code select Upload a .ZIP file for Code entry type, select Python under Runtime, and
change the Handler name to lambda_function.lambda_handler.
5. Set the correct host and port in lambda_function.py, then upload a ZIP archive with that file (and
certificates, if needed). Click Save.
6. From the Configuration tab, change to the Triggers tab. Click + Add trigger.
7. Choose CloudWatch Logs as a trigger for the Lambda function. Select the log group that should be
forwarded and provide a Filter Name, then click Submit.
Example 198. Lambda Collection via HTTPS Input
In this example, the im_http module listens for connections from the Lambda script via HTTPS. The xm_json
parse_json() procedure is then used to parse the JSON log data into fields.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input http>
6 Module im_http
7 ListenAddr 127.0.0.1
8 Port 8080
9 HTTPSCertFile %CERTDIR%/server-cert.pem
10 HTTPSCertKeyFile %CERTDIR%/server-key.pem
11 HTTPSCAFile %CERTDIR%/ca.pem
12 HTTPSRequireCert TRUE
13 HTTPSAllowUntrusted FALSE
14 Exec parse_json();
15 </Input>
lambda_function.py
import json, base64, zlib, ssl, http.client
print('Loading function')
When running NXLog in EC2 instances, it may be helpful to include the current instance ID in the collected logs.
For more information about retrieving EC2 instance metadata and adding it to event data, see the Amazon Web
Services section of the Cloud Instance Metadata chapter.
NXLog can be set up to send log data to S3 storage or read log data from S3 storage. For more information, see
the Amazon S3 add-on documentation.
Chapter 40. Apache HTTP Server
The Apache HTTP Server provides very comprehensive and flexible logging capabilities. A brief overview is
provided in the following sections. See the Log Files section of the Apache HTTP Server Documentation for more
detailed information about configuring logging.
Example 199. Using the Apache Error Log
The following directives enable error logging of all messages at or above the "informational" severity level,
in the specified format, to the specified file. The ErrorLogFormat defined below is equivalent to the
default, which includes the timestamp, the module producing the message, the event severity, the process
ID, the thread ID, the client address, and the detailed error message.
apache2.conf
LogLevel info
ErrorLogFormat "[%{u}t] [%-m:%l] [pid %P:tid %T] [client %a] %M"
ErrorLog /var/log/apache2/error.log
The following is a typical log message generated by the Apache HTTP Server, an NXLog configuration for
parsing it, and the resulting JSON.
Log Sample
[Tue Aug 01 07:17:44.496832 2017] [core:info] [pid 15019:tid 140080326108928] [client
192.168.56.1:60154] AH00128: File does not exist: /var/www/html/notafile.html↵
nxlog.conf
1 <Input apache_error>
2 Module im_file
3 File '/var/log/apache2/error.log'
4 <Exec>
5 if $raw_event =~ /(?x)^\[\S+\ ([^\]]+)\]\ \[(\S+):(\S+)\]\ \[pid\ (\d+):
6 tid\ (\d+)\]\ (\[client\ (\S+)\]\ )?(.+)$/
7 {
8 $EventTime = parsedate($1);
9 $ApacheModule = $2;
10 $ApacheLogLevel = $3;
11 $ApachePID = $4;
12 $ApacheTID = $5;
13 if $7 != '' $ClientAddress = $7;
14 $Message = $8;
15 }
16 </Exec>
17 </Input>
Output Sample
{
"EventReceivedTime": "2017-08-01T07:17:45.641190+02:00",
"SourceModuleName": "apache_error",
"SourceModuleType": "im_file",
"EventTime": "2017-08-01T07:17:44.496832+02:00",
"ApacheModule": "core",
"ApacheLogLevel": "info",
"ApachePID": "15019",
"ApacheTID": "140080326108928",
"ClientAddress": "192.168.56.1:60154",
"Message": "AH00128: File does not exist: /var/www/html/notafile.html"
}
There are several options for handling logging when using virtual hosts. The examples below, when specified in
the main server context (not in a <VirtualHost> section), will log all requests exactly as with a single-host server.
The %v format string can be added, if desired, to log the name of the virtual server responding to the request.
Alternatively, the CustomLog directive can be specified inside a <VirtualHost> section, in which case only the
requests served by that virtual server will be logged to the file.
NOTE: Pre-defined format strings for the Common Log and Combined Log Formats may be included by
default. These pre-defined formats may use %O (the total sent including headers) instead of the
standard %b (the size of the requested file) in order to allow detection of partial requests.
Example 200. Using the Common Log Format for the Access Log
The LogFormat directive below creates a format named common that corresponds to the Common Log
Format. The second directive configures the Apache HTTP Server to write entries to the access_log file in
the common format.
apache2.conf
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog /var/log/apache2/access_log common
Example 201. Using the Combined Log Format for the Access Log
The following directives will configure the Apache HTTP Server to write entries to the access_log file in the
Combined Log Format.
apache2.conf
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" combined
CustomLog /var/log/apache2/access_log combined
NXLog configuration examples for parsing these access log formats can be found in the Common & Combined
Log Formats section.
Chapter 41. Apache Tomcat
Apache Tomcat provides flexible logging that can be configured for different transports and formats.
Here is a log sample consisting of three events. The log message of the second event spans multiple lines.
Log Sample
2001-01-25 17:31:42,136 INFO [org.nxlog.somepackage.Class] - single line↵
2001-01-25 17:41:16,268 ERROR [org.nxlog.somepackage.Class] - Error retrieving names: ; nested
exception is:↵
java.net.ConnectException: Connection refused↵
AxisFault↵
faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.userException↵
faultSubcode:↵
faultString: java.net.ConnectException: Connection refused↵
faultActor:↵
faultNode:↵
faultDetail:↵
{http://xml.apache.org/axis/}stackTrace:java.net.ConnectException: Connection refused↵
2001-01-25 17:57:38,469 INFO [org.nxlog.somepackage.Class] - third log message↵
To parse and process multiline log messages like this, the xm_multiline module can be used. In this
example, a regular expression match determines the beginning of each log message.
nxlog.conf
1 define REGEX /(?x)^(?<EventTime>\d{4}\-\d{2}\-\d{2}\ \d{2}\:\d{2}\:\d{2}),\d{3}\ \
2 (?<Severity>\S+)\ \[(?<Class>\S+)\]\ \-\ (?<Message>[\s\S]+)/
3
4 <Extension multiline>
5 Module xm_multiline
6 HeaderLine %REGEX%
7 </Extension>
8
9 <Input log4j>
10 Module im_file
11 File "/var/log/tomcat6/catalina.out"
12 InputType multiline
13 Exec if $raw_event =~ %REGEX% $EventTime = parsedate($EventTime);
14 </Input>
Chapter 42. APC Automatic Transfer Switch
The APC Automatic Transfer Switch (ATS) is capable of sending its logs to a remote Syslog destination via UDP.
Log Sample
Date Time Event↵
------------------------------------------------------------------------↵
03/26/2017 16:20:55 Automatic Transfer Switch: Communication↵
established.↵
03/26/2017 16:20:45 System: Warmstart.↵
03/26/2017 16:19:13 System: Detected an unauthorized user attempting↵
to access the SNMP interface from 192.168.15.11.↵
The ATS is an independent device, so if there is more than one installed in a particular environment, the
configuration below must be applied to each device individually. For more details about configuring APC ATS
logging, go to the APC Support Site and select the product name or part number.
NOTE: The steps below have been tested on AP7700 series devices and should work for other ATS models also.
1. Configure NXLog for receiving log entries via UDP (see the example below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from the device.
3. Configure Syslog logging on the ATS using either the web interface or the command line. See the following
sections.
The following example shows the ATS logs as received and processed by NXLog.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/apc.log"
19 Exec to_json();
20 </Output>
Logs like the example at the beginning of the chapter will produce output as follows.
Output Sample
{
"MessageSourceAddress": "192.168.15.22",
"EventReceivedTime": "2017-03-26 17:03:27",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 23,
"SyslogFacility": "LOCAL7",
"SyslogSeverityValue": 7,
"SyslogSeverity": "DEBUG",
"SeverityValue": 1,
"Severity": "DEBUG",
"Hostname": "192.168.15.22",
"EventTime": "2017-03-26 16:04:18",
"SourceName": "System",
"Message": "Detected an unauthorized user attempting to access the SNMP interface from
192.168.15.11. 0x0004"
}
{
"MessageSourceAddress": "192.168.15.22",
"EventReceivedTime": "2017-03-26 17:20:04",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 23,
"SyslogFacility": "LOCAL7",
"SyslogSeverityValue": 7,
"SyslogSeverity": "DEBUG",
"SeverityValue": 1,
"Severity": "DEBUG",
"Hostname": "192.168.15.22",
"EventTime": "2017-03-26 16:20:54",
"SourceName": "System",
"Message": "Warmstart. 0x0002"
}
{
"MessageSourceAddress": "192.168.15.22",
"EventReceivedTime": "2017-03-26 17:20:04",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 23,
"SyslogFacility": "LOCAL7",
"SyslogSeverityValue": 7,
"SyslogSeverity": "DEBUG",
"SeverityValue": 1,
"Severity": "DEBUG",
"Hostname": "192.168.15.22",
"EventTime": "2017-03-26 16:20:55",
"Message": "Automatic Transfer Switch: Communication established. 0x0C05"
}
3. Enable Syslog.
4. Select the Facility.
5. Add up to four Syslog servers and a port for each.
6. Map the Local Severity to the Syslog Severity as required.
7. Click [ Apply ].
Example 204. ATS Syslog Settings
The following is the Syslog settings screen, which is displayed after completing step 2 above.
1- Settings
2- Server 1
3- Server 2
4- Server 3
5- Server 4
6- Severity Mapping
Chapter 43. Apple macOS Kernel
NXLog supports different ways of collecting Apple macOS kernel logs:
• Collect directly with the im_kernel module, which requires disabling syslogd.
• Collect via the local log file with im_file; see Local Syslog below.
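A minimal sketch of the im_file approach (assuming the traditional /var/log/system.log location and BSD Syslog format):

```
<Extension syslog>
    Module  xm_syslog
</Extension>

<Input system_log>
    Module  im_file
    File    "/var/log/system.log"
    Exec    parse_syslog_bsd();
</Input>
```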
This configuration uses the im_kernel module to read events directly from the kernel (via /dev/klog).
This requires that syslogd be disabled as follows:
1. Stop the running syslogd daemon.
$ sudo launchctl unload /System/Library/LaunchDaemons/com.apple.syslogd.plist
2. Rename the plist file to keep syslogd from starting again at the next reboot.
$ sudo mv /System/Library/LaunchDaemons/com.apple.syslogd.plist \
  /System/Library/LaunchDaemons/com.apple.syslogd.plist.disabled
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input kernel>
6 Module im_kernel
7 Exec parse_syslog_bsd();
8 </Input>
Newer versions of Apple macOS use ULS (Unified Logging System), and with SIP (System Integrity Protection)
enabled, users cannot easily disable syslogd. In this case, the im_exec module can be used to collect logs
from /usr/bin/log stream --style=json --type=log.
Example 206. Collecting ULS Kernel Logs from /usr/bin/log
This configuration uses the im_exec module to read events from the kernel (via /usr/bin/log) and
parses the data with the xm_json module.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Extension multiline>
6 Module xm_multiline
7 HeaderLine /^\[{|^},{/
8 </Extension>
9
10 <Input in>
11 Module im_exec
12 Command /usr/bin/log
13 Arg stream
14 Arg --style=json
15 Arg --type=log
16 InputType multiline
17 <Exec>
18 $raw_event =~ s/^\[{|^},{/{/;
19 $raw_event =~ s/\}]$//;
20 $raw_event = $raw_event + "\n}";
21 parse_json();
22 </Exec>
23 </Input>
Chapter 44. ArcSight Common Event Format (CEF)
NXLog can be configured to collect or forward logs in Common Event Format (CEF). NXLog Enterprise Edition
provides the xm_cef module for parsing and generating CEF.
CEF is a text-based log format developed by ArcSight™ and used by HP ArcSight™ products. It uses Syslog as
transport. The full format includes a Syslog header or "prefix", a CEF "header", and a CEF "extension". The
extension contains a list of key-value pairs. Standard key names are provided, and user-defined extensions can
be used for additional key names. In some cases, CEF is used with the Syslog header omitted.
CEF Syntax
Jan 11 10:25:39 host CEF:Version|Device Vendor|Device Product|Device Version|Device Event Class
ID|Name|Severity|[Extension]↵
Log Sample
Oct 12 04:16:11 localhost CEF:0|nxlog.org|nxlog|2.7.1243|Executable Code was Detected|Advanced
exploit detected|100|src=192.168.255.110 spt=46117 dst=172.25.212.204 dpt=80↵
The ArcSight™ Logger can be configured to send CEF logs via TCP with the following steps.
5. Click Save.
Example 207. Receiving CEF Logs
With this configuration, NXLog will collect CEF logs via TCP, convert to plain JSON format, and save to file.
nxlog.conf
1 <Extension _cef>
2 Module xm_cef
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Extension _syslog>
10 Module xm_syslog
11 </Extension>
12
13 <Input logger_tcp>
14 Module im_tcp
15 Host 0.0.0.0
16 Port 1514
17 Exec parse_syslog(); parse_cef($Message);
18 </Input>
19
20 <Output json_file>
21 Module om_file
22 File '/var/log/json'
23 Exec to_json();
24 </Output>
25
26 <Route r>
27 Path logger_tcp => json_file
28 </Route>
The ArcSight™ Logger can be configured to receive CEF logs via TCP with the following steps.
◦ Encoding: UTF-8
5. Click Save.
Example 208. Sending CEF Logs
With this configuration, NXLog will read Syslog logs from file, convert them to CEF, and forward them to the
ArcSight Logger via TCP. Default values will be used for the CEF header unless corresponding fields are
defined in the event record (see the to_cef() procedure in the Reference Manual for a list of fields).
nxlog.conf
1 <Extension _cef>
2 Module xm_cef
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input messages_file>
10 Module im_file
11 File '/var/log/messages'
12 Exec parse_syslog();
13 </Input>
14
15 <Output logger_tcp>
16 Module om_tcp
17 Host 192.168.1.1
18 Port 574
19 Exec $Message = to_cef(); to_syslog_bsd();
20 </Output>
21
22 <Route r>
23 Path messages_file => logger_tcp
24 </Route>
WARNING: The xm_csv and xm_kvp modules may not always correctly parse or generate CEF logs.
Here, the xm_csv module is used to parse the pipe-delimited CEF header, while the xm_kvp module is used
to parse the space-delimited key-value pairs in the CEF extension. The required extension configurations
are shown below.
nxlog.conf Extensions
1 <Extension cef_header>
2 Module xm_csv
3 Fields $Version, $Device_Vendor, $Device_Product, $Device_Version, \
4 $Signature_ID, $Name, $Severity, $_Extension
5 Delimiter |
6 QuoteMethod None
7 </Extension>
8
9 <Extension cef_extension>
10 Module xm_kvp
11 KVDelimiter '='
12 KVPDelimiter ' '
13 QuoteMethod None
14 </Extension>
15
16 <Extension syslog>
17 Module xm_syslog
18 </Extension>
nxlog.conf Input
1 <Input in>
2 Module im_tcp
3 Host 0.0.0.0
4 Port 1514
5 <Exec>
6 parse_syslog();
7 cef_header->parse_csv($Message);
8 cef_extension->parse_kvp($_Extension);
9 </Exec>
10 </Input>
nxlog.conf Output
1 <Output out>
2 Module om_tcp
3 Host 192.168.1.1
4 Port 574
5 <Exec>
6 $_Extension = cef_extension->to_kvp();
7 $Version = 'CEF:0';
8 $Device_Vendor = 'NXLog';
9 $Device_Product = 'NXLog';
10 $Device_Version = '';
11 $Signature_ID = '0';
12 $Name = '-';
13 $Severity = '';
14 $Message = cef_header->to_csv();
15 to_syslog_bsd();
16 </Exec>
17 </Output>
Chapter 45. Box
Box provides content management and file sharing services.
NXLog can be set up to pull events from Box using their REST API. For more information, see the Box add-on.
Chapter 46. Brocade Switches
Brocade switches can be configured to send Syslog messages to a remote destination via UDP port 514.
Log Sample
2017/03/22-23:05:12, [SEC-1203], 113962, FID 128, INFO, fcsw1, Login information: Login successful
via TELNET/SSH/RSH. IP Addr: admin2↵
The best way to configure a Brocade switch is with the command line interface. In the case of multiple switches
running in redundancy mode, each device must be configured separately.
More details on configuring Brocade switches can be found in the Brocade Document Library: search for a
particular switch model and select Installation & Configuration Guides from the Filter list.
NOTE: The steps below have been tested with Brocade 4100 series switches and OS v6. Newer software versions may have additional capabilities, such as sending logs over TLS.
1. Configure NXLog for receiving Syslog entries via UDP (see the example below), then restart NXLog.
2. Make sure the NXLog agent is accessible from the switch.
3. Log in to the switch via SSH.
4. Run the following commands. Replace LEVEL with an integer corresponding to the desired Syslog local facility
(see the example). Replace IP_ADDRESS with the address of the NXLog agent.
# syslogdfacility -l LEVEL
# syslogdIpAdd IP_ADDRESS
The following commands query the current Syslog facility and then set up Syslog logging to
192.168.6.143 with Syslog facility local5.
fcsw1:admin> syslogdfacility
Syslog facility: LOG_LOCAL7
fcsw1:admin> syslogdfacility -l 5
Syslog facility changed to LOG_LOCAL5
fcsw1:admin> syslogdIpAdd 192.168.6.143
Syslog IP address 192.168.6.143 added
Example 211. Receiving Brocade Logs
This example shows Brocade switch logs as received and processed by NXLog.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/brocade.log"
19 Exec to_json();
20 </Output>
Output Sample
{
"MessageSourceAddress": "192.168.5.15",
"EventReceivedTime": "2017-03-22 20:23:58",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 21,
"SyslogFacility": "LOCAL5",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventTime": "2017-03-22 20:23:58",
"Hostname": "192.168.5.15",
"SourceName": "raslogd",
"Message": "2017/03/22-23:05:12, [SEC-1203], 113962, WWN 10:00:00:05:1e:02:8e:fc | FID 128,
INFO, fcsw1, Login information: Login successful via TELNET/SSH/RSH. IP Addr: admin2"
}
Chapter 47. Check Point
The im_checkpoint module, provided by NXLog Enterprise Edition, can collect logs from Check Point devices over
the OPSEC LEA protocol.
With the following configuration, NXLog will collect logs from Check Point devices over the LEA protocol and
write them to file in JSON format.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input checkpoint>
6 Module im_checkpoint
7 Command /opt/nxlog/bin/nx-im-checkpoint
8 LEAConfigFile /opt/nxlog/etc/lea.conf
9 </Input>
10
11 <Output file>
12 Module om_file
13 File 'tmp/output'
14 Exec $raw_event = to_json();
15 </Output>
16
17 <Route checkpoint_to_file>
18 Path checkpoint => file
19 </Route>
Chapter 48. Cisco ACS
An example Syslog record from a Cisco Secure Access Control System (ACS) device looks like the following. For
more information, refer to the Syslog Logging Configuration Scenario chapter in the Cisco Configuration Guide.
Log Sample
<38>Oct 16 21:01:29 10.0.1.1 CisACS_02_FailedAuth 1k1fg93nk 1 0 Message-Type=Authen failed,User-
Name=John,NAS-IP-Address=10.0.1.2,AAA Server=acs01↵
The following configuration file instructs NXLog to accept Syslog messages on UDP port 1514. The payload
is parsed as Syslog and then the ACS specific fields are extracted. The output is written to file in JSON
format.
nxlog.conf (truncated)
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input in>
10 Module im_udp
11 Host 0.0.0.0
12 Port 1514
13 <Exec>
14 parse_syslog_bsd();
15 if ( $Message =~ /^CisACS_(\d\d)_(\S+) (\S+) (\d+) (\d+) (.*)$/ )
16 {
17 $ACSCategoryNumber = $1;
18 $ACSCategoryName = $2;
19 $ACSMessageId = $3;
20 $ACSTotalSegments = $4;
21 $ACSSegmentNumber = $5;
22 $ACSMessage = $6;
23 if ( $ACSMessage =~ /Message-Type=([^\,]+)/ ) $ACSMessageType = $1;
24 if ( $ACSMessage =~ /User-Name=([^\,]+)/ ) $AccountName = $1;
25 if ( $ACSMessage =~ /NAS-IP-Address=([^\,]+)/ ) $ACSNASIPAddress = $1;
26 if ( $ACSMessage =~ /AAA Server=([^\,]+)/ ) $ACSAAAServer = $1;
27 }
28 else log_warning("Does not match: " + $raw_event);
29 [...]
Chapter 49. Cisco ASA
Cisco Adaptive Security Appliance (ASA) devices are capable of sending their logs to a remote Syslog destination
via TCP or UDP. When sending logs over the network, TCP is the preferred protocol since packet loss is possible
with UDP, especially when network traffic is high.
Log Sample
Apr 15 2017 00:21:14 192.168.12.1 : %ASA-5-111010: User 'john', running 'CLI' from IP 0.0.0.0,
executed 'dir disk0:/dap.xml'↵
Apr 15 2017 00:22:27 192.168.12.1 : %ASA-4-313005: No matching connection for ICMP error message:
icmp src outside:81.24.28.226 dst inside:72.142.17.10 (type 3, code 0) on outside interface.
Original IP payload: udp src 72.142.17.10/40998 dst 194.153.237.66/53.↵
Apr 15 2017 00:22:42 192.168.12.1 : %ASA-3-710003: TCP access denied by ACL from
179.236.133.160/8949 to outside:72.142.18.38/23↵
For more details about configuring Syslog on Cisco ASA, see the Cisco configuration guide for the ASA or
Adaptive Security Device Manager (ASDM) version in use.
NOTE: The steps below have been tested with ASA 9.x and ASDM 7.x, but should also work with other versions.
This example shows Cisco ASA logs as received and processed by NXLog.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_syslog_tcp>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/asa.log"
19 Exec to_json();
20 </Output>
The following output log sample resulted from the input at the beginning of the chapter being processed by
this configuration.
Output Sample
{
"MessageSourceAddress": "192.168.12.1",
"EventReceivedTime": "2017-04-15 00:19:53",
"SourceModuleName": "in_syslog_tcp",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 20,
"SyslogFacility": "LOCAL4",
"SyslogSeverityValue": 5,
"SyslogSeverity": "NOTICE",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "192.168.12.1",
"EventTime": "2017-04-15 00:21:14",
"Message": "%ASA-5-111010: User 'john', running 'CLI' from IP 0.0.0.0, executed 'dir
disk0:/dap.xml'"
}
{
"MessageSourceAddress": "192.168.12.1",
"EventReceivedTime": "2017-04-15 00:21:06",
"SourceModuleName": "in_syslog_tcp",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 20,
"SyslogFacility": "LOCAL4",
"SyslogSeverityValue": 4,
"SyslogSeverity": "WARNING",
"SeverityValue": 3,
"Severity": "WARNING",
"Hostname": "192.168.12.1",
"EventTime": "2017-04-15 00:22:27",
"Message": "%ASA-4-313005: No matching connection for ICMP error message: icmp src
outside:81.24.28.226 dst inside:72.142.17.10 (type 3, code 0) on outside interface. Original IP
payload: udp src 72.142.17.10/40998 dst 194.153.237.66/53."
}
{
"MessageSourceAddress": "192.168.12.1",
"EventReceivedTime": "2017-04-15 00:21:21",
"SourceModuleName": "in_syslog_tcp",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 20,
"SyslogFacility": "LOCAL4",
"SyslogSeverityValue": 3,
"SyslogSeverity": "ERR",
"SeverityValue": 4,
"Severity": "ERROR",
"Hostname": "192.168.12.1",
"EventTime": "2017-04-15 00:22:42",
"Message": "%ASA-3-710003: TCP access denied by ACL from 179.236.133.160/8949 to
outside:72.142.18.38/23"
}
Example 215. Extracting Additional Fields
The following configuration uses regular expressions to parse additional key-value pairs from substrings
embedded in the string value of the $Message field. The ASA severity number and message ID are extracted
into new fields, and the remaining message text, after the parsed substrings have been removed, is assigned
to a new $ASAMessage field.
nxlog.conf
1 <Input in_syslog_tcp>
2 Module im_tcp
3 Host 0.0.0.0
4 Port 1514
5 <Exec>
6 parse_syslog();
7 if $Message =~ /^%(ASA)-(\d)-(\d{6}): (.*)$/
8 {
9 $ASASeverityNumber = $2;
10 $ASAMessageID = $3;
11 $ASAMessage = $4;
12 }
13 </Exec>
14 </Input>
Output Sample
{
"MessageSourceAddress": "192.168.12.1",
"EventReceivedTime": "2017-04-15 14:27:04",
"SourceModuleName": "in_syslog_tcp",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 20,
"SyslogFacility": "LOCAL4",
"SyslogSeverityValue": 3,
"SyslogSeverity": "ERR",
"SeverityValue": 4,
"Severity": "ERROR",
"Hostname": "192.168.12.1",
"EventTime": "2017-04-15 14:28:26",
"Message": "%ASA-3-710003: TCP access denied by ACL from 117.247.81.21/52569 to
outside:72.142.18.38/23",
"ASASeverityNumber": "3",
"ASAMessageID": "710003",
"ASAMessage": "TCP access denied by ACL from 117.247.81.21/52569 to outside:72.142.18.38/23"
}
Further field extraction can be done based on message ID. Detailed information on existing IDs and their formats
can be found in the Cisco ASA Series Syslog Messages book.
Example 216. Extracting Fields According to Message ID
The following NXLog configuration parses a very common firewall message: "TCP access denied by ACL".
The regular expressions have been enhanced with pattern matching for parsing out the IP address/port
for both the source and the destination.
nxlog.conf
1 <Input in_syslog_tcp>
2 Module im_tcp
3 Host 0.0.0.0
4 Port 1514
5 <Exec>
6 parse_syslog();
7 if $Message =~ /(?x)^%ASA-3-710003:\ TCP\ access\ denied\ by\ ACL\ from
8 \ ([0-9.]*)\/([0-9]*)\ to\ outside:([0-9.]*)\/([0-9]*)/
9 {
10 $ASASeverityNumber = "3";
11 $ASAMessageID = "710003";
12 $ASAMessage = "TCP access denied by ACL";
13 $ASASrcIP = $1;
14 $ASASrcPort = $2;
15 $ASADstIP = $3;
16 $ASADstPort = $4;
17 }
18 </Exec>
19 </Input>
Output Sample
{
"MessageSourceAddress": "192.168.12.1",
"EventReceivedTime": "2017-04-15 15:10:20",
"SourceModuleName": "in_syslog_tcp",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 20,
"SyslogFacility": "LOCAL4",
"SyslogSeverityValue": 3,
"SyslogSeverity": "ERR",
"SeverityValue": 4,
"Severity": "ERROR",
"Hostname": "192.168.12.1",
"EventTime": "2017-04-15 15:11:43",
"Message": "%ASA-3-710003: TCP access denied by ACL from 119.80.179.109/2083 to
outside:72.142.18.38/23",
"ASASeverityNumber": "3",
"ASAMessageID": "710003",
"ASAMessage": "TCP access denied by ACL",
"ASASrcIP": "119.80.179.109",
"ASASrcPort": "2083",
"ASADstIP": "72.142.18.38",
"ASADstPort": "23"
}
2. Enable logging.
# logging enable
3. In case of a High Availability (HA) pair, enable logging on the standby unit.
# logging standby
4. Specify the Syslog facility. Replace FACILITY with a number from 16 to 23, corresponding to local0 through
local7 (the default is 20, or local4).
5. Specify the severity level. Replace LEVEL with a number from 0 to 7. Use the maximum level for which
messages should be generated (severity level 3 will produce messages for levels 3, 2, 1, and 0). The levels
correspond to the Syslog severities.
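As a sketch of steps 4 and 5, assuming standard ASA CLI syntax (the facility and level values shown here are example assumptions to be adjusted as required):

```
# logging facility 20
# logging trap 5
```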
6. Allow ASA to pass traffic when the Syslog server is not available.
# logging permit-hostdown
NOTE: If logs are being sent via TCP and this setting is not configured, ASA will stop passing traffic when the Syslog server is unavailable.
7. Configure the Syslog host. Replace IP_ADDRESS and PORT with the remote IP address and port that NXLog is
listening on.
NOTE: To enable SSL/TLS for connections to the NXLog agent, add secure at the end of the above command. The im_ssl module will need to be used when configuring NXLog.
This command configures 192.168.6.143 as the Syslog host, with TCP port 1514.
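Assuming standard ASA syntax and an interface named inside (both assumptions in this sketch), the command could look like this:

```
# logging host inside 192.168.6.143 tcp/1514
```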
# write memory
3. Go to Syslog Setup and specify the Facility Code (the default is 20). Click [ Apply ].
4. Go to Logging Filters, select Syslog Servers, click [ Edit ] and specify the severity level. Click [ OK ] and then [
Apply ].
5. Go to Syslog Servers and select Allow user traffic to pass when TCP syslog server is down. Click [ Apply ].
NOTE: This setting is important to avoid downtime during TCP logging in case the Syslog server is unavailable.
6. Under Syslog Servers, click [ Add ] and specify the interface, remote IP address, protocol and port. Click [ OK
] and then [ Apply ].
NOTE: To enable SSL/TLS for connections to the NXLog agent, select the Enable secure syslog using SSL/TLS option. The im_ssl module will need to be used when configuring NXLog.
49.2. NetFlow From Cisco ASA
NetFlow is a protocol used by Cisco devices that provides the ability to send details about network traffic to a
remote destination. NXLog is capable of receiving NetFlow logs. The steps below outline the configuration
required to send information about traffic passing through Cisco ASA to NXLog via UDP.
1. Configure NXLog for receiving NetFlow via UDP/2162 (see the example below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from each of the ASA devices being configured.
3. Set up NetFlow logging on Cisco ASA, using either the command line or ASDM. See the following sections.
NOTE: The steps below have been tested with ASA 9.x and ASDM 7.x, but should work for other versions also.
nxlog.conf
1 <Extension netflow>
2 Module xm_netflow
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_netflow_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 2162
13 InputType netflow
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/netflow.log"
19 Exec to_json();
20 </Output>
Output Sample
{
"Version": 9,
"SysUpTimeMilisec": 2374222958,
"ExportTime": "2017-05-17 18:39:05",
"TimeMsecStart": "2017-05-17 18:38:04",
"Protocol": 6,
"SourcePort": 64394,
"DestPort": 443,
"SourceIpV4Address": "192.168.13.37",
"DestIpV4Address": "172.217.3.135",
"inputSNMPIface": 4,
"outputSNMPIface": 3,
"ASAeventTime": "2017-05-17 18:39:05",
"ASAconnID": 41834207,
"FNF_ICMPCode": 0,
"FNF_ICMPType": 0,
"ASAevent": 1,
"ASAextEvent": 0,
"ASA_XlateSourcePort": 64394,
"ASA_XlateDestPort": 443,
"ASA_V4XlateSourceAddr": "72.142.18.38",
"ASA_V4XlateDestAddr": "172.217.3.135",
"ASA_IngressACL": "433a1af1a925365e00000000",
"ASA_EgressACL": "000000000000000000000000",
"ASA_UserName20": "",
"MessageSourceAddress": "192.168.12.1",
"EventReceivedTime": "2017-05-17 18:36:32",
"SourceModuleName": "udpin",
"SourceModuleType": "im_udp"
}
{
"Version": 9,
"SysUpTimeMilisec": 2374222958,
"ExportTime": "2017-05-17 18:39:05",
"TimeMsecStart": "2017-05-17 18:38:04",
"Protocol": 17,
"SourcePort": 65080,
"DestPort": 443,
"SourceIpV4Address": "192.168.13.37",
"DestIpV4Address": "216.58.216.206",
"inputSNMPIface": 4,
"outputSNMPIface": 3,
"ASAeventTime": "2017-05-17 18:39:05",
"ASAconnID": 41834203,
"FNF_ICMPCode": 0,
"FNF_ICMPType": 0,
"ASAevent": 1,
"ASAextEvent": 0,
"ASA_XlateSourcePort": 65080,
"ASA_XlateDestPort": 443,
"ASA_V4XlateSourceAddr": "72.142.18.38",
"ASA_V4XlateDestAddr": "216.58.216.206",
"ASA_IngressACL": "433a1af1a925365e00000000",
"ASA_EgressACL": "000000000000000000000000",
"ASA_UserName20": "",
"MessageSourceAddress": "192.168.12.1",
"EventReceivedTime": "2017-05-17 18:36:32",
"SourceModuleName": "udpin",
"SourceModuleType": "im_udp"
}
3. Create an access list matching the traffic that needs to be logged. Replace ACL_NAME with a name for the
access list. Replace PROTOCOL, SOURCE_IP, and DESTINATION_IP with appropriate values corresponding to
the traffic to be matched.
# access-list ACL_NAME extended permit PROTOCOL SOURCE_IP DESTINATION_IP
4. Create a class map with the access list. Replace ACL_NAME with the access list name used in the previous
step.
# class-map global-class
# match access-list ACL_NAME
5. Add NetFlow destination to global policy. Replace IP_ADDRESS with the address that the NXLog agent is
listening on.
# policy-map global_policy
# class global-class
# flow-export event-type all destination IP_ADDRESS
These commands enable NetFlow logging of all traffic to 192.168.6.143 via UDP port 2162.
3. Click [ Add ] and specify the interface, remote IP address, and port that the NXLog agent is listening on.
6. Select Source and Destination IP Address (uses ACL) and click [ Next ].
7. Specify the source and destination criteria. The example below matches all traffic.
8. Go to the NetFlow tab and add the NetFlow destination created during the first step. Make sure the Send
option is selected.
Chapter 50. Cisco FireSIGHT
Cisco FireSIGHT is a suite of network security and traffic management products.
NXLog can be set up to collect Cisco FireSIGHT events using the Cisco Event Streamer (eStreamer) API. This
functionality is implemented as an add-on; for more information, see the Cisco FireSIGHT eStreamer add-on
documentation.
Chapter 51. Cisco IPS
Cisco IPS devices monitor and prevent intrusions by analyzing, detecting, and blocking threats.
NXLog can be set up to collect Cisco IPS alerts with the Security Device Event Exchange (SDEE) API. This
functionality is implemented as an add-on; for more information, see the Cisco Intrusion Prevention Systems
(CIDEE) add-on documentation.
Chapter 52. Cloud Instance Metadata
Cloud providers often allow retrieval of metadata about a virtual machine directly from the instance. NXLog can
be configured to enrich the log data with this information, which may include details such as instance ID and
type, hostname, and currently used public IP address.
The examples below use the xm_python module and Python scripts for this purpose. Each of the scripts depends
on the requests module which can be installed by running pip install requests or with the system’s
package manager (for example, apt install python-requests on Debian-based systems).
In this example, NXLog reads from a generic file with im_file. In the Output block, the xm_python
python_call() procedure is used to execute the get_attribute() Python function, which adds one or more
metadata fields to the event record. The output is then converted to JSON format and written to a file.
This configuration applies to each of the cloud providers listed in the following sections; only the corresponding Python code differs according to the provider.
nxlog.conf
1 <Extension python>
2 Module xm_python
3 PythonCode metadata.py
4 </Extension>
5
6 <Extension json>
7 Module xm_json
8 </Extension>
9
10 <Input in>
11 Module im_file
12 File '/var/log/input'
13 </Input>
14
15 <Output out>
16 Module om_file
17 File '/tmp/output'
18 <Exec>
19 # Call Python function; this will add one or more fields to the event
20 python_call('get_attribute');
21
22 # Save contents of $raw_event field in $Message prior to JSON conversion
23 $Message = $raw_event;
24
25 # Save all fields in event record to $raw_event field in JSON format
26 $raw_event = to_json();
27 </Exec>
28 </Output>
$ curl http://169.254.169.254/
See the Instance Metadata and User Data documentation for more information about retrieving metadata from
the AWS EC2 service.
Example 221. Using a Python Script to Retrieve EC2 Metadata
The following Python script, which can be used with the xm_python module, collects the instance ID from
the EC2 metadata service and adds a field to the event record.
metadata.py (truncated)
import nxlog, requests
def request_metadata(item):
"""Gets value of metadata attribute 'item', returns text string"""
# Set metadata URL
metaurl = 'http://169.254.169.254/latest/meta-data/{0}'.format(item)
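To show how such a script might be completed, here is a self-contained sketch. It is not the full script from this guide: the `EC2InstanceID` field name and the `metadata_url()` helper are illustrative, the standard-library `urllib.request` is used in place of `requests`, and the `nxlog` module is only importable when the script runs inside NXLog's xm_python interpreter.

```python
import urllib.request

try:
    import nxlog  # only available inside NXLog's xm_python interpreter
except ImportError:
    nxlog = None

METADATA_BASE = 'http://169.254.169.254/latest/meta-data/'

def metadata_url(item):
    """Builds the EC2 metadata URL for attribute 'item'."""
    return METADATA_BASE + item

def request_metadata(item):
    """Gets the value of metadata attribute 'item'; returns a text string."""
    with urllib.request.urlopen(metadata_url(item), timeout=2) as resp:
        return resp.read().decode('utf-8')

def get_attribute(event):
    """Called from NXLog via python_call('get_attribute'); adds the
    instance ID to the event record (assumes the xm_python event object
    provides a set_field() method)."""
    event.set_field('EC2InstanceID', request_metadata('instance-id'))
```

The short timeout keeps NXLog from stalling if the script is run on a host where the metadata service is unreachable.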
$ curl -H "Metadata:true" \
"http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-08-01&format=text"
See the Azure Instance Metadata service for more information about retrieving the metadata of an Azure
instance.
Example 222. Using a Python Script to Retrieve Azure VM Metadata
The following Python script, which can be used with the xm_python module, collects the metadata
attributes from the Azure Instance Metadata Service API and adds a field to the event record for each.
metadata.py (truncated)
import json, nxlog, requests
def request_metadata():
"""Gets all metadata values for compute instance, returns dict"""
# Set metadata URL
metaurl = 'http://169.254.169.254/metadata/instance/compute?api-version=2017-08-01'
# Set header required to retrieve metadata
metaheader = {'Metadata':'true'}
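As an illustration of the per-attribute field handling that the truncated script performs, the following sketch converts the JSON dictionary returned by the compute endpoint into flat field names. The `flatten_metadata()` helper and the `Azure` field-name prefix are illustrative choices, not part of the original script.

```python
import json

def flatten_metadata(metadata, prefix='Azure'):
    """Converts the dict returned by the compute metadata endpoint into a
    flat {field_name: value} mapping, one entry per attribute."""
    fields = {}
    for key, value in metadata.items():
        # Capitalize the first letter and prefix it, e.g. vmId -> AzureVmId
        fields[prefix + key[:1].upper() + key[1:]] = str(value)
    return fields

# A trimmed example of the JSON returned by the endpoint
sample = json.loads('{"vmId": "5c08b38e-0000-0000-0000-000000000000", '
                    '"location": "westus"}')
fields = flatten_metadata(sample)
assert fields['AzureLocation'] == 'westus'
```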
See Storing and Retrieving Instance Metadata for more information about retrieving metadata from the Google
Compute Engine.
Example 223. Using a Python Script to Retrieve GCE Instance Metadata
The following Python script, which can be used with the xm_python module, collects the instance ID from
the GCE metadata server and adds a field to the event record.
metadata.py (truncated)
import nxlog, requests
def request_metadata(item):
"""Gets value of metadata attribute 'item', returns text string"""
# Set metadata URL
metaurl = 'http://metadata.google.internal/computeMetadata/v1/instance/{0}'.format(item)
# Set header required to retrieve metadata
metaheader = {'Metadata-Flavor':'Google'}
Chapter 53. Common Event Expression (CEE)
NXLog can be configured to collect or forward logs in the Common Event Expression (CEE) format. CEE was
developed by MITRE as an extension for Syslog, based on JSON. MITRE’s work on CEE was discontinued in 2013.
Log Sample
Dec 20 12:42:20 syslog-relay serveapp[1335]: @cee:
{"pri":10,"id":121,"appname":"serveapp","pid":1335,"host":"syslog-relay","time":"2011-12-
20T12:38:05.123456-05:00","action":"login","domain":"app","object":"account","status":"success"}↵
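To make the @cee: cookie format concrete, the following Python sketch formats a dict of event fields as a CEE payload and parses one back. The helper names are illustrative, not part of NXLog.

```python
import json
import re

CEE_COOKIE = '@cee:'

def to_cee(fields):
    """Formats a dict of event fields as a CEE message payload."""
    return CEE_COOKIE + ' ' + json.dumps(fields)

def parse_cee(message):
    """Extracts and parses the JSON object following the @cee: cookie.
    Returns None if the message is not CEE-formatted."""
    match = re.match(r'^@cee:\s*({.+})$', message)
    if match is None:
        return None
    return json.loads(match.group(1))

event = {'action': 'login', 'status': 'success'}
assert parse_cee(to_cee(event)) == event
```

The NXLog configurations in this chapter take the same approach, matching the JSON object that follows the cookie with a regular expression before parsing it.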
Example 224. Collecting CEE Logs
With the following configuration, NXLog accepts CEE logs via TCP, parses the CEE-formatted $Message field,
and writes the logs to file in JSON format.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input in>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 <Exec>
14 parse_syslog();
15 if $Message =~ /^@cee: ({.+})$/
16 {
17 $raw_event = $1;
18 parse_json();
19 }
20 </Exec>
21 </Input>
22
23 <Output out>
24 Module om_file
25 File '/var/log/json'
26 Exec to_json();
27 </Output>
Input Sample
Oct 13 14:23:11 myserver @cee: { "purpose": "test" }↵
Output Sample
{
"EventReceivedTime": "2016-10-13 14:23:12",
"SourceModuleName": "in",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 1,
"SyslogFacility": "USER",
"SyslogSeverityValue": 5,
"SyslogSeverity": "NOTICE",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "myserver",
"EventTime": "2016-10-13 14:23:11",
"Message": "@cee: { \"purpose\": \"test\" }",
"purpose": "test"
}
Example 225. Generating CEE Logs
With this configuration, NXLog parses IETF Syslog input from file. The logs are then converted to CEE format
and forwarded via TCP. The Syslog header data and IETF Syslog Structured-Data key/value list from the
input are also included.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input in>
10 Module im_file
11 File '/var/log/ietf'
12 Exec parse_syslog();
13 </Input>
14
15 <Output out>
16 Module om_tcp
17 Host 192.168.1.1
18 Port 1514
19 Exec $Message = '@cee: ' + to_json(); to_syslog_bsd();
20 </Output>
Input Sample
<13>1 2016-10-13T14:23:11.000000-06:00 myserver - - - [NXLOG@14506 Purpose="test"] This is a
test message.↵
Output Sample
<13>Oct 13 14:23:11 myserver @cee: {"EventReceivedTime":"2016-10-13
14:23:12","SourceModuleName":"in","SourceModuleType":"im_file","SyslogFacilityValue":1,"SyslogF
acility":"USER","SyslogSeverityValue":5,"SyslogSeverity":"NOTICE","SeverityValue":2,"Severity":
"INFO","EventTime":"2016-10-13 14:23:11","Hostname":"myserver","Purpose":"test","Message":"This
is a test message."}↵
Chapter 54. Dell EqualLogic
Dell EqualLogic SAN systems are capable of sending logs to a remote Syslog destination via UDP.
In most environments, two or more EqualLogic units are configured as a single group. This allows storage
capacity to be utilized from all devices, and the configuration of RAID levels across multiple drives and hardware
platforms. In this case, Syslog configuration is performed from Group Manager and applies to all members.
For more details about configuring logging on Dell EqualLogic PS series SANs, check the "Dell PS Series
Configuration Guide" which can be downloaded from the Dell EqualLogic Support Site (a valid account is
required).
NOTE The steps below have been tested with a Dell EqualLogic PS6000 series SAN.
1. Configure NXLog for receiving Syslog messages via UDP (see the examples below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from all devices in the group.
3. Proceed with the logging configuration on EqualLogic, using either the Group Manager or the command line.
See the following sections.
The following example shows EqualLogic logs as received and processed by NXLog with the im_udp and
xm_syslog modules.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/equallogic.log"
19 Exec to_json();
20 </Output>
Output Sample
{
"MessageSourceAddress": "192.168.10.43",
"EventReceivedTime": "2017-03-18 21:12:58",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 16,
"SyslogFacility": "LOCAL0",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventTime": "2017-03-18 21:12:58",
"Hostname": "192.168.10.43",
"SourceName": "11517",
"Message": "380:netmgtd:18-Mar-2017
21:13:19.415464:rcc_util.c:1032:AUDIT:grpadmin:25.7.0:CLI: Login to account grpadmin succeeded,
using local authentication. User privilege is group-admin."
}
{
"MessageSourceAddress": "192.168.10.43",
"EventReceivedTime": "2017-03-18 20:35:31",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 16,
"SyslogFacility": "LOCAL0",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventTime": "2017-03-18 20:35:31",
"Hostname": "192.168.10.43",
"SourceName": "11470",
"Message": "88:agent:18-Mar-2017 20:35:51.833836:echoCli.c:10611:AUDIT:grpadmin:22.7.0:User
action:volume select volume1 schedule create test type once start-time 06:30PM read-write max-
keep 10 start-date 03/18/17 enable"
}
{
"MessageSourceAddress": "192.168.10.43",
"EventReceivedTime": "2017-03-18 20:38:51",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 16,
"SyslogFacility": "LOCAL0",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventTime": "2017-03-18 20:38:51",
"Hostname": "192.168.10.43",
"SourceName": "11502",
"Message": "103:agent:18-Mar-2017 20:39:12.124329:echoCli.c:10611:AUDIT:grpadmin:22.7.0:User
action:volume select volume1 snapshot delete volume1-2017-03-18-20:38:00.3.1 "
}
Example 227. Extracting Fields From the EqualLogic Logs
This configuration uses a regular expression to extract additional fields from each message.
nxlog.conf
1 <Input in_syslog_udp>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 <Exec>
6 parse_syslog();
7 if $Message =~ /(?x)^([0-9]*):([a-z]*):\d{2}-[a-zA-Z]{3}-\d{4}
8 \ \d{2}:\d{2}:\d{2}.\d{6}:([a-zA-Z.]*):[0-9]*:([a-zA-Z]*):
9 ([a-z]*):([0-9.]*):([a-zA-Z. ]*):(.*)$/
10 {
11 $EQLMsgSeq = $1;
12 $EQLMsgSrc = $2;
13 $EQLFile = $3;
14 $EQLMsgType = $4;
15 $EQLAccount = $5;
16 $EQLMsgID = $6;
17 $EQLEvent = $7;
18 $EQLMessage = $8;
19 }
20 </Exec>
21 </Input>
Output Sample
{
"MessageSourceAddress": "192.168.10.43",
"EventReceivedTime": "2017-04-15 16:55:48",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 16,
"SyslogFacility": "LOCAL0",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventTime": "2017-04-15 16:55:48",
"Hostname": "192.168.10.43",
"SourceName": "12048",
"Message": "113:agent:15-Apr-2017 16:57:09.744470:echoCli.c:10611:AUDIT:grpadmin:22.7.0:User
action:alerts select syslog priority fatal,error,warning,audit",
"EQLMsgSeq": "113",
"EQLMsgSrc": "agent",
"EQLFile": "echoCli.c",
"EQLMsgType": "AUDIT",
"EQLAccount": "grpadmin",
"EQLMsgID": "22.7.0",
"EQLEvent": "User action",
"EQLMessage": "alerts select syslog priority fatal,error,warning,audit"
}
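When adjusting the pattern, it can help to test it outside NXLog first. The following Python sketch applies an equivalent regular expression to one of the sample messages; the `parse_eql_message()` helper is illustrative.

```python
import re

# Equivalent to the pattern used in the NXLog Exec block above
EQL_PATTERN = re.compile(
    r'^([0-9]*):([a-z]*):\d{2}-[a-zA-Z]{3}-\d{4}'
    r' \d{2}:\d{2}:\d{2}\.\d{6}:([a-zA-Z.]*):[0-9]*:([a-zA-Z]*):'
    r'([a-z]*):([0-9.]*):([a-zA-Z. ]*):(.*)$')

def parse_eql_message(message):
    """Returns the extracted EQL* fields as a dict, or None if no match."""
    m = EQL_PATTERN.match(message)
    if m is None:
        return None
    keys = ('EQLMsgSeq', 'EQLMsgSrc', 'EQLFile', 'EQLMsgType',
            'EQLAccount', 'EQLMsgID', 'EQLEvent', 'EQLMessage')
    return dict(zip(keys, m.groups()))

sample = ('113:agent:15-Apr-2017 16:57:09.744470:echoCli.c:10611:'
          'AUDIT:grpadmin:22.7.0:User action:'
          'alerts select syslog priority fatal,error,warning,audit')
fields = parse_eql_message(sample)
assert fields['EQLMsgSrc'] == 'agent' and fields['EQLEvent'] == 'User action'
```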
2. Go to Group › Group Configuration › Notifications.
3. Under the Event Logs section, make sure the Send events to syslog servers option is checked.
4. Select the required Event priorities.
5. Click [ Add ], enter the IP address of the NXLog agent, and click [ OK ].
6. Click the [ Save all changes ] button in the top left corner.
These commands will send all logs, except for Informational level, to 192.168.6.143 via the default UDP
port 514.
Chapter 55. Dell iDRAC
Integrated Dell Remote Access Controller (iDRAC) is an interface that provides web-based or command line
access to a server’s hardware for management and monitoring purposes. This interface may be implemented as
a separate expansion card (DRAC) or be integrated into the motherboard (iDRAC). In both cases it uses resources
separate from the main server and is independent from the server’s operating system.
Different server generations come with different versions of iDRAC. For example, PowerEdge R520, R620, or R720
servers have iDRAC7, while older models such as PowerEdge 1850 or 1950 come with iDRAC5. Remote Syslog via
UDP is an option starting from iDRAC6.
NOTE An iDRAC Enterprise license is required to redirect logs to a remote Syslog destination.
For more details regarding iDRAC configuration, go to Dell Support and search for the server model or iDRAC
version.
NOTE The steps below were tested with iDRAC7 but should work for newer versions as well.
1. Configure NXLog for receiving Syslog entries via UDP (see the examples below), then restart NXLog.
2. Make sure the NXLog agent is accessible from the management interface.
3. Configure iDRAC remote Syslog logging, using the web interface or the command line. See the following
sections.
Example 229. Receiving iDRAC Logs
This example shows iDRAC logs as received and processed by NXLog, with the im_udp and xm_syslog
modules.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/idrac.log"
19 Exec to_json();
20 </Output>
Output Sample
{
"MessageSourceAddress": "192.168.5.50",
"EventReceivedTime": "2017-03-26 13:52:48",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 21,
"SyslogFacility": "LOCAL5",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventTime": "2017-03-26 13:52:48",
"Hostname": "192.168.5.50",
"SourceName": "Severity",
"Message": "Informational, Category: Audit, MessageID: USR0030, Message: Successfully logged
in using john, from 192.168.0.106 and GUI."
}
Example 230. Extracting Additional Fields From iDRAC Logs
The following configuration uses a regular expression to extract additional fields from each message.
nxlog.conf
1 <Input in_syslog_udp>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 <Exec>
6 parse_syslog();
7 if $Message =~ /(?x)^([a-zA-Z]*),\ Category:\ ([a-zA-Z]*),
8 \ MessageID:\ ([a-zA-Z0-9]*),\ Message:\ (.*)$/
9 {
10 $DracMsgLevel = $1;
11 $DracMscCategory = $2;
12 $DracMscID = $3;
13 $DracMessage = $4;
14 }
15 </Exec>
16 </Input>
Output Sample
{
"MessageSourceAddress": "192.168.5.50",
"EventReceivedTime": "2017-04-15 17:32:47",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 21,
"SyslogFacility": "LOCAL5",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventTime": "2017-04-15 17:32:47",
"Hostname": "192.168.5.50",
"SourceName": "Severity",
"Message": "Informational, Category: Audit, MessageID: USR0030, Message: Successfully logged
in using john, from 192.168.0.106 and GUI.",
"DracMsgLevel": "Informational",
"DracMscCategory": "Audit",
"DracMscID": "USR0030",
"DracMessage": "Successfully logged in using john, from 192.168.0.106 and GUI."
}
3. Select the Remote System Log option for all required alert types.
4. Click [ Apply ].
5. Go to Overview › Server › Logs › Settings.
Available categories are all, system, storage, updates, audit, config, and worknotes. Valid severity
values are critical, warning, and info.
◦ ACTION: an action for this alert. Possible values are none, powercycle, poweroff, and systemreset.
◦ NOTIFICATION: required notifications for the alert. Valid values are all or none, or a comma-separated
list including one or more of snmp, ipmi, lcd, email, and remotesyslog.
◦ NUMBER: the Syslog server number—1, 2 or 3.
The following commands disable all alert actions, enable Syslog notifications for all alerts (disabling
other notifications), and enable Syslog logging to 192.168.6.143 (UDP port 514).
Chapter 56. Dell PowerVault MD Series
PowerVault MD logs can be sent to a remote Syslog destination via UDP by using the "Event Monitor" Windows
service, which is a part of the Modular Disk Storage Manager application used to manage PowerVault. The MD
Storage Manager is a separate application which is usually installed on a management server. It connects to the
MD unit and provides a convenient graphical interface for managing the PowerVault storage.
Log Sample
Date/Time: 4/5/17 2:43:00 PM↵
Sequence number: 418209↵
Event type: 4011↵
Description: Virtual disk not on preferred path due to failover↵
Event specific codes: 0/0/0↵
Event category: Error↵
Component type: RAID Controller Module↵
Component location: Enclosure 0, Slot 0↵
Logged by: RAID Controller Module in slot 0↵
↵
Date/Time: 4/5/17 4:06:21 PM↵
Sequence number: 418233↵
Event type: 104↵
Description: Needs attention condition resolved↵
Event specific codes: 0/0/0↵
Event category: Internal↵
Component type: RAID Controller Module↵
Component location: Enclosure 0, Slot 0↵
Logged by: RAID Controller Module in slot 0↵
For more details about configuring PowerVault alerts and using MD Storage Manager, see Dell Support.
NOTE The steps below have been tested with the PowerVault MD3200 Series SAN and should work with any MD unit managed by MD Storage Manager Enterprise.
1. Configure NXLog for receiving log entries via UDP (see the examples below), then restart NXLog.
2. Confirm that the NXLog agent is accessible from the server where MD Storage Manager is installed.
3. Locate the PMServer.properties file. By default, the file can be found in C:\Program Files
(x86)\Dell\MD Storage Software\MD Storage Manager\client\data.
4. Edit the file. Set enable_local_logger to true, specify the Syslog server address, and set the facility.
Example 232. Sending Logs to 192.168.15.223
With the following directives, the MD Storage Manager will send events to 192.168.15.223 via UDP port
514.
PMServer.properties
Time_format(12/24)=12
syslog_facilty=3
DBM_files_maximum_key=20
DBM_files_minimum_key=5
syslog_receivers=192.168.15.223
DBM_recovery_interval_key=120
DBM_recovery_debounce_key=5
DBM_files_maintain_timeperiod_key=14
eventlog_source_name=StorageArray
enable_local_logger=true
syslog_tag=StorageArray
This example shows PowerVault logs as received and processed by NXLog with the im_udp and xm_syslog
modules.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/mdsan.log"
19 Exec to_json();
20 </Output>
Output Sample
{
"MessageSourceAddress": "192.168.15.231",
"EventReceivedTime": "2017-04-05 14:43:45",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 3,
"SyslogFacility": "DAEMON",
"SyslogSeverityValue": 4,
"SyslogSeverity": "WARNING",
"SeverityValue": 3,
"Severity": "WARNING",
"Hostname": "192.168.5.18",
"EventTime": "2017-04-05 14:43:00",
"SourceName": "StorageArray",
"Message": "MD3620f1;4011;Warning;Virtual disk not on preferred path due to failover"
}
{
"MessageSourceAddress": "192.168.15.231",
"EventReceivedTime": "2017-04-05 16:07:01",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 3,
"SyslogFacility": "DAEMON",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "192.168.5.18",
"EventTime": "2017-04-05 16:06:21",
"SourceName": "StorageArray",
"Message": "MD3620f1;104;Informational;Needs attention condition resolved"
}
Example 234. Extracting Additional Fields
The following configuration uses a regular expression to extract additional fields from each message.
nxlog.conf
1 <Input in_syslog_udp>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 <Exec>
6 parse_syslog();
7 if $Message =~ /^([a-zA-Z0-9]*);([0-9]*);([a-zA-Z]*);(.*)$/
8 {
9 $MDArray = $1;
10 $MDMsgID = $2;
11 $MDMsgLevel = $3;
12 $MDMessage = $4;
13 }
14 </Exec>
15 </Input>
Output Sample
{
"MessageSourceAddress": "192.168.15.231",
"EventReceivedTime": "2017-04-05 14:43:45",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 3,
"SyslogFacility": "DAEMON",
"SyslogSeverityValue": 4,
"SyslogSeverity": "WARNING",
"SeverityValue": 3,
"Severity": "WARNING",
"Hostname": "192.168.5.18",
"EventTime": "2017-04-05 14:43:00",
"SourceName": "StorageArray",
"Message": "MD3620f1;4011;Warning;Virtual disk not on preferred path due to failover",
"MDArray": "MD3620f1",
"MDMsgID": "4011",
"MDMsgLevel": "Warning",
"MDMessage": "Virtual disk not on preferred path due to failover"
}
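Because the message body is a simple semicolon-separated record, the same extraction can also be sketched in Python without a regular expression; the helper name is illustrative.

```python
def parse_md_message(message):
    """Splits a PowerVault event message into its four fields.
    Returns None if the message does not have the expected shape."""
    parts = message.split(';', 3)  # at most 3 splits -> 4 fields
    if len(parts) != 4:
        return None
    keys = ('MDArray', 'MDMsgID', 'MDMsgLevel', 'MDMessage')
    return dict(zip(keys, parts))

sample = 'MD3620f1;4011;Warning;Virtual disk not on preferred path due to failover'
fields = parse_md_message(sample)
assert fields['MDMsgLevel'] == 'Warning'
```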
Chapter 57. DHCP Logs
DHCP servers and clients both generate log activity that may need to be collected, processed, and stored. This
chapter provides information about enabling logging for some common DHCP servers and clients, as well as for
configuring NXLog to collect the DHCP logs.
By default, DHCPd logs to the daemon Syslog facility. If desired, the DHCPd log-facility configuration
statement can be used in /etc/dhcp/dhcpd.conf to write logs to a different facility. The system logger could
then be configured to handle that facility’s logs as required. Otherwise, something like the following example
should work with the default settings.
This configuration uses the im_file module to read DHCPd messages from one of the Syslog log files, and
the xm_syslog parse_syslog() procedure to parse them. Only events from the dhcpd source are kept; others
are discarded with drop().
WARNING This method will most likely not preserve severity information. See Reading Syslog Log Files for more information and the other sections in Collecting and Parsing Syslog for alternative ways to collect Syslog messages.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input dhcp_server>
6 Module im_file
7 # Debian writes `daemon` facility logs to `/var/log/daemon.log` by default
8 File '/var/log/daemon.log'
9 # RHEL writes `daemon` facility logs to `/var/log/messages` by default
10 #File '/var/log/messages'
11 <Exec>
12 parse_syslog();
13 if $SourceName != 'dhcpd' drop();
14 </Exec>
15 </Input>
Example 236. Collecting dhclient Messages
This configuration uses the im_file module to read dhclient messages from one of the Syslog log files, and
the xm_syslog parse_syslog() procedure to parse them. Only events from the dhclient source are kept;
others are discarded with drop().
WARNING This method will most likely not preserve severity information. See Reading Syslog Log Files for more information and the other sections in Collecting and Parsing Syslog for alternative ways to collect Syslog messages.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input dhcp_client>
6 Module im_file
7 # Debian writes `daemon` facility logs to `/var/log/daemon.log` by default
8 File '/var/log/daemon.log'
9 # RHEL writes `daemon` facility logs to `/var/log/messages` by default
10 #File '/var/log/messages'
11 <Exec>
12 parse_syslog();
13 if $SourceName != 'dhclient' drop();
14 </Exec>
15 </Input>
NOTE The following sections have been tested on Windows Server 2016.
The log files are named DhcpSrvLog-<DAY>.log for IPv4 and DhcpV6SrvLog-<DAY>.log for IPv6. For example,
Thursday’s log files are DhcpSrvLog-Thu.log and DhcpV6SrvLog-Thu.log.
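The weekday-based naming can be expressed in a short Python sketch. Note that `strftime('%a')` is locale-dependent, so the abbreviations below assume an English locale.

```python
from datetime import datetime

def dhcp_audit_log_names(when):
    """Returns the IPv4 and IPv6 DHCP audit log file names for a date."""
    day = when.strftime('%a')  # e.g. 'Thu' (English locale assumed)
    return ('DhcpSrvLog-{0}.log'.format(day),
            'DhcpV6SrvLog-{0}.log'.format(day))

# 2017-05-18 fell on a Thursday
assert dhcp_audit_log_names(datetime(2017, 5, 18)) == (
    'DhcpSrvLog-Thu.log', 'DhcpV6SrvLog-Thu.log')
```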
The DHCP audit log can be configured with PowerShell or the DHCP Management MMC snap-in.
> Get-DhcpServerAuditLog
Path : C:\Windows\system32\dhcp
Enable : True
MaxMBFileSize : 70
DiskCheckInterval : 50
MinMBDiskSpace : 20
2. To set the audit log configuration, run this command (see Set-DhcpServerAuditLog on Microsoft Docs).
3. The DHCP server must be restarted for the configuration changes to take effect.
1. Run the DHCP MMC snap-in (dhcpmgmt.msc), expand the server for which to configure logging, and click on
IPv4.
2. Right-click on IPv4 and click Properties. Note that the context menu is not fully populated until after the
IPv4 menu has been expanded at least once.
3. Make sure Enable DHCP audit logging is checked.
4. Open the Advanced tab, change the Audit log file path, and click [ OK ].
5. Restart the DHCP server by right-clicking the server and clicking All Tasks › Restart.
This configuration uses a short batch/PowerShell polyglot script with the include_stdout directive to fetch
the DHCP audit log location. The im_file module reads from the files and the xm_csv module parses the
lines into fields. Any line that does not match the /^\d+,/ regular expression is discarded with the drop()
procedure (all the header lines are dropped). The event ID and QResult codes are resolved automatically,
with corresponding $Message and $QMessage fields added where applicable.
NOTE If DHCP audit logging is disabled, the script will print an error and NXLog will abort during the configuration check.
nxlog.conf (truncated)
1 <Extension dhcp_csv_parser>
2 Module xm_csv
3 Fields ID, Date, Time, Description, IPAddress, Hostname, MACAddress, \
4 UserName, TransactionID, QResult, ProbationTime, CorrelationID, \
5 DHCID, VendorClassHex, VendorClassASCII, UserClassHex, \
6 UserClassASCII, RelayAgentInformation, DnsRegError
7 </Extension>
8
9 <Extension dhcpv6_csv_parser>
10 Module xm_csv
11 Fields ID, Date, Time, Description, IPv6Address, Hostname, ErrorCode, \
12 DuidLength, DuidBytesHex, UserName, Dhcid, SubnetPrefix
13 </Extension>
14
15 <Input dhcp_server_audit>
16 Module im_file
17 include_stdout %CONFDIR%\dhcp_server_audit_include.cmd
18 <Exec>
19 # Only process lines that begin with an event ID
20 if $raw_event =~ /^\d+,/
21 {
22 $FileName = file_name();
23 if $FileName =~ /DhcpSrvLog-/
24 {
25 dhcp_csv_parser->parse_csv();
26 $QResult = integer($QResult);
27 if $QResult == 0 $QMessage = "NoQuarantine";
28 else if $QResult == 1 $QMessage = "Quarantine";
29 [...]
dhcp_server_audit_include.cmd
@( Set "_= (
REM " ) <#
)
@Echo Off
SetLocal EnableExtensions DisableDelayedExpansion
powershell.exe -ExecutionPolicy Bypass -NoProfile ^
-Command "iex ((gc '%~f0') -join [char]10)"
EndLocal & Exit /B %ErrorLevel%
#>
$AuditLog = Get-DhcpServerAuditLog
if ($AuditLog.Enable) {
Write-Output "File '$($AuditLog.Path)\Dhcp*SrvLog-*.log'"
}
else {
[Console]::Error.WriteLine(@"
DHCP audit logging is disabled. To enable, run in PowerShell:
> Set-DhcpServerAuditLog -Enable $True
"@)
exit 1
}
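The QResult handling elided from the configuration above is a plain code-to-text lookup. The following Python sketch reproduces the same mapping, using the QResult codes documented in the DHCP audit log file header; the helper name is illustrative.

```python
# QResult codes as documented in the Windows DHCP audit log header
QRESULT_MESSAGES = {
    0: 'NoQuarantine',
    1: 'Quarantine',
    2: 'Drop Packet',
    3: 'Probation',
    6: 'No Quarantine Information',
}

def qresult_message(qresult):
    """Returns the text for a numeric QResult code, or None if unknown."""
    return QRESULT_MESSAGES.get(int(qresult))

assert qresult_message('0') == 'NoQuarantine'
```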
Alternatively, the following PowerShell script checks all three logs and enables any that are disabled.
$LogNames = @("DhcpAdminEvents",
"Microsoft-Windows-Dhcp-Server/FilterNotifications",
"Microsoft-Windows-Dhcp-Server/Operational")
ForEach ($LogName in $LogNames) {
$EventLog = Get-WinEvent -ListLog $LogName
if ($EventLog.IsEnabled) {
Write-Host "Already enabled: $LogName"
}
else {
Write-Host "Enabling: $LogName"
$EventLog.IsEnabled = $true
$EventLog.SaveChanges()
}
}
This configuration uses the im_msvistalog module to collect DHCP Server events from the EventLog
DhcpAdminEvents, FilterNotifications, and Operational logs.
nxlog.conf
1 <Input dhcp_server_eventlog>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="DhcpAdminEvents">*</Select>
7 <Select Path="Microsoft-Windows-Dhcp-Server/FilterNotifications">
8 *</Select>
9 <Select Path="Microsoft-Windows-Dhcp-Server/Operational">*</Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 </Input>
Alternatively, the following PowerShell script checks all four logs and enables any that are disabled.
$LogNames = @("Microsoft-Windows-Dhcp-Client/Admin",
"Microsoft-Windows-Dhcp-Client/Operational",
"Microsoft-Windows-Dhcpv6-Client/Admin",
"Microsoft-Windows-Dhcpv6-Client/Operational")
ForEach ($LogName in $LogNames) {
$EventLog = Get-WinEvent -ListLog $LogName
if ($EventLog.IsEnabled) {
Write-Host "Already enabled: $LogName"
}
else {
Write-Host "Enabling: $LogName"
$EventLog.IsEnabled = $true
$EventLog.SaveChanges()
}
}
This configuration collects events from the IPv4 and IPv6 Admin and Operational DHCP client logs using
the im_msvistalog module.
nxlog.conf
1 <Input dhcp_client_eventlog>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Microsoft-Windows-Dhcp-Client/Admin">*</Select>
7 <Select Path="Microsoft-Windows-Dhcp-Client/Operational">*</Select>
8 <Select Path="Microsoft-Windows-Dhcpv6-Client/Admin">*</Select>
9 <Select Path="Microsoft-Windows-Dhcpv6-Client/Operational">*</Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 </Input>
Chapter 58. DNS Monitoring
Monitoring and proactively analyzing Domain Name Server (DNS) queries and responses has become a standard
security practice for networks of all sizes. Many types of malware rely on DNS traffic to communicate with
command-and-control servers, inject ads, redirect traffic, or transport data.
TIP DNS traffic can quickly become overwhelming. To save resources, consider discarding any fields that will not be required for analysis.
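For example, fields can be discarded at collection time with the NXLog delete() procedure. The sketch below attaches this to the Microsoft-Windows-DNSServer ETW provider used later in this chapter; the field names chosen for deletion are illustrative examples taken from the event samples shown below.

```
<Input dns_etw>
    Module   im_etw
    Provider Microsoft-Windows-DNSServer
    # Discard fields that will not be used for analysis
    # ($PacketData and $BufferSize are example field names)
    Exec     delete($PacketData); delete($BufferSize);
</Input>
```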
According to RFC 7626, no country has privacy laws that specifically govern DNS data collection. However, it is not clear whether data protection directive 95/46/EC of the European Union covers DNS traffic collection.
DNS events are available from a number of sources. DNS queries and responses are commonly sent and received in the form of packets over UDP. These packets, and the ability to capture them passively, are essentially the same across all operating systems.
Another common source is the DNS server itself as it receives queries from clients, processes them, and returns the results. Although the DNS protocol is a common standard, the logging facilities implemented in each DNS server can vary greatly across different operating systems. BIND 9 generates flat log files, while Windows DNS Server employs Event Tracing for Windows (ETW) for managing its DNS events.
Although Windows DNS Server has two event tracing channels named Audit and Analytical, the advantage gained
from classifying DNS events into these two categories, and treating them separately, is by no means proprietary
and can be applied to other DNS server environments.
A DNS server is basically a highly specialized database server, yet it still retains the same low-level CRUD (Create,
Read, Update, Delete) functionality of any other database. Analytical logging is focused primarily on client
queries, the read operations, while DNS Audit Logging is focused on the remaining CRUD operations: creating,
updating, and deleting DNS zone information. These are the most important operations to monitor from a
security perspective since unauthorized access to them can lead to interruption of network services, data loss,
and outages of other infrastructure services.
The goal of DNS Audit logging is to maintain an audit trail of any changes to the DNS Server’s configuration,
mainly for security purposes, while providing timely notification and easy access to any high severity events. By
logging changes to any of the more than 40 DNS resource record (RR) types in zone files, security analysts will
have the forensic information they need, should DNS records be maliciously or accidentally modified.
The realm of DNS Analytical Logging is completely different. The volume of data collected can be huge and the
events being analyzed are typically not time-sensitive. The bulk of these DNS queries can be useful for producing
metrics on user and application network traffic to various internal and external sites and services.
In the following two sections, the methods used to collect audit and analytical log data may differ greatly, but the
goal of managing them separately remains the same.
58.2. BIND 9
The BIND 9 DNS server is commonly used on Unix-like operating systems. It can act as both an authoritative
name server and a recursive resolver.
In addition to collecting BIND 9 logs, consider implementing File Integrity Monitoring or DNS Audit Logging for
the BIND 9 configuration files.
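As a sketch of the File Integrity Monitoring approach, assuming the im_fim module of NXLog Enterprise Edition and its File and ScanInterval directives:

```
<Input bind_fim>
    Module       im_fim
    # Watch the BIND 9 configuration files for changes
    File         "/etc/bind/named.conf"
    File         "/etc/bind/named.conf.options"
    ScanInterval 3600
</Input>
```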
This configuration logs all messages, of info severity or greater, to the local Syslog daemon. The queries
category is specified explicitly, because query logging is otherwise disabled by default. The print-* options
enable the inclusion of various metadata in the log messages—this metadata can later be parsed by NXLog.
named.conf (truncated)
logging {
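The named.conf fragment above is truncated. A sketch of a logging stanza matching the behavior described (syslog destination with the daemon facility, info severity, explicit queries category, and the print-* options), with directive names per BIND 9 documentation:

```
logging {
    channel syslog_chan {
        syslog daemon;
        severity info;
        print-time yes;
        print-category yes;
        print-severity yes;
    };
    category default { syslog_chan; };
    category queries { syslog_chan; };
};
```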
Log Format
<syslog-header> <date> <time> <category>: <severity>: <message>
Log Sample
<30>Apr 29 22:30:15 debian named[16373]: 29-Apr-2019 22:30:15.371 general: info: managed-keys-
zone: Key 20326 for zone . acceptance timer complete: key now trusted↵
<30>Apr 29 22:30:15 debian named[16373]: 29-Apr-2019 22:30:15.372 resolver: info: resolver
priming query complete↵
<30>Apr 29 22:30:20 debian named[16373]: 29-Apr-2019 22:30:20.770 queries: info: client
@0x7f9b6810ed50 10.80.0.1#44663 (google.com): query: google.com IN A +E(0) (10.80.1.88)↵
Example 241. Logging to File
BIND can be configured to write log messages to a file. This configuration also shows how a particular
category can be disabled.
named.conf (truncated)
logging {
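This named.conf fragment is also truncated. A sketch matching the described behavior might look like the following; the log file path is taken from the NXLog file-reading example below, and lame-servers was chosen arbitrarily as the disabled category:

```
logging {
    channel file_chan {
        file "/var/log/bind.log";
        severity info;
        print-time yes;
        print-category yes;
        print-severity yes;
    };
    category default { file_chan; };
    category queries { file_chan; };
    # Disable a category by routing it to the built-in null channel
    category lame-servers { null; };
};
```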
The resulting log format is the same as in the previous example, but without the Syslog header.
Log Sample
01-May-2019 00:26:56.579 general: info: managed-keys-zone: Key 20326 for zone . acceptance
timer complete: key now trusted↵
01-May-2019 00:26:56.617 resolver: info: resolver priming query complete↵
01-May-2019 00:27:48.084 queries: info: client @0x7f82bc11d4e0 10.80.0.1#53995 (google.com):
query: google.com IN A +E(0) (10.80.1.88)↵
The configuration below parses each message in three stages: the Syslog header; the BIND 9 metadata (timestamp, category, and severity); and any category-specific syntax (such as for the queries category below). Additional parsing can be implemented, if required, for any other category that uses a consistent format.
NOTE The following examples have been tested with BIND 9.10 and 9.11.
This configuration uses the im_uds module to accept local Syslog messages. BIND 9 should be configured
to log messages via Syslog as shown in Logging All Categories via Syslog above.
nxlog.conf (truncated)
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input syslog>
6 Module im_uds
7 UDS /dev/log
8 <Exec>
9 # 1. Parse Syslog header
10 parse_syslog_bsd();
11
12 # 2. Parse BIND 9 metadata
13 if $Message =~ /(?x)^(?<EventTime>\S+\s\S+)\s(?<Category>\S+):\s
14 (?<BINDSeverity>[^:]+):\s(?<Message>.+)$/i
15 {
16 $EventTime = parsedate($EventTime);
17
18 # 3. Parse messages from the queries category
19 if $Category == "queries"
20 {
21 $Message =~ /(?x)^client\s((?<ClientID>\S+)\s)?(?<Client>\S+)\s
22 \((?<OriginalQuery>\S+)\):\squery:\s
23 (?<QueryName>\S+)\s(?<QueryClass>\S+)\s
24 (?<QueryType>\S+)\s(?<QueryFlags>\S+)\s
25 \((?<LocalAddress>\S+)\)$/;
26 }
27
28 # Parse messages from another category
29 [...]
Event Sample
{
"EventReceivedTime": "2019-04-29T22:30:20.856069+01:00",
"SourceModuleName": "syslog",
"SourceModuleType": "im_uds",
"SyslogFacilityValue": 3,
"SyslogFacility": "DAEMON",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "debian",
"EventTime": "2019-04-29T22:30:20.770000+01:00",
"SourceName": "named",
"ProcessID": "16373",
"Message": "client @0x7f9b6810ed50 10.80.0.1#44663 (google.com): query: google.com IN A +E(0)
(10.80.1.88)",
"BINDSeverity": "info",
"Category": "queries",
"Client": "10.80.0.1#44663",
"ClientID": "@0x7f9b6810ed50",
"LocalAddress": "10.80.1.88",
"OriginalQuery": "google.com",
"QueryClass": "IN",
"QueryFlags": "+E(0)",
"QueryName": "google.com",
"QueryType": "A"
}
Example 243. Collecting BIND 9 Logs From File
This configuration uses the im_file module to read messages from the BIND 9 log file. BIND 9 should be
configured as shown in Logging to File above. The parsing here is very similar to the previous example, but
without Syslog header parsing.
nxlog.conf
1 <Input file>
2 Module im_file
3 File '/var/log/bind.log'
4 <Exec>
5 if $raw_event =~ /(?x)^(?<EventTime>\S+\s\S+)\s(?<Category>\S+):\s
6 (?<Severity>[^:]+):\s(?<Message>.+)$/i
7 {
8 $EventTime = parsedate($EventTime);
9 if $Category == "queries"
10 {
11 $Message =~ /(?x)^client\s((?<ClientID>\S+)\s)?(?<Client>\S+)\s
12 \((?<OriginalQuery>\S+)\):\squery:\s
13 (?<QueryName>\S+)\s(?<QueryClass>\S+)\s
14 (?<QueryType>\S+)\s(?<QueryFlags>\S+)\s
15 \((?<LocalAddress>\S+)\)$/;
16 }
17 }
18 </Exec>
19 </Input>
This configuration uses the im_linuxaudit module to watch the BIND 9 configuration file
/etc/bind/named.conf for modifications and tags the events with conf-change-bind. Read more about
Audit Rules.
nxlog.conf
1 <Input audit>
2 Module im_linuxaudit
3 FlowControl FALSE
4 <Rules>
5 # Delete all rules (This rule has no effect; it is performed
6 # automatically by im_linuxaudit)
7 -D
8
9 # Watch /etc/bind/named.conf for modifications and tag 'conf-change-bind'
10 -w /etc/bind/named.conf -p wa -k conf-change-bind
11
12 # Generate a log entry when the system time is changed
13 -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k system_time
14
15 # Lock Audit rules until reboot
16 -e 2
17 </Rules>
18 </Input>
Event Sample of Audit Trail
{
"type": "SYSCALL",
"time": "2020-02-21T02:20:32.365000+01:00",
"seq": 165,
"arch": "c000003e",
"syscall": "257",
"success": "yes",
"exit": 3,
"a0": "ffffff9c",
"a1": "563b82d382a0",
"a2": "441",
"a3": "1b6",
"items": 2,
"ppid": 1739,
"pid": 1740,
"auid": 1000,
"uid": 0,
"gid": 0,
"euid": 0,
"suid": 0,
"fsuid": 0,
"egid": 0,
"sgid": 0,
"fsgid": 0,
"tty": "pts2",
"ses": "2",
"comm": "nano",
"exe": "/bin/nano",
"subj": "=unconfined",
"key": "conf-change-bind",
"EventReceivedTime": "2020-02-21T02:20:32.373192+01:00",
"SourceModuleName": "audit",
"SourceModuleType": "im_linuxaudit"
}
The following table maps some of the key features and attributes unique to each NXLog logging facility available
for Windows DNS monitoring.
DNS Logging or Tracing Type | Provider or Channel | Module(s) | Feature(s) | Requirements
Audit and Analytical (Tracing) | Microsoft-Windows-DNSServer | im_etw | Preferred method. Native DNS Server auditing. Best choice for Analytical logs. | Server versions 2012 R2 and later
1. Windows DNS Server Audit Events are enabled by default. An audit event is logged whenever the DNS
server settings, zones, or resource records are changed. Such DNS events are of utmost importance for
security audits. Each of the 53 types of audit events is identified by a unique EventID, which is documented in the Audit events table of Microsoft's documentation. The Type column in this table contains a short description of the event; however, it is not included in the actual logged event. For example, if a new record is created, it will not be possible to search for an event containing Record create; instead, only EventID: 515 is available for identifying this type of event.
2. Windows DNS Server Analytical Events must be specifically enabled. They represent the bulk of DNS
events—primarily lookups and other queries—and can be quite large in volume. The Analytic events table of
Microsoft’s documentation lists each of the 23 types of events that are monitored. Just like with Audit Events,
Windows logs the EventID, but not the more descriptive Type field. According to the Audit and analytic event
logging section of Microsoft’s documentation, when processing 100,000 queries per second (QPS) on modern
hardware the expected reduction in performance is around 5% if Analytical Event logging is enabled.
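For reference, the Analytical channel can be enabled in Event Viewer or from an elevated command prompt with the built-in wevtutil utility; the channel name below matches the log used elsewhere in this chapter.

```
wevtutil sl "Microsoft-Windows-DNSServer/Analytical" /e:true
```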
Event tracing offers significant advantages over DNS Debug Logging in terms of architecture, flexibility, configurability, and performance. ETW events can be read directly without requiring events to be first written to disk. However, ETW is not available on older Windows systems. To maintain its performance, it is by design a "best effort" framework and consequently does not guarantee that all events will be captured.
For more information, see the Installing and enabling DNS diagnostic logging section on Microsoft Docs.
With Analytical Logging enabled, NXLog can use the im_etw module to collect DNS logs from the Microsoft-
Windows-DNSServer ETW provider. This is the preferred method for collecting logs from Windows Server versions
2012 R2 and later.
NOTE On Windows Server 2012 R2, this feature is provided by hotfix 2956577.
58.3.2.1. Examples
Example 245. Using im_etw
The following configuration collects DNS logs via ETW from the Microsoft-Windows-DNSServer provider,
using the im_etw module. The collected logs are converted to JSON and saved to a file.
nxlog.conf (truncated)
1 <Input etw>
2 Module im_etw
3 Provider Microsoft-Windows-DNSServer
4 </Input>
In this example, an Audit event has been logged. EventID: 515 identifies this as a Record create for this zone.
{
"SourceName": "Microsoft-Windows-DNSServer",
"ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
"EventId": 515,
"Version": 0,
"ChannelID": 17,
"OpcodeValue": 0,
"TaskValue": 5,
"Keywords": "4611686018428436480",
"EventTime": "2020-03-10T09:42:39.788511-07:00",
"ExecutionProcessID": 4752,
"ExecutionThreadID": 1732,
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Domain": "WIN-R4QHULN6KLH",
"AccountName": "Administrator",
"UserID": "S-1-5-21-915329490-2962477901-227355065-500",
"AccountType": "User",
"Flags": "EXTENDED_INFO|IS_64_BIT_HEADER|PROCESSOR_INDEX (577)",
"Type": "1",
"NAME": "www.example.com",
"TTL": "3600",
"BufferSize": "4",
"RDATA": "0x0A00020F",
"Zone": "example.com",
"ZoneScope": "Default",
"VirtualizationID": ".",
"EventReceivedTime": "2020-03-10T09:42:40.801598-07:00",
"SourceModuleName": "etw",
"SourceModuleType": "im_etw"
}
58.3.3. File-based DNS Debug Logging
Windows DNS Debug Logging is the only means of monitoring DNS events on Windows Server versions prior to
2012 R2. However, DNS Servers capable of ETW might be configured for file-based logging in cases where all
events must be captured without exception.
1. Open the DNS Manager console.
2. Right-click on the DNS server and choose Properties from the context menu.
3. Under the Debug Logging tab, enable Log packets for debugging.
4. Mark the check boxes corresponding to the data that should be logged.
NOTE The Details option will produce multi-line logs. To parse this detailed format, refer to Parsing Detailed DNS Logs With Regular Expressions below.
5. Set the File path and name to the desired log file location.
WARNING The Windows DNS service may not recreate the debug log file after a rollover. If you encounter this issue, be sure to use the C: drive for the debug log path. See the post, The disappearing Windows DNS debug log, on the NXLog website.
Log Sample (Standard Debug Mode)
4/21/2017 7:52:03 AM 06B0 PACKET 00000000028657F0 UDP Snd 10.2.0.1 6590 R Q [8081 DR
NOERROR] A (7)example(3)com(0)↵
See the following sections for information about parsing the logs.
WARNING This module does not support parsing of logs from DNS Debug Logging generated with the Details option enabled.
NOTE This module has been tested on Windows Server versions 2008 R2, 2012 R2, and 2016.
This configuration uses the im_file and xm_msdns modules to read and parse the log file. Output is written
to file in JSON format for this example.
nxlog.conf
1 <Extension dns_parser>
2 Module xm_msdns
3 EventLine TRUE
4 PacketLine TRUE
5 NoteLine TRUE
6 </Extension>
7
8 <Input in>
9 Module im_file
10 File 'C:\Server\dns.log'
11 InputType dns_parser
12 </Input>
Event Sample
{
"EventTime": "2017-04-21 07:52:03",
"ThreadId": "06B0",
"Context": "PACKET",
"InternalPacketIdentifier": "00000000028657F0",
"Protocol": "UDP",
"SendReceiveIndicator": "Snd",
"RemoteIP": "10.2.0.1",
"Xid": "6590",
"QueryResponseIndicator": "Response",
"Opcode": "Standard Query",
"FlagsHex": "8081",
"RecursionDesired": true,
"RecursionAvailable": true,
"ResponseCode": "NOERROR",
"QuestionType": "A",
"QuestionName": "example.com",
"EventReceivedTime": "2017-04-21 7:52:03",
"SourceModuleName": "in",
"SourceModuleType": "im_file"
}
58.3.3.3. Parsing Non-Detailed Logs With Regular Expressions
While the xm_msdns module is the preferred method for parsing DNS logs (it is about three times faster), regular expressions can also be used.
WARNING This example does not parse logs from DNS Debug Logging generated with the Details option enabled.
NOTE This has been tested on Windows Server versions 2008 R2, 2012 R2, and 2016.
This example parses the log files generated by DNS Debug Logging and then writes the output to file in
JSON format.
nxlog.conf (truncated)
1 define EVENT_REGEX /(?x)(?<Date>\d+(?:\/\d+){2})\s \
2 (?<Time>\d+(?:\:\d+){2}\s\w+)\s \
3 (?<ThreadId>\w+)\s+ \
4 (?<Context>\w+)\s+ \
5 (?<InternalPacketIdentifier>[[:xdigit:]]+)\s+ \
6 (?<Protocol>\w+)\s+ \
7 (?<SendReceiveIndicator>\w+)\s \
8 (?<RemoteIP>[[:xdigit:].:]+)\s+ \
9 (?<Xid>[[:xdigit:]]+)\s \
10 (?<QueryType>\s|R)\s \
11 (?<Opcode>[A-Z]|\?)\s \
12 (?<QFlags>\[(.*?)\])\s+ \
13 (?<QuestionType>\w)\s+ \
14 (?<QuestionName>.*)/
15 define EMPTY_EVENT_REGEX /(^$|^\s+$)/
16 define DOMAIN_REGEX /\(\d+\)([\w-]+)\(\d+\)([\w-]+)/
17 define SUBDOMAIN_REGEX /\(\d+\)([\w-]+)\(\d+\)([\w-]+)\(\d+\)(\w+)/
18 define NOT_STARTING_WITH_DATE_REGEX /^(?!\d+\/\d+\/\d+).+/
19 define QFLAGS_REGEX /(?x)(?<FlagsHex>\d+)\s+ \
20 (?<FlagsCharCodes>\s+|([A-Z]{2}|[A-Z]))\s+ \
21 (?<ResponseCode>\w+)/
22
23 <Extension _json>
24 Module xm_json
25 </Extension>
26
27 <Input in>
28 Module im_file
29 [...]
Output Sample
{
"EventReceivedTime": "2017-04-21 07:52:16",
"SourceModuleName": "in",
"SourceModuleType": "im_file",
"Context": "PACKET",
"InternalPacketIdentifier": "00000000028657F0",
"Opcode": "Q",
"Protocol": "UDP",
"QueryType": "response",
"QuestionName": "notabilus.com",
"QuestionType": "A",
"RemoteIP": "10.2.0.1",
"SendReceiveIndicator": "Snd",
"ThreadId": "06B0",
"Xid": "6590",
"Regular": true,
"EventTime": "2017-04-21 07:52:03",
"Raw": "4/21/2017 7:52:03 AM 06B0 PACKET 00000000028657F0 UDP Snd 10.2.0.1 6590 R Q
[8081 DR NOERROR] A (9)notabilus(3)com(0)",
"FlagsCharCodes": "DR",
"FlagsHex": "8081",
"ResponseCode": "NOERROR"
}
In this example, the xm_multiline module joins lines that belong to the same event by using a regular expression to match the header line. A second regular expression then parses the content into fields.
Input Sample
6/1/2017 8:33:36 PM 09B8 PACKET 0000022041EED460 UDP Rcv 192.168.56.1 edaa Q [2001 D
NOERROR] A (6)google(3)com(0)↵
UDP question info at 0000022041EED460↵
Socket = 680↵
Remote addr 192.168.56.1, port 48210↵
Time Query=6941, Queued=0, Expire=0↵
Buf length = 0x0fa0 (4000)↵
Msg length = 0x0027 (39)↵
Message:↵
XID 0xedaa↵
Flags 0x0120↵
QR 0 (QUESTION)↵
OPCODE 0 (QUERY)↵
AA 0↵
TC 0↵
RD 1↵
RA 0↵
Z 0↵
CD 0↵
AD 1↵
RCODE 0 (NOERROR)↵
nxlog.conf (truncated)
1 define EVENT_REGEX /(?x)(?<Date>\d+(?:\/\d+){2})\s \
2 (?<Time>\d+(?:\:\d+){2}\s\w+)\s \
3 (?<ThreadId>\w+)\s+ \
4 (?<Context>\w+)\s+ \
5 (?<InternalPacketIdentifier>[[:xdigit:]]+)\s+ \
6 (?<Protocol>\w+)\s+ \
7 (?<SendReceiveIndicator>\w+)\s \
8 (?<RemoteIP>[[:xdigit:].:]+)\s+ \
9 (?<Xid>[[:xdigit:]]+)\s \
10 (?<QueryType>\s|R)\s \
11 (?<Opcode>[A-Z]|\?)\s \
12 (?<QFlags>\[(.*?)\])\s+ \
13 (?<QuestionType>\w+)\s+ \
14 (?<QuestionName>.*)\s+ \
15 (?<LogInfo>.+)\s+.+=\s \
16 (?<Socket>\d+)\s+ Remote\s+ addr\s \
17 (?<RemoteAddr>.+),\sport\s \
18 (?<PortNum>\d+)\s+Time\sQuery= \
19 (?<TimeQuery>\d+),\sQueued= \
20 (?<Queued>\d+),\sExpire= \
21 (?<Expire>\d+)\s+.+\( \
22 (?<BufLen>\d+)\)\s+.+\( \
23 (?<MsgLen>\d+)\)\s+Message:\s+ \
24 (?<Message>(?s).*)/
25
26 define HEADER_REGEX /(?x)(?<Date>\d+(?:\/\d+){2})\s \
27 (?<Time>\d+(?:\:\d+){2}\s\w+)\s \
28 (?<ThreadId>\w+)\s+ \
29 [...]
Output Sample
{
"EventReceivedTime": "2018-11-30T04:33:38.660127+01:00",
"SourceModuleName": "filein",
"SourceModuleType": "im_file",
"BufLen": "512",
"Context": "PACKET",
"Expire": "0",
"InternalPacketIdentifier": "000000D58F45A560",
"LogInfo": "UDP response info at 000000D58F45A560",
"Message": "XID 0x000d\r\n Flags 0x8180\r\n QR 1 (RESPONSE)\r\n
OPCODE 0 (QUERY)\r\n AA 0\r\n TC 0\r\n RD 1\r\n RA
1\r\n Z 0\r\n CD 0\r\n AD 0\r\n RCODE 0
(NOERROR)\r\n QCOUNT 1\r\n ACOUNT 1\r\n NSCOUNT 0\r\n ARCOUNT 0\r\n
QUESTION SECTION:\r\n Offset = 0x000c, RR count = 0\r\n Name \"
(6)google(3)com(0)\"\r\n QTYPE AAAA (28)\r\n QCLASS 1\r\n ANSWER SECTION:\r\n
Offset = 0x001c, RR count = 0\r\n Name \"[C00C](6)google(3)com(0)\"\r\n TYPE
AAAA (28)\r\n CLASS 1\r\n TTL 26\r\n DLEN 16\r\n DATA
2a00:1450:400d:805::200e\r\n AUTHORITY SECTION:\r\n empty\r\n ADDITIONAL
SECTION:\r\n empty\r\n",
"MsgLen": "56",
"Opcode": "Q",
"PortNum": "60010",
"Protocol": "UDP",
"QFlags": "[8081 DR NOERROR]",
"QueryType": "R",
"QuestionName": "(6)google(3)com(0)",
"QuestionType": "AAAA",
"Queued": "0",
"RemoteAddr": "::1",
"RemoteIP": "::1",
"SendReceiveIndicator": "Snd",
"Socket": "512",
"ThreadId": "044C",
"TimeQuery": "12131",
"Xid": "000d",
"EventTime": "2018-11-30T04:32:43.000000+01:00"
}
The DNS event log collection supported by Sysmon is not comparable to other types of DNS monitoring like DNS
Server Audit and Analytical logging or DNS Server Debug Logging. In fact, Sysmon DNS Query logging provides
only DNS client query logging, but the information it provides complements the information from DNS Server
Analytical logs by adding the name and path of the application which is querying the DNS Server. It can monitor
the DNS queries executed by practically any Windows client software that is network-enabled, for instance web
browsers, FileZilla, WinSCP, ping, tracert, etc. It should be noted that direct DNS lookups using nslookup are
not logged by Sysmon’s DNS Query logging.
config-dnsquery.xml
<Sysmon schemaversion="4.22">
<EventFiltering>
<DnsQuery onmatch="exclude"/>
</EventFiltering>
</Sysmon>
With the XML configuration file config-dnsquery.xml located in the same directory as Sysmon.exe, running the following command will apply the new configuration:
Once the configuration file has been applied, it can be confirmed by issuing the same command with the -c
option, but without any file specified:
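Hedged sketches of those two commands, assuming Sysmon.exe and the configuration file are in the current directory of an elevated prompt:

```
Sysmon.exe -c config-dnsquery.xml
Sysmon.exe -c
```

The first command applies the configuration; the second prints the currently active configuration for confirmation.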
NOTE A good resource for configuring Sysmon to perform DNS monitoring can be found in this document on GitHub: sysmonconfig-export.xml. Despite being in XML, the DNS section starting at line 835 is quite readable. Lines 871-1063 provide a complete RuleGroup example of how to filter 180 domains to reduce noise from ads and other common sources of DNS traffic that can generate a large number of events but are benign.
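For illustration, a minimal exclusion rule in that style, using the schema version shown earlier and a placeholder domain:

```
<Sysmon schemaversion="4.22">
  <EventFiltering>
    <DnsQuery onmatch="exclude">
      <!-- Placeholder: suppress queries for a noisy ad domain -->
      <QueryName condition="end with">.ads.example.net</QueryName>
    </DnsQuery>
  </EventFiltering>
</Sysmon>
```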
The last few lines of output returned from Sysmon should produce the following confirmation that DNS Query
logging is active.
Once Sysmon is active and running as a service, it will be logging various events in addition to DNS queries.
These events are visible in the Windows Event Viewer under Applications and Services Log > Microsoft >
Windows > Sysmon > Operational. Each event has an EventID. Sysmon Event ID 22, DNSEvent (DNS query), is
generated when a process executes a DNS query, whether the result succeeds or fails, cached or not. The telemetry for this event was added in Windows 8.1, so it is not available on Windows 7 and earlier. See the
Sysmon section for more information.
WARNING To collect DNS events, Sysmon creates an ETW trace session and writes the data into the Windows Event Log, which can then be collected with the im_msvistalog module. To avoid this performance overhead, it is recommended to use the im_etw module to collect event data directly from the DNS ETW providers.
Example 249. Collecting DnsQuery Logs with Sysmon
Environments that already utilize Sysmon monitoring (v10.0 or later) only need to use the im_msvistalog
module and add the relevant Sysmon filtering rules for DNS Query monitoring. In this example, the
im_msvistalog module will collect DnsQuery logs.
nxlog.conf
1 <Input sysmon>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Microsoft-Windows-Sysmon/Operational">
7 *[System[(EventID='22')]]
8 </Select>
9 </Query>
10 </QueryList>
11 </QueryXML>
12 </Input>
58.3.4.2. Summary of DNS Query Fields
The fields of particular interest are the QueryName and Image fields, which together provide a wealth of
information about the network activity of the client machine. Each event discloses which site—internal or
external—was queried and which Windows application was preparing to access that remote site.
The Message field usually contains a long string of information, most of which is parsed out into the following
fields:
• ProcessGuid
• ProcessId
• QueryStatus
• QueryResults
• Image (the full path and file name of the client application’s executable which performed the DNS query)
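As an illustrative sketch, these fields can drive noise filtering during collection. This variation on the Sysmon input shown earlier drops queries issued by a few client applications; the executable names in the regular expression are arbitrary examples.

```
<Input sysmon_dns>
    Module im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Microsoft-Windows-Sysmon/Operational">
                    *[System[(EventID='22')]]
                </Select>
            </Query>
        </QueryList>
    </QueryXML>
    # Hypothetical filter: drop DNS queries issued by common browsers
    Exec if $Image =~ /\\(chrome|firefox|msedge)\.exe$/i drop();
</Input>
```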
Using the im_msvistalog module for collecting DNS client events from this source is similar to the
configuration for getting events from Sysmon. A QueryXML block is used to select the source, some fields
are used to filter out unwanted events, while other fields are used to select only the events of interest. In
this configuration example, only four Event IDs are of interest, queries for "wpad" are not needed, and any
QueryType other than "1" will be dropped.
nxlog.conf
1 <Input DNS_Client>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Microsoft-Windows-DNS-Client/Operational">
7 *[System[(EventID=3006 or EventID=3008 or
8 EventID=3010 or EventID=3018)]]
9 </Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 Exec if ($QueryName == 'wpad') OR \
14 ($QueryType != '1') drop();
15 </Input>
Output Sample
{
"EventTime": "2020-03-12T14:40:08.809107-07:00",
"Hostname": "WIN-R4QHULN6KLH",
"Keywords": "9223372036854775808",
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventID": 3006,
"SourceName": "Microsoft-Windows-DNS-Client",
"ProviderGuid": "{1C95126E-7EEA-49A9-A3FE-A378B03DDB4D}",
"Version": 0,
"TaskValue": 0,
"OpcodeValue": 0,
"RecordNumber": 42095,
"ExecutionProcessID": 2224,
"ExecutionThreadID": 4672,
"Channel": "Microsoft-Windows-DNS-Client/Operational",
"Domain": "WIN-R4QHULN6KLH",
"AccountName": "Administrator",
"UserID": "S-1-5-21-915329490-2962477901-227355065-500",
"AccountType": "User",
"Message": "DNS query is called for the name ntp.msn.com, type 1, query options
140738562228224, Server List , isNetwork query 0, network index 0, interface index 0, is
asynchronous query 0",
"Opcode": "Info",
"QueryName": "ntp.msn.com",
"QueryType": "1",
"QueryOptions": "140738562228224",
"IsNetworkQuery": "0",
"NetworkQueryIndex": "0",
"InterfaceIndex": "0",
"IsAsyncQuery": "0",
"EventReceivedTime": "2020-03-12T14:40:10.674875-07:00",
"SourceModuleName": "DNS_Client",
"SourceModuleType": "im_msvistalog"
}
Example 251. Configure DNS Server Audit logging
No filtering is used in this configuration since most audit events are important and audit logs tend to be
much lower in volume than analytical or debug logs.
nxlog.conf
1 <Input DNS_Audit>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Microsoft-Windows-DNSServer/Audit">*</Select>
7 </Query>
8 </QueryList>
9 </QueryXML>
10 </Input>
Output Sample
{
"EventTime": "2020-03-12T14:56:07.622472-07:00",
"Hostname": "WIN-R4QHULN6KLH",
"Keywords": "4611686018428436480",
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventID": 516,
"SourceName": "Microsoft-Windows-DNSServer",
"ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
"Version": 0,
"TaskValue": 5,
"OpcodeValue": 0,
"RecordNumber": 98,
"ExecutionProcessID": 2000,
"ExecutionThreadID": 4652,
"Channel": "Microsoft-Windows-DNSServer/Audit",
"Domain": "WIN-R4QHULN6KLH",
"AccountName": "Administrator",
"UserID": "S-1-5-21-915329490-2962477901-227355065-500",
"AccountType": "User",
"Message": "A resource record of type 1, name ns2.example.com and RDATA 0x0A000210 was
deleted from scope Default of zone example.com.",
"Category": "ZONE_OP",
"Opcode": "Info",
"Type": "1",
"NAME": "ns2.example.com",
"TTL": "0",
"BufferSize": "4",
"RDATA": "0A000210",
"Zone": "example.com",
"ZoneScope": "Default",
"VirtualizationID": ".",
"EventReceivedTime": "2020-03-12T14:56:09.343045-07:00",
"SourceModuleName": "DNS_Audit",
"SourceModuleType": "im_msvistalog"
}
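As the sample shows, only the numeric EventID is logged, not the descriptive Type from Microsoft's table. A hedged enrichment sketch, where $AuditType is a hypothetical field and the two mappings follow the samples in this chapter (515: Record create, 516: Record delete):

```
<Input DNS_Audit>
    Module im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Microsoft-Windows-DNSServer/Audit">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
    <Exec>
        # $AuditType is a hypothetical field added for searchability
        if $EventID == 515 $AuditType = "Record create";
        else if $EventID == 516 $AuditType = "Record delete";
    </Exec>
</Input>
```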
58.3.5.3. Monitoring DNS Server Analytical Events
One limitation of the im_msvistalog module is that it cannot read event traces from analytical sources. For this reason, the im_etw module remains the preferred choice for collecting events from the DNS Server Analytical log. It is possible, though, to leverage the File directive in im_msvistalog to read the DNS Server Analytical log file directly, which is located here:
%SystemRoot%\System32\Winevt\Logs\Microsoft-Windows-DNSServer%4Analytical.etl
Analytical log sources, like debug log sources, tend to generate a high volume of events that are not always
useful. In this configuration example, an analysis of the log file determined that frequent lookups on 10
specific hosts were responsible for a sizable portion of the log file. Since none of these hosts are of interest
for security monitoring, they are being filtered out to reduce noise. The polling interval for reading the log
file is set to 60 seconds to reduce disk I/O in a low traffic environment.
nxlog.conf
1 <Input DNS_Analytical>
2 Module im_msvistalog
3 File C:\Windows\System32\winevt\Logs\Microsoft-Windows-DNSServer%4Analytical.etl
4 PollInterval 60
5 Exec if ($QNAME == 'americas1.notify.windows.com.akadns.net.') OR \
6 ($QNAME == 'cy2.vortex.data.microsoft.com.akadns.net.') OR \
7 ($QNAME == 'dm3p.wns.notify.windows.com.akadns.net.') OR \
8 ($QNAME == 'geo.vortex.data.microsoft.com.akadns.net.') OR \
9 ($QNAME == 'v10-win.vortex.data.microsoft.com.akadns.net.') OR \
10 ($QNAME == 'v10-win.vortex.data.microsoft.com.akadns.NET.') OR \
11 ($QNAME == 'v10.vortex-win.data.microsoft.com.') OR \
12 ($QNAME == 'wns.notify.windows.com.akadns.net.') OR \
13 ($QNAME == 'wns.notify.windows.com.akadns.NET.') OR \
14 ($QNAME == 'client.wns.windows.com.') OR \
15 ($QTYPE == '15') \
16 drop();
17 </Input>
Output Sample
{
"EventTime": "2020-03-12T19:21:47.052133-07:00",
"Hostname": "WIN-R4QHULN6KLH",
"Keywords": "9223372071214514176",
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventID": 279,
"SourceName": "Microsoft-Windows-DNSServer",
"ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
"Version": 0,
"TaskValue": 1,
"OpcodeValue": 0,
"RecordNumber": 60,
"ExecutionProcessID": 2000,
"ExecutionThreadID": 4188,
"Domain": "NT AUTHORITY",
"AccountName": "SYSTEM",
"UserID": "S-1-5-18",
"AccountType": "User",
"Message": "INTERNAL_LOOKUP_CNAME: TCP=0; InterfaceIP=10.0.2.15; Source=10.0.2.15; RD=1;
QNAME=ns1.example.com.; QTYPE=1; Port=54171; Flags=34176; XID=2;
PacketData=0x00028580000100010000000003777777076578616D706C6503636F6D0000010001",
"Category": "LOOK_UP",
"Opcode": "Info",
"TCP": "0",
"InterfaceIP": "10.0.2.15",
"Source": "10.0.2.15",
"RD": "1",
"QNAME": "ns1.example.com.",
"QTYPE": "1",
"Port": "54171",
"Flags": "34176",
"XID": "2",
"BufferSize": "33",
"PacketData": "00028580000100010000000003777777076578616D706C6503636F6D0000010001",
"EventReceivedTime": "2020-03-12T19:28:51.560303-07:00",
"SourceModuleName": "DNS_Analytical",
"SourceModuleType": "im_msvistalog"
}
The packet capture module im_pcap provides capabilities for monitoring all common network protocols,
including network traffic that is specific to DNS clients and servers.
The various additional attributes of DNS traffic are stored via the extended field name pattern, $dns.additional.*.
This configuration uses the im_pcap module to capture DNS, IPv4, IPv6, TCP, and UDP packets which are
then formatted to JSON while writing to a local file. Each protocol and its fields are defined within its own
Protocol block.
nxlog.conf (truncated)
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input pcap>
6 Module im_pcap
7 Dev enp0s3
8 <Protocol>
9 Type dns
10 Field dns.opcode
11 Field dns.id
12 Field dns.flags.authoritative
13 Field dns.flags.recursion_available
14 Field dns.flags.recursion_desired
15 Field dns.flags.authentic_data
16 Field dns.flags.checking_disabled
17 Field dns.flags.truncated_response
18 Field dns.response
19 Field dns.response.code
20 Field dns.query
21 Field dns.additional
22 Field dns.answer
23 Field dns.authority
24 </Protocol>
25 <Protocol>
26 Type ipv4
27 Field ipv4.src
28 Field ipv4.dst
29 [...]
Output Sample (truncated)
"dns.authority.class": "IN",
"dns.authority.count": "1",
"dns.authority.name": "example.com",
"dns.authority.type": "NS",
"dns.flags.authentic_data": "false",
"dns.flags.authoritative": "true",
"dns.flags.checking_disabled": "false",
"dns.flags.recursion_available": "true",
"dns.flags.recursion_desired": "true",
"dns.flags.truncated_response": "false",
"dns.id": "18321",
"dns.opcode": "Query",
"dns.query.class": "IN",
"dns.query.count": "1",
"dns.query.name": "www.example.com",
"dns.response.code": "NOERROR",
"ipv4.dst": "192.168.1.7",
"ipv4.src": "192.168.1.24",
"udp.dst_port": "36486",
"udp.src_port": "53",
"EventTime": "2020-05-18T12:15:34.033655-05:00",
"EventReceivedTime": "2020-05-18T12:15:34.301402-05:00",
"SourceModuleName": "pcap",
"SourceModuleType": "im_pcap"
}
{
"dns.additional.count": "0",
"dns.answer.count": "0",
"dns.authority.count": "0",
"dns.flags.authentic_data": "false",
"dns.flags.authoritative": "false",
"dns.flags.checking_disabled": "false",
"dns.flags.recursion_available": "false",
"dns.flags.recursion_desired": "false",
"dns.flags.truncated_response": "false",
"dns.id": "0",
"dns.opcode": "Query",
"dns.query.class": "IN",
"dns.query.count": "1",
"dns.query.name": "wpad.local",
"dns.response.code": "NOERROR",
"ipv6.dst": "ff02::fb",
"ipv6.src": "fe80::3c3c:c860:df55:fd89",
"udp.dst_port": "5353",
"udp.src_port": "5353",
"EventTime": "2020-05-18T12:22:48.291661-05:00",
"EventReceivedTime": "2020-05-18T12:22:48.487235-05:00",
"SourceModuleName": "pcap",
"SourceModuleType": "im_pcap"
}
374
Chapter 59. Docker
Docker is a containerization technology that enables the creation and use of Linux containers. Containers allow a
developer to package an application with all of its dependencies and distribute it as a single package. The Docker
container technology is widely used in modern, micro-service architectures.
By concept, Docker images should be lightweight; usually only one application is present and running in the
container. Therefore, logs are written to the standard out and standard error streams and logging must be
performed from outside the image.
• The default logging driver can be set in the daemon.json configuration file. This file is located in
/etc/docker/ on Linux hosts or C:\ProgramData\docker\config\ on Windows Server hosts. The default
logging driver is json-file.
• The default logging driver can be overridden at the container level. To accomplish this, the log driver and its
configuration options must be provided as parameters at container startup with the help of the docker run
command. The configuration options are the same as setting up logging options for the Docker daemon. See
the docker run command reference on Docker.com for more information.
To find the current logging driver for a running container, run the following docker inspect command,
substituting the container name or ID for <CONTAINER>.
59.2.1. JSON
With the json-file log driver, Docker produces a line-based log file in JSON format for each container. See the
JSON File logging driver guide on Docker.com for more information.
Because im_file recursively watches for log files in the containers directory, this may cause
NOTE
reduced performance in very large installations.
375
Example 254. Collecting Docker Logs in JSON Format
This example configuration reads from the JSON log files of all containers. The JSON fields are parsed and
added to the event record with the xm_json parse_json() procedure. A $HostID field, with the container ID, is
also added.
nxlog.conf
1 <Extension _fileop>
2 Module xm_fileop
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in>
10 Module im_file
11 File '/var/lib/docker/containers/*/*-json.log'
12 <Exec>
13 parse_json();
14 $HostID = file_basename(file_name());
15 $HostID =~ s/-json.log//;
16 </Exec>
17 </Input>
59.2.2. GELF
The gelf logging driver is a convenient format that is understood by a number of tools such as NXLog. In GELF,
every log message is a dictionary with fields such as version, host, timestamp, short and long version of the
message, and any custom fields that have been configured. See the Graylog Extended Format logging driver
guide on Docker.com for more information.
In this example, NXLog accepts and parses logs in GELF format on TCP port 12201 with the im_tcp and
xm_gelf modules.
nxlog.conf
1 <Extension _gelf>
2 Module xm_gelf
3 </Extension>
4
5 <Input in>
6 Module im_tcp
7 Host 0.0.0.0
8 Port 12201
9 InputType GELF_TCP
10 </Input>
59.2.3. Syslog
The syslog logging driver routes logs to a Syslog server, such as NXLog, via UDP, TCP, SSL/TLS, or a Unix domain
socket. See the Syslog logging driver guide on Docker.com for more information.
376
Example 256. Collecting Docker Logs in Syslog Format
Here, NXLog accepts logs on TCP port 1514 with the im_tcp module and parses the logs with the xm_syslog
parse_syslog() procedure.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_tcp
7 Host 0.0.0.0
8 Port 1514
9 Exec parse_syslog();
10 </Input>
59.2.4. ETW
On Windows-based systems, the etwlogs logging driver forwards container logs to the Event Tracing for
Windows (ETW) system. Each ETW event contains a message with both the log and its context information. See
the ETW logging driver guide on Docker.com for more information.
This example collects logs from the DockerContainerLogs Event Tracing provider using the im_etw
module.
nxlog.conf
1 <Input in>
2 Module im_etw
3 Provider DockerContainerLogs
4 </Input>
377
Chapter 60. Elasticsearch and Kibana
Elasticsearch is a search engine and document database that is commonly used to store logging data. Kibana is a
popular user interface and querying front-end for Elasticsearch. Kibana is often used with the Logstash data
collection engine—together forming the ELK stack (Elasticsearch, Logstash, and Kibana).
However, Logstash is not actually required to load data into Elasticsearch. NXLog can do this as well, and offers
several advantages over Logstash—this is the KEN stack (Kibana, Elasticsearch, and NXLog).
• Because Logstash is written in Ruby and requires Java, it has high system resource requirements. NXLog has
a small resource footprint and is recommended by many ELK users as the log collector of choice for
Windows and Linux.
• Due to the Java dependency, Logstash requires system administrators to deploy the Java runtime onto their
production servers and keep up with Java security updates. NXLog does not require Java.
• The EventLog plugin in Logstash uses the Windows WMI interface to retrieve the EventLog data. This method
incurs a significant performance penalty. NXLog uses the Windows EventLog API natively in order to
efficiently collect EventLog data.
1. Configure NXLog.
The om_elasticsearch module is only available in NXLog Enterprise Edition. Because it sends data in
batches, it reduces the effect of the latency inherent in HTTP responses, allowing the Elasticsearch
server to process the data much more quickly (10,000 EPS or more on low-end hardware).
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Output out>
6 Module om_elasticsearch
7 URL http://localhost:9200/_bulk
8 FlushInterval 2
9 FlushLimit 100
10
11 # Create an index daily
12 Index strftime($EventTime, "nxlog-%Y%m%d")
13
14 # Use the following if you do not have $EventTime set
15 #Index strftime($EventReceivedTime, "nxlog-%Y%m%d")
16 </Output>
378
Example 259. Using om_http
For NXLog Community Edition, the om_http module can be used instead to send logs to Elasticsearch.
Because it sends a request to the Elasticsearch HTTP REST API for each event, the maximum logging
throughput is limited by HTTP request and response latency. Therefore this method is suitable only for
low-volume logging scenarios.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Output out>
6 Module om_http
7 URL http://localhost:9200
8 ContentType application/json
9 <Exec>
10 set_http_request_path(strftime($EventTime, "/nxlog-%Y%m%d/" +
11 $SourceModuleName));
12 rename_field("timestamp", "@timestamp");
13 to_json();
14 </Exec>
15 </Output>
2. Restart NXLog, and make sure the event sources are sending data. This can be checked with curl -X GET
'localhost:9200/_cat/indices?v&pretty'. There should be an index matching nxlog* and its
docs.count counter should be increasing.
379
c. Set the Time Filter field name selector to EventTime (or EventReceivedTime if the $EventTime field is
not set by the input module). Click [ Create index pattern ].
4. Test that the NXLog and Elasticsearch/Kibana configuration is working by opening Discover on the left panel.
380
60.2. Forwarding Logs to Logstash
NXLog can be configured to act as a log collector, forwarding logs to Logstash in JSON format.
1. Set up a configuration on the Logstash server to process incoming event data from NXLog.
logstash.conf
input {
tcp {
codec => json_lines { charset => CP1252 }
port => "3515"
tags => [ "tcpjson" ]
}
}
filter {
date {
locale => "en"
timezone => "Etc/GMT"
match => [ "EventTime", "YYYY-MM-dd HH:mm:ss" ]
}
}
output {
elasticsearch {
host => localhost
}
stdout { codec => rubydebug }
}
381
The json codec in Logstash sometimes fails to properly parse JSON—it will concatenate
more than one JSON record into one event. Use the json_lines codec instead.
NOTE
Although the im_msvistalog module converts data to UTF-8, Logstash seems to have trouble
parsing that data. The charset => CP1252 seems to help.
2. Configure NXLog.
nxlog.conf
1 <Output out>
2 Module om_tcp
3 Host 10.1.1.1
4 Port 3515
5 Exec to_json();
6 </Output>
3. Restart NXLog.
382
Chapter 61. F5 BIG-IP
F5 BIG-IP appliances are capable of sending their logs to a remote Syslog destination via TCP or UDP. When
sending logs over the network, it is recommended to use TCP as the more reliable protocol. With UDP there is a
potential to lose entries, especially when there is a high volume of messages.
There are multiple sub-systems that write logs to different files. Below is an example of Local Traffic
Management (LTM) logs reporting pool members being up or down.
For more details on BIG-IP log files and how to view them, please refer to the K16197 knowledge base article.
Additional information on configuring logging options on BIG-IP devices can be found in the F5 Knowledge
Center. Select the appropriate software version and look for the "Log Files" section in the TMOS Operations
Guide.
NOTE The steps below have been tested with BIG-IP v11 but should also work for other versions.
1. Configure NXLog to receive log entries via TCP and process them as Syslog (see the examples below). Then
restart NXLog.
2. Make sure the NXLog agent is accessible from all BIG-IP devices being configured. A new route or default
gateway may need to be configured depending on the current setup.
3. Connect via SSH to the BIG-IP device. In case of a High Availability (HA) group, make sure you are logged into
the active unit. You should see (Active) in the command prompt.
4. Review the existing Syslog configuration on BIG-IP and remove it.
5. Configure a remote Syslog destination on BIG-IP. Replace IP_SYSLOG and PORT with the IP address and port
that the NXLog agent is listening on. Replace LEVEL with the required logging level.
383
This command forwards all appliance logs to the remote destination, so nothing will be
NOTE
logged locally as soon as it is executed.
This command redirects logs at the informational level (from info to emerg) to an NXLog agent at
192.168.6.43, via TCP port 1514.
6. In case of a High Availability (HA) group, synchronize the configuration changes to the other units.
Once the configuration has been synchronized to all members of the group, each member
NOTE will be sending logs, inserting its own hostname and IP address. In the event of failover,
logging will continue from both units regardless of which one is currently active.
This configuration uses the im_tcp module to collect the BIG-IP logs. A JSON output sample shows the
resulting logs as received and processed by NXLog.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 Exec parse_syslog();
14 </Input>
15
16 <Output out>
17 Module om_file
18 File "/var/log/f5.log"
19 Exec to_json();
20 </Output>
384
Output Sample
{
"MessageSourceAddress": "192.168.6.161",
"EventReceivedTime": "2017-03-14 17:03:16",
"SourceModuleName": "in",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 16,
"SyslogFacility": "LOCAL0",
"SyslogSeverityValue": 5,
"SyslogSeverity": "NOTICE",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "l-lb2",
"EventTime": "2017-03-14 17:03:53",
"SourceName": "mcpd",
"ProcessID": "7233",
"Message": "notice httpd[5150]: 01070639:5: Pool /Common/q-qa-pool member /Common/q-qa1:25
session status enabled."
}
{
"MessageSourceAddress": "192.168.6.91",
"EventReceivedTime": "2017-03-14 17:10:18",
"SourceModuleName": "in",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 16,
"SyslogFacility": "LOCAL0",
"SyslogSeverityValue": 5,
"SyslogSeverity": "NOTICE",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "l-lb1",
"EventTime": "2017-03-14 17:10:33",
"SourceName": "httpd",
"ProcessID": "1181",
"Message": "notice httpd[5150]: 01070417:5: AUDIT - user john - RAW: httpd(mod_auth_pam):
user=john(john) partition=[All] level=Administrator tty=/usr/bin/tmsh host=192.168.9.78
attempts=1 start=\"Tue Mar 14 16:43:41 2017\" end=\"Tue Mar 14 17:10:33 2017\"."
}
NXLog can also be configured to extract additional fields from the messages, including those that contain key-
value pairs.
This configuration uses the xm_syslog parse_syslog() procedure to parse Syslog messages and the xm_kvp
module to extract additional fields.
385
nxlog.conf (truncated)
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Extension kvp>
10 Module xm_kvp
11 KVPDelimiter " "
12 KVDelimiter =
13 EscapeChar \\
14 </Extension>
15
16 <Input in>
17 Module im_tcp
18 Host 0.0.0.0
19 Port 1514
20 <Exec>
21 parse_syslog();
22 if $Message =~ /^([a-z]*) ([a-zA-Z]*)(.*)$/
23 {
24 $F5MsgLevel = $1;
25 $F5Proc = $2;
26 $F5Message = $3;
27 if $F5Message =~ /^\[[0-9]*\]: ([0-9]*):([0-9]+): (.*)$/
28 {
29 [...]
386
Output Sample
{
"MessageSourceAddress": "192.168.6.91",
"EventReceivedTime": "2017-04-16 00:06:43",
"SourceModuleName": "in",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 10,
"SyslogFacility": "AUTHPRIV",
"SyslogSeverityValue": 5,
"SyslogSeverity": "NOTICE",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "l-lb1",
"EventTime": "2017-04-16 00:07:59",
"Message": "notice httpd[5320]: 01070417:5: AUDIT - user john - RAW: httpd(mod_auth_pam):
user=john(john) partition=[All] level=Administrator tty=/usr/bin/tmsh host=192.168.9.78
attempts=1 start=\"Sun Apr 16 00:07:59 2017\".",
"F5MsgLevel": "notice",
"F5Proc": "httpd",
"F5Message": "[5320]: 01070417:5: AUDIT - user john - RAW: httpd(mod_auth_pam):
user=john(john) partition=[All] level=Administrator tty=/usr/bin/tmsh host=192.168.9.78
attempts=1 start=\"Sun Apr 16 00:07:59 2017\".",
"F5MsgID": "01070417",
"F5MsgSev": "5",
"F5Msg": "AUDIT - user john - RAW: httpd(mod_auth_pam): user=john(john) partition=[All]
level=Administrator tty=/usr/bin/tmsh host=192.168.9.78 attempts=1 start=\"Sun Apr 16 00:07:59
2017\".",
"F5Process": "httpd",
"F5Module": "mod_auth_pam",
"user": "john(john)",
"partition": "[All]",
"level": "Administrator",
"tty": "/usr/bin/tmsh",
"host": "192.168.9.78",
"attempts": "1",
"start": "Sun Apr 16 00:19:55 2017"
}
1. Configure NXLog to receive log entries via UDP and process them as Syslog (see the example below). Then
restart the agent.
2. Make sure the NXLog agent is accessible from all BIG-IP devices being configured. A new route or default
gateway may need to be configured, depending on the current setup.
3. Proceed with the Syslog configuration on BIG-IP, using either the command line or the web interface. See the
following sections.
387
Example 263. Receiving BIG-IP Logs via UDP
This configuration uses the im_udp module to collect the BIG-IP logs.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in_syslog_udp>
6 Module im_udp
7 Host 0.0.0.0
8 Port 514
9 Exec parse_syslog();
10 </Input>
This command redirects Informational Logs to an NXLog agent at 192.168.6.143, via UDP port 514.
3. In case of a High Availability (HA) group, synchronize configuration changes to the other units:
Once the configuration has been synchronized to all members of the group, each member
NOTE will be sending logs, inserting its own hostname and IP address. In the event of failover,
logging will continue from both units regardless of which one is currently active.
3. Type in the Remote IP and Remote Port, then click [ Add ] and [ Update ].
388
4. In case of a High Availability (HA) group, synchronize the configuration changes to the other units:
a. Click on the yellow Changes Pending in the top left corner.
b. Select Active unit which should be marked as (Self).
c. Make sure Sync Device to Group option is chosen and click [ Sync ].
389
Once the configuration has been synchronized to all members of the group, each member will
NOTE be sending logs, inserting its own hostname and IP address. In the event of failover, logging will
continue from both units regardless of which one is currently active.
BIG-IP systems also come with Management Information Base (MIB) files stored on the device itself. Additional
information on that is available in K13322.
1. Configure NXLog with the xm_snmp module. See the example below.
2. Make sure the NXLog agent is accessible from all BIG-IP devices being configured. A new route or default
gateway may need to be configured, depending on the current setup.
3. Proceed with the SNMP configuration on BIG-IP, using either the command line or the web interface. See the
following sections.
390
Example 265. Receiving SNMP Traps
This example NXLog configuration uses the im_udp and xm_snmp modules to receive SNMP traps. The
corresponding JSON-formatted output is shown below.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension snmp>
6 Module xm_snmp
7 MIBDir /usr/share/mibs/bigip
8 # The following <User> section is required for SNMPv3
9 #<User snmp_user>
10 # AuthProto sha1
11 # AuthPasswd q1w2e3r4
12 # EncryptPasswd q1w2e3r4
13 # EncryptProto aes
14 #</User>
15 </Extension>
16
17 <Input in>
18 Module im_udp
19 Host 0.0.0.0
20 Port 162
21 InputType snmp
22 </Input>
23
24 <Output out>
25 Module om_file
26 File "/var/log/f5.log"
27 Exec to_json();
28 </Output>
Output Sample
{
"SNMP.CommunityString": "nxlog",
"SNMP.RequestID": 449377444,
"EventTime": "2017-03-18 16:37:41",
"SeverityValue": 2,
"Severity": "INFO",
"OID.1.3.6.1.2.1.1.3.0": 1277437018,
"OID.1.3.6.1.6.3.1.1.4.1.0": "1.3.6.1.4.1.3375.2.4.0.3",
"OID.1.3.6.1.6.3.1.1.4.3.0": "1.3.6.1.4.1.3375.2.4",
"MessageSourceAddress": "192.168.6.91",
"EventReceivedTime": "2017-03-18 16:37:41",
"SourceModuleName": "in",
"SourceModuleType": "im_udp"
}
391
# tmsh modify sys snmp bigip-traps enabled
# tmsh modify sys snmp agent-trap enabled
# tmsh modify sys snmp auth-trap enabled
4. Configure the remote SNMP destination on BIG-IP. Replace NAME, COMMUNITY, IP_ADDRESS, and PORT with
appropriate values. Replace NETWORK with other unless traps are going out the management interface, when
management should be specified instead.
In case of SNMPv3, this command needs additional parameters, including security-level, auth-protocol, auth-
password, privacy-protocol, and privacy-password.
This command enables sending SNMPv3 traps to 192.168.6.143, using SHA and AES.
392
If the BIG-IP configuration has been previously migrated or cloned, SNMPv3 may not work
NOTE
because the EngineID is not unique. In this case it must be reset as described in K6821.
5. In case of a High Availability (HA) group, synchronize the configuration changes to the other units.
3. Create an SNMP user (SNMPv3 only). Go to System › SNMP › Agent › Access (v3). Click [ Create ] and
specify the user name, authentication type and password, privacy protocol and password, and access type.
Specify an OID value to limit access to certain OIDs, or use .1 to allow full access.
4. Go to System › SNMP › Traps › Destination and click [ Create ]. Specify the SNMP version, community
393
name, destination IP address, destination port, and network to send traffic to. Then click [ Finished ].
SNMPv3 requires additional parameters. This example matches the settings shown in the NXLog
configuration above.
5. In case of a High Availability (HA) group, synchronize the configuration changes to the other units.
a. Click on the yellow Changes Pending in the top left corner.
b. Select the Active unit which should be marked as (Self).
c. Make sure the Sync Device to Group option is chosen and click [ Sync ].
394
Once the configuration has been synchronized to all members of the group, each
member will be sending logs, inserting its own hostname and IP address. In the event of
NOTE
failover, logging will continue from both units regardless of which one is currently
active.
BIG-IP is able to send its own logs via HSL in addition to logs for traffic passing through the device. Because the
load balancer is usually on the edge of the network and all web traffic passes through it, logging traffic on BIG-IP
itself may be an easier and faster solution than processing web server logs on each server separately.
When configuring HSL on BIG-IP, the administrator will have to choose between sending logs via TCP or UDP. TCP
can guarantee reliable delivery. However when load balancing traffic between multiple nodes, BIG-IP will reuse
existing TCP connections to each node in order to reduce overhead related to creating new connections. This
may result in less perfect load balancing between members.
NOTE The steps below have been tested with BIG-IP v12.
In order to configure HSL on BIG-IP, a node for each NXLog server must be created and then added to a pool.
Follow these steps.
395
# tmsh create ltm node NAME { address IP_ADDRESS session user-enabled }
These commands create a pool named nxlog with one NXLog node.
Adaptive
Sends traffic to one of the pool members until this member is either unable to process logs at the
required rate or the connection is lost.
Balanced
Uses the load balancing method configured on the pool and selects a new member each time it sends
data.
Replicated
Sends each log to all members of the pool.
3. Create a log publisher. Replace NAME with a name for the publisher and DESTINATION with the destination
name used in the previous step.
4. Create a log filter. Replace NAME with a name for the filter, LEVEL with the required logging level between
Emergency and Debugging, PUBLISHER with the name used in the previous step, and SOURCE with a
particular process running on BIG-IP (or all, which sends all logs).
396
Example 270. Sending All Logs to the NXLog Pool
The following commands will send all logs to the NXLog pool via the TCP protocol.
3. Assign the logging profile to a virtual server. Replace NAME with the virtual server name and
LOGGING_PROFILE with the profile name used in the previous step. A logging profile can be assigned to
multiple virtual servers.
The following commands configure traffic logging to the NXLog pool via TCP.
397
Example 272. Receiving Traffic Logs From BIG-IP
This example shows BIG-IP traffic logs as received and processed by NXLog using im_tcp and xm_syslog.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_syslog_tcp>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 Exec parse_syslog();
14 </Input>
15
16 <Output out>
17 Module om_file
18 File "/var/log/f5.log"
19 Exec to_json();
20 </Output>
Output Sample
{
"MessageSourceAddress": "192.168.6.91",
"EventReceivedTime": "2017-05-10 19:16:43",
"SourceModuleName": "in_syslog_tcp",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 1,
"SyslogFacility": "USER",
"SyslogSeverityValue": 5,
"SyslogSeverity": "NOTICE",
"SeverityValue": 2,
"Severity": "INFO",
"EventTime": "2017-05-10 19:16:43",
"Hostname": "192.168.6.91",
"Message": "client 192.168.9.78:63717 request GET /cmedia/img/icons/mime/mime-
unknown.png?v170509919 HTTP/1.1 server 192.168.6.101:80 status "
}
398
Example 273. Extracting Additional Fields
Further field extraction can be done with NXLog according to the sequence of fields specified in the
template. For the template string shown above, the following configuration extracts the four fields with a
regular expression.
nxlog.conf
1 <Input in_syslog_tcp>
2 Module im_tcp
3 Host 0.0.0.0
4 Port 1514
5 <Exec>
6 parse_syslog();
7 if $Message =~ /^client (.*) request (.*) server (.*) status (.*)$/
8 {
9 $HTTP_Client = $1;
10 $HTTP_Request = $2;
11 $HTTP_Server = $3;
12 $HTTP_Status = $4;
13 }
14 </Exec>
15 </Input>
Output Sample
{
"MessageSourceAddress": "192.168.6.91",
"EventReceivedTime": "2017-05-10 20:06:24",
"SourceModuleName": "in_syslog_tcp",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 1,
"SyslogFacility": "USER",
"SyslogSeverityValue": 5,
"SyslogSeverity": "NOTICE",
"SeverityValue": 2,
"Severity": "INFO",
"EventTime": "2017-05-10 20:06:24",
"Hostname": "192.168.6.91",
"Message": "client 192.168.9.78:65275 request GET /?disabledcookies=true HTTP/1.1 server
192.168.6.100:80 status ",
"HTTP_Client": "192.168.9.78:65275",
"HTTP_Request": "GET /?disabledcookies=true HTTP/1.1",
"HTTP_Server": "192.168.6.100:80",
"HTTP_Status": ""
}
399
Example 274. Creating a Virtual Server Forwarding Logs to the NXLog Pool
This example creates a virtual server listening on TCP port 1514 that forwards logs to the nxlog pool.
Once this has been set up, log producers can be configured to forward Syslog logs to 192.168.6.93.
400
Chapter 62. File Integrity Monitoring
File integrity monitoring (FIM) can be used to detect changes to files and directories. A file may be altered due to
an update to a newer version, a security breach, or data corruption. File integrity monitoring helps an
organization respond quickly and effectively to unexpected changes to files and is therefore a standard
requirement for many regulatory compliance objectives.
NXLog can be configured to provide file (or Windows Registry) integrity monitoring. An event is generated for
each detected modification. These events can then be used to generate alerts or be forwarded for storage and
auditing.
There are various ways that monitoring can be implemented; these fall into two categories.
Checksum Monitoring
The im_fim and im_regmon modules (available with NXLog Enterprise Edition only) provide monitoring based
on a cryptographic checksums. On the first run (when a file set or the registry is in a known secure state), a
database of checksums is created. Subsequent scans are performed at regular intervals, and the checksums
are compared. When a change is detected, an event is generated.
• The im_fim module is platform independent, available on all platforms supported by NXLog, and has no
external dependencies. Similarly, the im_regmon module requires no configuration outside of NXLog to
monitor the Windows Registry.
• If there are multiple changes between two scans, only the cumulative effect is logged. For example, if a
file is deleted and a new file is created in its place before the next scan occurs, a single modification event
will be generated.
• It is not possible to detect which user made a change because the filesystem/registry does not provide
that information, and there may be multiple changes by different users between scans.
Real-Time Monitoring
Files (and the Windows Registry) can also be monitored in real-time with the help of kernel-level auditing,
which does not require periodic scanning. This type of monitoring is platform specific.
• Kernel-level monitoring usually provides improved performance, especially for large file sets.
• All events are logged; the granularity of reporting is not limited by the scan interval (because there is no
scanning involved).
• Reported events may be very detailed, and usually include information about which user made the
change.
See the following sections for details about setting up file integrity monitoring on various platforms.
NXLog must have permission to read the files that are to be monitored. Run NXLog as root, make sure the nxlog
user or group has permission to read the files, or change the user/group under which NXLog runs. See the User
and Group directives.
401
Example 275. Using im_fim on Linux
This configuration uses im_fim to monitor a common set of system directories containing configuration,
executables, and libraries. The RIPEMD-160 hash function is selected and the scan interval is set to 3,600
seconds (1 hour).
nxlog.conf
1 <Input fim>
2 Module im_fim
3 File "/bin/*"
4 File "/etc/*"
5 File "/lib/*"
6 File "/opt/nxlog/bin/*"
7 File "/opt/nxlog/lib/*"
8 File "/sbin/*"
9 File "/usr/bin/*"
10 File "/usr/sbin/*"
11 Exclude "/etc/hosts.deny"
12 Exclude "/etc/mtab"
13 Digest rmd160
14 Recursive TRUE
15 ScanInterval 3600
16 </Input>
Internal Log
2017-06-14 11:44:53 INFO Module 'fim': FIM scan started↵
2017-06-14 11:45:00 INFO Module 'fim': FIM scan finished in 7.24 seconds. Scanned folders: 833
Scanned files: 5081 Read file bytes: 379166339↵
Output Sample
{
"EventTime": "2017-06-14 11:57:33",
"Hostname": "ubuntu-xenial",
"EventType": "CHANGE",
"Object": "FILE",
"PrevFileName": "/etc/ld.so.cache",
"PrevModificationTime": "2017-06-14 11:20:47",
"FileName": "/etc/ld.so.cache",
"ModificationTime": "2017-06-14 11:56:55",
"PrevFileSize": 46298,
"FileSize": 46971,
"DigestName": "rmd160",
"PrevDigest": "1dbe24a108c044153d8499f073274b7ad5507119",
"Digest": "ec0bc108b7c9e5d9eafde9cb1407b91e618d24c4",
"EventReceivedTime": "2017-06-14 11:57:33",
"SourceModuleName": "fim",
"SourceModuleType": "im_fim"
}
See the Linux Audit System chapter for details about setting up kernel-level auditing. It is even possible to
combine the im_fim and im_linuxaudit modules for redundant monitoring.
Monitoring on Windows
The im_fim module can be used on Windows for monitoring a file set.
402
Example 276. Using im_fim on Windows
This configuration monitors the program directories for changes. The scan interval is set to 1,800 seconds
(30 minutes). The events generated by NXLog are similar to those shown in Using im_fim on Linux.
nxlog.conf
1 <Input fim>
2 Module im_fim
3 File 'C:\Program Files\*'
4 File 'C:\Program Files (x86)\*'
5 Exclude 'C:\Program Files\nxlog\data\*'
6 Recursive TRUE
7 ScanInterval 1800
8 </Input>
The Windows Registry can be monitored with the im_regmon module. This configuration monitors all
registry keys in the specified path. The keys are scanned every 60 seconds.
nxlog.conf
1 <Input registry>
2 Module im_regmon
3 RegValue 'HKLM\Software\Policies\*'
4 Recursive TRUE
5 ScanInterval 60
6 </Input>
Internal Log
2020-02-26 22:08:32 INFO Module 'in': Registry scan finished in 0.08 seconds. Scanned registry
keys: 337 Scanned registry values: 1250 Read value bytes: 106866↵
Output Sample
{
"EventTime": "2018-01-31 04:01:12",
"Hostname": "WINAD",
"EventType": "CHANGE",
"RegistryValueName": "HKLM\\Software\\Policies\\Microsoft\\TPM\\OSManagedAuthLevel",
"PrevValueSize": 4,
"ValueSize": 4,
"DigestName": "SHA1",
"PrevDigest": "0aaf76f425c6e0f43a36197de768e67d9e035abb",
"Digest": "3c585604e87f855973731fea83e21fab9392d2fc",
"Severity": "WARNING",
"SeverityValue": 3,
"EventReceivedTime": "2018-01-31 04:01:12",
"SourceModuleName": "registry",
"SourceModuleType": "im_regmon",
"MessageSourceAddress": "10.8.0.121"
}
403
Example 278. Extended Hive Key Paths to Monitor
The following example uses the im_regmon module to monitor a list of hive key paths listed in documents
such as the MITRE ATT&CK framework and the JP/CERT Lateral Movements. This list can be modified as and
when needed.
When running a custom list, make sure to double check the internal log for the appropriate number of keys
and values that are being scanned.
nxlog.conf
1 <Input extend_regmon_rules>
2 Module im_regmon
3 Recursive TRUE
4 ScanInterval 30
5
6 RegValue "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\*"
7 RegValue "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution
Options\*"
8 RegValue "HKLM\SYSTEM\CurrentControlSet\Control\WMI\Security\*"
9 RegValue "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\*"
10 RegValue "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\BootExecute"
11 RegValue "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel\NameSpace\*"
12 RegValue "HKLM\SYSTEM\ControlSet001\Enum\STORAGE\VolumeSnapshot"
13 RegValue "HKLM\SYSTEM\ControlSet001\Services\VSS\*"
14 RegValue "HKLM\Software\Microsoft\Windows\CurrentVersion\Runonce"
15 RegValue "HKLM\Software\Microsoft\Windows\CurrentVersion\policies\Explorer\*"
16 RegValue "HKLM\Software\Microsoft\Windows\CurrentVersion\Run\*"
17 RegValue "HKCU\Software\Microsoft\Windows\CurrentVersion\Run\*"
18 RegValue "HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce"
19 RegValue "HKLM\Software\Policies\*"
20 </Input>
Chapter 63. FreeRADIUS
Remote Authentication Dial-In User Service (RADIUS) is a networking protocol that provides centralized
authentication, authorization, and accounting management for users who connect and use a network service.
RADIUS accounting logs can be provided by many networking devices or by the open source Unix service called
FreeRADIUS.
NXLog can be configured to process FreeRADIUS authentication and accounting logs. For processing RADIUS
NPS logs, see RADIUS NPS (xm_nps).
The configuration below uses the im_file module to read FreeRADIUS authentication log entries and extract
fields with a regular expression. The result is converted to JSON after the EventReceivedTime,
SourceModuleName, and SourceModuleType fields are deleted from the event record.
nxlog.conf
1 <Input freeradius>
2 Module im_file
3 File '/tmp/input'
4 <Exec>
5 if $raw_event =~ /^(?<DateTime>\w{3} \w{3} \d{2} \d{2}:\d{2}:\d{2} \d{4}) : (?<EventType>\w+): (?<Message>.+)/
6 {
7 $raw_event = $DateTime + ' ' + $EventType + ' ' + $Message;
8 }
9 else drop();
10 </Exec>
11 </Input>
12
13 <Output out>
14 Module om_file
15 File '/tmp/output'
16 <Exec>
17 delete($EventReceivedTime);
18 delete($SourceModuleName);
19 delete($SourceModuleType);
20 to_json();
21 </Exec>
22 </Output>
Event Sample
Thu Dec 20 07:50:44 2018 : Info: Loaded virtual server inner-tunnel↵
Thu Dec 20 07:50:44 2018 : Info: Ready to process requests↵
Thu Dec 20 07:50:46 2018 : Auth: (0) Login OK: [testing/testing123] (from client localhost port 0)↵
Thu Dec 20 07:50:46 2018 : Auth: (1) Login OK: [testing/testing123] (from client localhost port 0)↵
Thu Dec 20 07:50:47 2018 : Auth: (2) Login OK: [testing/testing123] (from client localhost port 0)↵
Thu Dec 20 07:50:49 2018 : Auth: (3) Login incorrect (pap: Cleartext password does not match "known good" password): [testing/testing] (from client localhost port 0)↵
Output Sample
{
"DateTime": "Thu Dec 20 07:50:44 2018",
"EventType": "Info",
"Message": "Loaded virtual server inner-tunnel"
}
{
"DateTime": "Thu Dec 20 07:50:44 2018",
"EventType": "Info",
"Message": "Ready to process requests"
}
{
"DateTime": "Thu Dec 20 07:50:46 2018",
"EventType": "Auth",
"Message": "(0) Login OK: [testing/testing123] (from client localhost port 0)"
}
{
"DateTime": "Thu Dec 20 07:50:46 2018",
"EventType": "Auth",
"Message": "(1) Login OK: [testing/testing123] (from client localhost port 0)"
}
{
"DateTime": "Thu Dec 20 07:50:47 2018",
"EventType": "Auth",
"Message": "(2) Login OK: [testing/testing123] (from client localhost port 0)"
}
{
"DateTime": "Thu Dec 20 07:50:49 2018",
"EventType": "Auth",
"Message": "(3) Login incorrect (pap: Cleartext password does not match \"known good\" password): [testing/testing] (from client localhost port 0)"
}
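The regular expression in the input instance above can be exercised outside of NXLog. The following Python sketch applies an equivalent pattern (NXLog's `(?<name>...)` groups become Python's `(?P<name>...)`) to one of the sample lines:

```python
import re

# The same pattern as in the Exec block above, in Python regex syntax.
AUTH_LINE = re.compile(
    r"^(?P<DateTime>\w{3} \w{3} \d{2} \d{2}:\d{2}:\d{2} \d{4}) : "
    r"(?P<EventType>\w+): (?P<Message>.+)"
)

def parse_auth_line(line: str):
    """Return the captured fields, or None for lines the configuration drops."""
    match = AUTH_LINE.match(line)
    return match.groupdict() if match else None

event = parse_auth_line(
    "Thu Dec 20 07:50:46 2018 : Auth: (0) Login OK: "
    "[testing/testing123] (from client localhost port 0)"
)
```

Lines that do not match the pattern yield `None` here, mirroring the `else drop();` branch in the configuration.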
The configuration below uses the im_file module to read FreeRADIUS accounting logs and the xm_multiline
module to match the start and end of each log entry. Each entry is parsed into key-value pairs using the
xm_kvp module and converted to JSON using the xm_json module. The EventReceivedTime, SourceModuleName,
and SourceModuleType fields are deleted from the event record.
nxlog.conf
1 <Extension radius>
2 Module xm_multiline
3 HeaderLine /^\s\S\S\S\s+\S\S\S\s+\d{1,2}\s+\d{1,2}\:\d{1,2}\: \
4 \d{1,2}\s+\d{4}/
5 EndLine /^\s+Timestamp = \d*/
6 </Extension>
7
8 <Extension kvp>
9 Module xm_kvp
10 KVDelimiter =
11 KVPDelimiter \n
12 </Extension>
13
14 <Input in>
15 Module im_file
16 File "/tmp/input"
17 ReadFromLast FALSE
18 SavePos FALSE
19 InputType radius
20 <Exec>
21 if $raw_event =~ /^(.+)\s*([\s\S]+)/
22 {
23 $EventTime = parsedate($1);
24 kvp->parse_kvp($2);
25 $Timestamp = datetime(integer($Timestamp) * 1000000);
26 }
27 else log_info("no match for " + $raw_event);
28 delete($EventReceivedTime);
29 delete($SourceModuleName);
30 delete($SourceModuleType);
31 </Exec>
32
33 </Input>
Event Sample
Tue May 21 00:00:03 2013↵
Acct-Session-Id = "1/3/0/3_00FA2701"↵
Framed-Protocol = PPP↵
Framed-IP-Address = 1.2.3.4↵
Cisco-AVPair = "ppp-disconnect-cause=Received LCP TERMREQ from peer"↵
User-Name = "user"↵
Acct-Authentic = RADIUS↵
Cisco-AVPair = "connect-progress=LAN Ses Up"↵
Cisco-AVPair = "nas-tx-speed=1410065408"↵
Cisco-AVPair = "nas-rx-speed=1410065408"↵
Acct-Session-Time = 384↵
Acct-Input-Octets = 4497↵
Acct-Output-Octets = 7951↵
Acct-Input-Packets = 64↵
Acct-Output-Packets = 64↵
Acct-Terminate-Cause = User-Request↵
Cisco-AVPair = "disc-cause-ext=PPP Receive Term"↵
Acct-Status-Type = Stop↵
NAS-Port-Type = Ethernet↵
NAS-Port = 402653187↵
NAS-Port-Id = "1/3/0/3"↵
Cisco-AVPair = "client-mac-address=fe00.5104.01ae"↵
Service-Type = Framed-User↵
NAS-IP-Address = 1.2.3.4↵
X-Ascend-Session-Svr-Key = "DCCE87A5"↵
Acct-Delay-Time = 0↵
Proxy-State = 0x313133↵
Proxy-State = 0x323339↵
Client-IP-Address = 1.2.3.4↵
Acct-Unique-Session-Id = "3ff5a50a3cea9cba"↵
Timestamp = 1369087203↵
↵
Output Sample
{
"EventTime": "2013-05-21T00:00:03.000000+00:00",
"Acct-Session-Id": "1/3/0/3_00FA2701",
"Framed-Protocol": "PPP",
"Framed-IP-Address": "1.2.3.4",
"Cisco-AVPair": "client-mac-address=fe00.5104.01ae",
"User-Name": "user",
"Acct-Authentic": "RADIUS",
"Acct-Session-Time": 384,
"Acct-Input-Octets": 4497,
"Acct-Output-Octets": 7951,
"Acct-Input-Packets": 64,
"Acct-Output-Packets": 64,
"Acct-Terminate-Cause": "User-Request",
"Acct-Status-Type": "Stop",
"NAS-Port-Type": "Ethernet",
"NAS-Port": 402653187,
"NAS-Port-Id": "1/3/0/3",
"Service-Type": "Framed-User",
"NAS-IP-Address": "1.2.3.4",
"X-Ascend-Session-Svr-Key": "DCCE87A5",
"Acct-Delay-Time": 0,
"Proxy-State": 3289913,
"Client-IP-Address": "1.2.3.4",
"Acct-Unique-Session-Id": "3ff5a50a3cea9cba",
"Timestamp": "2013-05-20T22:00:03.000000+00:00"
}
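The `$Timestamp = datetime(integer($Timestamp) * 1000000);` line in the Exec block turns the epoch-seconds trailer value into NXLog's microsecond-resolution datetime type. The Python equivalent below reproduces the UTC value shown in the output sample (the two-hour difference from the EventTime field reflects the time zone of the logging host, since parsedate() interprets the entry header in local time):

```python
from datetime import datetime, timezone

# "Timestamp = 1369087203" in the accounting record is a Unix epoch value
# in seconds. NXLog's datetime() expects microseconds, hence the
# multiplication by 1000000 in the Exec block; Python converts directly
# from seconds.
epoch = 1369087203
ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
iso = ts.strftime("%Y-%m-%dT%H:%M:%S")
```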
Chapter 64. Graylog
Graylog is a popular open source log management tool with a GUI that uses Elasticsearch as a backend. It
provides centralized log collection, analysis, searching, visualization, and alerting features. NXLog can be
configured as a collector for Graylog, using one of the output writers provided by the xm_gelf module. In such a
setup, NXLog acts as a forwarding agent on the client machine, sending messages to a Graylog node.
See the Graylog documentation for more information about configuring and using Graylog.
2. Select input type GELF UDP and click the [ Launch new input ] button.
3. Select the Graylog node for your input or make it global. Provide a name for the input in the Title textbox.
Change the default port if needed. Use the Bind address option to limit the input to a specific network
interface.
Example 281. Sending GELF via UDP
This configuration loads the xm_gelf extension module and uses the GELF_UDP output writer to send GELF
messages via UDP.
nxlog.conf
1 <Extension _gelf>
2 Module xm_gelf
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/var/log/messages"
8 </Input>
9
10 <Output out>
11 Module om_udp
12 Host 127.0.0.1
13 Port 12201
14 OutputType GELF
15 </Output>
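For reference, a GELF 1.1 payload as produced by the GELF_UDP writer is a JSON object sent as a single UDP datagram. The sketch below builds such a payload in Python; the host and message values are hypothetical, and per the GELF specification any non-standard event fields are carried with an underscore prefix:

```python
import json

# A minimal GELF 1.1 payload for one event.
event = {
    "version": "1.1",
    "host": "example.org",
    "short_message": "Example log line read from /var/log/messages",
    "timestamp": 1582754912.0,   # Unix epoch, seconds
    "level": 6,                  # Syslog severity: informational
    "_SourceModuleName": "in",   # additional (underscore-prefixed) field
}
payload = json.dumps(event).encode("utf-8")

# Each event is one UDP datagram to the Graylog input, for example:
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(payload, ("127.0.0.1", 12201))
```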
2. Select input type GELF TCP and click the [ Launch new input ] button.
3. Select the Graylog node for your input or make it global. Provide a name for the input in the Title textbox.
Change the default port if needed. Use the Bind address option to limit the input to a specific network
interface.
4. To use TLS configuration, provide the TLS cert file and the TLS private key file (a password is required if the
private key is encrypted). Check Enable TLS.
Example 282. Sending GELF via TCP
This configuration loads the xm_gelf extension module and uses the GELF_TCP output writer to send GELF
messages via TCP.
nxlog.conf
1 <Extension _gelf>
2 Module xm_gelf
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/var/log/messages"
8 </Input>
9
10 <Output out>
11 Module om_tcp
12 Host 127.0.0.1
13 Port 12201
14 OutputType GELF_TCP
15 </Output>
Example 283. Sending GELF via TCP/TLS
This configuration loads the xm_gelf extension module and uses the GELF_TCP output writer with the
om_ssl module to send GELF messages via TLS encrypted connection.
nxlog.conf
1 <Extension _gelf>
2 Module xm_gelf
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "/var/log/messages"
8 </Input>
9
10 <Output out>
11 Module om_ssl
12 Host 127.0.0.1
13 Port 12201
14 CertFile %CERTDIR%/graylog.crt
15 AllowUntrusted TRUE
16 OutputType GELF_TCP
17 </Output>
1. Stop and disable the NXLog system service, as the NXLog process will be managed by Graylog. Install and
configure the collector sidecar for the target system. The details can be found in the Graylog Collector
Sidecar documentation.
collector_sidecar.yml
server_url: http://10.0.2.2:9000/api/
update_interval: 30
tls_skip_verify: true
send_status: true
list_log_files:
- /var/log
node_id: graylog-collector-sidecar
collector_id: file:/etc/graylog/collector-sidecar/collector-id
log_path: /var/log/graylog/collector-sidecar
log_rotation_time: 86400
log_max_age: 604800
tags:
- linux
- apache
- redis
backends:
- name: nxlog
enabled: true
binary_path: /usr/bin/nxlog
configuration_path: /etc/graylog/collector-sidecar/generated/nxlog.conf
2. Go to System › Collectors. After a successful sidecar installation, a new collector should appear.
3. Click the [ Create configuration ] button.
5. Create a new output of the required type. See the Configuring GELF UDP Collection and Configuring GELF
TCP or TCP/TLS Collection sections above.
6. Create an input for NXLog (for example, a file input).
7. Go back to System › Collectors to verify the setup. If everything is fine, the collector should be in the
Running state.
Chapter 65. HP ProCurve
HP ProCurve switches are capable of sending their logs to a remote Syslog destination via UDP or TCP. When
sending logs over the network, it is recommended to use TCP as the more reliable protocol; with UDP there is a
potential to lose entries, especially when there is a high volume of messages. It is also possible to send logs via
TLS if additional security is required.
The HP ProCurve web interface does not provide a way to configure an external Syslog server, so this must be
done via the command line (see the following sections). For more details on configuring logging for HP ProCurve
switches, refer to the HP ProCurve Management and Configuration Guide available from HP Enterprise Support.
The actual document depends on the model and firmware version in use.
WARNING In the case of multiple switches running in redundancy mode (such as VRRP or similar), each
device must be configured separately, as failover happens per VLAN and the logging configuration is not
synchronized.
NOTE The steps below have been tested with HP 4000 series switches but should also work for 2000, 6000,
and 8000 series devices.
1. Configure NXLog to receive log entries over the network and process them as Syslog (see Accepting Syslog
via UDP, TCP, or TLS and the TCP example below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from the switch.
3. Connect to the switch via SSH or Telnet.
4. Run the following commands to configure Syslog logging. Replace LEVEL with the logging level (debug, major,
error, warning, or info). Replace FACILITY with the Syslog facility to be used for the logs. Replace
IP_ADDRESS with the address of the NXLog agent; PROTOCOL with udp, tcp, or tls; and PORT with the
required port. If PORT is omitted, the default will be used (514 for UDP, 1470 for TCP, or 6514 for TLS).
# configure
(config)# logging severity LEVEL
(config)# logging facility FACILITY
(config)# logging IP_ADDRESS PROTOCOL PORT
(config)# write memory
The following commands configure the switch to send logs to 192.168.6.143 via the default TCP port.
Only logs with info severity level and higher will be sent, and the local5 Syslog facility will be used.
# configure
(config)# logging severity info
(config)# logging facility local5
(config)# logging 192.168.6.143 tcp
(config)# write memory
Example 285. Receiving ProCurve Logs via TCP
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_syslog_tcp>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1470
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/hp.log"
19 Exec to_json();
20 </Output>
Events like those at the beginning of the chapter will result in the following output.
Output Sample
{
"MessageSourceAddress": "192.168.10.3",
"EventReceivedTime": "2017-03-18 19:32:02",
"SourceModuleName": "in_syslog_tcp",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 21,
"SyslogFacility": "LOCAL5",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "192.168.10.3",
"EventTime": "2017-03-19 00:27:27",
"SourceName": "mgr",
"Message": " SME TELNET from 192.168.9.78 - MANAGER Mode"
}
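The SyslogFacilityValue and SyslogSeverityValue fields above are decoded by parse_syslog() from the PRI number at the start of each frame, where PRI = facility * 8 + severity. A quick Python check for the sample's local5/info combination:

```python
# PRI = facility * 8 + severity, as encoded in the "<174>" prefix of a
# BSD Syslog frame; parse_syslog() splits it back into the
# SyslogFacilityValue and SyslogSeverityValue fields seen above.
def split_pri(pri: int) -> tuple:
    """Return (facility, severity) for a Syslog PRI value."""
    return pri // 8, pri % 8

facility, severity = split_pri(174)   # local5 (21), info (6)
```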
Chapter 66. IBM QRadar SIEM
IBM QRadar Security Information and Event Management (SIEM) collects event data and uses analytics,
correlation, and threat intelligence features to identify known or potential threats, provide alerting and reports,
and aid in incident investigations. For more information, see IBM QRadar SIEM on IBM.com.
NXLog can be configured to collect events and forward them to QRadar SIEM. This chapter provides information
about setting up this integration, both for generic structured logs and for several specific log types. The last
section shows output examples for forwarding the processed logs to QRadar.
NOTE The instructions and examples in this chapter were tested with QRadar 7.3.1.
• Some events may exceed QRadar’s default Syslog payload length. Consider setting the maximum payload
length to 8,192 bytes. For instructions, see QRadar: How to increase the maximum TCP payload size for event
data on IBM Support.
• The QRadar appliance should be fully updated with recent patches and fixes.
This log source will act as a gateway, passing each event on to another matching log source. Only one TLS listener
is required per port; see Configuring multiple log sources over TLS syslog on IBM Knowledge Center.
First, prepare the TLS certificate and key files (for more information, see OpenSSL Certificate Creation):
1. Locate a certificate authority (CA) certificate and private key, or generate and sign a new one. The CA
certificate (for example, rootCA.pem) will be used by the NXLog agent to authenticate the QRadar receiver in
Forwarding Logs below.
2. Create a certificate and private key for QRadar TLS Syslog (for example, server.crt and server.key).
3. Convert the QRadar private key to a DER-encoded PKCS8 key (see QRadar: TLS Syslog support of DER-
encoded PKCS8 custom certificates):
4. Copy the private key and certificate files to QRadar (the steps below assume the files are copied to
/root/server.*).
Then add the log source on QRadar:
1. In the QRadar web interface, go to Menu > Admin > Data Sources > Events > Log Sources.
2. Click Add to add a new log source. The Add a log source window appears.
9. Set Provided Private Key Path to the path of the DER-encoded server key (for example,
/root/server.key.der).
10. Select the Target Event Collector. Use this to poll for and process events using the specified event collector,
rather than on the Console appliance.
11. Make any other changes required, and then click Save.
12. Go to Menu > Admin and click Advanced > Deploy Full Configuration after making all required log source
changes.
1. In the QRadar web interface, go to Menu > Admin > Data Sources > Events > Log Sources.
2. Click Add to add a new log source. The Add a log source window appears.
NOTE The Syslog hostname field is used by QRadar as the log source identifier to associate received events
with a particular log source. This value can be adjusted by changing the $Hostname = host_ip(); line in
the examples below: keep the line as-is to use the system’s first non-loopback IP address, remove the line to
use the system hostname, or set the line to a custom value (for example, $Hostname = "myhostname";).
7. Select the Target Event Collector. Use this to poll for and process events using the specified event collector,
rather than on the Console appliance.
8. Make any other changes required, and then click Save.
9. Go to Menu > Admin and click Advanced > Deploy Full Configuration after making all required log source
changes.
LEEF has several predefined event attributes that should be used where applicable—see LEEF event components
and Predefined LEEF event attributes on IBM Knowledge Center. These fields can be set during parsing, set to
static values manually ($usrName = "john";), renamed using the rename_field() procedure, or renamed using the
xm_rewrite Rename directive (NXLog Enterprise Edition only). Additionally, to_leef() will set several predefined
attributes automatically.
Use Universal LEEF as QRadar’s Log Source Type. Once LEEF events have been received by QRadar, specific
fields can be selected for extraction as described in Writing an expression for structured data in LEEF format (in
the QRadar Security Intelligence Platform documentation). LEEF events can also be mapped to QRadar Identifiers
(QIDs). For more information, see the Universal LEEF section in the QRadar DSM Guide.
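The LEEF framing itself is simple: a pipe-delimited header identifying the version, vendor, product, product version, and event ID, followed by delimiter-separated key=value attributes. The sketch below mimics what to_leef() emits for the SSH failure shown in the output sample of this section; the vendor and version strings are taken from that sample, and this illustrates the format rather than NXLog's implementation:

```python
# LEEF:1.0 framing: pipe-delimited header, then tab-separated
# key=value attributes.
def to_leef(event_id: str, attrs: dict) -> str:
    header = "LEEF:1.0|NXLog|sshd|4.4.4347|" + event_id
    body = "\t".join(f"{k}={v}" for k, v in attrs.items())
    return header + "|" + body

record = to_leef("Logon Failure", {
    "usrName": "baduser",                    # predefined LEEF attribute
    "cat": "Failed",                         # predefined LEEF attribute
    "devTimeFormat": "yyyy-MM-dd HH:mm:ss",  # predefined LEEF attribute
})
```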
This example reads Syslog messages from file, parses them, and sets some additional fields. Then the
xm_leef to_leef() procedure is used to convert the event to LEEF (and write it to the $raw_event field).
Because the event is converted in the scope of this input instance, it is not necessary to do additional
processing in the corresponding output instance—see Forwarding Logs for output examples that could be
used to send the events to QRadar.
NOTE This example is intended as a starting point for a configuration that provides a specific set of fields to
QRadar. For logs that are already structured, it may only be necessary to rename a few fields according to the
predefined LEEF attribute names.
nxlog.conf (truncated)
1 <Extension _leef>
2 Module xm_leef
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input auth>
10 Module im_file
11 File '/var/log/auth.log'
12 <Exec>
13 # Parse Syslog event and set fields in the event record
14 parse_syslog();
15
16 # Set event category and event ID (for QID mapping)
17 if $Message =~ /^Invalid/
18 {
19 $Category = "Failed";
20 $EventID = "Logon Failure";
21 }
22 else
23 {
24 $Category = "Success";
25 $EventID = "Logon Success";
26 }
27
28 # Extract user name for "usrName" event attribute
29 [...]
Output Sample
<13>Jul 31 07:17:01 10.80.1.49 CRON[968]: LEEF:1.0|NXLog|CRON|4.4.4347|Logon
Success|EventReceivedTime=2019-08-11 22:48:59 ⇥ SourceModuleName=file ⇥
SourceModuleType=im_file ⇥ SyslogFacilityValue=1 ⇥ SyslogFacility=USER ⇥ SyslogSeverityValue=5
⇥ SyslogSeverity=NOTICE ⇥ sev=2 ⇥ Severity=INFO ⇥ identHostName=debian ⇥ devTime=2019-07-31
07:17:01 ⇥ vSrcName=CRON ⇥ ProcessID=968 ⇥ Message=pam_unix(cron:session): session opened for
user root by (uid=0) ⇥ cat=Success ⇥ EventID=Logon Success ⇥ usrName=root ⇥ role=Administrator
⇥ devTimeFormat=yyyy-MM-dd HH:mm:ss↵
<13>Aug 11 22:43:26 10.80.1.49 sshd[5584]: LEEF:1.0|NXLog|sshd|4.4.4347|Logon
Failure|EventReceivedTime=2019-08-11 22:48:59 ⇥ SourceModuleName=file ⇥
SourceModuleType=im_file ⇥ SyslogFacilityValue=1 ⇥ SyslogFacility=USER ⇥ SyslogSeverityValue=5
⇥ SyslogSeverity=NOTICE ⇥ sev=2 ⇥ Severity=INFO ⇥ identHostName=debian ⇥ devTime=2019-08-11
22:43:26 ⇥ vSrcName=sshd ⇥ ProcessID=5584 ⇥ Message=Invalid user baduser from 10.80.0.1 port
33122 ⇥ cat=Failed ⇥ EventID=Logon Failure ⇥ usrName=baduser ⇥ role=User ⇥
devTimeFormat=yyyy-MM-dd HH:mm:ss↵
Type should be set to Microsoft DHCP Server and the Protocol Configuration should be set to Syslog—see
Adding a QRadar Log Source.
For more information, see DHCP Server Audit Logging and the Microsoft DHCP Server page in the QRadar DSM
Guide.
Example 287. Sending Windows DHCP Events to QRadar
In this example, NXLog is configured to read logs from the following paths:
• C:\Windows\System32\dhcp\DhcpSrvLog-*.log
• C:\Windows\System32\dhcp\DhcpV6SrvLog-*.log
NXLog parses the events and converts the structured data for forwarding to QRadar.
nxlog.conf (truncated)
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension dhcp_csv_parser>
6 Module xm_csv
7 Fields ID, Date, Time, Description, IPAddress, LogHostname, MACAddress, \
8 UserName, TransactionID, QResult, ProbationTime, CorrelationID, \
9 DHCID, VendorClassHex, VendorClassASCII, UserClassHex, \
10 UserClassASCII, RelayAgentInformation, DnsRegError
11 </Extension>
12
13 <Extension dhcpv6_csv_parser>
14 Module xm_csv
15 Fields ID, Date, Time, Description, IPAddress, LogHostname, MACAddress, \
16 UserName, TransactionID, QResult, ProbationTime, CorrelationID, \
17 DHCID, VendorClassHex
18 </Extension>
19
20 <Input dhcp>
21 Module im_file
22 File 'C:\Windows\System32\dhcp\DhcpSrvLog-*.log'
23 File 'C:\Windows\System32\dhcp\DhcpV6SrvLog-*.log'
24 <Exec>
25 # Only process lines that begin with an event ID
26 if $raw_event =~ /^\d+,/
27 {
28 if file_name() =~ /^.*\\(.*)$/ $FileName = $1;
29 [...]
66.3.2. DNS Debug Log
To send DNS debug log events to QRadar, enable debug logging and use the NXLog configuration shown below.
WARNING Do not enable Details in the DNS Server Debug Logging dialog.
If QRadar does not auto-discover the log source, add one manually. The Log Source Type should be set to
Microsoft DNS Debug and the Protocol Configuration should be set to Syslog—see Adding a QRadar Log
Source. If the Microsoft DNS Debug log source type is not available, see Setting up the QRadar Appliance above.
For more information, see Windows DNS Server and the Microsoft DNS Debug page in the QRadar DSM Guide.
This configuration uses the xm_msdns extension module to parse the Windows DNS debug log.
nxlog.conf (truncated)
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension dns_parser>
6 Module xm_msdns
7 </Extension>
8
9 <Input dns>
10 Module im_file
11 File 'C:\logs\dns.log'
12 InputType dns_parser
13 <Exec>
14 $raw_event =~ /(?x)^(?<Date>\d+\/\d+\/\d+)\s(?<Time>\d+:\d+:\d+\s+\w{2})/;
15 if file_name() =~ /^.*\\(.*)$/ $FileName = $1;
16 $Message = "AgentDevice=WindowsDNS" +
17 "\tAgentLogFile=" + $FileName +
18 "\tDate=" + $Date +
19 "\tTime=" + $Time +
20 "\tThread ID=" + $ThreadID;
21 if $Context == "EVENT"
22 {
23 $EventDescription =~ s/,//g;
24 $Message = $Message +
25 "\tContext=EVENT" +
26 "\tMessage=" + $EventDescription;
27 }
28 else if $Context == "Note"
29 [...]
Output Sample
<13>Jul 20 08:42:07 10.80.1.49 AgentDevice=WindowsDNS ⇥ AgentLogFile=debug.log ⇥
Date=7/20/2019 ⇥ Time=8:42:07 AM ⇥ Thread ID=0710 ⇥ Context=EVENT ⇥ Message=The DNS server has
finished the background loading of zones. All zones are now available for DNS updates and zone
transfers as allowed by their individual zone configuration.↵
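The Exec block's string concatenation amounts to joining key=value pairs with tab characters. A Python rendering of the same assembly, using field values taken from the output sample above:

```python
# Assemble a single tab-delimited message from parsed fields, as the
# Exec block above does; values are from the output sample.
fields = {
    "AgentDevice": "WindowsDNS",
    "AgentLogFile": "debug.log",
    "Date": "7/20/2019",
    "Time": "8:42:07 AM",
    "Thread ID": "0710",
    "Context": "EVENT",
}
message = "\t".join(f"{key}={value}" for key, value in fields.items())
```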
QRadar does not support auto-discovery for Exchange Server logs, so it is necessary to add a log source
manually. The Log Source Type should be set to Microsoft Exchange Server and the Protocol Configuration
should be set to Syslog—see Adding a QRadar Log Source.
For more information, see the Microsoft Exchange chapter and the Microsoft Exchange Server pages in the
QRadar DSM Guide.
The following configuration uses the im_file module to read message tracking, Outlook web access (OWA),
and SMTP logs from various paths. The logs are parsed and converted for forwarding to QRadar.
NOTE Make sure to use the correct ID for the Exchange Back End site. This can be verified using the Internet
Information Services (IIS) Manager. The following example collects logs from the site with ID 2 (W3SVC2/).
nxlog.conf (truncated)
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension w3c_parser>
6 Module xm_w3c
7 </Extension>
8
9 <Extension w3c_comma_parser>
10 Module xm_w3c
11 Delimiter ,
12 </Extension>
13
14 <Input exchange_OWA>
15 Module im_file
16 File 'C:\inetpub\logs\LogFiles\W3SVC2\u_ex*.log'
17 InputType w3c_parser
18 <Exec>
19 if file_name() =~ /^.*\\(.*)$/ $FileName = $1;
20 if ${cs-uri-query} == undef ${cs-uri-query} = "-";
21 if ${cs-username} == undef ${cs-username} = "-";
22 if ${cs(Referer)} == undef ${cs(Referer)} = "-";
23 $Message = "AgentDevice=MicrosoftExchange" +
24 "\tAgentLogFile=" + $FileName +
25 "\tAgentLogFormat=W3C" +
26 "\tAgentLogProtocol=OWA" +
27 "\tdate=" + $date +
28 "\ttime=" + $time +
29 [...]
66.3.4. Microsoft IIS
Microsoft IIS logs can be collected using the W3C Extended Log File Format. The W3C logging should be
configured as described in the Configuring Microsoft IIS by using the IIS Protocol page of the QRadar DSM Guide.
NOTE For NXLog Community Edition, the xm_csv module can be used instead of xm_w3c.
If QRadar does not auto-discover the log source, add one manually. The Log Source Type should be set to
Microsoft IIS and the Protocol Configuration should be set to Syslog—see Adding a QRadar Log Source.
For more information, see the Microsoft IIS chapter and the Microsoft IIS Server pages in the QRadar DSM Guide.
Example 290. Sending Windows IIS Events to QRadar
This configuration uses the xm_w3c extension module to parse the IIS log, and converts the events to a tab-
delimited format for QRadar.
Input Sample
2019-07-24 09:21:55 127.0.0.1 POST /OWA/auth.owa &CorrelationID=<empty>;&cafeReqId=4b9353b7-
e17b-4bc5-9e54-bc6b4733d6dd;&encoding=; 443
HealthMailboxa733ff32a90d44bb970f7a147fb3f328@nxlog.org 127.0.0.1 AMProbe/Local/ClientAccess -
302 0 0 10171↵
nxlog.conf (truncated)
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension w3c_parser>
6 Module xm_w3c
7 </Extension>
8
9 <Input iis>
10 Module im_file
11 File 'C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log'
12 InputType w3c_parser
13 <Exec>
14 if file_name() =~ /^.*\\(.*)$/ $FileName = $1;
15 if ${cs-uri-query} == undef ${cs-uri-query} = "-";
16 if ${cs-username} == undef ${cs-username} = "-";
17 if ${cs(Referer)} == undef ${cs(Referer)} = "-";
18 $Message = "AgentDevice=MSIIS" +
19 "\tAgentLogFile=" + $FileName +
20 "\tAgentLogFormat=W3C" +
21 "\tAgentLogProtocol=W3C" +
22 "\tdate=" + $date +
23 "\ttime=" + $time +
24 "\ts-ip=" + ${s-ip} +
25 "\tcs-method=" + ${cs-method} +
26 "\tcs-uri-stem=" + ${cs-uri-stem} +
27 "\tcs-uri-query=" + ${cs-uri-query} +
28 "\ts-port=" + ${s-port} +
29 [...]
Output Sample
<13>Jul 24 09:21:55 10.80.1.49 AgentDevice=MSIIS ⇥ AgentLogFile=u_ex190724.log ⇥
AgentLogFormat=W3C ⇥ AgentLogProtocol=W3C ⇥ date=2019-07-24 ⇥ time=09:21:55 ⇥ s-ip=127.0.0.1
⇥ cs-method=POST ⇥ cs-uri-stem=/OWA/auth.owa ⇥ cs-uri-
query=&CorrelationID=<empty>;&cafeReqId=4b9353b7-e17b-4bc5-9e54-bc6b4733d6dd;&encoding=; ⇥ s-
port=443 ⇥ cs-username=HealthMailboxa733ff32a90d44bb970f7a147fb3f328@nxlog.org ⇥ c-
ip=127.0.0.1 ⇥ cs(User-Agent)=AMProbe/Local/ClientAccess ⇥ cs(Referer)=- ⇥ sc-status=302 ⇥ sc-
substatus=0 ⇥ sc-win32-status=0 ⇥ time-taken=10171↵
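The repeated `if ${field} == undef ${field} = "-";` lines in the Exec block keep the tab-delimited output's column layout stable when a W3C field is absent from a record. An equivalent normalization step in Python (the helper name is hypothetical; the field names come from the configuration above):

```python
# Substitute "-" for W3C fields that are missing (undef) in a record so
# the tab-delimited message keeps a fixed column layout.
def fill_missing(record: dict, fields: list) -> dict:
    return {f: record.get(f) or "-" for f in fields}

parsed = {"cs-uri-query": None, "cs-username": "admin"}
normalized = fill_missing(parsed, ["cs-uri-query", "cs-username", "cs(Referer)"])
```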
If QRadar does not auto-discover the log source, add one manually. The Log Source Type should be set to
Microsoft SQL Server and the Protocol Configuration should be set to Syslog—see Adding a QRadar Log
Source.
For configuration information, see the Microsoft SQL Server section in the QRadar DSM Guide.
This example reads and parses events from the SQL Server log file, then converts the events to a tab-
delimited format for QRadar.
nxlog.conf (truncated)
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension charconv>
6 Module xm_charconv
7 LineReader UTF-16LE
8 </Extension>
9
10 define ERRORLOG_EVENT /(?x)(?<Date>\d+-\d+-\d+)\s \
11 (?<Time>\d+:\d+:\d+.\d+)\s \
12 (?<Source>\S+)\s+ \
13 (?<Payload>.+)$/s
14
15 <Input sql>
16 Module im_file
17 File 'C:\Program Files\Microsoft SQL Server\' + \
18 'MSSQL14.MSSQLSERVER\MSSQL\Log\ERRORLOG'
19 InputType charconv
20 <Exec>
21 # Attempt to match regular expression
22 if $raw_event =~ %ERRORLOG_EVENT%
23 {
24 # Check if previous lines were saved
25 if defined(get_var('saved'))
26 {
27 $tmp = $raw_event;
28 $raw_event = get_var('saved');
29 [...]
Output Sample
<13>Aug 21 22:55:36 10.80.1.49 AgentDevice=MSSQL ⇥ AgentLogFile=ERRORLOG ⇥ Date=2019-08-21 ⇥
Time=22:55:36.23 ⇥ Source=spid16s ⇥ Message=The Service Broker endpoint is in disabled or
stopped state.↵
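The charconv extension with `LineReader UTF-16LE` is needed because SQL Server writes ERRORLOG as UTF-16LE text, which a plain line reader would mangle. The same explicit decoding in Python (the sample bytes are synthetic):

```python
# SQL Server writes ERRORLOG as UTF-16LE, so the bytes must be decoded
# with that encoding before line-oriented parsing.
raw = "2019-08-21 22:55:36.23 spid16s Message text\r\n".encode("utf-16-le")
line = raw.decode("utf-16-le").rstrip("\r\n")
```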
NOTE This format is recommended instead of Snare or Log Event Extended Format (LEEF) in order to take full
advantage of the parsing provided by the QRadar DSM. Otherwise, additional parsing and/or mappings would
be required to translate Windows EventLog fields to QRadar fields.
If QRadar does not auto-discover the log source, add one manually. The Log Source Type should be set to
Microsoft Windows Security Event Log and the Protocol Configuration should be set to Syslog—see Adding a
QRadar Log Source.
For more information, see the Windows Event Log chapter and the Microsoft Windows Security Event Log section
in the QRadar DSM Guide.
This configuration will collect from the Windows EventLog using im_msvistalog, convert the $Message field
to a specific tab-delimited format, and add a BSD Syslog header with xm_syslog.
NOTE This example does not filter events, but forwards all events to QRadar. Only a subset of those events will
be recognized and parsed by the QRadar DSM. For more information about using EventLog queries to limit
collected events, see Windows Event Log.
nxlog.conf (truncated)
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input eventlog>
6 Module im_msvistalog
7 <Exec>
8 if $Category == undef $Category = 0;
9 $EventTimeStr = strftime($EventTime, "YYYY-MM-DDThh:mm:ss.sUTC");
10 if $EventType == 'CRITICAL'
11 {
12 $EventTypeNum = 1;
13 $EventTypeStr = "Critical";
14 }
15 else if $EventType == 'ERROR'
16 {
17 $EventTypeNum = 2;
18 $EventTypeStr = "Error";
19 }
20 else if $EventType == 'INFO'
21 {
22 $EventTypeNum = 4;
23 $EventTypeStr = "Informational";
24 }
25 else if $EventType == 'WARNING'
26 {
27 $EventTypeNum = 3;
28 $EventTypeStr = "Warning";
29 [...]
Output Sample
<13>Jul 15 20:24:43 10.80.1.49 AgentDevice=WindowsLog ⇥ AgentLogFile=System ⇥ Source=Service
Control Manager ⇥ Computer=QRW.nxlog.org ⇥ OriginatingComputer=10.80.1.49 ⇥ User= ⇥ Domain= ⇥
EventIDCode=7036 ⇥ EventType=4 ⇥ EventCategory=0 ⇥ RecordNumber=9830 ⇥ TimeGenerated=2019-07-
15T20:24:43.296533Z ⇥ TimeWritten=2019-07-15T20:24:43.296533Z ⇥ Level=Informational ⇥
Keywords=9259400833873739776 ⇥ Task=None ⇥ Opcode=Info ⇥ Message=The WinCollect service
entered the stopped state.↵
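The if/else chain that assigns $EventTypeNum and $EventTypeStr is effectively a lookup table from EventLog levels to the numeric codes seen in the EventType field of the sample above. A compact Python equivalent (the fallback for unlisted levels is an assumption, standing in for a possible final else branch):

```python
# Lookup-table version of the if/else chain above; Informational events
# appear as EventType=4 in the output sample.
EVENT_TYPE_MAP = {
    "CRITICAL": (1, "Critical"),
    "ERROR":    (2, "Error"),
    "WARNING":  (3, "Warning"),
    "INFO":     (4, "Informational"),
}

def map_event_type(event_type: str):
    # Unknown levels fall back to Informational (assumed default).
    return EVENT_TYPE_MAP.get(event_type, (4, "Informational"))

num, label = map_event_type("INFO")
```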
Example 293. Forwarding Logs via TCP
This om_tcp instance sends logs to QRadar via TCP. In this example, events are sent from the Microsoft IIS
and Windows EventLog sources.
nxlog.conf
1 <Output qradar>
2 Module om_tcp
3 Host 10.0.0.2
4 Port 514
5 </Output>
6
7 <Route r>
8 Path iis, eventlog => qradar
9 </Route>
Forwarding logs with TLS requires adding a TLS Syslog listener, as described in Adding a TLS Syslog Log Source
above. The root certificate authority (CA) certificate, which is used to verify the authenticity of the QRadar
receiver’s certificate, should be provided to om_ssl with either CADir or CAFile.
In this example, the om_ssl module is used to send logs to QRadar securely, with TLS encryption.
nxlog.conf
1 <Output qradar>
2 Module om_ssl
3 Host 10.0.0.2
4 Port 6514
5 CAFile C:\Program Files\cert\rootCA.pem
6 </Output>
Chapter 67. Linux Audit System
The Linux Audit system provides fine-grained logging of security-related events. The system administrator
configures rules to specify which events are logged. Besides the kernel component and the auditd daemon that
writes the log files, the Audit system includes:
• the audisp dispatcher daemon, which relays events to other applications for additional processing, and
• the auditctl control utility, which provides configuration of the kernel component.
These tools are provided for reading the Audit log files:
• aulastlog prints out the last login for all users of a machine,
• ausearch searches Audit logs for events fitting given criteria, and
• auvirt prints a list of virtual machine sessions found in the Audit logs.
For more information about the Audit system, see the System Auditing chapter of the Red Hat Enterprise Linux
Security Guide, the installed manual pages, and the Linux Audit Documentation Project.
There are three types of rules: a control rule modifies Audit’s behavior, a file system rule watches a file or
directory, and a system call rule generates a log event for a particular system call. For more details about Audit
rules, see the Defining Audit Rules page of the Red Hat Enterprise Linux Security Guide.
Control rules are specified with auditctl flags, including:
• -b backlog: Set the maximum number of audit buffers. This should be higher for busier systems or for
heavy log volumes.
• -D: Delete all rules and watches. Normally used as the first rule.
• -e [0..2]: Temporarily disable auditing with 0, enable it with 1, or lock the configuration until the next
reboot with 2 (used as the last rule).
Example 295. Control Rules
This is a set of basic rules, some form of which is likely to be found in any ruleset.
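A sketch of such a ruleset, built from the control flags described above (the buffer size is illustrative; the same rules appear in the im_linuxaudit example later in this chapter):

```
# Delete all rules and watches
-D

# Increase the buffers from the default 64
-b 320

# ...watch and system call rules go here...

# Lock the configuration until the next reboot
-e 2
```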
• The permissions argument sets the kinds of accesses that are logged, and is a string containing one or
more of r (read access), w (write access), x (execute access), and a (attribute change).
• The key_name argument is an optional tag for identifying the rule.
This rule watches /etc/passwd for modifications and tags these events with passwd.
-w /etc/passwd -p wa -k passwd
• The action argument can be either always (to generate a log entry) or never (to suppress a log entry).
Generally, use never rules before always rules, because rules are matched from first to last.
• The filter argument is one of task (when a task is created), exit (when a system call exits), user (when a
call originates from user space) or exclude (to filter events).
• The system_call argument specifies the system call by name, and can be repeated by using multiple -S
flags.
• The field=value pair can be used to specify additional match options, and can also be used more than
once.
• The key_name argument is an optional tag for identifying the rule.
This rule generates a log entry when the system time is changed.
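For example, this rule (used verbatim in the im_linuxaudit example later in this chapter) logs any change to the system time and tags it with system_time:

```
-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k system_time
```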
System call rules can also monitor activities around files, such as:
• creation,
• modification,
• deletion,
• access, permission, and owner modifications.
Example 298. Deletion rule
This rule generates a log entry when a file is deleted with the unlink or rename system call:
This rule checks whether an incoming or outgoing external network connection has been established.
/etc/audit/rules.d/audit.rules
# Delete all rules
-D
For more examples of rules, see the Linux Audit Project and auditd-attack repositories on GitHub.
The audispd syslog plugin should be enabled to forward logs to the /dev/log socket. To do this, edit the
/etc/audisp/plugins.d/syslog.conf file to match the sample below.
1 active = yes
2 direction = out
3 path = builtin_syslog
4 type = builtin
5 args = LOG_INFO
6 format = string
A sample rule can be created in the /etc/audit/rules.d/audit.rules file to monitor modifications of
the /tmp/audit_syslog file.
1 -w /tmp/audit_syslog -p wa -k audit_syslog
NXLog needs to be configured to accept Syslog messages from the /dev/log socket.
NOTE  By default, NXLog cannot bind to the /dev/log socket due to the limitations of the nxlog user. See
the Running Under a Non-Root User on Linux section for ways to handle this.
The configuration below accepts logs from the socket using the im_uds module and the Exec block selects
only messages which contain the audit_syslog string. These messages are parsed with the
parse_syslog_bsd() procedure of the xm_syslog module and converted to JSON using the to_json() procedure
of the xm_json module.
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension json>
6 Module xm_json
7 </Extension>
8
9 <Input from_uds>
10 Module im_uds
11 UDS /dev/log
12 <Exec>
13 if not ($raw_event =~ /.+audit_syslog.+/) drop();
14 parse_syslog_bsd();
15 to_json();
16 </Exec>
17 </Input>
Below is an output sample of a JSON-formatted log entry which can be obtained using this configuration.
{
"EventReceivedTime": "2020-04-28T21:09:13.959876+00:00",
"SourceModuleName": "from_uds",
"SourceModuleType": "im_uds",
"SyslogFacilityValue": 1,
"SyslogFacility": "USER",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "administrator",
"EventTime": "2020-04-28T21:09:13.000000+00:00",
"SourceName": "audispd",
"Message": "node=administrator type=SYSCALL msg=audit(1588108153.953:1246): arch=c000003e
syscall=257 success=yes exit=3 a0=ffffff9c a1=55a4e3921110 a2=41 a3=1a4 items=2 ppid=2417
pid=3374 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=3
comm=\"vim\" exe=\"/usr/bin/vim.basic\" key=\"audit_syslog\""
}
67.3. Using im_linuxaudit
NXLog Enterprise Edition includes an im_linuxaudit module for directly accessing the kernel component of the
Audit system. With this module, NXLog can set up Audit rules and collect logs without requiring auditd or
any other userspace software.
WARNING  If an im_linuxaudit module instance is suspended and the Audit backlog limit is exceeded, all
processes that generate Audit messages will be blocked. For this reason, it is recommended in most cases
that FlowControl be disabled for im_linuxaudit module instances. With flow control disabled, a blocked
route will cause Audit messages to be discarded. To reduce the risk of log data being discarded, make sure
the route's processing is fast enough to handle the Audit messages by adjusting the LogQueueSize of the
modules that follow in the route and/or adding a pm_buffer instance.
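As a sketch of the buffering suggestion above (the instance names audit and out, and the size limits, are illustrative assumptions):

```
<Processor buffer>
    Module     pm_buffer
    # Buffer events on disk; limits are in KB and illustrative
    Type       Disk
    MaxSize    102400
    WarnLimit  51200
</Processor>

<Route audit_route>
    Path       audit => buffer => out
</Route>
```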
nxlog.conf
1 <Input audit>
2 Module im_linuxaudit
3 FlowControl FALSE
4 <Rules>
5 # Delete all rules (This rule has no effect; it is performed
6 # automatically by im_linuxaudit)
7 -D
8
9 # Increase buffers from default 64
10 -b 320
11
12 # Watch /etc/passwd for modifications and tag with 'passwd'
13 -w /etc/passwd -p wa -k passwd
14
15 # Generate a log entry when the system time is changed
16 -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k system_time
17
18 # Lock Audit rules until reboot
19 -e 2
20 </Rules>
21 </Input>
This configuration is the same as the previous, but it uses a separate rules file. The referenced
audit.rules file is identical to the one shown in the above example, but it is stored in a different location
(because auditd is not required).
nxlog.conf
1 <Input audit>
2 Module im_linuxaudit
3 FlowControl FALSE
4 LoadRule '/opt/nxlog/etc/audit.rules'
5 </Input>
67.4. Using auditd Userspace
There are also several ways to collect Audit logs via the regular Audit userspace tools, including from auditd logs
and by network via audispd.
1. Install the Audit package. Include the audispd-plugins package if required for use with audispd (see the
Collecting via Network With audispd section below).
◦ For RedHat/CentOS:
◦ For Debian/Ubuntu:
2. Configure Auditd by editing the /etc/audit/auditd.conf configuration file, which contains parameters for
auditd. See the Configuring the Audit Service page in the Red Hat Enterprise Linux Security Guide and the
auditd.conf(5) man page.
3. After modifying the configuration or rules, enable or restart the auditd service to reload the configuration
and update the rules (if they are not locked).
◦ For RedHat/CentOS:
◦ For Debian/Ubuntu:
1. NXLog cannot read logs owned by root when running as the nxlog user. Either omit the User option in
nxlog.conf to run NXLog as root, or adjust the permissions as follows (see Reading Rsyslog Log Files for
more information about /var/log permissions):
a. use the log_group option in /etc/audit/auditd.conf to set the group ownership for Audit log files,
b. change the current ownership of the log directory and files with chgrp -R adm /var/log/audit, and
c. add the nxlog user to the adm group with usermod -a -G adm nxlog.
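The distribution-specific commands for steps 1 and 3, together with the permission adjustments above, can be sketched as follows (package names are the usual ones but may vary by release):

```
# Step 1: install the Audit packages
yum install audit audispd-plugins        # RedHat/CentOS
apt-get install auditd audispd-plugins   # Debian/Ubuntu

# Step 3: restart auditd to reload the configuration and rules
service auditd restart                   # RedHat/CentOS
systemctl restart auditd                 # Debian/Ubuntu

# Adjust permissions so the nxlog user can read the Audit logs
chgrp -R adm /var/log/audit
usermod -a -G adm nxlog
```

On some RedHat-based systems, auditd refuses manual stops via systemctl, so `service auditd restart` is required there.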
Example 304. Reading From audit.log
In the Input block of this configuration, Audit logs are read from file, the key-value pairs are parsed with
xm_kvp, and then some additional fields are added. In the Output block, the messages are converted to
JSON format, BSD Syslog headers are added, and the logs are sent to another host via TCP.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Extension audit_parser>
10 Module xm_kvp
11 KVPDelimiter ' '
12 KVDelimiter =
13 EscapeChar '\'
14 </Extension>
15
16 <Input in>
17 Module im_file
18 File "/var/log/audit/audit.log"
19 <Exec>
20 audit_parser->parse_kvp();
21 $Hostname = hostname();
22 $FQDN = hostname_fqdn();
23 $Tag = "audit";
24 $SourceName = "selinux";
25 </Exec>
26 </Input>
27
28 <Output out>
29 Module om_tcp
30 Host 192.168.1.1
31 Port 1514
32 Exec to_json(); to_syslog_bsd();
33 </Output>
1. Configure the audisp-remote plugin. Use appropriate values for the remote_server and format directives.
/etc/audisp/audisp-remote.conf
remote_server = 127.0.0.1
port = 60
transport = tcp
queue_file = /var/spool/audit/remote.log
mode = immediate
queue_depth = 2048
format = ascii
network_retry_time = 1
max_tries_per_record = 3
max_time_per_record = 5
heartbeat_timeout = 0
network_failure_action = stop
disk_low_action = ignore
disk_full_action = ignore
disk_error_action = syslog
remote_ending_action = reconnect
generic_error_action = syslog
generic_warning_action = syslog
overflow_action = syslog
3. Optionally, auditd may be configured to forward logs only (and not write to log files). Edit
/etc/audit/auditd.conf and set write_logs = no (this option replaces log_format = NOLOG).
With the following configuration, NXLog will accept Audit logs via TCP from audispd on the local host, parse
the key-value pairs with xm_kvp, and add some additional fields to the event record.
nxlog.conf
1 <Extension audit_parser>
2 Module xm_kvp
3 KVPDelimiter ' '
4 KVDelimiter =
5 EscapeChar '\'
6 </Extension>
7
8 <Input in>
9 Module im_tcp
10 Host 127.0.0.1
11 Port 60
12 <Exec>
13 audit_parser->parse_kvp();
14 $Hostname = hostname();
15 $FQDN = hostname_fqdn();
16 $Tag = "audit";
17 $SourceName = "auditd";
18 </Exec>
19 </Input>
Chapter 68. Linux System Logs
NXLog can be used to collect and process logs from a Linux system.
Linux distributions normally use a "Syslog" system logging agent to retrieve events from the kernel (/proc/kmsg)
and accept log messages from user-space applications (/dev/log). Originally, this logger was syslogd; later
syslog‑ng added additional features, and finally Rsyslog is the logger in common use today. For more
information about Syslog, see Syslog.
Many modern Linux distributions also use the Systemd init system, which includes a journal component for
handling log messages. All messages generated by Systemd-controlled processes are sent to the journal. The
journal also handles messages written to /dev/log. The journal stores logs in a binary format, either in memory
or on disk; the logs can be accessed with the journalctl tool. Systemd can also be configured to forward logs via
a socket to a local logger like Rsyslog or NXLog.
There are several ways that NXLog can be configured to collect Linux logs. See Replacing Rsyslog for details
about replacing Rsyslog altogether, handling all logs with NXLog instead. See Forwarding Messages via Socket for
a simple way to forward all logs to NXLog without disabling Rsyslog (this is the least intrusive option). Finally, it is
also possible to read the log files written by Rsyslog; see Reading Rsyslog Log Files.
1. Configure NXLog to collect events from the kernel, the Systemd journal socket, and the /dev/log socket. See
the example below.
2. Configure Systemd to forward log messages to a socket by enabling the ForwardToSyslog option.
/etc/systemd/journald.conf
[Journal]
ForwardToSyslog=yes
3. Stop and disable Rsyslog by running systemctl stop rsyslog and systemctl disable rsyslog as root.
4. Restart NXLog.
5. Reload the journald configuration by running systemctl force-reload systemd-journald.
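Steps 3–5 can be run as root in sequence:

```
systemctl stop rsyslog
systemctl disable rsyslog
systemctl restart nxlog
systemctl force-reload systemd-journald
```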
Example 306. Replacing Rsyslog With NXLog
This example configures NXLog to read kernel events with the im_kernel module, read daemon messages
from the Systemd journal socket with the im_uds module, and accept other user-space messages from the
/dev/log socket with im_uds. In the om_tcp module instance, all of the logs are converted to JSON format,
BSD Syslog headers are added, and the logs are forwarded to another host via TCP.
nxlog.conf (truncated)
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input kernel>
10 Module im_kernel
11 Exec parse_syslog_bsd();
12 </Input>
13
14 <Input journal>
15 Module im_uds
16 UDS /run/systemd/journal/syslog
17 Exec parse_syslog_bsd();
18 </Input>
19
20 <Input devlog>
21 Module im_uds
22 UDS /dev/log
23 FlowControl FALSE
24 Exec $raw_event =~ s/\s+$//; parse_syslog_bsd();
25 </Input>
26
27 <Output out>
28 Module om_tcp
29 [...]
NOTE  Some local Syslog sources will add a trailing newline (\n) to each log message. The
$raw_event =~ s/\s+$//; statement in the devlog input section above will automatically remove this and
any other trailing whitespace before processing the message.
By default, SELinux blocks communication via Unix domain sockets on CentOS 7. To enable
socket communication, a custom policy module (here named nxlog-fix.pp, generated for example with
audit2allow) must be loaded:
semodule -i nxlog-fix.pp
68.2. Forwarding Messages via Socket
The steps below describe how to configure Rsyslog to forward log messages to NXLog via a socket.
1. Configure NXLog to accept log messages from Rsyslog via a socket. See the example below.
2. Configure Rsyslog to write to the socket by adding the following configuration file. See the Rsyslog
documentation for more information about configuring what is forwarded to NXLog.
/etc/rsyslog.d/nxlog.conf
# Load omuxsock module
$ModLoad omuxsock
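Loading omuxsock alone does not forward anything; a socket path and a selector are also required. A minimal sketch of a complete file, assuming all messages should be forwarded (the socket path must match the UDS directive in the NXLog configuration below; omuxsock defaults may differ by rsyslog version):

```
# Load omuxsock module
$ModLoad omuxsock
# Write to the socket created by NXLog
$OMUxSockSocket /opt/nxlog/var/spool/nxlog/rsyslog_sock
# Forward all messages
*.* :omuxsock:
```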
3. Restart NXLog and Rsyslog in that order to create and use the socket (NXLog must create the socket before
Rsyslog will write to it). Run systemctl restart nxlog and systemctl restart rsyslog.
With this example configuration, NXLog will create the socket and accept log messages from Rsyslog
through the socket. The messages will then be parsed as Syslog, converted to JSON format, prefixed with a
BSD Syslog header, and forwarded to another host via TLS.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input in>
10 Module im_uds
11 UDS /opt/nxlog/var/spool/nxlog/rsyslog_sock
12 Exec parse_syslog();
13 </Input>
14
15 <Output out>
16 Module om_ssl
17 Host 192.168.1.1
18 Port 6514
19 CAFile %CERTDIR%/ca.pem
20 CertFile %CERTDIR%/client-cert.pem
21 CertKeyFile %CERTDIR%/client-key.pem
22 Exec $Message = to_json(); to_syslog_bsd();
23 </Output>
68.3. Reading Rsyslog Log Files
NXLog can be configured to read log messages written by Rsyslog, /var/log/messages for example. This is
a slightly more intrusive option than the steps given in Forwarding Messages via Socket.
NOTE  NXLog will not have access to the facility and severity codes because Rsyslog, by default, follows
the BSD Syslog convention of not writing the PRI code to the /var/log/messages file.
By default, NXLog runs as user nxlog and does not have permission to read files in /var/log. The simplest
solution for this is to run NXLog as root by omitting the User option, but it is more secure to provide the
necessary permissions explicitly.
1. Check the user or group ownership of the files in /var/log and configure if necessary. Some distributions
use a group for the log files by default. On Debian/Ubuntu, for example, Rsyslog is configured to use the adm
group. Otherwise, modify the Rsyslog configuration to use different ownership for log files as shown below.
/etc/rsyslog.conf or /etc/rsyslog.d/nxlog.conf
$FileOwner root
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
# Default on Debian/Ubuntu
$FileGroup adm
2. Run NXLog under a user or group that has permission to read the log files. Either use a user or group directly
with the User or Group option in nxlog.conf, or add the nxlog user to a group that has permission. For
example, on Debian/Ubuntu add the nxlog user to the adm group by running usermod -a -G adm nxlog.
3. If necessary, fix permissions for any files NXLog will be reading from that already exist (use the correct group
for your system).
4. Configure NXLog to read from the required file(s) (see the example below). Then restart NXLog.
5. If the Rsyslog configuration has been modified, restart Rsyslog (systemctl restart rsyslog).
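The permission fix in step 3 can be sketched as follows (the file names and the adm group are illustrative; use the correct group for your system):

```
chgrp adm /var/log/messages /var/log/syslog
chmod 640 /var/log/messages /var/log/syslog
```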
Example 308. Reading Rsyslog Log Files
With the following configuration, NXLog will read logs from /var/log/messages, parse the events as
Syslog, convert them to JSON, and forward the plain JSON to another host via TCP.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input in>
10 Module im_file
11 File '/var/log/messages'
12 Exec parse_syslog();
13 </Input>
14
15 <Output out>
16 Module om_tcp
17 Host 192.168.1.1
18 Port 1514
19 Exec $raw_event = to_json();
20 </Output>
Chapter 69. Log Event Extended Format (LEEF)
NXLog Enterprise Edition can be configured to collect or forward logs in the LEEF format.
The LEEF log format is used by IBM Security QRadar products and supports Syslog as a transport. It describes an
event using key-value pairs, and provides a list of predefined event attributes. Additional attributes can be used
for specific applications.
The LEEF header is a pipe-delimited (|) list of the following fields:
• LEEF version
• Vendor
• Product name
• Product version
• Event ID
• Optional delimiter character, as the character or its hexadecimal value prefixed by 0x or x (LEEF version 2.0)
The EVENT_ATTRIBUTES part contains a list of key-value pairs separated by a tab or the delimiter specified in the
LEEF header.
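Putting the pieces together, the input sample used later in this chapter shows a LEEF 2.0 record whose header fields appear in order (the optional delimiter field is omitted here, so the event attributes default to tab-separated, marked ⇥):

```
LEEF:2.0|Microsoft|MSExchange|2013 SP1|15345|src=10.50.1.1 ⇥ dst=2.10.20.20 ⇥ spt=1200
```

Here LEEF:2.0 is the LEEF version, Microsoft the vendor, MSExchange the product name, 2013 SP1 the product version, and 15345 the event ID; everything after the final pipe is the list of event attributes.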
Example 309. Accepting LEEF Logs via TCP
With the following configuration, NXLog will accept LEEF logs via TCP, convert them to JSON, and output the
result to file.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _leef>
6 Module xm_leef
7 </Extension>
8
9 <Input in>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 Exec parse_leef();
14 </Input>
15
16 <Output out>
17 Module om_file
18 File '/var/log/json'
19 Exec to_json();
20 </Output>
Input Sample
Oct 11 11:27:23 myserver LEEF:2.0|Microsoft|MSExchange|2013 SP1|15345|src=10.50.1.1 ⇥
dst=2.10.20.20 ⇥ spt=1200↵
Output Sample
{
"EventReceivedTime": "2016-10-11 11:27:24",
"SourceModuleName": "in",
"SourceModuleType": "im_tcp",
"Hostname": "myserver",
"LEEFVersion": "LEEF:2.0",
"Vendor": "Microsoft",
"SourceName": "MSExchange",
"Version": "2013 SP1",
"EventID": "15345"
}
Example 310. Sending LEEF Logs via TCP
With this configuration, NXLog will parse the input JSON format from file and forward it as LEEF via TCP.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _leef>
6 Module xm_leef
7 </Extension>
8
9 <Input in>
10 Module im_file
11 File '/var/log/json'
12 Exec parse_json();
13 </Input>
14
15 <Output out>
16 Module om_tcp
17 Host 10.12.0.1
18 Port 514
19 Exec to_leef();
20 </Output>
Input Sample
{
"EventTime": "2016-09-13 11:23:11",
"Hostname": "myserver",
"Purpose": "test",
"Message": "This is a test log message."
}
Output Sample
<13>Sep 13 11:23:11 myserver LEEF:1.0|NXLog|in|3.0.1775|unknown|EventReceivedTime=2016-09-13
11:23:12 ⇥ SourceModuleName=in ⇥ SourceModuleType=im_file ⇥ devTime=2016-09-13 11:23:11 ⇥
identHostName=myserver ⇥ Purpose=test ⇥ Message=This is a test log message. ⇥
devTimeFormat=yyyy-MM-dd HH:mm:ss↵
Chapter 70. McAfee Enterprise Security Manager
(ESM)
McAfee Enterprise Security Manager (ESM) is a security information and event management (SIEM) solution that
can collect logs from various sources and correlate events for investigation and incident response. For more
information, see McAfee Enterprise Security Manager on McAfee.com.
NXLog can be configured to collect events and forward them to ESM. This chapter provides information about
setting up NXLog to forward events from several types of log sources.
NOTE The instructions and examples in this chapter were tested with ESM 11.2.0.
1. Create or locate a certificate authority (CA) certificate and private key. The CA certificate (for example,
rootCA.pem) will be used by the NXLog agent to authenticate the ESM receiver in Forwarding Logs below.
2. Create a certificate and private key for ESM (for example, server.crt and server.key).
3. Upload the server.crt and server.key files to ESM (for more information, see Install SSL certificate on
McAfee.com):
a. On the McAfee web interface, open the menu in the upper left corner, click on System Properties, and
choose ESM Management in the left panel.
b. Open the Key Management tab and click Certificate.
c. Select Upload Certificate, click Upload, acknowledge the notification, and upload the certificate files.
4. When adding or editing a log source, check Require syslog TLS (see Adding a Log Source below).
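The certificates described in steps 1 and 2 can be generated with OpenSSL, as sketched below (subject names and validity period are illustrative):

```shell
# Step 1: create the CA key and self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout rootCA.key -out rootCA.pem -days 365 -subj "/CN=Example CA"

# Step 2: create the server key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr -subj "/CN=esm.example.com"

# Sign the server certificate with the CA
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key \
    -CAcreateserial -out server.crt -days 365
```

The resulting rootCA.pem is the file referenced by CAFile in the om_ssl output example later in this chapter, while server.crt and server.key are uploaded to ESM.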
70.1. Adding a Log Source
Log sources must be added as data sources on the ESM receiver.
1. On the McAfee web interface, open the menu in the upper left corner and click on More Settings.
2. Select the Local Receiver-ELM in the left panel and click on Add Data Source.
3. Choose a Data Source Vendor, Data Source Model, Data Format, and Data Retrieval. Consult the
sections below for the correct values to use for each log source type.
70.2. Sending Specific Log Types for ESM to Parse
To take full advantage of ESM’s log parsing and rules, NXLog can be configured to send log types in a format
expected by ESM. A few common log types are shown here.
Field Value
Data Source Vendor Microsoft
For more information, see DHCP Server Audit Logging and the Microsoft DHCP Server page in the McAfee ESM
Data Source Configuration Reference Guide.
In this example, NXLog is configured to read logs from the DhcpSrvLog and DhcpV6SrvLog log files. NXLog
then adds a Syslog header with xm_syslog to prepare the events for forwarding to ESM.
Input Sample
64,08/31/19,14:38:17,No static IP address bound to DHCP server,,,,,0,6,,,,,,,,,0↵
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input dhcp>
6 Module im_file
7 File 'C:\Windows\System32\dhcp\DhcpSrvLog-*.log'
8 File 'C:\Windows\System32\dhcp\DhcpV6SrvLog-*.log'
9 <Exec>
10 # Discard header lines
11 if $raw_event !~ /^\d+,/ drop();
12
13 # Add Syslog header
14 $Message = $raw_event;
15 to_syslog_bsd();
16 </Exec>
17 </Input>
Output Sample
<13>Aug 31 14:38:17 Host 64,08/31/19,14:38:17,No static IP address bound to DHCP
server,,,,,0,6,,,,,,,,,0↵
When adding an ESM data source, use the following parsing configuration (see Adding a Log Source):
Field Value
Data Source Vendor Microsoft
For more information, see Windows DNS Server and the Microsoft DNS Debug page in the McAfee ESM Data
Source Configuration Reference Guide.
The following configuration uses im_file to read from the Windows DNS debug log. A Syslog header is
added with the xm_syslog to_syslog_bsd() procedure.
Input Sample
8/31/2019 15:17:04 PM 2AE8 PACKET 00000005D03B4CE0 UDP Snd 192.168.1.42 fdd7 R Q [8081 DR
NOERROR] A (9)imap-mail(7)outlook(3)com(0)↵
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File 'C:\logs\dns.log'
8 <Exec>
9 # Discard header lines
10 if $raw_event !~ /^\d+\/\d+\/\d+/ drop();
11
12 # Add Syslog header
13 $Message = $raw_event;
14 to_syslog_bsd();
15 </Exec>
16 </Input>
Output Sample
<13>Aug 31 15:17:04 Host 8/31/2019 15:17:04 PM 2AE8 PACKET 00000005D03B4CE0 UDP Snd
192.168.1.42 fdd7 R Q [8081 DR NOERROR] A (9)imap-mail(7)outlook(3)com(0)↵
Field Value
Data Source Vendor Microsoft
For more information about collecting Windows Event Log, see the Windows Event Log chapter.
In this configuration, Windows Event Log data is collected from the Security channel with im_msvistalog and
converted to CEF with a Syslog header.
nxlog.conf
1 <Extension _cef>
2 Module xm_cef
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input eventlog>
10 Module im_msvistalog
11 Channel Security
12 <Exec>
13 $Message = to_cef();
14 to_syslog_bsd();
15 </Exec>
16 </Input>
Output Sample
<14>Sep 25 23:25:53 WINSERV Microsoft-Windows-Security-Auditing[568]:
CEF:0|NXLog|NXLog|4.99.5128|0|-|7|end=1569453953000 dvchost=WINSERV
Keywords=9232379236109516800 outcome=AUDIT_SUCCESS SeverityValue=2 Severity=INFO
externalId=4801 SourceName=Microsoft-Windows-Security-Auditing ProviderGuid={54849625-5478-
4994-A5BA-3E3B0328C30D} Version=0 TaskValue=12551 OpcodeValue=0 RecordNumber=395661
ActivityID={61774D29-73EB-0000-4B4D-7761EB73D501} ExecutionProcessID=568 ExecutionThreadID=3164
deviceFacility=Security msg=The workstation was unlocked.\r\n\r\nSubject:\r\n\tSecurity
ID:\t\tS-1-5-21-2262720663-2632382095-2856924348-500\r\n\tAccount
Name:\t\tAdministrator\r\n\tAccount Domain:\t\tWINSERV\r\n\tLogon ID:\t\t0x112FE1\r\n\tSession
ID:\t1 cat=Other Logon/Logoff Events Opcode=Info duid=S-1-5-21-2262720663-2632382095-
2856924348-500 duser=Administrator dntdom=WINSERV TargetLogonId=0x112fe1 SessionId=1
EventReceivedTime=1569453953949 SourceModuleName=eventlog SourceModuleType=im_msvistalog↵
Example 314. Forwarding Logs via TCP
This om_tcp instance sends logs to ESM via TCP. In this example, events are sent from the Windows Event
Log source.
nxlog.conf
1 <Output esm>
2 Module om_tcp
3 Host 10.10.1.10
4 Port 514
5 </Output>
6
7 <Route r>
8 Path eventlog => esm
9 </Route>
Forwarding logs with TLS requires adding a certificate to ESM and setting Require syslog TLS on the data
source(s), as described in the Set up TLS Transport section.
The om_ssl module is used here to send logs to ESM securely, with TLS encryption.
nxlog.conf
1 <Output esm>
2 Module om_ssl
3 Host 10.10.1.10
4 Port 6514
5 CAFile C:\Program Files\cert\rootCA.pem
6 </Output>
Chapter 71. McAfee ePolicy Orchestrator
McAfee® ePolicy Orchestrator® (McAfee® ePO™) enables centralized policy management and enforcement for
endpoints and enterprise security products. McAfee ePO monitors and manages the network, detecting threats
and protecting endpoints against these threats.
NXLog can be configured to collect events and audit logs from the ePO SQL databases.
NOTE  The instructions and examples in this section were tested with ePolicy Orchestrator 5.10.0 and
NXLog running on the same server.
NOTE  ePO must have the associated packages installed prior to collecting logs from these sources. For
example, VirusScan Enterprise or Host Intrusion Prevention Content must be installed.
ePO stores audit logs in the dbo.OrionAuditLog table in the SQL database. The following configuration uses
the im_odbc module to query dbo.OrionAuditLog for audit log events and formats them as JSON via xm_json.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input in>
6 Module im_odbc
7 ConnectionString DSN=MQIS;database=ePO_Host; \
8 uid=user;pwd=password;
9 IdType timestamp
10 # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
11 # record when reading from the database for the first time.
12 ReadFromLast TRUE
13 MaxIdSQL SELECT MAX(StartTime) AS maxid FROM dbo.OrionAuditLog
14 SQL SELECT StartTime as id,StartTime as EventTime, \
15 * FROM dbo.OrionAuditLog \
16 WHERE StartTime > CAST(? AS datetime)
17 Exec delete($id);to_json();
18 </Input>
Raw Audit Log Sample of a Successful Logon
EventTime: 2020-02-12 18:36:00↵
AutoId: 7↵
UserId: 1↵
UserName: admin↵
Priority: 3↵
CmdName: Logon Attempt↵
Message: Successful Logon for user "admin" from IP address: 10.0.0.4↵
Success: TRUE↵
StartTime: 2020-02-12 18:36:00↵
EndTime: 2020-02-12 18:36:00↵
RemoteAddress: 10.0.0.4↵
LocalAddress: 2001:0:34f1:8072:2c3a:3f1e:f5ff:fffb↵
TenantId: 1↵
DetailMessage: NULL↵
AdditionalDetailsURI: NULL↵
2020-02-12 18:37:28 McAfeeEPO INFO↵
id: 2020-02-12 18:37:28↵
The following configuration uses the im_odbc module to collect VirusScan events from the dbo.EPOEvents
SQL view. The AnalyzerName column identifies the product that generated the events, therefore the query
contains the conditional clause AnalyzerName LIKE 'VirusScan%'.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input in>
6 Module im_odbc
7 ConnectionString DSN=MQIS;database=ePO_Host; \
8 uid=user;pwd=password;
9 IdType timestamp
10 # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
11 # record when reading from the database for the first time.
12 #ReadFromLast TRUE
13 #MaxIdSQL SELECT MAX(ReceivedUTC) AS maxid FROM dbo.EPOEvents
14 SQL SELECT ReceivedUTC as id,ReceivedUTC as EventTime,AutoID,ServerID,\
15 AnalyzerName,AnalyzerHostName,\
16 dbo.RSDFN_ConvertIntToIPString \
17 (cast (AnalyzerIPV4 as varchar(15))) as 'IPv4',\
18 AnalyzerDetectionMethod,SourceHostName,\
19 dbo.RSDFN_ConvertIntToIPString \
20 (cast (SourceIPV4 as varchar(15))) as 'Source IPv4',\
21 SourceProcessName,TargetHostName,\
22 dbo.RSDFN_ConvertIntToIPString \
23 (cast (TargetIPV4 as varchar(15))) as 'Target IPv4',\
24 TargetUserName,TargetFileName,ThreatCategory,ThreatEventID,\
25 ThreatSeverity,ThreatName,ThreatType,ThreatActionTaken,TenantID\
26 FROM dbo.EPOEvents\
27 WHERE ReceivedUTC > CAST(? AS datetime) AND AnalyzerName LIKE 'VirusScan%'
28 Exec delete($id);to_json();
29 </Input>
71.3. Collecting Data Loss Prevention (DLP) Events
The McAfee Data Loss Prevention (DLP) Endpoint is a content-based agent solution to inspect user actions. It
scans data-in-use on endpoints, blocks transfer of sensitive data, and it can store its findings as evidence.
The configuration below uses the im_odbc module to collect Data Loss Prevention events from the
dbo.EPOEvents SQL view. The AnalyzerName column identifies the product that generated the events,
therefore the query contains the conditional clause AnalyzerName LIKE 'Data%'.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input in>
6 Module im_odbc
7 ConnectionString DSN=MQIS;database=ePO_Host; \
8 uid=user;pwd=password;
9 IdType timestamp
10 # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
11 # record when reading from the database for the first time.
12 #ReadFromLast TRUE
13 #MaxIdSQL SELECT MAX(ReceivedUTC) AS maxid FROM dbo.EPOEvents
14 SQL SELECT ReceivedUTC as id,ReceivedUTC as EventTime,AutoID,ServerID,\
15 AnalyzerName,AnalyzerHostName,\
16 dbo.RSDFN_ConvertIntToIPString \
17 (cast (AnalyzerIPV4 as varchar(15))) as 'IPv4',\
18 AnalyzerDetectionMethod,SourceHostName,\
19 dbo.RSDFN_ConvertIntToIPString \
20 (cast (SourceIPV4 as varchar(15))) as 'Source IPv4',\
21 SourceProcessName,TargetHostName,\
22 dbo.RSDFN_ConvertIntToIPString \
23 (cast (TargetIPV4 as varchar(15))) as 'Target IPv4',\
24 TargetUserName,TargetFileName,ThreatCategory,ThreatEventID,\
25 ThreatSeverity,ThreatName,ThreatType,ThreatActionTaken,TenantID\
26 FROM dbo.EPOEvents\
27 WHERE ReceivedUTC > CAST(? AS datetime) AND AnalyzerName LIKE 'Data%'
28 Exec delete($id);to_json();
29 </Input>
Chapter 72. Microsoft Active Directory Domain
Controller
Active Directory (AD) is a directory service developed by Microsoft for Windows domain networks. An AD domain
controller responds to security authentication requests within a Windows domain. Most domain controller
logging, especially for security related activity, is done via the Windows EventLog.
For a full list of Active Directory events that should be monitored, see Events to Monitor on Microsoft Docs.
Event ID  Description
4618      A monitored security event pattern has occurred.
4649      A replay attack was detected. May be a harmless false positive due to a misconfiguration error.
4794      An attempt was made to set the Directory Services Restore Mode.
Example 316. Collecting Active Directory Security Events
In this example, im_msvistalog is used to capture the most important security-related events on a Windows
Server 2012/2016 domain controller.
NOTE: The EventLog supports a limited number of Event IDs in a query. Due to this limitation, an Exec block is
used to match the required Event IDs rather than listing every Event ID in the query.
nxlog.conf (truncated)
1 define HighEventIds 4618, 4649, 4719, 4765, 4766, 4794, 4897, 4964, 5124, 1102
2
3 define MediumEventIds 4621, 4675, 4692, 4693, 4706, 4713, 4714, 4715, 4716, 4724, \
4 4727, 4735, 4737, 4739, 4754, 4755, 4764, 4764, 4780, 4816, \
5 4865, 4866, 4867, 4868, 4870, 4882, 4885, 4890, 4892, 4896, \
6 4906, 4907, 4908, 4912, 4960, 4961, 4962, 4963, 4965, 4976, \
7 4977, 4978, 4983, 4984, 5027, 5028, 5029, 5030, 5035, 5037, \
8 5038, 5120, 5121, 5122, 5123, 5376, 5377, 5453, 5480, 5483, \
9 5484, 5485, 6145, 6273, 6274, 6275, 6276, 6277, 6278, 6279, \
10 6280, 24586, 24592, 24593, 24594
11
12 define LowEventIds 4608, 4609, 4610, 4611, 4612, 4614, 4615, 4616, 4624, 4625, \
13 4634, 4647, 4648, 4656, 4657, 4658, 4660, 4661, 4662, 4663, \
14 4672, 4673, 4674, 4688, 4689, 4690, 4691, 4696, 4697, 4698, \
15 4699, 4700, 4701, 4702, 4704, 4705, 4707, 4717, 4718, 4720, \
16 4722, 4723, 4725, 4726, 4728, 4729, 4730, 4731, 4732, 4733, \
17 4734, 4738, 4740, 4741, 4742, 4743, 4744, 4745, 4746, 4747, \
18 4748, 4749, 4750, 4751, 4752, 4753, 4756, 4757, 4758, 4759, \
19 4760, 4761, 4762, 4767, 4768, 4769, 4770, 4771, 4772, 4774, \
20 4775, 4776, 4778, 4779, 4781, 4783, 4785, 4786, 4787, 4788, \
21 4789, 4790, 4869, 4871, 4872, 4873, 4874, 4875, 4876, 4877, \
22 4878, 4879, 4880, 4881, 4883, 4884, 4886, 4887, 4888, 4889, \
23 4891, 4893, 4894, 4895, 4898, 5136, 5137
24
25 <Input events>
26 Module im_msvistalog
27 <QueryXML>
28 <QueryList>
29 [...]
4. Go to Computer Configuration > Policies > Windows Settings > Security Settings > Advanced Audit
Policy Configuration > Audit Policies > DS Access.
5. Enable the four listed policies to provide access to security auditing events.
For more information on configuring the Advanced Security Auditing Policy, and descriptions of event IDs, please
view Step-By-Step: Enabling Advanced Security Audit Policy via DS Access on Microsoft TechNet.
Example 317. Collecting Auditing Policy events via im_msvistalog
Once security auditing has been enabled, the related events in the EventLog can be queried and collected
by NXLog with the im_msvistalog module. This configuration collects all Windows Security Auditing events
that have an Event Level of critical, warning, or error.
nxlog.conf
1 <Input SecurityAuditEvents>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0" Path="Security">
6 <Select Path="Security">*[System[Provider[@Name='Microsoft-Windows
7 -Security-Auditing'] and (Level=1 or Level=2 or Level=3) and
 8           ((EventID &gt;= 4928 and EventID &lt;= 4931) or (EventID &gt;= 4932 and EventID &lt;= 4937)
 9           or EventID=4662 or (EventID &gt;= 5136 and EventID &lt;= 5141))]]</Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 </Input>
For more information on troubleshooting domain controller promotions and installations, please view
Troubleshooting Domain Controller Deployment.
Example 318. Collecting dcpromo Log Messages via im_file
This configuration uses the im_file module to read from all dcpromo log files. Each event is parsed with a
regular expression, and then the timestamp is parsed with the parsedate() function.
Log Sample
10/02/2018 04:43:47 [INFO] Creating directory partition: CN=Configuration,DC=nxlog,DC=org; 1270
objects remaining↵
10/02/2018 04:43:47 [INFO] Creating directory partition: CN=Configuration,DC=nxlog,DC=org; 1269
objects remaining↵
10/02/2018 04:43:47 [INFO] Creating directory partition: CN=Configuration,DC=nxlog,DC=org; 1268
objects remaining↵
nxlog.conf
1 <Input dcpromo>
2 Module im_file
3 File "%systemroot%\debug\DCPROMO.log"
4 File "%systemroot%\debug\DCPROMO.*.log"
5 <Exec>
6 if $raw_event =~ /^(\S+ \S+) \[(\S+)\] (.+)$/
7 {
8 $EventTime = parsedate($1);
9 $Severity = $2;
10 $Message = $3;
11 }
12 </Exec>
13 </Input>
Chapter 73. Microsoft Azure
Azure is a Microsoft-hosted cloud computing service for building and deploying applications. It supports many
different programming languages, frameworks, and integrations.
NXLog can be set up to collect event data from Azure AD and Office 365 APIs. This functionality is available as an
add-on. See Microsoft Azure and Office 365 for more information.
NXLog can be configured to connect to the OMS Log Analytics service and forward or collect log data via its REST
API. See the Azure OMS and Log Analytics documentation for more information about configuring and using
Azure OMS and its log management service.
1. Log in to the Azure portal and go to the Log Analytics service (for instance by typing the service name into
the search bar).
2. Select an existing OMS Workspace or create a new one by clicking the Add button.
3. From the Management section in the main workspace screen, click OMS Portal.
4. In the Microsoft Operations Management Suite, click the settings icon in the top right corner, navigate to
Settings > Connected Sources > Linux Servers, and copy the WORKSPACE ID and PRIMARY KEY values.
These are needed for API access.
5. Enable Custom Logs. As of this writing it is a preview feature, available under Settings > Preview Features >
Custom Logs.
6. Place the oms-pipe.py script in a location accessible by NXLog and make sure it is executable by NXLog.
7. Set the customer ID, shared key, and log type values in the script.
8. Configure NXLog to execute the script with the om_exec module. The contents of the $raw_event field will
be forwarded.
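Behind the scenes, scripts like this one authenticate each POST to the Log Analytics HTTP Data Collector API with an HMAC-SHA256 signature over the request metadata. The following Python sketch illustrates the signing step only; the function and variable names are illustrative and are not taken from the oms-pipe.py script itself.

```python
import base64
import hashlib
import hmac


def build_signature(customer_id, shared_key, date, content_length,
                    method="POST", content_type="application/json",
                    resource="/api/logs"):
    """Build the Authorization header value for the Log Analytics
    HTTP Data Collector API (SharedKey scheme)."""
    # The string to sign covers the method, body length, content type,
    # request date, and resource path, each on its own line.
    string_to_sign = "{}\n{}\n{}\nx-ms-date:{}\n{}".format(
        method, content_length, content_type, date, resource)
    # The workspace key is base64-encoded; decode it before use.
    key = base64.b64decode(shared_key)
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return "SharedKey {}:{}".format(
        customer_id, base64.b64encode(digest).decode("ascii"))
```

The resulting value is sent in the Authorization header together with an x-ms-date header carrying the same RFC 1123 date.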
Example 319. Sending Raw Syslog Events
This configuration reads raw events from file and forwards them to Azure OMS.
nxlog.conf
1 <Input messages>
2 Module im_file
3 File '/var/log/messages'
4 </Input>
5
6 <Output azure_oms>
7 Module om_exec
8 Command oms-pipe.py
9 </Output>
oms-pipe.py (truncated)
#!/usr/bin/env python
# This is a PoC script that can be used with 'om_exec' NXLog module to
# ship logs to Microsoft Azure Cloud (Log Analytics / OMS) via REST API.
# NXLog configuration:
# -------------------
# <Output out>
# Module om_exec
# Command /tmp/samplepy
# </Output>
# -------------------
import requests
import datetime
import hashlib
import hmac
import base64
[...]
Example 320. Sending JSON Log Data
With this configuration, NXLog Enterprise Edition reads W3C records from file with im_file, parses the
records with xm_w3c, converts the internal event fields to JSON format with the xm_json to_json() procedure,
and forwards the result to Azure OMS with om_exec.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension w3c_parser>
6 Module xm_w3c
7 </Extension>
8
9 <Input messages>
10 Module im_file
11 File '/var/log/httpd-log'
12 InputType w3c_parser
13 </Input>
14
15 <Output azure_oms>
16 Module om_exec
17 Command oms-pipe.py
18 Exec to_json();
19 </Output>
1. Register an application in Azure Active Directory and generate an access key for the application.
2. Under your Subscription, go to Access control (IAM) and assign the Log Analytics Reader role to this
application.
3. Place the oms-download.py script in a location accessible by NXLog.
4. Set the resource group, workspace, subscription ID, tenant ID, application ID, and application key values in
the script. Adjust the query details as required.
NOTE: The Tenant ID can be found as Directory ID under the Azure Active Directory Properties tab.
Example 321. Collecting Logs From OMS
This configuration uses the im_python module and the oms-download.py script to periodically collect log
data from the Log Analytics service.
nxlog.conf
1 <Input oms>
2 Module im_python
3 PythonCode oms-download.py
4 </Input>
oms-download.py (truncated)
import datetime
import json
import requests
import adal
import nxlog
class LogReader:
Azure SQL database includes auditing features that can be used to generate events based on audit policies.
NXLog can be used as a collector for audit data from an Azure SQL Database instance.
NOTE: It is also possible to send SQL audit logs directly to OMS Log Analytics. This can be configured on the
Azure portal; see Get started with SQL database auditing on Microsoft Docs. In this case, see Azure Operations
Management Suite (OMS) for information about integrating NXLog with OMS Log Analytics.
To start with, auditing for an instance must be enabled; see Get started with SQL database auditing in the Azure
documentation for detailed steps. Once this is done, NXLog can be configured to periodically download the audit
logs using either PowerShell or Python.
• The script requires Microsoft.SqlServer.XE.Core.dll and Microsoft.SqlServer.XEvent.Linq.dll to
run. These libraries are distributed with Microsoft SQL Server installations (including the Express edition).
• Azure PowerShell needs to be installed as well; this can be done by executing Install-Module AzureRM
-AllowClobber in PowerShell. For detailed documentation about installing Azure PowerShell, see Install and
configure Azure PowerShell in the Azure documentation.
• There are several variables in the script header that need to be set.
NOTE: The procedure for non-interactive Azure authentication might vary, depending on the account type. This
example assumes that a service principal to access resources has been created. For detailed information about
creating an identity for unattended script execution, see Use Azure PowerShell to create a service principal
with a certificate in the Azure documentation. Alternatively, Save-AzureRmContext can be used to store account
information in a JSON file and it can be loaded later with Import-AzureRmContext.
Example 322. Collecting Azure SQL Audit Logs With PowerShell
This configuration uses im_exec to run the azure-sql.ps1 PowerShell script. The xm_json module is used
to parse the JSON event data into NXLog fields.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 envvar systemroot
6 <Input azure_sql>
7 Module im_exec
8 Command "%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe"
9 # Bypass the system execution policy for this session only.
10 Arg "-ExecutionPolicy"
11 Arg "Bypass"
12 # Skip loading the local PowerShell profile.
13 Arg "-NoProfile"
14 # This specifies the path to the PowerShell script.
15 Arg "-File"
16 Arg "%systemroot%\azure_sql.ps1"
17 <Exec>
18 # Parse JSON
19 parse_json();
20
21 # Convert $EventTime field to datetime
22 $EventTime = parsedate($event_time);
23 </Exec>
24 </Input>
azure-sql.ps1 (truncated)
# If running 32-bit on a 64-bit system, run 64-bit PowerShell instead.
if ( $env:PROCESSOR_ARCHITEW6432 -eq "AMD64" ) {
Write-Output "Running 64-bit PowerShell."
&"$env:SYSTEMROOT\SysNative\WindowsPowerShell\v1.0\powershell.exe" `
-NonInteractive -NoProfile -ExecutionPolicy Bypass `
-File "$($myInvocation.InvocationName)" $args
exit $LASTEXITCODE
}
################################################################################
• The script requires installation of the Microsoft ODBC Driver; see Installing the Microsoft ODBC Driver for
SQL Server on Linux and macOS on Microsoft Docs.
• The azure-storage and pyodbc Python packages are also required.
• There are several variables in the script header that need to be set.
This configuration uses the im_python module to execute the azure-sql.py Python script. The script logs
in to Azure, collects audit logs, and creates NXLog events.
nxlog.conf
1 <Input sql>
2 Module im_python
3 PythonCode azure_sql.py
4 Exec $EventTime = parsedate($EventTime);
5 </Input>
azure-sql.py (truncated)
import binascii, collections, datetime, nxlog, pyodbc
from azure.storage.blob import PageBlobService
################################################################################
# MSSQL details
DRIVER = "{ODBC Driver 13 for SQL Server}"
SERVER = 'tcp:XXXXXXXX.database.windows.net'
DATABASE = 'XXXXXXXX'
USERNAME = 'XXXXXXXX@XXXXXXXX'
PASSWORD = 'XXXXXXXX'
Chapter 74. Microsoft Exchange
Microsoft Exchange is a widely used enterprise-level email server running on Windows Server operating systems.
The following sections describe various logs generated by Exchange and provide solutions for collecting logs
from these sources with NXLog.
Exchange stores most of its operational logs in a comma-delimited format similar to W3C. These files can be read
with im_file and the xm_w3c extension module. For NXLog Community Edition, the xm_csv extension module can
be used instead, with the fields listed explicitly and the header lines skipped. In some of the log files, the W3C
header is prepended by an additional CSV header line enumerating the same fields as the #Fields directive;
NXLog must be configured to skip that line also. See the sections under Transport Logs for examples.
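The header-skipping logic used in those examples can be shown in isolation. This Python sketch (the sample lines below are hypothetical) implements the same filter as the Exec blocks later in this chapter: it drops the # directive lines and the duplicate CSV header line, including an optional UTF-8 BOM prefix.

```python
import re

# Drops W3C '#' directive lines and the duplicate CSV header line
# ("date-time,..."), optionally prefixed with a UTF-8 BOM (EF BB BF).
HEADER = re.compile(rb'^(\xef\xbb\xbf)?(date-time,|#)')


def data_lines(lines):
    """Return only the data records from a list of raw log lines (bytes)."""
    return [line for line in lines if not HEADER.match(line)]
```

This mirrors the regular expression `/^(\xEF\xBB\xBF)?(date-time,|#)/` used with drop() in the xm_csv-based configurations below.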
The information provided here is not intended to be comprehensive, but rather provides a general overview of
NXLog integration with some of the major log mechanisms used by Exchange. Other logs generated by Exchange
can be found in the Logging and other subdirectories of the installation directory.
NOTE: This Guide focuses on Exchange Server 2010 SP1 and later versions. Older versions are either not
supported by Microsoft or are being decommissioned. Apart from passing their end-of-life date, these versions
also lack the audit logging feature.
4. Click transport logs in the list on the left.
74.1.2. Message Tracking Logs
Message tracking logs provide a detailed record of message activity as mail flows through the transport pipeline
on an Exchange server.
Log Sample
#Software: Microsoft Exchange Server↵
#Version: 15.01.1034.026↵
#Log-type: Message Tracking Log↵
#Date: 2017-09-15T20:01:45.863Z↵
#Fields: date-time,client-ip,client-hostname,server-ip,server-hostname,source-context,connector-
id,source,event-id,internal-message-id,message-id,network-message-id,recipient-address,recipient-
status,total-bytes,recipient-count,related-recipient-address,reference,message-subject,sender-
address,return-path,message-info,directionality,tenant-id,original-client-ip,original-server-
ip,custom-data,transport-traffic-type,log-id,schema-version↵
2017-09-15T20:01:45.863Z,,,,WINEXC,No suitable shadow
servers,,SMTP,HAREDIRECTFAIL,34359738369,<49b4b9a2781a45cba555008075f7bffa@test.com>,8e1061b7-a376-
497c-3172-
08d4fc7497bf,test1@test.com,,6533,1,,,test,Administrator@test.com,Administrator@test.com,,Originatin
g,,,,S:DeliveryPriority=Normal;S:AccountForest=test.com,Email,63dc9d79-5b4e-4f6c-1358-
08d4fc7497c3,15.01.1034.026↵
NXLog can be configured to collect these logs with the im_file module, and to parse them with xm_w3c.
This configuration collects message tracking logs from the defined BASEDIR and parses them using the
xm_w3c module. The logs are then converted to JSON format and forwarded via TCP.
nxlog.conf
1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
2
3 <Extension _json>
4 Module xm_json
5 </Extension>
6
7 <Extension w3c_parser>
8 Module xm_w3c
9 Delimiter ,
10 </Extension>
11
12 <Input messagetracking>
13 Module im_file
14 File '%BASEDIR%\TransportRoles\Logs\MessageTracking\MSGTRK*.LOG'
15 InputType w3c_parser
16 </Input>
17
18 <Output tcp>
19 Module om_tcp
20 Host 10.0.0.1
21 Port 1514
22 Exec to_json();
23 </Output>
For NXLog Community Edition, the xm_csv module can be configured to parse these files.
Example 325. Using xm_csv for Message Tracking Logs
This configuration uses the xm_csv module to parse the message tracking logs.
nxlog.conf
1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
2
3 <Extension csv_parser>
4 Module xm_csv
5 Fields date-time, client-ip, client-hostname, server-ip, server-hostname, \
6 source-context, connector-id, source, event-id, \
7 internal-message-id, message-id, network-message-id, \
8 recipient-address, recipient-status, total-bytes, recipient-count, \
9 related-recipient-address, reference, message-subject, \
10 sender-address, return-path, message-info, directionality, \
11 tenant-id, original-client-ip, original-server-ip, custom-data, \
12 transport-traffic-type, log-id, schema-version
13 </Extension>
14
15 <Input messagetracking>
16 Module im_file
17 File '%BASEDIR%\TransportRoles\Logs\MessageTracking\MSGTRK*.LOG'
18 <Exec>
19 if $raw_event =~ /^(\xEF\xBB\xBF)?(date-time,|#)/ drop();
20 else
21 {
22 csv_parser->parse_csv();
23 $EventTime = parsedate(${date-time});
24 }
25 </Exec>
26 </Input>
Log Sample
#Software: Microsoft Exchange Server↵
#Version: 15.0.0.0↵
#Log-type: Transport Connectivity Log↵
#Date: 2017-09-15T03:09:34.541Z↵
#Fields: date-time,session,source,Destination,direction,description↵
2017-09-15T03:09:33.526Z,,Transport,,*,service started; #MaxConcurrentSubmissions=20;
MaxConcurrentDeliveries=20; MaxSmtpOutConnections=Unlimited↵
NXLog can be configured to collect these logs with the im_file module, and to parse them with xm_w3c.
Example 326. Collecting Connectivity Logs With xm_w3c
This configuration collects connectivity logs from the defined BASEDIR and parses them using the xm_w3c
module.
nxlog.conf
1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
2
3 <Extension w3c_parser>
4 Module xm_w3c
5 Delimiter ,
6 </Extension>
7
8 <Input connectivity>
9 Module im_file
10 File '%BASEDIR%\TransportRoles\Logs\Hub\Connectivity\CONNECTLOG*.LOG'
11 InputType w3c_parser
12 </Input>
For NXLog Community Edition, the xm_csv module can be configured to parse these files.
This configuration uses the xm_csv module to parse the connectivity logs.
nxlog.conf
1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
2
3 <Extension csv_parser>
4 Module xm_csv
5 Fields date-time, session, source, Destination, direction, description
6 </Extension>
7
8 <Input connectivity>
9 Module im_file
10 File '%BASEDIR%\TransportRoles\Logs\Hub\Connectivity\CONNECTLOG*.LOG'
11 <Exec>
12 if $raw_event =~ /^(\xEF\xBB\xBF)?(date-time,|#)/ drop();
13 else
14 {
15 csv_parser->parse_csv();
16 $EventTime = parsedate(${date-time});
17 }
18 </Exec>
19 </Input>
Log Sample
#Software: Microsoft Exchange Server↵
#Version: 15.0.0.0↵
#Log-type: SMTP Send Protocol Log↵
#Date: 2017-09-20T21:00:47.866Z↵
#Fields: date-time,connector-id,session-id,sequence-number,local-endpoint,remote-
endpoint,event,data,context↵
2017-09-20T21:00:47.167Z,internet,08D5006A392BE443,0,,64.8.70.48:25,*,,attempting to connect↵
NXLog can be configured to collect these logs with the im_file module, and to parse them with xm_w3c.
This configuration collects protocol logs from the defined BASEDIR and parses them using the xm_w3c
module.
nxlog.conf
1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
2
3 <Extension w3c_parser>
4 Module xm_w3c
5 Delimiter ,
6 </Extension>
7
8 <Input smtp_receive>
9 Module im_file
10 File '%BASEDIR%\TransportRoles\Logs\Hub\ProtocolLog\SmtpReceive\RECV*.LOG'
11 InputType w3c_parser
12 </Input>
13
14 <Input smtp_send>
15 Module im_file
16 File '%BASEDIR%\TransportRoles\Logs\Hub\ProtocolLog\SmtpSend\SEND*.LOG'
17 InputType w3c_parser
18 </Input>
For NXLog Community Edition, the xm_csv module can be configured to parse these files.
Example 329. Using xm_csv for Protocol Logs
This configuration uses the xm_csv module to parse the protocol logs.
nxlog.conf
1 define BASEDIR C:\Program Files\Microsoft\Exchange Server\V15
2
3 <Extension csv_parser>
4 Module xm_csv
5 Fields date-time, connector-id, session-id, sequence-number, \
6 local-endpoint, remote-endpoint, event, data, context
7 </Extension>
8
9 <Input smtp_receive>
10 Module im_file
11 File '%BASEDIR%\TransportRoles\Logs\Hub\ProtocolLog\SmtpReceive\RECV*.LOG'
12 <Exec>
13 if $raw_event =~ /^(\xEF\xBB\xBF)?(date-time,|#)/ drop();
14 else
15 {
16 csv_parser->parse_csv();
17 $EventTime = parsedate(${date-time});
18 }
19 </Exec>
20 </Input>
21
22 <Input smtp_send>
23 Module im_file
24 File '%BASEDIR%\TransportRoles\Logs\Hub\ProtocolLog\SmtpSend\SEND*.LOG'
25 <Exec>
26 if $raw_event =~ /^(\xEF\xBB\xBF)?(date-time,|#)/ drop();
27 else
28 {
29 csv_parser->parse_csv();
30 $EventTime = parsedate(${date-time});
31 }
32 </Exec>
33 </Input>
74.2. EventLog
Exchange Server also logs events to Windows EventLog. Events are logged to the Application and System
channels, as well as multiple Exchange-specific crimson channels (see your server’s Event Viewer). For more
information about events generated by Exchange, see the following TechNet articles.
See also Windows Event Log for more information about using NXLog to collect logs from Windows EventLog.
Example 330. Collecting Exchange Events From the EventLog
With this configuration, NXLog will use the im_msvistalog module to subscribe to the Application and
System channels (Critical, Error, and Warning event levels only) and the MSExchange Management crimson
channel (all event levels). Note that the Application and System channels will include other non-Exchange
events.
nxlog.conf
1 <Input eventlog>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0" Path="Application">
6 <Select Path="Application">
7 *[System[(Level=1 or Level=2 or Level=3)]]</Select>
8 <Select Path="System">
9 *[System[(Level=1 or Level=2 or Level=3)]]</Select>
10 <Select Path="MSExchange Management">*</Select>
11 </Query>
12 </QueryList>
13 </QueryXML>
14 </Input>
See the Microsoft IIS chapter for more information about collecting events from IIS with NXLog.
The nxlog-xchg utility can be used to retrieve these logs. See the Exchange (nxlog-xchg) add-on documentation.
Chapter 75. Microsoft IIS
Microsoft Internet Information Services (IIS) supports several logging formats. This chapter provides
information about configuring IIS logging and NXLog collection. The recommended W3C format is documented
below, as are the other supported IIS formats.
This chapter also includes sections about collecting logs from the SMTP Server and about Automatic Retrieval of
IIS Site Log Locations.
1. Open IIS Manager, which can be accessed from the Tools menu in the Server Manager or from
Administrative Tools.
2. In the Connections pane on the left, select the server or site for which to configure logging. Select a server
to configure logging server-wide, or a site to configure logging for that specific site.
3. Double-click the Logging icon in the center pane.
The resulting logs can be collected by NXLog as shown in the following sections.
Log Sample
#Software: Microsoft Internet Information Services 10.0↵
#Version: 1.0↵
#Date: 2017-10-02 17:11:27↵
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent)
cs(Referer) sc-status sc-substatus sc-win32-status time-taken↵
2017-10-02 17:11:27 fe80::b5d8:132c:cec9:daef%6 RPC_IN_DATA /rpc/rpcproxy.dll 1d4026cb-6730-43bf-
91eb-df80f41c050f@test.com:6001&CorrelationID=<empty>;&RequestId=11d6a78a-7c34-4f43-9400-
ad23b114aa62&cafeReqId=11d6a78a-7c34-4f43-9400-ad23b114aa62; 80 TEST\HealthMailbox418406e
fe80::b5d8:132c:cec9:daef%6 MSRPC - 500 0 0 7990↵
2017-10-02 17:12:57 fe80::a425:345a:7143:3b15%2 POST /powershell
clientApplication=ActiveMonitor;PSVersion=5.1.14393.1715 80 - fe80::a425:345a:7143:3b15%2
Microsoft+WinRM+Client - 500 0 0 11279↵
Note that field names with special characters must be referenced with curly braces (for example, ${s-ip} and
${cs(User-Agent)}).
See also the W3C Extended Log File Format section and the W3C Extended Log File Format (IIS 6.0) and W3C
Extended Log File Examples (IIS 6.0) articles on Microsoft TechNet.
This configuration reads from file with im_file and parses with xm_w3c.
nxlog.conf
1 <Extension w3c_parser>
2 Module xm_w3c
3 </Extension>
4
5 <Input iis_w3c>
6 Module im_file
7 File 'C:\inetpub\logs\LogFiles\W3SVC*\u_ex*.log'
8 InputType w3c_parser
9 </Input>
For NXLog Community Edition, the xm_csv module can be used instead for parsing the records.
Example 332. Collecting W3C Format Logs With xm_csv
This configuration parses the logs with the xm_csv module. The header lines are discarded and the $date
and $time fields are parsed in order to set an $EventTime field.
WARNING: The field list must be set according to the configured IIS fields. The fields shown here correspond
with the default field selection in IIS versions 8.5 and 10.
nxlog.conf
1 <Extension w3c_parser>
2 Module xm_csv
3 Fields date, time, s-ip, cs-method, cs-uri-stem, cs-uri-query, \
4 s-port, cs-username, c-ip, cs(User-Agent), cs(Referer), \
5 sc-status, sc-substatus, sc-win32-status, time-taken
6 FieldTypes string, string, string, string, string, string, integer, \
7 string, string, string, string, integer, integer, integer, \
8 integer
9 Delimiter ' '
10 EscapeChar '"'
11 QuoteChar '"'
12 EscapeControl FALSE
13 UndefValue -
14 </Extension>
15
16 <Input iis_w3c>
17 Module im_file
18 File 'C:\inetpub\logs\LogFiles\W3SVC*\u_ex*.log'
19 <Exec>
20 if $raw_event =~ /^#/ drop();
21 else
22 {
23 w3c_parser->parse_csv();
24 $EventTime = parsedate($date + "T" + $time + ".000Z");
25 }
26 </Exec>
27 </Input>
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters
For detailed information about this registry key’s specific values, please see Error logging in HTTP APIs on
Microsoft Support.
Log Sample
#Software: Microsoft HTTP API 2.0↵
#Version: 1.0↵
#Date: 2018-10-01 22:10:02↵
#Fields: date time c-ip c-port s-ip s-port cs-version cs-method cs-uri sc-status s-siteid s-reason
s-queuename↵
2018-10-01 22:10:02 ::1%0 49211 ::1%0 47001 - - - - - Timer_ConnectionIdle -↵
2018-10-01 22:10:02 ::1%0 49212 ::1%0 47001 - - - - - Timer_ConnectionIdle -↵
2018-10-01 23:45:09 172.31.77.6 2094 172.31.77.6 80 HTTP/1.1 GET /qos/1kbfile.txt 503 – ConnLimit↵
Example 333. Collecting IIS HTTP API Logs With xm_w3c
This configuration parses the logs with the xm_w3c module; the header directives and the date and time fields
are handled by the module automatically.
nxlog.conf
1 <Extension w3c_parser>
2 Module xm_w3c
3 </Extension>
4
5 <Input iis_http>
6 Module im_file
7 File 'C:\Windows\System32\LogFiles\HTTPERR\httperr1.log'
8 InputType w3c_parser
9 </Input>
NOTE: The xm_w3c module is not included in NXLog Community Edition, so the xm_csv module should be used
instead.
Log Sample
::1, HealthMailbox418406e8ac5b4b61a6b731ac4c660553@test.com, 9/28/2017, 14:49:00, W3SVC1, WINEXC,
::1, 7452, 592, 2538, 302, 0, POST, /OWA/auth.owa, &CorrelationID=<empty>;&cafeReqId=728beb5e-98de-
4680-acb2-45968bef533c;&encoding=;,
127.0.0.1, -, 9/28/2017, 14:49:01, W3SVC1, WINEXC, 127.0.0.1, 6798, 2502, 682, 302, 0, GET, /ecp/,
&CorrelationID=<empty>;&cafeReqId=0ed28871-4083-492f-99c2-
2fbdb06a9466;&LogoffReason=NoCookiesGetOrE14AuthPost,
Example 334. Collecting Logs From the IIS Format
This configuration reads from file with im_file and parses the fields with xm_csv. The $Date and $Time
fields are parsed in order to set an $EventTime field.
nxlog.conf
1 <Extension iis_parser>
2 Module xm_csv
3 Fields ClientIPAddress, UserName, Date, Time, ServiceAndInstance, \
4 ServerName, ServerIPAddress, TimeTaken, ClientBytesSent, \
5 ServerBytesSent, ServerStatusCode, WindowsStatusCode, RequestType, \
6 TargetOfOperation, Parameters
7 FieldTypes string, string, string, string, string, string, string, integer, \
8 integer, integer, integer, integer, string, string, string
9 UndefValue -
10 </Extension>
11
12 <Input iis>
13 Module im_file
14 File 'C:\inetpub\logs\LogFiles\W3SVC*\u_in*.log'
15 <Exec>
16 iis_parser->parse_csv();
17 $EventTime = strptime($Date + " " + $Time, "%m/%d/%Y %H:%M:%S");
18 </Exec>
19 </Input>
Log Sample
fe80::a425:345a:7143:3b15%2 - - [02/Oct/2017:13:16:18 -0700] "POST
/mapi/emsmdb/?useMailboxOfAuthenticatedUser=true HTTP/1.1" 401 7226
fe80::a425:345a:7143:3b15%2 - TEST\HealthMailboxc0bafd1 [02/Oct/2017:13:16:20 -0700] "POST
/mapi/emsmdb/?useMailboxOfAuthenticatedUser=true HTTP/1.1" 200 1482
Example 335. Collecting NCSA Format Logs
This configuration reads from file with the im_file module and uses a regular expression to parse each
record.
nxlog.conf
1 <Input iis_ncsa>
2 Module im_file
3 File 'C:\inetpub\logs\LogFiles\W3SVC*\u_nc*.log'
4 <Exec>
5 if $raw_event =~ /(?x)^(\S+)\ -\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
6 \ HTTP\/\d\.\d\"\ (\S+)\ (\S+)/
7 {
8 $RemoteHostAddress = $1;
9 if $2 != '-' $UserName = $2;
10 $EventTime = parsedate($3);
11 $HTTPMethod = $4;
12 $HTTPURL = $5;
13 $HTTPResponseStatus = $6;
14 $BytesSent = $7;
15 }
16 </Exec>
17 </Input>
WARNING: During operation, the IIS SMTP Server pads the W3C log to 64 KiB with NUL characters. When the SMTP
Server stops, it truncates the file to remove the padding, causing im_file to re-read the log file and
generate duplicate events.
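When post-processing a copy of the active log file outside NXLog, the padding can simply be trimmed before parsing. A minimal Python sketch, assuming the file contents have already been read into memory as bytes:

```python
def strip_nul_padding(chunk: bytes) -> bytes:
    """Remove the trailing NUL padding that the IIS SMTP Server
    appends to the active W3C log file."""
    return chunk.rstrip(b'\x00')
```

This only addresses offline processing; it does not prevent the duplicate-event behavior described in the warning above.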
1. Open Internet Information Services (IIS) 6.0 Manager from Administrative Tools.
2. Right click on the corresponding SMTP Virtual Server and click Properties.
3. Check Enable logging and choose the logging format from the Active log format drop-down menu. The
W3C format is recommended.
4. Click the [ Properties… ] button to configure the log location and other options.
5. If using the W3C format, adjust the logged fields under the Advanced tab. Include the Date and Time fields
and whatever extended properties are required.
Example 336. Collecting W3C Logs From the IIS SMTP Server
The following configuration retrieves W3C logs and parses them using the xm_w3c module.
nxlog.conf
1 <Extension w3c_parser>
2 Module xm_w3c
3 </Extension>
4
5 <Input smtp>
6 Module im_file
7 File 'C:\Windows\System32\LogFiles\SmtpSvc1\ex*.log'
8 InputType w3c_parser
9 </Input>
See the preceding sections for more information about processing the other log formats or using xm_csv for
processing W3C logs with NXLog Community Edition.
Example 337. Retrieving Log Locations via Script
The following polyglot script should be installed in the NXLog installation (or ROOT) directory. It uses the
WebAdministration PowerShell module to return the configured log path for each site. If IIS is configured to
use one log file per server, the path should instead be configured manually.
WARNING: If there are multiple log formats in the log directory due to configuration changes, the wildcard
path should be adjusted to match only those files that are in the corresponding format. For example, for W3C
logging use u_ex*.log in the last line of the script.
get_iis_log_paths.cmd
@( Set "_= (
Rem " ) <#
)
@Echo Off
SetLocal EnableExtensions DisableDelayedExpansion
if defined PROCESSOR_ARCHITEW6432 (
set powershell=%SystemRoot%\SysNative\WindowsPowerShell\v1.0\powershell.exe
) else (
set powershell=powershell.exe
)
%powershell% -ExecutionPolicy Bypass -NoProfile ^
-Command "iex ((gc '%~f0') -join [char]10)"
EndLocal & Exit /B %ErrorLevel%
#>
Import-Module -Name WebAdministration
foreach($Site in $(get-website)) {
$LogDir=$($Site.logFile.directory.replace("%SystemDrive%",$env:SystemDrive))
# WARNING: adjust path to match format (for example, for W3C use `u_ex*.log`).
Write-Output "File '$LogDir\W3SVC$($Site.id)\*.log'" }
nxlog.conf
1 <Extension w3c_parser>
2 Module xm_w3c
3 </Extension>
4
5 <Input iis>
6 Module im_file
7 include_stdout %ROOT%\get_iis_log_paths.cmd
8 InputType w3c_parser
9 </Input>
Chapter 76. Microsoft SharePoint
Microsoft SharePoint Server provides several types of logs, many of which are configurable. Logs are
written to files, databases, and the Windows EventLog. NXLog can be configured to collect these logs, as is shown
in the following sections.
See Monitoring and Reporting in SharePoint Server on TechNet for more information about SharePoint logging.
The trace log files are generated by and stored locally on each server running SharePoint in the farm, using file
names containing the server hostname and timestamp (HOSTNAME-YYYYMMDD-HHMM.log). SharePoint trace logs
are created at regular intervals and whenever there is an IISRESET. It is common for many trace logs to be
generated within a 24-hour period.
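For illustration, the file-name convention above can be parsed to recover the host and log-start time. This is a sketch assuming the HOSTNAME-YYYYMMDD-HHMM.log scheme described here; the function name is illustrative.

```python
import re
from datetime import datetime

# Matches trace log file names of the form HOSTNAME-YYYYMMDD-HHMM.log
NAME = re.compile(r'^(?P<host>.+)-(?P<stamp>\d{8}-\d{4})\.log$')


def parse_trace_log_name(filename):
    """Return (hostname, start time) for a ULS trace log file name,
    or None if the name does not follow the scheme."""
    m = NAME.match(filename)
    if m is None:
        return None
    return m.group('host'), datetime.strptime(m.group('stamp'), '%Y%m%d-%H%M')
```

A pattern like this can help group or prune trace logs by server and time window before feeding them to NXLog.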
If configured in the farm settings, each SharePoint server also writes trace logs to the logging database. These
logs are written by the Diagnostic Data Provider: Trace Log job. NXLog can be configured to collect these logs
from the logging database.
For more information about diagnostic logging, see Configure diagnostic logging in SharePoint Server on
TechNet.
The ULS log file contains the following fields.
As shown by the second and third events in the log sample above, long messages span multiple records. In this
case, the timestamp of each subsequent record is followed by an asterisk (*). However, trace log messages are
not guaranteed to appear consecutively within the trace log. See Writing to the Trace Log on MSDN.
2. In the Event Throttling section, use the checkboxes to select a set of categories or subcategories for which
to modify the logging level. Expand categories as necessary to view the corresponding subcategories.
3. Set the event log and trace log levels for the selected categories or subcategories.
WARNING Only select the verbose level for troubleshooting, as a large number of logs will be generated.
4. To set different levels for other categories or subcategories, click [ OK ] and repeat from step 1.
5. In the Trace Log section, adjust the trace log path and retention policy as required. The specified log location
must exist on all servers in the farm.
Further steps are required to enable writing trace logs to the logging database. For configuring the logging
database itself (server, name, and authentication), see the Configuring Usage Logging section.
1. Log in to Central Administration and go to Monitoring › Timer Jobs › Review job definitions.
Example 338. Reading the Trace Log Files
This configuration collects logs from the ULS trace log files and uses xm_csv to parse them. $EventTime
and $Hostname fields are added to the event record. Each event is converted to JSON format and written to
file.
NOTE The defined SHAREPOINT_LOGS path should be set to the trace log file directory configured in the Configuring Diagnostic Logging section.
nxlog.conf (truncated)
define SHAREPOINT_LOGS C:\Program Files\Common Files\microsoft shared\Web Server \
       Extensions\16\LOGS

<Extension json>
    Module  xm_json
</Extension>

<Extension uls_parser>
    Module     xm_csv
    Fields     Timestamp, Process, TID, Area, Category, EventID, Level, Message, \
               Correlation
    Delimiter  \t
</Extension>

<Input trace_file>
    Module  im_file
    # Use a file mask to read from ULS trace log files only
    File    '%SHAREPOINT_LOGS%\*-????????-????.log'
    <Exec>
        # Drop header lines and empty lines
        if $raw_event =~ /^(\xEF\xBB\xBF|Timestamp)/ drop();
        else
        {
            # Remove extra spaces
            $raw_event =~ s/ +(?=\t)//g;

            # Parse with uls_parser instance defined above
            uls_parser->parse_csv();
[...]
Output Sample
{
"EventReceivedTime": "2017-10-12 16:02:20",
"SourceModuleName": "uls",
"SourceModuleType": "im_file",
"Timestamp": "10/12/2017 16:02:18.30",
"Process": "hostcontrollerservice.exe (0x0948)",
"TID": "0x191C",
"Area": "SharePoint Foundation",
"Category": "Topology",
"EventID": "aup1c",
"Level": "Medium",
"Message": "Current app domain: hostcontrollerservice.exe (1)",
"EventTime": "2017-10-12 16:02:18",
"Hostname": "WIN-SHARE.test.com"
}
The im_odbc module can be used to collect diagnostic logs from the farm-wide logging database.
The following Input configuration collects logs from the ULSTraceLog view in the WSS_UsageApplication
database.
NOTE The datetime data type is not timezone-aware, and the timestamps are stored in UTC. Therefore, an offset is applied when setting the $EventTime field in the configuration below.
nxlog.conf
<Input trace_db>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 13 for SQL Server};\
                      SERVER=SHARESERVE1;DATABASE=WSS_UsageApplication;\
                      Trusted_Connection=yes
    IdType            timestamp

    # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
    # record when reading from the database for the first time.
    #ReadFromLast     TRUE
    #MaxIdSQL         SELECT MAX(LogTime) AS maxid FROM dbo.ULSTraceLog

    SQL               SELECT LogTime AS id, * FROM dbo.ULSTraceLog \
                      WHERE LogTime > CAST(? AS datetime)
    <Exec>
        # Set $EventTime with correct time zone, remove incorrect fields
        $EventTime = parsedate(strftime($id, '%Y-%m-%d %H:%M:%SZ'));
        delete($id);
        delete($LogTime);
    </Exec>
</Input>
See the Windows EventLog section below for an example configuration that reads events from the Windows
EventLog.
Log Sample
FarmId ⇥ UserLogin ⇥ SiteSubscriptionId ⇥ TimestampUtc ⇥ CorrelationId ⇥ Action ⇥ Target ⇥
Details↵
42319181-e881-44f1-b422-d7ab5f8b0117 ⇥ TEST\Administrator ⇥ 00000000-0000-0000-0000-000000000000 ⇥
2017-10-17 23:15:26.667 ⇥ 00000000-0000-0000-0000-000000000000 ⇥ Administration.Feature.Install ⇥
AccSrvRestrictedList ⇥ {"Id":"a4d4ee2c-a6cb-4191-ab0a-21bb5bde92fb"}↵
42319181-e881-44f1-b422-d7ab5f8b0117 ⇥ TEST\Administrator ⇥ 00000000-0000-0000-0000-000000000000 ⇥
2017-10-17 23:15:26.839 ⇥ 00000000-0000-0000-0000-000000000000 ⇥ Administration.Feature.Install ⇥
ExpirationWorkflow ⇥ {"Id":"c85e5759-f323-4efb-b548-443d2216efb5"}↵
76.2.1. Configuring Usage Logging
Usage and health data collection can be enabled and configured as follows. For more information about
configuring usage and health data logging, see Configure usage and health data collection in SharePoint Server
on TechNet.
WARNING The usage and health data collection settings are farm-wide.
1. Log in to Central Administration and go to Monitoring › Reporting › Configure usage and health data
collection.
2. In the Usage Data Collection section, check Enable usage data collection to enable it.
3. In the Event Selection section, use the checkboxes to select the required event categories. It is
recommended that only those categories be enabled for which regular reports are required.
4. In the Usage Data Collection Settings section, specify the path for the usage log files. The specified log
location must exist on all servers in the farm.
5. In the Health Data Collection section, check Enable health data collection to enable it. Click Health
Logging Schedule to edit the job definitions for the Microsoft SharePoint Foundation Timer service.
6. Click the Log Collection Schedule link to edit the job definitions for the Microsoft SharePoint Foundation
Usage service.
7. In the Logging Database Server section, adjust the authentication method as required. To change the
database server and name, see Log usage data in a different logging database by using Windows PowerShell
on TechNet.
76.2.2. Collecting Usage Logs
The xm_csv module can be used to parse the tab-delimited usage and health log files on the local server.
This configuration collects logs from the AdministrativeActions usage log file (see Using Administrative
Actions logging in SharePoint Server 2016 on TechNet) and uses xm_csv to parse them. $EventTime and
$Hostname fields are added to the event record. Each event is converted to JSON format and written to file.
NOTE The defined SHAREPOINT_LOGS path should be set to the trace log file directory configured in the Configuring Diagnostic Logging section.

NOTE Unlike the diagnostic/trace logs, the various usage/health data categories generate logs with differing field sets. Therefore it is not practical to parse multiple types of usage/health logs with a single xm_csv parser.
nxlog.conf (truncated)
define SHAREPOINT_LOGS C:\Program Files\Common Files\microsoft shared\Web Server \
       Extensions\16\LOGS

<Extension json>
    Module  xm_json
</Extension>

<Extension admin_actions_parser>
    Module     xm_csv
    Fields     FarmId, UserLogin, SiteSubscriptionId, TimestampUtc, \
               CorrelationId, Action, Target, Details
    Delimiter  \t
</Extension>

<Input admin_actions_file>
    Module  im_file
    # Use a file mask to read from the USAGE files only
    File    '%SHAREPOINT_LOGS%\AdministrativeActions\*.usage'
    <Exec>
        # Drop header lines and empty lines
        if $raw_event =~ /^(\xEF\xBB\xBF|FarmId)/ drop();
        else
        {
            # Parse with parser instance defined above
            admin_actions_parser->parse_csv();

            # Set $EventTime field
            $EventTime = parsedate($TimestampUtc + "Z");
[...]
Output Sample
{
"EventReceivedTime": "2017-10-17 20:46:14",
"SourceModuleName": "admin_actions",
"SourceModuleType": "im_file",
"FarmId": "42319181-e881-44f1-b422-d7ab5f8b0117",
"UserLogin": "TEST\\Administrator",
"SiteSubscriptionId": "00000000-0000-0000-0000-000000000000",
"TimestampUtc": "2017-10-17 23:15:26.667",
"CorrelationId": "00000000-0000-0000-0000-000000000000",
"Action": "Administration.Feature.Install",
"Target": "AccSrvRestrictedList",
"Details": {
"Id": "a4d4ee2c-a6cb-4191-ab0a-21bb5bde92fb"
},
"EventTime": "2017-10-17 16:15:26",
"Hostname": "WIN-SHARE.test.com"
}
The im_odbc module can be used to collect usage and health logs from the farm-wide logging database.
The following Input configuration collects Administrative Actions logs from the AdministrativeActions view
in the WSS_UsageApplication database.
NOTE The datetime data type is not timezone-aware, and the timestamps are stored in UTC. Therefore, an offset is applied when setting the $EventTime field in the configuration below.
nxlog.conf
<Input admin_actions_db>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 13 for SQL Server};\
                      SERVER=SHARESERVE1;DATABASE=WSS_UsageApplication;\
                      Trusted_Connection=yes
    IdType            timestamp

    # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
    # record when reading from the database for the first time.
    #ReadFromLast     TRUE
    #MaxIdSQL         SELECT MAX(LogTime) AS maxid FROM dbo.AdministrativeActions

    SQL               SELECT LogTime AS id, * FROM dbo.AdministrativeActions \
                      WHERE LogTime > CAST(? AS datetime)
    <Exec>
        # Set $EventTime with correct time zone, remove incorrect fields
        $EventTime = parsedate(strftime($id, '%Y-%m-%d %H:%M:%SZ'));
        delete($id);
        delete($LogTime);
    </Exec>
</Input>
See the Windows EventLog section for an example configuration that reads events from the Windows EventLog.
76.3. Audit Logs
SharePoint Information Management provides an audit feature that allows tracking of user actions on a site’s
content. The audit events are stored in the dbo.AuditData table in the WSS_Content database. The events can
be collected via the SharePoint API or by reading the database directly.
Audit logging is disabled by default, and can be enabled on a per-site basis. To enable audit logging, follow these
steps. For more details, see the Configure audit settings for a site collection article on Office Support.
4. On the Site Settings page, in the Site Collection Administration section, click Site collection audit
settings.
NOTE If the Site Collection Administration section is not shown, make sure you have adequate permissions.
5. Set audit log trimming settings, select the events to audit, and click [ OK ].
In order for NXLog to have SharePoint Shell access when running as a service, run the following PowerShell
commands. This will add the NT AUTHORITY\SYSTEM user to the SharePoint_Shell_Access role for the
SharePoint configuration database.
This configuration collects audit events via SharePoint’s API with the auditlog.ps1 PowerShell script. The
script also adds the following fields (performing lookups as required): $ItemName, $Message, $SiteURL, and
$UserName. Audit logs are collected from all available sites and the site list is updated each time the logs
are collected. See the options in the script header.
nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

envvar systemroot
<Input audit_powershell>
    Module   im_exec
    Command  "%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe"
    Arg      "-ExecutionPolicy"
    Arg      "Bypass"
    Arg      "-NoProfile"
    Arg      "-File"
    Arg      "C:\auditlog.ps1"
    <Exec>
        parse_json();
        $EventTime = parsedate($EventTime);
    </Exec>
</Input>
Event Sample
{
"EventReceivedTime": "2018-03-01 02:12:45",
"SourceModuleName": "audit_ps",
"SourceModuleType": "im_exec",
"UserID": 18,
"LocationType": 0,
"EventName": null,
"MachineName": null,
"ItemName": null,
"EventData": "<Version><AllVersions/></Version><Recycle>1</Recycle>",
"Event": 4,
"UserName": "i:0#.w|test\\test",
"SourceName": null,
"SiteURL": "http://win-share",
"EventTime": "2018-03-01 02:12:12",
"EventSource": 0,
"Message": "The audited object is deleted.",
"DocLocation": "Shared Documents/document.txt",
"ItemID": "48341996-7844-4842-bef6-94b43ace0582",
"SiteID": "51108732-0903-4721-aae7-0f9fb5aebfc2",
"MachineIP": null,
"AppPrincipalID": 0,
"ItemType": 1
}
auditlog.ps1 (truncated)
# This script can be used with NXLog to fetch Audit logs via the SharePoint
# API. See the configurable options below. Based on:
# <http://shokochino-sharepointexperience.blogspot.ch/2013/05/create-auditing-reports-in-sharepoint.html>
#Requires -Version 3
This configuration collects audit events from the AuditData table in the WSS_Content database.
NOTE The datetime data type is not timezone-aware, and the timestamps are stored in UTC. Therefore, an offset is applied when setting the $EventTime field in the configuration below.
nxlog.conf
<Input audit_db>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 13 for SQL Server}; \
                      Server=SHARESERVE1; Database=WSS_Content; \
                      Trusted_Connection=yes
    IdType            timestamp

    # With ReadFromLast and MaxIdSQL, NXLog will start reading from the last
    # record when reading from the database for the first time.
    #ReadFromLast     TRUE
    #MaxIdSQL         SELECT MAX(Occurred) AS maxid FROM dbo.AuditData

    SQL               SELECT Occurred AS id, * FROM dbo.AuditData \
                      WHERE Occurred > CAST(? AS datetime)
    <Exec>
        # Set $EventTime with correct time zone, remove incorrect fields
        $EventTime = parsedate(strftime($id, '%Y-%m-%d %H:%M:%SZ'));
        delete($id);
        delete($Occurred);
    </Exec>
</Input>
76.4. Windows EventLog
SharePoint will generate Windows event logs according to the diagnostic log levels configured (see the Diagnostic
Logs section). NXLog can be configured to collect logs from the Windows EventLog as shown below. For more
information about collecting Windows EventLog events with NXLog, see the Windows Event Log chapter.
This configuration uses the im_msvistalog module to collect all logs from four SharePoint crimson channels,
as well as Application and System channel events of Warning level or higher. The Application and System
channels will include other non-SharePoint events. There may be other SharePoint events generated which
will not be collected with this query, depending on the configuration and the channels used.
nxlog.conf
<Input eventlog>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0" Path="Application">
                <Select Path="Application">
                    *[System[(Level=1 or Level=2 or Level=3)]]</Select>
                <Select Path="System">
                    *[System[(Level=1 or Level=2 or Level=3)]]</Select>
                <Select Path="Microsoft-Office Server-Search/Operational">
                    *</Select>
                <Select Path="Microsoft-Office-EduServer Diagnostics">*</Select>
                <Select Path="Microsoft-SharePoint Products-Shared/Operational">
                    *</Select>
                <Select Path="Microsoft-SharePoint Products-Shared/Audit">*</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
See the Microsoft IIS chapter for more information about collecting events from IIS with NXLog.
Chapter 77. Microsoft SQL Server
NXLog can be integrated with SQL Server in several ways. The server error log file can be read and parsed. SQL
Server Auditing can be configured for a database and the logs collected. It is also possible to read logs from or
write logs to databases hosted by SQL Server. The last section provides some additional information about
setting up ODBC for connecting to a database.
This example uses an xm_charconv extension instance, with its LineReader directive, as the input reader
to convert the input to UTF-8 encoding. Events spanning multiple lines are joined, and each event is
parsed into $EventTime, $Source, and $Message fields.
nxlog.conf (truncated)
<Extension charconv>
    Module      xm_charconv
    LineReader  UTF-16LE
</Extension>

define ERRORLOG_EVENT /(?x)^(\xEF\xBB\xBF)? \
                      (?<EventTime>\d+-\d+-\d+\ \d+:\d+:\d+.\d+) \
                      \ (?<Source>\S+)\s+(?<Message>.+)$/s

<Input mssql_errorlog>
    Module     im_file
    File       'C:\Program Files\Microsoft SQL Server\' + \
               'MSSQL14.MSSQLSERVER\MSSQL\Log\ERRORLOG'
    InputType  charconv
    <Exec>
        # Attempt to match regular expression
        if $raw_event =~ %ERRORLOG_EVENT%
        {
            # Check if previous lines were saved
            if defined(get_var('saved'))
            {
                $tmp = $raw_event;
                $raw_event = get_var('saved');
                set_var('saved', $tmp);
                delete($tmp);
                # Process and send previous event
                $raw_event =~ %ERRORLOG_EVENT%;
                $EventTime = parsedate($EventTime);
            }
[...]
NOTE Because there is no closing/footer line for the events, a log message is kept in the buffers, and not forwarded, until a new log message is read.
Example 346. Reading From the SQL Server Error Log (NXLog Community Edition)
This example uses the xm_charconv module convert() function to convert the character set to UTF-8. For log
messages that span multiple lines, an event is created for each line. Variables are used to retain the same
$EventTime and $Source values for subsequent events in this case.
nxlog.conf (truncated)
<Extension _charconv>
    Module  xm_charconv
</Extension>

<Input mssql_errorlog>
    Module  im_file
    File    'C:\Program Files\Microsoft SQL Server\' + \
            'MSSQL14.MSSQLSERVER\MSSQL\Log\ERRORLOG'
    <Exec>
        # Convert character encoding
        $raw_event = convert($raw_event, 'UTF-16LE', 'UTF-8');
        # Discard empty lines
        if $raw_event == '' drop();
        # Attempt to match regular expression
        else if $raw_event =~ /(?x)^(?<EventTime>\d+-\d+-\d+\ \d+:\d+:\d+.\d+)
                              \ (?<Source>\S+)\s+(?<Message>.+)$/s
        {
            # Convert $EventTime field to datetime type
            $EventTime = parsedate($EventTime);
            # Save $EventTime and $Source; may be needed for next event
            set_var('last_EventTime', $EventTime);
            set_var('last_Source', $Source);
        }
        # If regular expression does not match, this is a multi-line event
        else
        {
            # Use the entire line for the $Message field
            $Message = $raw_event;
[...]
While in earlier versions these logs had to be generated by SQL Trace or a custom monitoring process, it is now
possible to start recording audit logs with a few clicks in Management Studio or a relatively simple SQL script.
The following instructions require a Microsoft SQL Server with auditing support and the Microsoft SQL
Management Studio. Consult the relevant documentation below to determine whether "Fine Grained Auditing" is
available for your SQL Server version and edition.
For more information, see SQL Server Audit (Database Engine) on Microsoft Docs.
2. Right-click on Audits and select New Audit. The Create Audit dialog box appears. Choose a name for
the audit object.
3. In the Audit destination drop-down list, choose Security log or File (for security reasons, Application
log is not recommended as a target). For File, enter a file path and configure log rotation.
4. Click OK. The Server Audit object is created. Note the red arrow next to the newly created object’s name
indicating this is a disabled object. To enable it, right-click on the audit object and select Enable audit (in
case of an error, see Checking SQL Audit Generation below).
SQL script
To instead create the Server Audit object via SQL, run the CREATE SERVER AUDIT and ALTER SERVER AUDIT
commands. For example:
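A minimal sketch follows; the audit name, file path, and size limits are examples, not values from this guide:

```sql
-- Create a Server Audit object that writes to binary audit files
-- (hypothetical name and path; adjust rotation settings as needed)
USE master;
CREATE SERVER AUDIT TestAudit
    TO FILE (FILEPATH = 'C:\audit_log\', MAXSIZE = 100 MB, MAX_ROLLOVER_FILES = 10);
-- Audits are created in the disabled state; enable explicitly
ALTER SERVER AUDIT TestAudit WITH (STATE = ON);
```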
2. Right-click on Server Audit Specifications and select New Audit. The Create Audit dialog box appears.
3. Choose a Server Audit object (the one defined earlier) and select the actions to be reported.
4. Click OK. The Server Audit Specification object is created. Note the red arrow next to the newly created
object’s name indicating this is a disabled object. To enable it, right-click on the audit object and select
Enable audit.
SQL script
Alternatively, use the CREATE SERVER AUDIT SPECIFICATION and ALTER SERVER AUDIT SPECIFICATION
commands. For example:
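A sketch, assuming a Server Audit named TestAudit already exists (the specification name and action groups are examples):

```sql
-- Attach a server-level specification to an existing Server Audit object
CREATE SERVER AUDIT SPECIFICATION TestAuditSpec
    FOR SERVER AUDIT TestAudit
    ADD (FAILED_LOGIN_GROUP),
    ADD (SUCCESSFUL_LOGIN_GROUP)
    WITH (STATE = ON);
```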
77.2.1.3. Creating a Database Audit Specification
GUI
In Management Studio, after connecting to the database instance:
2. Click on the plus (+) next to the database to be audited, then click on the plus (+) next to Security under
the database.
3. Right-click on Database Audit Specifications and select New Audit. The Create Audit dialog box
appears.
4. Choose a Server Audit object (the one defined earlier) and select the actions to be reported.
5. Click OK. The Database Audit Specification object is created.
SQL script
Alternatively, use the CREATE DATABASE AUDIT SPECIFICATION and ALTER DATABASE AUDIT
SPECIFICATION commands. For example:
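A sketch, assuming a Server Audit named TestAudit and a table dbo.test1 (all names are examples):

```sql
-- Audit SELECT and DML statements on a table, for all database users
USE TESTDB;
CREATE DATABASE AUDIT SPECIFICATION TestDbAuditSpec
    FOR SERVER AUDIT TestAudit
    ADD (SELECT, INSERT, UPDATE, DELETE ON dbo.test1 BY public)
    WITH (STATE = ON);
```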
EventLog
Check the Security and Application EventLogs to see if SQL Auditing is working properly. If there are no
related events in the Security log (though it was set as the destination), check the Application log too. Look for
event ID 33204 in the Application log indicating SQL Server’s failure to write to the Security log.
This is a registry-related permission error: the account running the SQL Server instance is unable to create an
entry under HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Security and fails with ID 33204.
1. Run regedit.
2. Grant Full Control permission for the account running the SQL server instance (for example, Network
Service or a named account) to HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Security.
3. Disable, then re-enable the Server Audit; this creates a sub-key, MSSQLSERVER$AUDIT.
4. Optionally, remove the Full Control permission that was just added. This permission is no longer
required now that the sub-key has been created.
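Step 3 above can also be performed with T-SQL; a sketch, assuming a Server Audit named TestAudit:

```sql
-- Toggle the Server Audit off and on to trigger creation of the sub-key
ALTER SERVER AUDIT TestAudit WITH (STATE = OFF);
ALTER SERVER AUDIT TestAudit WITH (STATE = ON);
```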
If the Server Audit object is configured with a Security log destination, the events can be read from the EventLog. If it is configured with a
File target, the events can be queried via ODBC.
Example 347. Reading Audit Events From the EventLog
In this example, events with ID 33205 are retrieved and some additional fields are parsed from $Message.
Sample Event
2011-11-11 11:00:00 sql2008-ent AUDIT_SUCCESS 33205 Audit event: event_time:2011-11-11
11:00:00.0000000↵
sequence_number:1↵
action_id:SL↵
succeeded:true↵
permission_bitmask:1↵
is_column_permission:true↵
session_id:57↵
server_principal_id:264↵
database_principal_id:1↵
target_server_principal_id:0↵
target_database_principal_id:0↵
object_id:2105058535↵
class_type:U↵
session_server_principal_name:SQL2008-ENT\myuser↵
server_principal_name:SQL2008-ENT\myuser↵
server_principal_sid:0105000000000002120000001aaaaaabbbbcccccddddeeeeffffffff↵
database_principal_name:dbo↵
target_server_principal_name:↵
target_server_principal_sid:↵
target_database_principal_name:↵
server_instance_name:SQL2008-ENT↵
database_name:logindb↵
schema_name:dbo↵
object_name:users↵
statement:select username nev from dbo.users;↵
additional_information:↵
nxlog.conf
<Input in>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0" Path="Security">
                <Select Path="Security">*[System[(EventID=33205)]]</Select>
            </Query>
        </QueryList>
    </QueryXML>
    <Exec>
        if $Message =~ /action_id:(.*)/ $ActionId = $1;
        if $Message =~ /session_server_principal_name:(.*)/ $SessionSPN = $1;
        if $Message =~ /database_principal_name:(.*)/ $DBPrincipal = $1;
        if $Message =~ /server_instance_name:(.*)/ $ServerInstance = $1;
        if $Message =~ /database_name:(.*)/ $DBName = $1;
        if $Message =~ /schema_name:(.*)/ $SchemaName = $1;
        if $Message =~ /object_name:(.*)/ $ObjectName = $1;
        if $Message =~ /statement:(.*)/ $Statement = $1;
    </Exec>
</Input>
77.2.3.2. Reading From the Audit File
The audit file is stored in a binary format and is read with the sys.fn_get_audit_file function. NXLog can be
configured to collect the audit logs via ODBC with the im_odbc module. For more information about ODBC (and
the ConnectionString directive), see the Setting up ODBC section.
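Before configuring NXLog, the function can be tested directly in Management Studio; a sketch, assuming the audit writes rollover files under C:\audit_log (the path is an example):

```sql
-- Return audit events from all rollover files at the given path
SELECT event_time, action_id, succeeded, server_principal_name, statement
FROM sys.fn_get_audit_file('C:\audit_log\Audit-*.sqlaudit', DEFAULT, DEFAULT)
ORDER BY event_time;
```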
The configuration below uses the im_odbc module to collect audit logs via ODBC. A corresponding name
for the action_id is included via a lookup performed on the sys.dm_audit_actions table (see Translating
the action_id Field below for more information).
NOTE This configuration has been tested with SQL Server 2017.
nxlog.conf
<Input in>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
                      Trusted_Connection=yes; DATABASE=TESTDB_doc81;
    PollInterval      5
    IdType            timestamp
    SQL               SELECT event_time AS 'id', f.*, a.name AS action_name \
                      FROM fn_get_audit_file('C:\audit_log\Audit-*.sqlaudit', default, \
                      default) AS f \
                      INNER JOIN sys.dm_audit_actions AS a \
                      ON f.action_id = a.action_id \
                      WHERE event_time > ?
    <Exec>
        delete($id);
        rename_field($event_time, $EventTime);
    </Exec>
</Input>
The SQL directive requires a SELECT statement for collecting logs. An id field must be returned, and must be used
to limit the results of the SELECT statement. Also, some data types may need special handling in order to be used
with NXLog. Continue to the following sections for more details.
The id field allows the module to keep track of new log records without collecting records more than once. In a simple scenario, the id is an auto-increment
integer field in a table, but several other data types are supported too (see the IdType directive). It is also
possible to generate the id field in the SELECT statement rather than using a field directly.
Writing a working SELECT statement for the SQL directive requires consideration of the id field in two ways.
1. The SELECT statement must return an id field. While there could be a field named id in a table, it is more
common to alias a field as id with the AS clause.
2. The SELECT statement must limit the results by including a WHERE clause. The WHERE clause should include
a question mark (?) which will be substituted with the highest value of the id that was previously seen by the
module instance.
The ways that the id can be generated are limited only by the database and the SQL language. However, the
following examples show the basic use of the int and datetime2 data types, as well as three which may require
special handling: datetimeoffset, datetime, and timestamp (or rowversion).
In this example, im_odbc collects logs from a table with an auto-increment (identity) int ID field.
Sample Table
CREATE TABLE dbo.test1 (
RecordID int IDENTITY(1,1) NOT NULL,
EventTime datetime2 NOT NULL,
Message varchar(100) NOT NULL,
)
nxlog.conf
<Input reading_integer_id>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
                      Trusted_Connection=yes; Database=TESTDB
    IdType            integer
    SQL               SELECT RecordID AS id, * FROM dbo.test1 WHERE RecordID > ?
    Exec              delete($id);
</Input>
Event Fields
{
"RecordID": 1,
"EventTime": "2017-12-31T23:00:00.000000Z",
"Message": "This is a test message",
"EventReceivedTime": "2018-04-01T10:40:54.313071Z",
"SourceModuleName": "reading_integer_id",
"SourceModuleType": "im_odbc"
}
Example 350. Reading Logs by datetime2 ID
This example shows a table with a datetime2 timestamp field, which im_odbc is configured to use as the id.
Sample Table
CREATE TABLE dbo.test1 (
EventTime datetime2 NOT NULL,
Message varchar(100) NOT NULL,
)
nxlog.conf
<Input reading_datetime2_id>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
                      Trusted_Connection=yes; Database=TESTDB
    IdType            timestamp
    SQL               SELECT EventTime AS id, * FROM dbo.test1 WHERE EventTime > ?
    Exec              delete($id);
</Input>
This example collects logs from a table with a datetimeoffset field used as the id. The datetimeoffset type
stores both a timestamp and an associated time-zone offset, and is not directly supported by im_odbc.
Thus, the CAST() function is used to convert the value to a datetime2 type.
Sample Table
CREATE TABLE dbo.test1 (
EventTime datetimeoffset NOT NULL,
Message varchar(100) NOT NULL,
)
nxlog.conf
<Input reading_datetimeoffset_id>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
                      Trusted_Connection=yes; Database=TESTDB
    IdType            timestamp
    SQL               SELECT CAST(EventTime AS datetime2) AS id, Message FROM dbo.test1 \
                      WHERE EventTime > ?
    Exec              delete($id);
</Input>
Example 352. Reading Logs by datetime ID
This example shows a table with a datetime type timestamp which will be used as the id. The datetime type
has been deprecated, and due to a change in the internal representation of datetime values in SQL Server,
some timestamp values (such as the one shown below) cannot be compared correctly without an explicit
casting in the WHERE clause. Without the CAST(), SQL Server may return certain records repeatedly (at
each PollInterval) until a later datetime value is added to the table.
Sample Table
CREATE TABLE dbo.test1 (
EventTime datetime NOT NULL,
Message varchar(100) NOT NULL,
)
nxlog.conf
<Input reading_datetime_id>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
                      Trusted_Connection=yes; Database=TESTDB
    IdType            timestamp
    SQL               SELECT EventTime AS id, * FROM dbo.test1 \
                      WHERE EventTime > CAST(? as datetime)
    Exec              delete($id);
</Input>
This example shows a table with a timestamp (or rowversion, see rowversion (Transact-SQL) on Microsoft
Docs) type field which is used as the id. Notice that the IdType directive is set to integer rather than
timestamp, because the timestamp type is not actually a timestamp.
Sample Table
CREATE TABLE dbo.test1 (
RowVersion timestamp NOT NULL,
Message varchar(100) NOT NULL,
)
nxlog.conf
<Input reading_rowversion_id>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
                      Trusted_Connection=yes; Database=TESTDB
    IdType            integer
    SQL               SELECT RowVersion AS id, * FROM dbo.test1 WHERE RowVersion > ?
    Exec              delete($id);
</Input>
77.3.2. Handling Unsupported Data Types
Some of SQL Server’s data types are not directly supported by im_odbc. If an im_odbc instance is configured to
read one of these types, it will log an unsupported odbc type error to the internal log. In this case, the CAST()
function should be used in the SELECT statement to convert the field to a type that im_odbc supports.
In this example, a datetimeoffset type field is read as two distinct fields: $EventTime for the timestamp value
and $TZOffset for the time-zone offset value (in minutes).
Sample Table
CREATE TABLE dbo.test1 (
RecordID int IDENTITY(1,1) NOT NULL,
LogTime datetimeoffset NOT NULL,
Message varchar(100) NOT NULL,
)
nxlog.conf
<Input reading_datetimeoffset>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
                      Trusted_Connection=yes; Database=TESTDB
    IdType            integer
    SQL               SELECT RecordID AS id, \
                      CAST(LogTime AS datetime2) AS EventTime, \
                      DATEPART(tz, LogTime) AS TZOffset, \
                      Message \
                      FROM dbo.test1 WHERE RecordID > ?
    Exec              rename_field($id, $RecordID);
</Input>
Event Fields
{
"RecordID": 1,
"EventTime": "2017-12-31T23:00:00.000000Z",
"TZOffset": 60,
"Message": "This is a test message",
"EventReceivedTime": "2018-04-01T10:40:54.313071Z",
"SourceModuleName": "odbcdrv17_in",
"SourceModuleType": "im_odbc"
}
Example 355. Writing Events to an SQL Server Database
The following configuration inserts records into the dbo.test1 table of the specified database. The
$EventTime and $Message fields in the event record are used for the EventTime and Message fields in the
table.
Sample Table
CREATE TABLE dbo.test1 (
RecordID int IDENTITY(1,1) NOT NULL,
EventTime datetime2 NOT NULL,
Message varchar(100) NOT NULL,
)
nxlog.conf
<Output mssql>
    Module            om_odbc
    ConnectionString  Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
                      Trusted_Connection=yes; Database=TESTDB
    SQL               "INSERT INTO dbo.test1 (EventTime, Message) VALUES (?,?)", \
                      $EventTime, $Message
</Output>
NOTE To use a DSN instead, consult either the ODBC Data Source Administrator and Data Source Wizard sections on Microsoft Docs (for Windows) or the unixODBC documentation (for Linux), in addition to the content below.
Connections to an SQL Server database can use either Windows Authentication (also called "trusted connection")
or SQL Server Authentication. For more information, see Choose an Authentication Mode on Microsoft Docs.
WARNING: When connecting to an SQL Server database with SQL Server Authentication, the connection
string stored in the NXLog configuration file will need to include UID and PWD keywords for username
and password, respectively (this is true for both DSN and DSN-less connections). Because these
credentials are stored in plain text, it is important to verify that the configuration file permissions
are set correctly. It is also possible to fetch the connection string from another file with the
include directive or via a script with include_stdout.
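For example, the connection string could be kept in a separate file with restrictive permissions and pulled in with the include directive. A sketch only — the file path and constant name below are hypothetical:

```
# Contents of /opt/nxlog/etc/mssql-secret.conf (restrict read access to the
# NXLog user), defining a constant that holds the credentials:
#
#   define MSSQL_CONNSTRING Driver={ODBC Driver 17 for SQL Server}; \
#                           Server=MSSQL-HOST; UID=test; PWD=testpass; \
#                           Database=TESTDB

# Main configuration:
include /opt/nxlog/etc/mssql-secret.conf

<Output mssql>
    Module            om_odbc
    ConnectionString  %MSSQL_CONNSTRING%
    SQL               "INSERT INTO dbo.test1 (EventTime, Message) VALUES (?,?)", \
                      $EventTime, $Message
</Output>
```

This keeps the credentials out of the main configuration file while leaving the module configuration unchanged.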
Example 356. Using ODBC Driver 17 for SQL Server With Windows Authentication
This example uses the "ODBC Driver 17 for SQL Server" driver to connect to the specified server and
database. Windows Authentication is used to authenticate (the Trusted_Connection keyword). The UID
and PWD keywords are not required in this case. The user account under which NXLog is running must have
permission to access the database.
nxlog.conf
1 <Input win_auth>
2 Module im_odbc
3 ConnectionString Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
4 Trusted_Connection=yes; Database=TESTDB
5 IdType integer
6 SQL SELECT RecordID AS id, * FROM dbo.test1 WHERE RecordID > ?
7 </Input>
Example 357. Using ODBC Driver 13 for SQL Server With SQL Server Authentication
This example uses the "ODBC Driver 13 for SQL Server" driver to connect to the specified server and
database. In this case, SQL Server Authentication is used to authenticate. The UID and PWD keywords must
be used to provide the SQL Server login account and password, respectively.
nxlog.conf
1 <Input sql_auth>
2 Module im_odbc
3 ConnectionString Driver={ODBC Driver 13 for SQL Server}; Server=MSSQL-HOST; \
4 UID=test; PWD=testpass; Database=TESTDB
5 IdType integer
6 SQL SELECT RecordID AS id, * FROM dbo.test1 WHERE RecordID > ?
7 </Input>
77.5.2. FreeTDS
It is also possible to use the FreeTDS driver on Linux.
For more information about using FreeTDS, see the FreeTDS User Guide.
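Before NXLog can reference Driver={FreeTDS}, the driver must be registered with unixODBC. A sketch of a typical odbcinst.ini entry — the library path varies by distribution (the one shown is typical for Debian-based systems):

```ini
[FreeTDS]
Description = FreeTDS driver for Microsoft SQL Server
Driver      = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
```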
Example 358. Using FreeTDS With SQL Server Authentication
This example uses the FreeTDS driver to connect to the specified server and database.
nxlog.conf
1 <Input freetds>
2 Module im_odbc
3 ConnectionString Driver={FreeTDS}; Server=MSSQL-HOST; Port=1433; UID=test; \
4 PWD=testpass; Database=TESTDB
5 IdType integer
6 SQL SELECT RecordID AS id, * FROM dbo.test1 WHERE RecordID > ?
7 </Input>
Chapter 78. Microsoft System Center Endpoint Protection
Microsoft System Center Endpoint Protection (SCEP) is an anti-virus and anti-malware product for Windows
environments that includes a Windows Firewall manager. SCEP (formerly called Forefront) is integrated into
System Center, an enterprise system management product composed of multiple modules that manages a
Windows-based enterprise IT environment. For more information, see the Endpoint Protection documentation
on Microsoft Docs.
Because the SCEP client logs events to Windows Event Log, it is possible to collect these events with NXLog.
Example 359. Collecting and Parsing Forefront (FCSAM) Events From Windows Event Log
This configuration uses the im_msvistalog module to collect FCSAM client events from Windows Event Log.
This will result in an $EventData field in the event record containing <Data> entries similar to the previous
example.
To extract values from the $EventData field, a regular expression is selected based on the event ID. Then
each <Data> entry is identified by a combination of its position in the list and a pattern match on its value.
For example, the <Data>1.5.1937.0</Data> portion of the EventData string is extracted and saved to the
NXLog $ClientVersion field.
This example includes regular expressions for parsing event IDs 3004, 3005, 5007, 5008, 1000, 1001, 1002,
1006, and 1007. Fields that are empty or otherwise do not contain useful data are skipped. The
configuration could be extended to parse other events logged by the FCSAM client by adding more regular
expressions, parsing multiple event IDs with a single expression, and/or dividing the parsing into multiple
expressions for a single event.
nxlog.conf (truncated)
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 define FCSAMEvents 3004, 3005, 5007, 5008, 1000, 1001, 1002, 1006, 1007
6
7 define EventID_3004_REGEX /(?x) \
8 <Data>(?<ClientVersion>(\d+\.\d+\.\d+\.\d+))<\/Data> \
9 <Data>(?<ScanID>(\{[\d\w\-]+\}))<\/Data> \
10 <Data>\d+<\/Data> \
11 <Data>\%\%\d{3}<\/Data> \
12 <Data><\/Data> \
13 <Data>(?<ProcessName>(\w{1}:\\.*\.exe))<\/Data> \
14 <Data>(?<Domain>([\w\d]+))<\/Data> \
15 <Data>(?<User>([\w\d]+))<\/Data> \
16 <Data>(?<SID>(S-[\d\-]+))<\/Data> \
17 <Data>(?<Filename>.*)<\/Data> \
18 <Data>(?<ID>(\d{9,11}))<\/Data> \
19 <Data>(?<SeverityID>(\d{1,2}))<\/Data> \
20 <Data>(?<CategoryID>(\d{1,3}))<\/Data> \
21 <Data>(?<FWLink>(http.*id=\d{10}))<\/Data> \
22 <Data>(?<PathFound>(file:\w{1}:.*\.\w{2,4}))<\/Data> \
23 <Data><\/Data> \
24 <Data><\/Data> \
25 <Data>\d+<\/Data> \
26 <Data>\%\%\d+<\/Data> \
27 <Data>\d+<\/Data> \
28 <Data>\%\%\d+<\/Data> \
29 [...]
Event Sample
{
"EventTime": "2019-01-11T12:19:22.000000+01:00",
"Hostname": "Host.DOMAIN.local",
"Keywords": "36028797018963968",
"EventType": "WARNING",
"SeverityValue": 3,
"Severity": "Severe",
"EventID": 3004,
"SourceName": "FCSAM",
"TaskValue": 0,
"RecordNumber": 11595,
"ExecutionProcessID": 0,
"ExecutionThreadID": 0,
"Channel": "System",
"Message": "Microsoft Forefront Client Security Real-Time Protection agent has detected
changes. Microsoft recommends you analyze the software that made these changes for potential
risks. You can use information about how these programs operate to choose whether to allow them
to run or remove them from your computer. Allow changes only if you trust the program or the
software publisher. Microsoft Forefront Client Security can't undo changes that you allow.\r\n
For more information please see the following:
\r\nhttp://go.microsoft.com/fwlink/?linkid=37020&name=EICAR_Test_File&threatid=2147519003\r\n
\tScan ID: {92224018-9446-4C2D-AFCB-EC4456B8859E}\r\n \tAgent: On Access\r\n \tUser: DOMAIN
\\admin\r\n \tName: EICAR_Test_File\r\n \tID: 2147519003\r\n \tSeverity: Severe\r\n \tCategory:
Virus\r\n \tPath Found: file:C:\\Users\\admin\\Downloads\\eicar.com(2).txt\r\n \tAlert Type:
\r\n \tProcess Name: C:\\Program Files\\Mozilla Firefox\\firefox.exe\r\n \tDetection Type:
Concrete\r\n \tStatus: Suspend",
"Opcode": "Info",
"EventData": "<Data>%%830</Data><Data>1.5.1937.0</Data><Data>{92224018-9446-4C2D-AFCB-
EC4456B8859E}</Data><Data>10</Data><Data>%%843</Data><Data></Data><Data>C:\\Program Files
\\Mozilla Firefox\\firefox.exe</Data><Data>DOMAIN</Data><Data>admin</Data><Data>S-1-5-21-
314323950-2314161084-4234690932-
1002</Data><Data>EICAR_Test_File</Data><Data>2147519003</Data><Data>5</Data><Data>42</Data><Dat
a>http://go.microsoft.com/fwlink/?linkid=37020&name=EICAR_Test_File&threatid=2147519003
</Data><Data>file:C:\\Users\\admin\\Downloads\\eicar.com(2).txt</Data><Data></Data><Data></Data
><Data>4</Data><Data>%%814</Data><Data>0</Data><Data>%%823</Data><Data></Data><Data></Data><Dat
a>Severe</Data><Data>Virus</Data><Data></Data><Data></Data>",
"EventReceivedTime": "2019-01-11T12:19:22.883100+01:00",
"SourceModuleName": "in",
"SourceModuleType": "im_msvistalog",
"Category": "Virus",
"CategoryID": "42",
"ClientVersion": "1.5.1937.0",
"FWLink":
"http://go.microsoft.com/fwlink/?linkid=37020&name=EICAR_Test_File&threatid=2147519003"
,
"Filename": "EICAR_Test_File",
"ID": "2147519003",
"PathFound": "file:C:\\Users\\admin\\Downloads\\eicar.com(2).txt",
"ProcessName": "C:\\Program Files\\Mozilla Firefox\\firefox.exe",
"SID": "S-1-5-21-314323950-2314161084-4234690932-1002",
"ScanID": "{92224018-9446-4C2D-AFCB-EC4456B8859E}",
"SeverityID": "5",
"User": "DOMAIN \\ admin"
}
78.2. Collecting and Parsing SCEP Data from Log Files
SCEP client log files are located in the %allusersprofile%\Microsoft\Microsoft Antimalware\Support
directory.
These log files contain various types of information, including:
• Definition updates
• Malware detections
• Monitoring alerts
The following configuration collects events from SCEP files with the im_file module. Logs are written in the
UTF-16LE character encoding, so the xm_charconv extension module is used to convert the input.
nxlog.conf (truncated)
1 <Extension charconv>
2 Module xm_charconv
3 LineReader UTF-16LE
4 </Extension>
5
6 <Extension _json>
7 Module xm_json
8 </Extension>
9
10 <Input Antimalware>
11 Module im_file
12 File 'C:\ProgramData\Microsoft\Microsoft Antimalware\Support\' + \
13 'MPDetection-*.log'
14 File 'C:\ProgramData\Microsoft\Microsoft Antimalware\Support\' + \
15 'MPLog-*.log'
16 File 'C:\ProgramData\Microsoft\Microsoft Security Client\Support\' + \
17 'EppSetup.log'
18 File 'C:\ProgramData\Microsoft\Microsoft Security Client\Support\' + \
19 'MSSecurityClient_Setup*.log'
20 ReadFromLast TRUE
21 InputType charconv
22 <Exec>
23 file_name() =~ /(?<Filename>[^\\]+)$/;
24 if $FileName =~ /MPLog|MPDetection/
25 if $raw_event =~ /(.*\.\d{3}Z)\s+(.*)/
26 {
27 $EventTime = $1;
28 [...]
Event Sample - EppSetup
{
"EventReceivedTime": "2019-06-16T14:39:07.127660+02:00",
"SourceModuleName": "Antimalware",
"SourceModuleType": "im_file",
"Filename": "EppSetup.log",
"Status": "SUCCESS",
"EventTime": "2019/05/31 19:12:05:782",
"TID": "4700",
"PID": "4692"
}
{
"EventReceivedTime": "2019-06-16T14:39:07.127660+02:00",
"SourceModuleName": "Antimalware",
"SourceModuleType": "im_file",
"Filename": "EppSetup.log",
"Message": "Setup ended successfully with result: The operation completed successfully."
}
The following configuration queries the SCCM database with the im_odbc module. This example contains
two SQL queries collecting Last Malware alerts and AV Detection alerts.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input last_malware>
6 Module im_odbc
7 ConnectionString DSN=SMS;database=CM_CND;uid=user;pwd=password;
8 IdType timestamp
9 SQL SELECT DetectionTime as id,* \
10 FROM vEP_LastMalware \
11 WHERE DetectionTime > CAST(? AS datetime)
12 Exec to_json();
13 </Input>
14
15 <Input av_detections>
16 Module im_odbc
17 ConnectionString DSN=SMS;database=CM_CND;uid=user;pwd=password;
18 IdType timestamp
19 SQL SELECT DetectionTime as id,* \
20 FROM v_GS_Threats \
21 INNER JOIN v_R_System \
22 ON v_GS_Threats.ResourceID=v_R_System.ResourceID \
23 WHERE DetectionTime > CAST(? AS datetime)
24 Exec to_json();
25 </Input>
Chapter 79. Microsoft System Center Configuration Manager
System Center Configuration Manager (SCCM) is a software management suite that enables administrators to
manage the deployment and security of devices, applications and operating system patches across a corporate
network. SCCM is part of the Microsoft System Center suite. NXLog can collect and forward the log data created
by SCCM.
SCCM stores log files in various locations depending on the process originator and system configuration.
SCCM enables logging for client and server components by default. NXLog can collect these events with the
im_file module.
Example 360. Configuration for File Based Logs
The following configuration uses the im_file module to collect the log files and parses the contents with
regular expressions to extract the fields. It defines two types of custom regular expressions for
extracting the appropriate fields.
nxlog.conf (truncated)
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 define type1 /(?x)^(?<Message>.*)\$\$\<\
6 (?<Component>.*)\>\<\
7 (?<EventTime>.*).\d{3}-\d{2}\>\<thread=\
8 (?<Thread>\d+)/s
9
10 define type2 /(?x)^\<\!\[LOG\[(?<Message>.*)\]LOG\]\!\>\<time=\"\
11 (?<Time>.*).\d{3}-\d{2}\"\s+date=\"\
12 (?<Date>.*)\"\s+component=\"\
13 (?<Component>.*)\"\s+context=\"\
14 (?<Context>.*)\"\s+type=\"\
15 (?<Type>.*)\"\s+thread=\"\
16 (?<Thread>.*)\"\s+file=\"\
17 (?<File>.*)\"\>/s
18
19
20 <Input in>
21 Module im_file
22 File 'C:\WINDOWS\SysWOW64\CCM\Logs\*'
23 File 'C:\WINDOWS\System32\CCM\Logs\*'
24 File 'C:\Program Files\Microsoft Configuration Manager\Logs\*'
25 File 'C:\Program Files\SMS_CCM\Logs\*'
26 <Exec>
27 if file_name() =~ /^.*\\(.*)$/ $Filename = $1;
28 if $raw_event =~ %type1%;
29 [...]
For this, an ODBC System Data Source needs to be configured either on the server running NXLog or on a
remote server, if you want to collect log data via ODBC remotely.
For more information, consult the relevant ODBC documentation: the Microsoft ODBC Data Source
Administrator guide or the unixODBC Project.
The configuration example below contains two im_odbc module instances to fetch data from the following two
views:
• V_SMS_Alert — lists information about built-in and user-created alerts, which might be displayed in the
SCCM console.
• V_StatMsgWithInsStrings — lists information about status messages returned by each SCCM component.
NOTE: SCCM provides an overview of audit related information in the Monitoring > Overview >
System Status > Status Message Queries list in the GUI. SCCM stores audit related information in the
V_StatMsgWithInsStrings view of the SQL database.
NOTE: Audit related messages are vital to track which accounts have modified or deleted settings in
the SCCM environment. These messages are purged from the database after 180 days.
Queries are based on the Microsoft System Center Configuration Manager Schema. For more information, see
the Status and alert views section in the SCCM documentation.
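The sccm_audit query in the example below selects all rows from the view on each run. If the view exposes a usable timestamp column, the query could instead be serialized so that NXLog resumes where it left off, as in the SCEP examples. A sketch only — the Time column name is an assumption and should be verified against the actual view schema:

```sql
-- Requires IdType timestamp on the im_odbc instance; the ? placeholder is
-- replaced with the last saved timestamp on each poll.
SELECT Time AS id, *
FROM v_StatMsgWithInsStrings
WHERE Time > CAST(? AS datetime)
```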
Example 361. Configuration with two SQL Queries and a Combined Output
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input sccm_alerts>
6 Module im_odbc
7 ConnectionString DSN=SMS SQL;database=CM_CND;uid=user;pwd=password;
8 SQL SELECT ID,TypeID,TypeInstanceID,Name,FeatureArea, \
9 ObjectWmiClass,Severity FROM V_SMS_Alert
10 </Input>
11
12 <Input sccm_audit>
13 Module im_odbc
14 ConnectionString DSN=SMS SQL;database=CM_CND;uid=user;pwd=password;
15 SQL SELECT * FROM v_StatMsgWithInsStrings
16 </Input>
17
18 <Output outfile>
19 Module om_file
20 File 'C:\logs\out.log'
21 Exec to_json();
22 </Output>
23
24 <Route sccm>
25 Path sccm_alerts, sccm_audit => outfile
26 </Route>
Chapter 80. Microsoft System Center Operations Manager
Microsoft System Center Operations Manager (SCOM) provides infrastructure monitoring across various services,
devices, and operations from a single console. The activities related to these systems are recorded in SCOM’s
databases, and these databases can be queried using SQL. The resulting data can be collected and forwarded by
NXLog.
Alert logs
Alerts are significant events generated by rules and monitors.
NOTE: The default retention time for resolved alerts and collected events is seven days, after which the
database entries are groomed. To configure database grooming settings, read the TechNet article
How to Configure Grooming Settings for the Operations Manager Database.
To collect this data, a few prerequisites must be met:
• Create a Windows/SQL account with read permissions for the Operations Manager database.
• Configure an ODBC 32-bit System Data Source on the server running NXLog. For more information, consult
the relevant ODBC documentation: the Microsoft ODBC Data Source Administrator guide or the unixODBC
Project.
• Set an appropriate firewall rule on the database server that accepts connections from the server running
NXLog. Open TCP port 1433, or whichever port SQL Server is configured to listen on. For further
information, read the Configure Firewall for Database Engine Access guide.
NXLog can then be configured with one or more im_odbc input modules, each with an SQL query that produces
the fields to be logged.
NOTE: The configured SQL query must contain a way to serialize the result set, enabling NXLog to resume
reading logs where it left off after a restart. This is easily achieved by using an auto-increment-like
solution or a timestamp field. See the example below.
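For a table with an auto-increment key, this serialization is straightforward; the table and column names below are hypothetical:

```sql
-- im_odbc substitutes the highest previously read id for the ? placeholder,
-- so only new rows are returned on each poll.
SELECT RecordID AS id, * FROM dbo.Events WHERE RecordID > ?
```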
This example queries the database for event logs and unresolved alert logs, then sends the results in JSON
format to a plain text file. Note the Exec directive in the scom_alerts input instance: it is used to
extract the content of the AlertParameters field, which is itself a composite (XML) structure. You should
define your own regular expressions to extract the data you are interested in from the alerts'
AlertParameters and Context fields and the events' EventData and EventParameters fields.
This example uses the DATEDIFF SQL function to generate a timestamp from an SQL datetime field with
millisecond precision. The timestamp is used to serialize the result set as required by NXLog. Starting with
SQL Server 2016, the DATEDIFF_BIG T-SQL function can be used instead (see DATEDIFF_BIG (Transact-SQL) at
MSDN).
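On SQL Server 2016 and later, the two-step DATEDIFF arithmetic shown below can be collapsed into a single call, because DATEDIFF_BIG returns a bigint and does not overflow on millisecond differences. A sketch using the same EventView columns as the example:

```sql
-- Milliseconds since the Unix epoch, used directly as the id value.
SELECT DATEDIFF_BIG(ms, '19700101', EV.TimeGenerated) AS id,
       EV.TimeGenerated AS 'EventTime'
FROM EventView EV
WHERE DATEDIFF_BIG(ms, '19700101', EV.TimeGenerated) > ?
```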
nxlog.conf (truncated)
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input scom_events>
6 Module im_odbc
7 ConnectionString DSN=scom;uid=username@mydomain.local;pwd=mypassword;\
8 database=OperationsManager
9 SQL SELECT CAST(DATEDIFF(minute, '19700101', CAST(EV.TimeGenerated AS DATE)) \
10 AS BIGINT) * 60000 + DATEDIFF(ms, '19000101', \
11 CAST(EV.TimeGenerated AS TIME)) AS 'id', \
12 EV.TimeGenerated AS 'EventTime', \
13 EV.TimeAdded AS 'EventAddedTime', \
14 EV.Number AS 'EventID', \
15 EV.MonitoringObjectDisplayName AS 'Source', \
16 R.DisplayName AS 'RuleName', \
17 EV.EventData, EV.EventParameters \
18 FROM EventView EV JOIN RuleView R WITH (NOLOCK) ON \
19 EV.RuleId = R.id \
20 WHERE CAST(DATEDIFF(minute, '19700101', CAST(EV.TimeGenerated \
21 AS DATE)) AS BIGINT) * 60000 + DATEDIFF(ms, '19000101', \
22 CAST(EV.TimeGenerated AS TIME)) > ?
23 PollInterval 30
24 IdIsTimeStamp FALSE
25 </Input>
26
27 <Input scom_alerts>
28 Module im_odbc
29 [...]
Output Sample (Alert Log)
{
"id": 1462887688220,
"Alert Name": "Failed to Connect to Computer",
"Category": "StateCollection",
"Alert Description": "The computer {0} was not accessible.",
"EventTime": "2016-05-10 13:41:28",
"EventAddedTime": "2016-05-10 13:41:28",
"Context": "<DataItem type=\"MonitorTaskDataType\" time=\"2016-05-10T15:41:28.1932994+02:00\"
sourceHealthServiceId=\"00000000-0000-0000-0000-000000000000\"><StateChange><DataItem time=
\"2016-05-10T15:41:25.5592943+02:00\" type=\"System.Health.MonitorStateChangeData\"
sourceHealthServiceId=\"D53BAD42-4C93-6634-E610-BDC3E38ABD5B\" MonitorExists=\"true\"
DependencyInstanceId=\"00000000-0000-0000-0000-000000000000\" DependencyMonitorId=\"00000000-
0000-0000-0000-000000000000\"><ManagedEntityId>CC7109D1-9177-090D-AC3A-
18781CFFF898</ManagedEntityId><EventOriginId>9B02AB65-FDB5-40AE-863F-
6FAD232E06F9</EventOriginId><MonitorId>B59F78CE-C42A-8995-F099-
E705DBB34FD4</MonitorId><ParentMonitorId>A6C69968-61AA-A6B9-DB6E-
83A0DA6110EA</ParentMonitorId><HealthState>3</HealthState><OldHealthState>1</OldHealthState><Ti
meChanged>2016-05-10T15:41:25.5592943+02:00</TimeChanged><Context><DataItem type=
\"System.Availability.StateData\" time=\"2016-05-10T15:41:25.5542835+02:00\"
sourceHealthServiceId=\"D53BAD42-4C93-6634-E610-BDC3E38ABD5B\"><ManagementGroupId>{1457194C-
D3B4-6685-5D3B-E4F7DAB158AD}</ManagementGroupId><HealthServiceId>72704AC7-4FDF-6006-1BB0-
C74868E173D5</HealthServiceId><HostName>member2012r2-01.nxlog.local</HostName><Reachability
ThruServer=\"false\"><State>0</State></Reachability></DataItem></Context></DataItem></StateChan
ge><Diagnostic><DataItem type=\"System.PropertyBagData\" time=\"2016-05-
10T15:41:25.6342865+02:00\" sourceHealthServiceId=\"D53BAD42-4C93-6634-E610-BDC3E38ABD5B
\"><Property Name=\"StatusCode\" VariantType=\"8\">11003</Property><Property Name=
\"ResponseTime\" VariantType=\"8\"></Property></DataItem></Diagnostic></DataItem>",
"AlertParameters": "<AlertParameters><AlertParameter1>member2012r2-
01.nxlog.local</AlertParameter1></AlertParameters>",
"EventReceivedTime": "2016-05-12 10:33:38",
"SourceModuleName": "in_scom_alerts",
"SourceModuleType": "im_odbc",
"AlertMessage": "member2012r2-01.nxlog.local"
}
Chapter 81. MongoDB
MongoDB is a document-oriented database system.
NXLog can be configured to collect data from a MongoDB database. A proof-of-concept Perl script is shown in
the example below.
This configuration uses im_perl to execute a Perl script that reads data from a MongoDB database. The
generated events are written to a file with om_file.
When new documents are available in the database, the script sorts them by ObjectId and processes them
sequentially. Each document is passed to NXLog by calling Log::Nxlog::add_input_data(). The script
polls the database continuously with Log::Nxlog::set_read_timer(). In the event that the MongoDB
server is unreachable, the timer delay is increased to attempt reconnection later.
NOTE: The Perl script shown here is a proof-of-concept only. The script must be modified to correspond
with the data to be collected from MongoDB.
nxlog.conf
1 <Input perl>
2 Module im_perl
3 PerlCode mongodb-input.pl
4 </Input>
5
6 <Output file>
7 Module om_file
8 File '/tmp/output.log'
9 </Output>
mongodb-input.pl (truncated)
#!/usr/bin/perl
use strict;
use warnings;
use FindBin;
use lib $FindBin::Bin;
use Log::Nxlog;
use MongoDB;
use Try::Tiny;
my $counter;
my $client;
my $collection;
my $cur;
my $count;
my $logfile;
[...]
For this example, a JSON data set of US ZIP (postal) codes was used. The data set was fed to MongoDB with
mongoimport -d zips -c zips --file zips.json.
Input Sample
{ "_id" : "01001", "city" : "AGAWAM", "loc" : [ -72.622739, 42.070206 ], "pop" : 15338, "state"
: "MA" }
{ "_id" : "01008", "city" : "BLANDFORD", "loc" : [ -72.936114, 42.182949 ], "pop" : 1240,
"state" : "MA" }
{ "_id" : "01010", "city" : "BRIMFIELD", "loc" : [ -72.188455, 42.116543 ], "pop" : 3706,
"state" : "MA" }
{ "_id" : "01011", "city" : "CHESTER", "loc" : [ -72.988761, 42.279421 ], "pop" : 1688, "state"
: "MA" }
{ "_id" : "01020", "city" : "CHICOPEE", "loc" : [ -72.576142, 42.176443 ], "pop" : 31495,
"state" : "MA" }
Output Sample
ID: 01001 City: AGAWAM Loc: -72.622739,42.070206 Pop: 15338 State: MA↵
ID: 01008 City: BLANDFORD Loc: -72.936114,42.182949 Pop: 1240 State: MA↵
ID: 01010 City: BRIMFIELD Loc: -72.188455,42.116543 Pop: 3706 State: MA↵
ID: 01011 City: CHESTER Loc: -72.988761,42.279421 Pop: 1688 State: MA↵
ID: 01020 City: CHICOPEE Loc: -72.576142,42.176443 Pop: 31495 State: MA↵
Chapter 82. Nagios Log Server
Nagios Log Server provides centralized management, monitoring, and analysis of logging data. It utilizes the ELK
(Elasticsearch, Logstash, and Kibana) stack. NXLog can be configured to send log data to Nagios Log Server
over TCP, UDP, or TLS/SSL.
By default, Nagios Log Server does not require any post-installation configuration, which means logs can be
received from NXLog right away.
To see the IP address and ports of the Nagios Log Server instance, open the Configure page and find the
Configuration Editor section.
The configuration below reads systemd messages using the im_systemd module and selects only those
entries that contain the string sshd. The selected messages are processed with the
xm_kvp module and converted to JSON using the xm_json module. Sending over TCP is carried out using
the om_tcp module.
1 <Extension kvp>
2 Module xm_kvp
3 KVDelimiter =
4 KVPDelimiter " "
5 </Extension>
6
7 <Extension json>
8 Module xm_json
9 </Extension>
10
11 <Input systemd>
12 Module im_systemd
13 ReadFromLast TRUE
14 Exec if not ($raw_event =~ /sshd/) drop();
15 </Input>
16
17 <Output out>
18 Module om_tcp
19 Host 192.168.31.179
20 Port 3515
21 <Exec>
22 kvp->parse_kvp();
23 to_json();
24 </Exec>
25 </Output>
Below is an event sample of a log message sent over TCP.
{
"Severity": "info",
"SeverityValue": 6,
"Facility": "syslog",
"FacilityValue": 4,
"Message": "Accepted password for administrator from 192.168.31.179 port 46534 ssh2",
"SourceName": "sshd",
"ProcessID": 3168,
"User": "root",
"Group": "root",
"ProcessName": "sshd",
"ProcessExecutable": "/usr/sbin/sshd",
"ProcessCmdLine": "sshd: administrator [priv]",
"Capabilities": "3fffffffff",
"SystemdCGroup": "/system.slice/ssh.service",
"SystemdUnit": "ssh.service",
"SystemdSlice": "system.slice",
"SelinuxContext": "unconfined\n",
"EventTime": "2020-03-25 18:59:53",
"BootID": "1eb2f28ae8064c7a954e2420be54a7f2",
"MachineID": "0823d4a95f464afeb0021a7e75a1b693",
"SysInvID": "984c8a16fd20462a9ac8c0682081979c",
"Hostname": "ubuntu",
"Transport": "syslog",
"EventReceivedTime": "2020-03-25T18:59:53.565177+00:00",
"SourceModuleName": "systemd",
"SourceModuleType": "im_systemd"
}
The configuration below reads Windows Event Log entries with the im_msvistalog module, selecting only
those entries with event IDs 4624 and 4625. The collected logs are then converted to JSON using the
xm_json module after the Message field is deleted from the entry. Sending over UDP is carried out using the
om_udp module.
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Input in_eventlog>
6 Module im_msvistalog
7 <QueryXML>
8 <QueryList>
9 <Query Id="0">
10 <Select Path="Security">
11 *[System[Level=0 and (EventID=4624 or EventID=4625)]]</Select>
12 </Query>
13 </QueryList>
14 </QueryXML>
15 <Exec>
16 delete($Message);
17 json->to_json();
18 </Exec>
19 </Input>
20
21 <Output out>
22 Module om_udp
23 Host 192.168.31.179
24 Port 5544
25 Exec to_json();
26 </Output>
Below is an event sample of a log message sent over UDP.
{
"EventTime": "2020-03-22T13:48:55.455545-07:00",
"Hostname": "WIN-IVR26CIVSF6",
"Keywords": "9232379236109516800",
"EventType": "AUDIT_SUCCESS",
"SeverityValue": 2,
"Severity": "INFO",
"EventID": 4624,
"SourceName": "Microsoft-Windows-Security-Auditing",
"ProviderGuid": "{54849625-5478-4994-A5BA-3E3B0328C30D}",
"Version": 2,
"TaskValue": 12544,
"OpcodeValue": 0,
"RecordNumber": 15033,
"ActivityID": "{CFEB8893-00D2-0000-E289-EBCFD200D601}",
"ExecutionProcessID": 532,
"ExecutionThreadID": 572,
"Channel": "Security",
"Category": "Logon",
"Opcode": "Info",
"SubjectUserSid": "S-1-5-18",
"SubjectUserName": "WIN-IVR26CIVSF6$",
"SubjectDomainName": "WORKGROUP",
"SubjectLogonId": "0x3e7",
"TargetUserSid": "S-1-5-90-0-6",
"TargetUserName": "DWM-6",
"TargetDomainName": "Window Manager",
"TargetLogonId": "0x1c8f13",
"LogonType": "2",
"LogonProcessName": "Advapi ",
"AuthenticationPackageName": "Negotiate",
"WorkstationName": "-",
"LogonGuid": "{00000000-0000-0000-0000-000000000000}",
"TransmittedServices": "-",
"LmPackageName": "-",
"KeyLength": "0",
"ProcessId": "0x848",
"ProcessName": "C:\\Windows\\System32\\winlogon.exe",
"IpAddress": "-",
"IpPort": "-",
"ImpersonationLevel": "%%1833",
"RestrictedAdminMode": "-",
"TargetOutboundUserName": "-",
"TargetOutboundDomainName": "-",
"VirtualAccount": "%%1842",
"TargetLinkedLogonId": "0x1c8f24",
"ElevatedToken": "%%1842",
"EventReceivedTime": "2020-03-22T13:48:56.870657-07:00",
"SourceModuleName": "in",
"SourceModuleType": "im_msvistalog"
}
Configuration of NXLog for sending logs over SSL/TLS is already described in the Sending NXLogs With SSL/TLS
section on the Nagios website.
To read more about encrypted transfer of data, see the Encrypted Transfer and TLS/SSL (om_ssl) chapters on
the NXLog website.
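As a sketch, the TCP output shown earlier could be switched to TLS by replacing om_tcp with om_ssl; the host, port, and certificate path below are placeholders:

```
<Output out_ssl>
    Module  om_ssl
    Host    192.168.31.179
    Port    6514
    CAFile  /opt/nxlog/cert/ca.pem
    Exec    to_json();
</Output>
```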
The Nagios website provides further examples of sending log data with NXLog:
• Configuring NXLog To Send Additional Log Files
• Configuring NXLog To Send Multi-Line Log Files
On the log source page, find the Verify Incoming Logs section, type in the IP address of the NXLog server,
and click the Verify button. The verification should show a number of log entries that have already been
accepted by the Log Server from the specified IP address.
To observe the collected entries, go to the Reports page and click the required IP address (hostname) in the
table.
The table with log entries will open. To expand information about the specified entry, click its line in the table.
Each entry contains structured information about its fields and values.
Chapter 83. Nessus Vulnerability Scanner
The results of a Nessus scan, saved as XML, can be collected and parsed with NXLog Enterprise Edition.
Scan Sample
<?xml version="1.0" ?>
<NessusClientData_v2>
<Report xmlns:cm="http://www.nessus.org/cm" name="Scan Testbed">
<ReportHost name="192.168.1.112">
<HostProperties>
<tag name="HOST_END">Wed Jun 18 04:20:45 2014</tag>
<tag name="patch-summary-total-cves">1</tag>
<tag name="traceroute-hop-1">?</tag>
<tag name="traceroute-hop-0">10.10.10.20</tag>
<tag name="operating-system">Linux Kernel</tag>
<tag name="host-ip">192.168.1.112</tag>
<tag name="HOST_START">Wed Jun 18 04:19:21 2014</tag>
</HostProperties>
<ReportItem port="6667" svc_name="irc" protocol="tcp" severity="0" pluginID="22964"
pluginName="Service Detection" pluginFamily="Service detection">
<description>It was possible to identify the remote service by its banner or by
looking at the error
message it sends when it receives an HTTP request.
</description>
<fname>find_service.nasl</fname>
<plugin_modification_date>2014/06/03</plugin_modification_date>
<plugin_name>Service Detection</plugin_name>
<plugin_publication_date>2007/08/19</plugin_publication_date>
<plugin_type>remote</plugin_type>
<risk_factor>None</risk_factor>
<script_version>$Revision: 1.137 $</script_version>
<solution>n/a</solution>
<synopsis>The remote service could be identified.</synopsis>
<plugin_output>An IRC server seems to be running on this port is running on this
port.</plugin_output>
</ReportItem>
</ReportHost>
</Report>
</NessusClientData_v2>
NOTE: While the above sample illustrates the correct syntax, it is not a complete Nessus report. For
more information refer to the Nessus v2 File Format document on tenable.com.
The preferred approach for parsing Nessus scans is with im_perl and a Perl script; this provides fine-grained
control over the collected information. If Perl is not available, the xm_multiline and xm_xml extension modules
can be used instead. Both methods require NXLog Enterprise Edition.
In this example, the im_perl input module executes the nessus.pl Perl script, which reads the Nessus scan.
The script generates an event for each ReportItem and includes details from Report and ReportHost in
each event. Furthermore, normalized $EventTime, $Severity, and $SeverityValue fields are added to
the event record.
nxlog.conf
1 <Input perl>
2 Module im_perl
3 PerlCode nessus.pl
4 </Input>
Event Sample
{
"EventTime": "2014-06-18 04:20:45",
"Report": "Scan Testbed",
"ReportHost": "192.168.1.112",
"port": "6667",
"svc_name": "irc",
"protocol": "tcp",
"NessusSeverityValue": 0,
"NessusSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"pluginID": "22964",
"pluginName": "Service Detection",
"pluginFamily": "Service detection",
"description": "It was possible to identify the remote service by its banner or by looking at
the error\nmessage it sends when it receives an HTTP request.\n",
"fname": "find_service.nasl",
"plugin_modification_date": "2014/06/03",
"plugin_name": "Service Detection",
"plugin_publication_date": "2007/08/19",
"plugin_type": "remote",
"risk_factor": "None",
"script_version": "$Revision: 1.137 $",
"solution": "n/a",
"synopsis": "The remote service could be identified.",
"plugin_output": "An IRC server seems to be running on this port is running on this port.",
"EventReceivedTime": "2017-11-29 20:29:40",
"SourceModuleName": "perl",
"SourceModuleType": "im_perl"
}
nessus.pl (truncated)
#!/usr/bin/perl
use strict;
use warnings;
use FindBin;
use lib $FindBin::Bin;
use Log::Nxlog;
use XML::LibXML;
sub read_data {
my $doc = XML::LibXML->load_xml( location => 'scan.nessus' );
my $report = $doc->findnodes('/NessusClientData_v2/Report');
my @nessus_sev = ("INFO","LOW","MEDIUM","HIGH","CRITICAL");
my @nxlog_sev_val = (2,3,4,5,5);
my @nxlog_sev = ("INFO","WARNING","ERROR","CRITICAL","CRITICAL");
my %mon2num = qw(
Jan 01 Feb 02 Mar 03 Apr 04 May 05 Jun 06
[...]
Example 367. Parsing Events With xm_multiline
This example depicts an alternative way to collect results from Nessus XML scan files, recommended only if
Perl is not available. This configuration generates an event for each ReportItem found in the scan report.
nxlog.conf
1 <Extension multiline_parser>
2 Module xm_multiline
3 HeaderLine /^<ReportItem/
4 EndLine /^<\/ReportItem>/
5 </Extension>
6
7 <Extension _xml>
8 Module xm_xml
9 ParseAttributes TRUE
10 </Extension>
11
12 <Input in>
13 Module im_file
14 File "nessus_report.xml"
15 InputType multiline_parser
16 <Exec>
17 # Discard everything that doesn't seem to be an xml event
18 if $raw_event !~ /^<ReportItem/ drop();
19
20 # Parse the xml event
21 parse_xml();
22 </Exec>
23 </Input>
Event Sample
{
"EventReceivedTime": "2017-11-09 10:22:58",
"SourceModuleName": "in",
"SourceModuleType": "im_file",
"ReportItem.port": "6667",
"ReportItem.svc_name": "irc",
"ReportItem.protocol": "tcp",
"ReportItem.severity": "0",
"ReportItem.pluginID": "22964",
"ReportItem.pluginName": "Service Detection",
"ReportItem.pluginFamily": "Service detection",
"ReportItem.description": "It was possible to identify the remote service by its banner or by
looking at the error\nmessage it sends when it receives an HTTP request.\n",
"ReportItem.fname": "find_service.nasl",
"ReportItem.plugin_modification_date": "2014/06/03",
"ReportItem.plugin_name": "Service Detection",
"ReportItem.plugin_publication_date": "2007/08/19",
"ReportItem.plugin_type": "remote",
"ReportItem.risk_factor": "None",
"ReportItem.script_version": "$Revision: 1.137 $",
"ReportItem.solution": "n/a",
"ReportItem.synopsis": "The remote service could be identified.",
"ReportItem.plugin_output": "An IRC server seems to be running on this port is running on
this port."
}
Chapter 84. NetApp
NetApp storage is capable of sending logs to a remote Syslog destination via UDP as well as saving audit logs
directly to a network share.
Log Sample
4/14/2017 15:40:25 p-netapp1 DEBUG repl.engine.error: replStatus="8",
replFailureMsg="5898503", replFailureMsgDetail="0", functionName="repl_util::Result
repl_core::Instance::endTransfer(spinnp_uuid_t*)", lineNumber="738"↵
For more details about configuring logging on NetApp storage, please refer to the Product Documentation
section of the NetApp Support site. Search for your ONTAP version, which can be determined by running
version -b from the command line.
> version -b
/cfcard/x86_64/freebsd/image1/kernel: OS 8.3.1P2
NOTE
The steps below have been tested with ONTAP 8 and should work for earlier versions. Exact commands for newer versions may vary.
1. Configure NXLog to receive log entries via UDP and process them as Syslog (see the examples below). Then
restart NXLog.
2. Make sure the NXLog agent is accessible from each member of the cluster.
3. Log in to the cluster address with SSH.
4. Run the following command to configure the Syslog destination. Replace NAME and IP_ADDRESS with the
required values. The default port for UDP is 514.
5. Now select the messages to be sent. Use the same NAME as in the previous step and set MSGS to the required
value.
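As a sketch of steps 4 and 5, the commands below follow clustered ONTAP 8 syntax; the command names and parameters are assumptions, so verify them against the documentation for your ONTAP version.

```shell
# Step 4: define the Syslog destination (replace NAME and IP_ADDRESS)
> event destination create -name NAME -syslog IP_ADDRESS

# Step 5: route the selected message types to that destination
> event route add-destinations -messagename MSGS -destinations NAME
```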
A list of messages can be obtained by running the command with a question mark (?) as the argument.
It is also possible to specify a severity level in addition to message types. The severity levels are EMERGENCY,
ALERT, CRITICAL, ERROR, WARNING, NOTICE, INFORMATIONAL, and DEBUG.
Example 369. Sending Messages at Informational Level to 192.168.6.143
The following commands send all messages with Informational severity level (including higher
severities) to 192.168.6.143 in Syslog format via UDP port 514.
This example shows NetApp Syslog logs as received and processed by NXLog.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/netapp.log"
19 Exec to_json();
20 </Output>
Output Sample
{
"MessageSourceAddress": "192.168.5.61",
"EventReceivedTime": "2017-04-14 15:38:58",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 0,
"SyslogFacility": "KERN",
"SyslogSeverityValue": 7,
"SyslogSeverity": "DEBUG",
"SeverityValue": 1,
"Severity": "DEBUG",
"Hostname": "192.168.5.61",
"EventTime": "2017-04-14 15:40:25",
"Message": "[p-netapp1:repl.engine.error:debug]: replStatus=\"8\", replFailureMsg=\"5898503
\", replFailureMsgDetail=\"0\", functionName=\"repl_util::Result
repl_core::Instance::endTransfer(spinnp_uuid_t*)\", lineNumber=\"738\""
}
Messages that contain key-value pairs, like the example at the beginning of the section, can be parsed with
the xm_kvp module to extract more fields if required.
nxlog.conf
1 <Output out>
2 Module om_null
3 </Output>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Extension kvp>
10 Module xm_kvp
11 KVPDelimiter ,
12 KVDelimiter =
13 EscapeChar \\
14 </Extension>
15
16 <Input in_syslog_udp>
17 Module im_udp
18 Host 0.0.0.0
19 Port 514
20 <Exec>
21 parse_syslog();
22 if $Message =~ /(?x)^\[([a-zA-Z0-9-]*):([a-zA-Z.-]*):([a-zA-Z-]*)\]:
23 \ ([a-zA-Z]+=.+)/
24 {
25 $NAUnit = $1;
26 $NAMsgName = $2;
27 $NAMsgSev = $3;
28 $NAMessage = $4;
29 kvp->parse_kvp($4);
30 }
31 </Exec>
32 </Input>
Output Sample
{
"MessageSourceAddress": "192.168.5.63",
"EventReceivedTime": "2017-04-15 23:13:45",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 0,
"SyslogFacility": "KERN",
"SyslogSeverityValue": 7,
"SyslogSeverity": "DEBUG",
"SeverityValue": 1,
"Severity": "DEBUG",
"Hostname": "192.168.5.63",
"EventTime": "2017-04-15 23:15:14",
"Message": "[p-netapp3:repl.engine.error:debug]: replStatus=\"5\", replFailureMsg=\"5898500
\", replFailureMsgDetail=\"0\", functionName=\"void
repl_volume::Query::_queryResponse(repl_spinnp::Request&, const spinnp_repl_result_t&,
repl_spinnp::Response*)\", lineNumber=\"149\"",
"NAUnit": "p-netapp3",
"NAMsgName": "repl.engine.error",
"NAMsgSev": "debug",
"NAMessage": "replStatus=\"5\", replFailureMsg=\"5898500\", replFailureMsgDetail=\"0\",
functionName=\"void repl_volume::Query::_queryResponse(repl_spinnp::Request&, const
spinnp_repl_result_t&, repl_spinnp::Response*)\", lineNumber=\"149\"",
"replStatus": "5",
"replFailureMsg": "5898500",
"replFailureMsgDetail": "0",
"functionName": "void repl_volume::Query::_queryResponse(repl_spinnp::Request&, const
spinnp_repl_result_t&, repl_spinnp::Response*)",
"lineNumber": "149"
}
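The key-value extraction that xm_kvp performs can be illustrated in isolation. This Python sketch splits an inner message using the delimiters configured above; it is a simplified equivalent that ignores escaping and delimiters inside quoted values.

```python
# Split a NetApp message body into key-value pairs, mirroring the
# KVPDelimiter (,) and KVDelimiter (=) settings of the kvp instance.
message = ('replStatus="8", replFailureMsg="5898503", '
           'replFailureMsgDetail="0", lineNumber="738"')

fields = {}
for pair in message.split(", "):
    key, _, value = pair.partition("=")
    fields[key] = value.strip('"')

print(fields["replStatus"], fields["lineNumber"])
```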
To accomplish this, create and enable an audit policy for each virtual server.
These commands set up an audit policy that sends logs to the specified share, rotates log files at 100 MB,
and retains the last 10 rotated log files.
Example 373. Reading Logs From a NetApp EventLog File
This example shows NetApp events as collected and processed by NXLog from an EventLog file.
nxlog.conf
1 <Input in_file_evt>
2 Module im_msvistalog
3 File C:\Temp\NXLog\audit_vs_p12_cifs_last.evtx
4 </Input>
5
6 <Output file_from_eventlog>
7 Module om_file
8 File "C:\Temp\evt.log"
9 Exec to_json();
10 </Output>
Output Sample
{
"EventTime": "2017-05-10 21:17:12",
"Hostname": "e3864b4d-8937-11e5-b812-00a098831757/bf4a40a5-9216-11e5-8d9a-00a098831757",
"Keywords": -9214364837600035000,
"EventType": "AUDIT_SUCCESS",
"SeverityValue": 2,
"Severity": "INFO",
"EventID": 4624,
"SourceName": "NetApp-Security-Auditing",
"ProviderGuid": "{3CB2A168-FE19-4A4E-BDAD-DCF422F13473}",
"Version": 101,
"OpcodeValue": 0,
"RecordNumber": 0,
"ProcessID": 0,
"ThreadID": 0,
"Channel": "Security",
"ERROR_EVT_UNRESOLVED": true,
"IpAddress' IPVersion='4": "192.168.17.151",
"IpPort": "49421",
"TargetUserSID": "S-1-5-21-4103495029-501085275-2219630704-2697",
"TargetUserName": "App_Service",
"TargetUserIsLocal": "false",
"TargetDomainName": "DOMAIN",
"AuthenticationPackageName": "KRB5",
"LogonType": "3",
"EventReceivedTime": "2017-05-10 22:33:00",
"SourceModuleName": "in_file_evt",
"SourceModuleType": "im_msvistalog"
}
Chapter 85. .NET Application Logs
NXLog can be used to capture logs directly from Microsoft .NET™ applications using third-party utilities. This
guide demonstrates how to set up these utilities with a sample .NET application and a corresponding NXLog
configuration.
This guide uses the SharpDevelop IDE, but Microsoft Visual Studio™ on Windows or MonoDevelop on Linux
could also be used. The log4net package and log4net.Ext.Json extension are also required.
NOTE
The following instructions were tested with SharpDevelop 5.1.0, .NET 4.5, log4net 2.0.5, and log4net.Ext.Json 1.2.15.14586. To use NuGet packages without the NuGet package manager, simply download the nupkg file using the "Download" link, add a .zip extension to the file name, and extract.
1. Create a new Solution in SharpDevelop by selecting File › New › Solution and choosing the Console
Application option. Enter a name and click [ Create ].
2. Place the log4net and log4net.Ext.Json DLL files in the bin\Debug directory of your project.
3. Select Project › Add Reference. Open the .NET Assembly Browser tab and click [ Browse ]. Add the two
DLL files so that they appear in the Selected References list, then click [ OK ].
4. Edit the AssemblyInfo.cs file (under Properties in the Projects sidebar) and add the following line.
5. Click the Refresh icon in the Projects sidebar to show all project files.
6. Create a file named App.config in the bin\Debug folder, open it for editing, and add the following code.
Update the remoteAddress value with the IP address (or hostname) of the NXLog instance.
App.config
<configuration>
<configSections>
<section name="log4net"
type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
</configSections>
<log4net>
<appender name="UdpAppender" type="log4net.Appender.UdpAppender">
<remoteAddress value="192.168.56.103" />
<remotePort value="514" />
<layout type="log4net.Layout.SerializedLayout, log4net.Ext.Json" />
</appender>
<root>
<level value="DEBUG"/>
<appender-ref ref="UdpAppender"/>
</root>
</log4net>
</configuration>
7. Edit the Program.cs file, and replace its contents with the following code. This loads the log4net module and
creates some sample log messages.
Program.cs
using System;
using log4net;
namespace demo
{
class Program
{
private static readonly log4net.ILog mylog = log4net.LogManager.GetLogger(typeof(
Program));
public static void Main(string[] args)
{
log4net.Config.BasicConfigurator.Configure();
mylog.Debug("This is a debug message");
mylog.Warn("This is a warn message");
mylog.Error("This is an error message");
mylog.Fatal("This is a fatal message");
Console.ReadLine();
}
}
}
8. Configure NXLog.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input in>
6 Module im_udp
7 Host 0.0.0.0
8 Port 514
9 <Exec>
10 $raw_event =~ s/\s+$//;
11
12 # Parse JSON into fields for later processing if required
13 parse_json();
14 </Exec>
15 </Input>
16
17 <Output out>
18 Module om_file
19 File "/tmp/output"
20 </Output>
21
22 <Route r>
23 Path in => out
24 </Route>
9. In SharpDevelop, press the F5 key to build and run the application. The following output should appear.
Demo Output
4301 [1] DEBUG demo.Program (null) - This is a debug message↵
4424 [1] WARN demo.Program (null) - This is a warn message↵
4425 [1] ERROR demo.Program (null) - This is an error message↵
4426 [1] FATAL demo.Program (null) - This is a fatal message↵
10. Examine the /tmp/output file. It should show the sample log entries produced by the .NET application.
NXLog Output
{"date":"2014-03-
19T09:41:08.7231787+01:00","Level":"DEBUG","AppDomain":"demo.exe","Logger":"demo.Program","Threa
d":"1","Message":"This is a debug message","Exception":""}↵
{"date":"2014-03-
19T09:41:08.8456254+01:00","Level":"WARN","AppDomain":"demo.exe","Logger":"demo.Program","Thread
":"1","Message":"This is a warn message","Exception":""}↵
{"date":"2014-03-
19T09:41:08.8466327+01:00","Level":"ERROR","AppDomain":"demo.exe","Logger":"demo.Program","Threa
d":"1","Message":"This is an error message","Exception":""}↵
{"date":"2014-03-
19T09:41:08.8476223+01:00","Level":"FATAL","AppDomain":"demo.exe","Logger":"demo.Program","Threa
d":"1","Message":"This is a fatal message","Exception":""}↵
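Each line written by this configuration is a standalone JSON document, so downstream tooling can consume it directly. As a brief illustration, this Python sketch reads one of the lines above (the content is taken verbatim from the output sample):

```python
import json

# One event as emitted by the om_file output above
line = ('{"date":"2014-03-19T09:41:08.7231787+01:00","Level":"DEBUG",'
        '"AppDomain":"demo.exe","Logger":"demo.Program","Thread":"1",'
        '"Message":"This is a debug message","Exception":""}')

entry = json.loads(line)
print(entry["Level"], entry["Message"])
```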
Chapter 86. Nginx
The Nginx web server supports error and access logging. Both types of logs can be written to file, forwarded
as Syslog via UDP, or written as Syslog to a Unix domain socket. The sections below provide a brief overview; see
the Logging section of the Nginx documentation for more detailed information.
With the following directive, Nginx will log all messages of "warn" severity or higher to the specified log file.
nginx.conf
error_log /var/log/nginx/error.log warn;
Following is a log message generated by Nginx, an NXLog configuration for parsing it, and the resulting
JSON.
Log Sample
2017/08/07 04:37:16 [emerg] 17479#17479: epoll_create() failed (24: Too many open files)↵
nxlog.conf
1 <Input nginx_error>
2 Module im_file
3 File '/var/log/nginx/error.log'
4 <Exec>
5 if $raw_event =~ /^(\S+ \S+) \[(\S+)\] (\d+)\#(\d+): (\*(\d+) )?(.+)$/
6 {
7 $EventTime = strptime($1, '%Y/%m/%d %H:%M:%S');
8 $NginxLogLevel = $2;
9 $NginxPID = $3;
10 $NginxTID = $4;
11 if $6 != '' $NginxCID = $6;
12 $Message = $7;
13 }
14 </Exec>
15 </Input>
Output Sample
{
"EventReceivedTime": "2017-08-07T04:37:16.245375+02:00",
"SourceModuleName": "nginx_error",
"SourceModuleType": "im_file",
"EventTime": "2017-08-07T04:37:16.000000+02:00",
"NginxLogLevel": "emerg",
"NginxPID": "17479",
"NginxTID": "17479",
"Message": "epoll_create() failed (24: Too many open files)"
}
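The parsing in the configuration above can be exercised outside NXLog. This Python sketch applies an equivalent regular expression to the sample line; the field names mirror the configuration, and since this message carries no connection ID, NginxCID is not set.

```python
import re

line = ("2017/08/07 04:37:16 [emerg] 17479#17479: "
        "epoll_create() failed (24: Too many open files)")
pattern = re.compile(r"^(\S+ \S+) \[(\S+)\] (\d+)#(\d+): (\*(\d+) )?(.+)$")

m = pattern.match(line)
event = {
    "EventTime": m.group(1),
    "NginxLogLevel": m.group(2),
    "NginxPID": m.group(3),
    "NginxTID": m.group(4),
    "Message": m.group(7),
}
if m.group(6):  # optional connection ID ("*N ") prefix
    event["NginxCID"] = m.group(6)

print(event["NginxLogLevel"], event["Message"])
```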
Example 375. Collecting Error Logs via Syslog
With this directive, Nginx will forward all messages of "warn" severity or higher to the specified Syslog
server. The messages will be generated with the "local7" facility.
nginx.conf
error_log syslog:server=192.168.1.1:514,facility=local7 warn;
nxlog.conf
1 <Input nginx_error>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 <Exec>
6 parse_syslog();
7 if $Message =~ /^\S+ \S+ \[\S+\] (\d+)\#(\d+): (\*(\d+) )?(.+)$/
8 {
9 $NginxPID = $1;
10 $NginxTID = $2;
11 if $4 != '' $NginxCID = $4;
12 $Message = $5;
13 }
14 </Exec>
15 </Input>
Output Sample
{
"MessageSourceAddress": "192.168.1.12",
"EventReceivedTime": "2017-08-07T04:37:16.441368+02:00",
"SourceModuleName": "nginx_error",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 23,
"SyslogFacility": "LOCAL7",
"SyslogSeverityValue": 1,
"SyslogSeverity": "ALERT",
"SeverityValue": 5,
"Severity": "CRITICAL",
"Hostname": "nginx-host",
"EventTime": "2017-08-07T04:37:16.000000+02:00",
"SourceName": "nginx",
"Message": "epoll_create() failed (24: Too many open files)",
"NginxPID": "17479",
"NginxTID": "17479"
}
Example 376. Collecting Error Logs via Unix Domain Socket
With this directive, Nginx will forward all messages of "warn" severity or higher to the specified Unix
domain socket. The messages will be sent in Syslog format with the "local7" Syslog facility.
nginx.conf
error_log syslog:server=unix:/var/log/nginx/error.sock,facility=local7 warn;
nxlog.conf
1 <Input nginx_error>
2 Module im_uds
3 UDS /var/log/nginx/error.sock
4 <Exec>
5 parse_syslog();
6 if $Message =~ /^\S+ \S+ \[\S+\] (\d+)\#(\d+): (\*(\d+) )?(.+)$/
7 {
8 $NginxPID = $1;
9 $NginxTID = $2;
10 if $4 != '' $NginxCID = $4;
11 $Message = $5;
12 }
13 </Exec>
14 </Input>
The log format can be customized by setting the log_format directive; see the Nginx documentation for more
information.
Example 377. Collecting Access Logs via Syslog
With this directive, Nginx will forward access logs to the specified Syslog server. The messages will be
generated with the "local7" facility and the "info" severity.
nginx.conf
access_log syslog:server=192.168.1.1:514,facility=local7,severity=info;
nxlog.conf
1 <Input nginx_access>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 <Exec>
6 parse_syslog();
7 if $Message =~ /(?x)^(\S+)\ \S+\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
8 \ HTTP\/\d\.\d\"\ (\S+)\ (\S+)\ \"([^\"]+)\"
9 \ \"([^\"]+)\"/
10 {
11 $Hostname = $1;
12 if $2 != '-' $AccountName = $2;
13 $EventTime = parsedate($3);
14 $HTTPMethod = $4;
15 $HTTPURL = $5;
16 $HTTPResponseStatus = $6;
17 if $7 != '-' $FileSize = $7;
18 if $8 != '-' $HTTPReferer = $8;
19 if $9 != '-' $HTTPUserAgent = $9;
20 delete($Message);
21 }
22 </Exec>
23 </Input>
Output Sample
{
"MessageSourceAddress": "192.168.1.12",
"EventReceivedTime": "2017-08-07T06:15:55.662319+02:00",
"SourceModuleName": "nginx_access",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 23,
"SyslogFacility": "LOCAL7",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "192.168.1.12",
"EventTime": "2017-08-07T06:15:55.000000+02:00",
"SourceName": "nginx",
"HTTPMethod": "GET",
"HTTPURL": "/",
"HTTPResponseStatus": "304",
"FileSize": "0",
"HTTPUserAgent": "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0"
}
Example 378. Collecting Access Logs via Unix Domain Socket
With this directive, Nginx will forward access logs to the specified Unix domain socket. The messages will
be sent in Syslog format with the "local7" facility and the "info" severity.
nginx.conf
access_log syslog:server=unix:/var/log/nginx/access.sock,facility=local7,severity=info;
nxlog.conf
1 <Input nginx_access>
2 Module im_uds
3 UDS /var/log/nginx/access.sock
4 <Exec>
5 parse_syslog();
6 if $Message =~ /(?x)^(\S+)\ \S+\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
7 \ HTTP\/\d\.\d\"\ (\S+)\ (\S+)\ \"([^\"]+)\"
8 \ \"([^\"]+)\"/
9 {
10 $Hostname = $1;
11 if $2 != '-' $AccountName = $2;
12 $EventTime = parsedate($3);
13 $HTTPMethod = $4;
14 $HTTPURL = $5;
15 $HTTPResponseStatus = $6;
16 if $7 != '-' $FileSize = $7;
17 if $8 != '-' $HTTPReferer = $8;
18 if $9 != '-' $HTTPUserAgent = $9;
19 delete($Message);
20 }
21 </Exec>
22 </Input>
Chapter 87. Okta
Okta provides identity management cloud software services.
NXLog can be set up to pull events from Okta using their REST API. For more information, see the Okta add-on.
Chapter 88. Osquery
Osquery provides easy access to operating system logs via SQL queries, as it exposes operating system data in a
relational data model.
NXLog can be integrated with osquery when deployed on Windows, MacOS, Linux, and FreeBSD. Osquery does
not provide mechanisms to forward logs; it relies on software such as NXLog to do so.
osqueryi Output Sample (pid, name, path)
22 bash /usr/bin/bash
37 vim /usr/bin/vim
4 System
For more information about osquery commands, see the osqueryi (shell) and SQL Introduction sections on the
osquery website.
• differential — logs changes in the system between the previous and the current query executions.
• snapshot — logs the data set obtained at a certain point in time.
For more information on installing osquery, see the Getting Started section on the osquery website.
Osquery can be configured via the osquery.conf file, which uses JSON format. Depending on the platform, this
file is located under one of the following paths:
• Linux: /etc/osquery/
• FreeBSD: /usr/local/etc/
• MacOS: /private/var/osquery/
The following is an example of a differential logging configuration. The schedule object
contains the nested processes object, which contains two fields:
• query — This key specifies the SQL statement. In this case, it selects all entries from the processes
table.
• interval — This key contains the number of seconds after which the statement is executed again. In
this example, the query is executed every 10 seconds.
osquery.conf
{
"schedule": {
"processes": {
"query": "SELECT pid, name, path FROM processes;",
"interval": 10
}
}
}
In the snapshot configuration below, the processes object contains the additional snapshot key, a boolean
flag that enables the snapshot logging mode.
osquery.conf
{
"schedule": {
"processes": {
"query": "SELECT pid, name, path FROM processes;",
"interval": 10,
"snapshot": true
}
}
}
For more information, see the Configuration section on the osquery website.
• osqueryd.INFO
• osqueryd.WARNING
• osqueryd.ERROR
By default, all osquery log files are available under the following paths:
• Linux: /var/log/osquery/
• Windows: C:\Program Files\osquery\log\
Below are samples of the execution logs from Ubuntu and Windows.
osqueryd.INFO on Ubuntu
Log file created at: 2019/11/25 10:07:54↵
Running on machine: ubuntu↵
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg↵
I1125 10:07:54.233732 28060 events.cpp:863] Event publisher not enabled: auditeventpublisher:
Publisher disabled via configuration↵
I1125 10:07:54.233835 28060 events.cpp:863] Event publisher not enabled: syslog: Publisher
disabled via configuration↵
osqueryd.INFO on Windows
Log file created at: 2019/11/28 10:57:00↵
Running on machine: WIN-SFULD4GOF4H↵
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg↵
I1128 10:57:00.979398 3908 scheduler.cpp:105] Executing scheduled query processes: SELECT pid,
name, path FROM processes;↵
E1128 10:57:01.009029 3908 processes.cpp:312] Failed to lookup path information for process 4↵
E1128 10:57:01.024600 3908 processes.cpp:332] Failed to get cwd for 4 with 31↵
I1128 10:58:01.649113 3908 scheduler.cpp:105] Executing scheduled query processes: SELECT pid,
name, path FROM processes;↵
E1128 10:58:01.681404 3908 processes.cpp:312] Failed to lookup path information for process 4↵
E1128 10:58:01.712568 3908 processes.cpp:332] Failed to get cwd for 4 with 31↵
Example 383. Differential Logs
Below are samples of the differential logs from Ubuntu and Windows.
osqueryd.results.log on Ubuntu
{"name":"users","hostIdentifier":"ubuntu","calendarTime":"Mon Nov 25 09:11:40 2019
UTC","unixTime":1574673100,"epoch":0,"counter":0,"logNumericsAsNumbers":false,"columns":{"direc
tory":"/","uid":"111","username":"kernoops"},"action":"removed"}↵
{"name":"users","hostIdentifier":"ubuntu","calendarTime":"Mon Nov 25 09:11:40 2019
UTC","unixTime":1574673100,"epoch":0,"counter":0,"logNumericsAsNumbers":false,"columns":{"direc
tory":"/bin","uid":"2","username":"bin"},"action":"removed"}↵
osqueryd.results.log on Windows
{"name":"processes","hostIdentifier":"WIN-SFULD4GOF4H","calendarTime":"Fri Nov 29 18:18:00 2019
UTC","unixTime":1575051480,"epoch":0,"counter":23,"logNumericsAsNumbers":false,"columns":{"name
":"conhost.exe","path":"C:\\Windows\\System32\\conhost.exe","pid":"2936"},"action":"removed"}↵
{"name":"processes","hostIdentifier":"WIN-SFULD4GOF4H","calendarTime":"Fri Nov 29 18:18:00 2019
UTC","unixTime":1575051480,"epoch":0,"counter":23,"logNumericsAsNumbers":false,"columns":{"name
":"dllhost.exe","path":"C:\\Windows\\System32\\dllhost.exe","pid":"3784"},"action":"removed"}↵
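Each differential record is one self-contained JSON object per line, which is why NXLog only needs parse_json() to turn it into event fields. As an illustration, this Python sketch parses a record shortened from the Ubuntu sample above:

```python
import json

# A shortened differential record from osqueryd.results.log
record = ('{"name":"users","hostIdentifier":"ubuntu",'
          '"columns":{"directory":"/","uid":"111","username":"kernoops"},'
          '"action":"removed"}')

event = json.loads(record)
print(event["action"], event["columns"]["username"])
```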
Below are samples of the snapshot logs from Ubuntu and Windows.
osqueryd.snapshots.log on Ubuntu
{"snapshot":[{"name":"gsd-rfkill","path":"/usr/lib/gnome-settings-daemon/gsd-
rfkill","pid":"944"},{"name":"gsd-screensaver","path":"/usr/lib/gnome-settings-daemon/gsd-
screensaver-proxy","pid":"947"},{"name":"gsd-sharing","path":"/usr/lib/gnome-settings-
daemon/gsd-sharing","pid":"949"},{"name":"gsd-smartcard","path":"/usr/lib/gnome-settings-
daemon/gsd-smartcard","pid":"955"},{"name":"gsd-sound","path":"/usr/lib/gnome-settings-
daemon/gsd-sound","pid":"962"},{"name":"gsd-wacom","path":"/usr/lib/gnome-settings-daemon/gsd-
wacom","pid":"965"},{"name":"kstrp","path":"","pid":"98"}],"action":"snapshot","name":"users","
hostIdentifier":"ubuntu","calendarTime":"Mon Nov 25 09:14:25 2019
UTC","unixTime":1574673265,"epoch":0,"counter":0,"logNumericsAsNumbers":false}↵
osqueryd.snapshots.log on Windows
{"snapshot":[{"name":"[System
Process]","path":"","pid":"0"},{"name":"System","path":"","pid":"4"},{"name":"smss.exe","path":
"C:\\Windows\\System32\\smss.exe","pid":"244"},{"name":"csrss.exe","path":"C:\\Windows\\System3
2\\csrss.exe","pid":"328"},{"name":"wininit.exe","path":"C:\\Windows\\System32\\wininit.exe","p
id":"408"},{"name":"winlogon.exe","path":"C:\\Windows\\System32\\winlogon.exe","pid":"452"},{"n
ame":"services.exe","path":"C:\\Windows\\System32\\services.exe","pid":"512"},{"name":"RuntimeB
roker.exe","path":"C:\\Windows\\System32\\RuntimeBroker.exe","pid":"2664"},{"name":"sihost.exe"
,"path":"C:\\Windows\\System32\\sihost.exe","pid":"2700"},{"name":"svchost.exe","path":"C:\\Win
dows\\System32\\svchost.exe","pid":"2708"}],"action":"snapshot","name":"processes","hostIdentif
ier":"WIN-SFULD4GOF4H","calendarTime":"Fri Nov 29 18:13:04 2019
UTC","unixTime":1575051184,"epoch":0,"counter":0,"logNumericsAsNumbers":false}↵
For more information about the logging system of osquery, see the Logging section on the osquery website.
Example 385. Configuring NXLog for Unix-like Systems
The following configuration uses the im_file module to read the osquery log entries and process them with
the xm_json module.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input osquery_diff>
6 Module im_file
7 File "/var/log/osquery/osqueryd.results.log"
8 Exec parse_json();
9 </Input>
10
11 <Input osquery_snap>
12 Module im_file
13 File "/var/log/osquery/osqueryd.snapshots.log"
14 Exec parse_json();
15 </Input>
On Windows, the following configuration likewise uses the im_file module to read the osquery log entries and
process them with the xm_json module.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input osquery_diff>
6 Module im_file
7 File "C:\\Program Files\\osquery\\log\\osqueryd.results.log"
8 Exec parse_json();
9 </Input>
10
11 <Input osquery_snap>
12 Module im_file
13 File "C:\\Program Files\\osquery\\log\\osqueryd.snapshots.log"
14 Exec parse_json();
15 </Input>
Using an appropriate output module, NXLog can be configured to forward osquery logs to a remote system.
As an example, the om_tcp module is used.
nxlog.conf
1 <Output snap_out>
2 Module om_tcp
3 Host 192.168.1.1
4 Port 1515
5 </Output>
Chapter 89. Postfix
NXLog can be configured to collect logs from the Postfix mail server. Postfix logs its actions to the standard
system logger with the mail facility type.
The component indicates the Postfix process that produced the log message. Most log entries (those relevant to
particular email messages) also include the queue ID of the email message as the first part of the message.
Log Sample
Oct 10 01:23:45 mailhost postfix/smtpd[2534]: 4F9D195432C: client=localhost[127.0.0.1]↵
Oct 10 01:23:45 mailhost postfix/cleanup[2536]: 4F9D195432C: message-
id=<20161001103311.4F9D195432C@mail.example.com>↵
Oct 10 01:23:46 mailhost postfix/qmgr[2531]: 4F9D195432C: from=<origin@other.com>, size=344, nrcpt=1
(queue active)↵
Oct 10 01:23:46 mailhost postfix/smtp[2538]: 4F9D195432C: to=<destination@example.com>,
relay=mail.example.com[216.150.150.131], delay=11, status=sent (250 Ok: queued as 8BDCA22DA71)↵
lmtp_tls_loglevel
smtp_tls_loglevel
smtpd_tls_loglevel
The loglevel directives should be set to 0 (disabled, the default) or 1 during normal operation. Values of 2 or
3 can be used for troubleshooting.
debug_peer_level
Specify the increment in logging level when a remote client or server matches a pattern in the
debug_peer_list parameter (default 2).
debug_peer_list
Provide a list of remote client or server hostnames or network address patterns for which to increase the
logging level.
See the Postfix Debugging Howto and the postconf(5) man page for more information.
Example 388. Reading From Syslog Log File
This configuration reads the Postfix logs from file and forwards them via TCP to a remote host.
nxlog.conf
1 <Input postfix>
2 Module im_file
3 File "/var/log/mail.log"
4 </Input>
5
6 <Output out>
7 Module om_tcp
8 Host 192.168.1.1
9 Port 1514
10 </Output>
It is also possible to parse individual Postfix messages into fields, providing access to more fine-grained filtering
and analysis of log data. The NXLog Exec directive can be used to apply regular expressions for this purpose.
Here is the Input module instance again, extended to parse the Postfix messages in the example above.
Various fields are added to the event record, depending on the particular message received. Then in the
Output module instance, only those log entries that are from Postfix’s smtp component and are being
relayed through mail.example.com are logged to the output file.
nxlog.conf (truncated)
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input postfix>
6 Module im_file
7 File "/var/log/mail.log"
8 <Exec>
9 if $raw_event =~ /(?x)^(\S+\ +\d+\ \d+:\d+:\d+)\ (\S+)
10 \ postfix\/(\S+)\[(\d+)\]:\ (.+)$/
11 {
12 $EventTime = parsedate($1);
13 $HostName = $2;
14 $SourceName = "postfix";
15 $Component = $3;
16 $ProcessID = $4;
17 $Message = $5;
18 if $Component == "smtpd" and
19 $Message =~ /(\w+): client=(\S+)\[([\d.]+)\]/
20 {
21 $QueueID = $1;
22 $ClientHostname = $2;
23 $ClientIP = $3;
24 }
25 if $Component == "cleanup" and
26 $Message =~ /(\w+): message-id=(<\S+@\S+>)/
27 {
28 $QueueID = $1;
29 [...]
Using the example log entries above, this configuration results in a single JSON entry written to the log file.
Output Sample
{
"EventReceivedTime": "2016-10-05 16:38:57",
"SourceModuleName": "postfix",
"SourceModuleType": "im_file",
"EventTime": "2016-10-10 01:23:46",
"HostName": "mail",
"SourceName": "postfix",
"Component": "smtp",
"ProcessID": "2538",
"Message": "4F9D195432C: to=<destination@example.com>,
relay=mail.example.com[216.150.150.131], delay=11, status=sent (250 Ok: queued as
8BDCA22DA71)",
"QueueID": "4F9D195432C",
"Recipient": "<destination@example.com>",
"RelayHostname": "mail.example.com",
"RelayIP": "216.150.150.131",
"Delay": "11",
"Status": "sent",
"SMTPCode": "250",
"QueueIDDelivered": "8BDCA22DA71"
}
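The outer pattern in the configuration can be tried standalone. This Python sketch parses the smtpd sample line from the beginning of the chapter with equivalent regular expressions; the field names mirror the configuration.

```python
import re

line = ("Oct 10 01:23:45 mailhost postfix/smtpd[2534]: "
        "4F9D195432C: client=localhost[127.0.0.1]")

# Outer pattern: timestamp, host, component, PID, message
m = re.match(r"^(\S+ +\d+ \d+:\d+:\d+) (\S+) postfix/(\S+)\[(\d+)\]: (.+)$", line)
fields = {
    "EventTime": m.group(1),
    "HostName": m.group(2),
    "Component": m.group(3),
    "ProcessID": m.group(4),
    "Message": m.group(5),
}

# The smtpd-specific inner pattern extracts the queue ID and client
inner = re.match(r"(\w+): client=(\S+)\[([\d.]+)\]", fields["Message"])
if fields["Component"] == "smtpd" and inner:
    fields["QueueID"] = inner.group(1)
    fields["ClientHostname"] = inner.group(2)
    fields["ClientIP"] = inner.group(3)

print(fields["Component"], fields["QueueID"])
```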
Chapter 90. Promise
The Promise Storage Area Network (SAN) is capable of sending SNMP traps to remote destinations.
Unfortunately, Syslog is not supported on these units.
Log Sample
2654 Fan 4 Enc 1 Info Apr 27, 2017 19:08:48 PSU fan or blower speed is decreased↵
There is a single management interface no matter how many shelves are installed, so configuration only needs
to be performed once from the Promise web interface or the command line.
More information about configuring Promise arrays is available in the E-Class product manual. Additional
details on SNMP configuration and links to MIB files are available in the following KB article.
1. Configure NXLog for receiving SNMP traps (see the example below). Remember to place the MIB file in the
directory specified by the MIBDir directive. Then restart NXLog.
2. Make sure the NXLog agent is accessible from the unit.
3. Configure Promise by using the web interface or the command line. See the following sections.
This example shows SNMP trap messages from Promise, as received and processed by NXLog.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Extension snmp>
10 Module xm_snmp
11 MIBDir /usr/share/mibs/iana
12 </Extension>
13
14 <Input in_snmp_udp>
15 Module im_udp
16 Host 0.0.0.0
17 Port 162
18 InputType snmp
19 Exec parse_syslog();
20 </Input>
21
22 <Output file_snmp>
23 Module om_file
24 File "/var/log/snmp.log"
25 Exec to_json();
26 </Output>
Output Sample
{
"SNMP.CommunityString": "public",
"SNMP.RequestID": 1295816642,
"EventTime": "2017-04-27 20:44:37",
"SeverityValue": 2,
"Severity": "INFO",
"OID.1.3.6.1.2.1.1.3.0": 67,
"OID.1.3.6.1.6.3.1.1.4.1.0": "1.3.6.1.4.1.7933.1.20.0.11.0.1",
"OID.1.3.6.1.4.1.7933.1.20.0.10.1": 2654,
"OID.1.3.6.1.4.1.7933.1.20.0.10.2": 327683,
"OID.1.3.6.1.4.1.7933.1.20.0.10.3": 327683,
"OID.1.3.6.1.4.1.7933.1.20.0.10.4": 2,
"OID.1.3.6.1.4.1.7933.1.20.0.10.5": "Fan 4 Enc 1",
"OID.1.3.6.1.4.1.7933.1.20.0.10.6": "Apr 27, 2017 19:08:48",
"OID.1.3.6.1.4.1.7933.1.20.0.10.7": "PSU fan or blower speed is decreased",
"MessageSourceAddress": "192.168.10.21",
"EventReceivedTime": "2017-04-27 20:44:37",
"SourceModuleName": "in_snmp_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 1,
"SyslogFacility": "USER",
"SyslogSeverityValue": 5,
"SyslogSeverity": "NOTICE",
"Hostname": "INFO",
"Message": "OID.1.3.6.1.2.1.1.3.0=\"67\" OID.1.3.6.1.6.3.1.1.4.1.0=
\"1.3.6.1.4.1.7933.1.20.0.11.0.1\" OID.1.3.6.1.4.1.7933.1.20.0.10.1=\"2654\"
OID.1.3.6.1.4.1.7933.1.20.0.10.2=\"327683\" OID.1.3.6.1.4.1.7933.1.20.0.10.3=\"327683\"
OID.1.3.6.1.4.1.7933.1.20.0.10.4=\"2\" OID.1.3.6.1.4.1.7933.1.20.0.10.5=\"Fan 4 Enc 1\"
OID.1.3.6.1.4.1.7933.1.20.0.10.6=\"Apr 27, 2017 19:08:48\" OID.1.3.6.1.4.1.7933.1.20.0.10.7=
\"PSU fan or blower speed is decreased\""
}
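The flattened "OID.&lt;oid&gt;" field naming in the sample above can be mimicked in Python. This is a hypothetical illustration of how trap varbinds map to event-record field names, not NXLog's implementation:

```python
# Illustrative only: build "OID.<oid>" keys from (oid, value) varbind pairs,
# mirroring the flattened field names in the output sample above
varbinds = [
    ("1.3.6.1.2.1.1.3.0", 67),
    ("1.3.6.1.4.1.7933.1.20.0.10.5", "Fan 4 Enc 1"),
    ("1.3.6.1.4.1.7933.1.20.0.10.7", "PSU fan or blower speed is decreased"),
]

record = {"OID." + oid: value for oid, value in varbinds}
```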
NOTE: The steps below have been tested on the VTrak E600 series and should work on other models as well.
5. Make sure Running Status is Started and Startup Type is set to Automatic.
6. Click [ Submit ] and confirm SNMP restart.
5. Specify the remote IP address under Trap Sink Server and the logging level under Trap Filter.
6. Select Save SNMP Trap Sink.
7. Select Return to Previous Menu and then Restart.
8. Make sure Startup Type is set to Automatic.
Chapter 91. Rapid7 InsightIDR SIEM
Rapid7 InsightIDR is an intruder analytics suite that helps detect and investigate security incidents. It works with
data collected from network logs, authentication logs, and other log sources from endpoint devices.
NXLog can be configured to collect and forward event logs to InsightIDR. It can also be used to rewrite event
fields to meet the log field name requirements of InsightIDR’s Universal Event Format (UEF).
1. Create, deploy and activate an InsightIDR Collector. A Collector is required before adding any data sources to
InsightIDR.
Read more about the requirements in Rapid7’s InsightIDR Collector Requirements documentation before you
install and deploy the InsightIDR Collector.
2. To confirm that the Collector is running, select Data Collection in the left side panel, then under the Data
Collection Management pane, select the Collectors tab.
Here you can check the state of the Collectors. If a Collector is not running, review Rapid7's Collector
Troubleshooting documentation.
3. To add a new Data Source, in the Data Collection Management pane, select the Event Sources tab, then in
the Product Types list, adjacent to Rapid7, click Add.
4. To configure the Event Source, select the name of the Collector and the Event Source from the
corresponding dropdown lists, optionally enter the Display Name, and then select the Timezone from the
dropdown list.
For the Event Source, select either Rapid7 Raw Data (if using JSON) or Rapid7 Generic Syslog (if using
Syslog-formatted logs).
5. For the Collection Method, select the Listen For Syslog button, enter the port number in the Port field, then
select a Protocol from the drop-down list.
6. If TCP was selected for the Protocol, optionally select Encrypted, then click Save.
The newly created Event Source is visible under the Event Sources tab of the Data Collection Management
Pane.
Example 391. Event Logs Collected from Event Tracing for Windows (ETW)
This configuration uses the im_etw module to collect Windows DNS Server log data and send it to
InsightIDR as JSON.
nxlog.conf
1 <Input etw_in>
2 Module im_etw
3 Provider Microsoft-Windows-DNSServer
4 Exec to_json();
5 </Input>
Example 392. Sending Windows Event Log Security Events
This example sends Windows Event Log collected from the Security Channel using the im_msvistalog
module. The events are sent to InsightIDR in Snare format. When sending Windows Event Log security
events, create a data source with the type Rapid7 Generic Windows Event Log.
nxlog.conf
1 <Input eventlog_in>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id='0'>
6 <Select Path='Security'>*</Select>
7 </Query>
8 </QueryList>
9 </QueryXML>
10 <Exec>
11 $Message = replace($Message, "\t", " "); $Message = replace \
12 ($Message, "\n", " "); $Message = replace($Message, "\r", " ");
13 $raw_event = $Message;
14 to_syslog_snare();
15 </Exec>
16 </Input>
Example 393. Sending Other Windows Event Log Events
In this configuration, the im_msvistalog module is configured to collect Windows DHCP events and send
them as JSON, but other types of Windows events can be collected too. In this case, the Rapid7 log source is
set as Rapid7 Generic Syslog, so the logs are indexed and parsed.
nxlog.conf
1 <Input dhcp_server_eventlog>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="DhcpAdminEvents">*</Select>
7 <Select Path="Microsoft-Windows-Dhcp-Server/FilterNotifications"> \
8 *</Select>
9 <Select Path="Microsoft-Windows-Dhcp-Server/Operational">*</Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 Exec to_syslog_bsd();
14 </Input>
• The Event Source should be one of the supported Rapid7 Universal Event Format types.
• The logs need to be converted to either JSON or KVP.
• Relevant fields should be rewritten, added, and whitelisted to support the UEF specification for the log source
type.
• Confirm that the logs have no format violations and are correctly indexed by Rapid7 InsightIDR. See the
Verifying Data Collection section.
The following configuration examples are based on collecting Rapid7 Ingress Authentication events. The steps,
fields and input options will vary depending on the UEF source types. For more information, see the Universal
Event Sources section in the Rapid7 documentation.
NOTE: Use the xm_rewrite module to rename raw data fields to match SIEM and dashboard field names.
NOTE: Use the xm_kvp module to delete, add, and rename raw data fields. For more information, see the
Universal Event Formats in InsightIDR: A Step-by-Step NXLog Guide in the Rapid7 documentation.
Use the xm_rewrite module to specify which fields to keep and rename. The fields to rewrite will depend on
the operating system as shown below. For both, the fields $version and $event_type are added.
nxlog.conf
1 <Extension rewrite>
2 Module xm_rewrite
3 # Fields associated with UEF are whitelisted
4     Keep    EventTime, version, event_type, authentication_result, \
5             IpAddress, WorkstationName, Hostname
6
7 # Rename the following fields to the UEF specification
8 Rename EventTime, time
9 Rename Hostname, account
10 Rename IpAddress, source_ip
11 Rename WorkstationName, authentication_target
12 Rename Version, version
13 </Extension>
nxlog.conf
1 <Extension rewrite>
2 Module xm_rewrite
3 # The syslog raw data needs to be parsed first
4 Exec parse_syslog();
5 Keep Hostname, account, version, user, custom_message, Message, \
6 event_type, EventReceivedTime, authentication_result, raw_event, \
7 authentication_target, source_ip
8     Rename  Hostname, authentication_target
9 Rename EventReceivedTime, time
10 Rename Message, custom_message
11 </Extension>
In Windows, $EventTime is converted to the required ISO 8601 format. The SUCCESS and FAILURE results
are mapped to $authentication_result based on the event ID.
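The event-ID mapping described above can be sketched in Python. This is a minimal illustration of the same logic, not NXLog code; the function name and the returned field names mirror the UEF fields used in the configuration:

```python
from datetime import datetime

def map_auth_event(event_id, event_time):
    """Map a Windows logon event ID to a UEF ingress-authentication record.

    Returns None for all other event IDs (the equivalent of drop()).
    """
    results = {4624: "SUCCESS", 4625: "FAILURE"}
    if event_id not in results:
        return None
    return {
        "time": event_time.strftime("%Y-%m-%dT%H:%M:%SZ"),  # ISO 8601
        "version": "v1",
        "event_type": "INGRESS_AUTHENTICATION",
        "authentication_result": results[event_id],
    }

record = map_auth_event(4624, datetime(2017, 4, 27, 20, 44, 37))
```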
nxlog.conf
1 <Input in_auth_windows>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Security">*[System[(Level<=4)]]</Select>
7 </Query>
8 </QueryList>
9 </QueryXML>
10 <Exec>
11 # Convert the $EventTime string to ISO 8601 extended format.
12 $EventTime = strftime($EventTime, '%Y-%m-%dT%H:%M:%SZ');
13
14 # Add the required input for $version
15 $version = "v1";
16
17 # Add the required input for $event_type
18 $event_type = "INGRESS_AUTHENTICATION";
19
20 # Add the required authentication results for EventLog IDs
21 if ($EventID IN (4625)) { $authentication_result = "FAILURE"; } \
22 else if ($EventID IN (4624)) { $authentication_result = "SUCCESS"; } \
23 # Drop all other event IDs
24 else drop();
25
26 # Add the process to rewrite the fields and convert to JSON
27 rewrite->process();
28 to_json();
29 </Exec>
30 </Input>
In Linux, $EventReceivedTime is used and converted to the ISO 8601 format. The SUCCESS and FAILURE
results are mapped to $authentication_result based on string matches in the $raw_event field.
Additional parsing of the $raw_event field is performed to obtain the string data for the $account and
$source_ip values.
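The same extraction can be tried out in Python; the pattern mirrors the one used in the im_file Exec block, and the sample log line is hypothetical:

```python
import re

# Equivalent of the NXLog regex /Accepted publickey for\ (\S+)\ from\ (\S+)/
# (Python does not need the backslash-escaped spaces)
pattern = re.compile(r"Accepted publickey for (\S+) from (\S+)")

# Hypothetical auth.log line for illustration
line = ("Apr 15 19:16:29 host sshd[1234]: "
        "Accepted publickey for admin from 192.168.15.231 port 51122 ssh2")

match = pattern.search(line)
if match:
    account, source_ip = match.group(1), match.group(2)
```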
nxlog.conf (truncated)
1 <Input in_auth_linux>
2 Module im_file
3 File "/var/log/auth.log"
4 <Exec>
5 # Convert the $EventReceivedTime string to ISO 8601 extended format
6 $EventReceivedTime = strftime($EventReceivedTime, '%Y-%m-%dT%H:%M:%SZ');
7
8 # Add the required input for $version
9 $version = "v1";
10
11 # Add the required input for $event_type
12 $event_type = "INGRESS_AUTHENTICATION";
13
14 # Use xm_rewrite module for the $source_ip and $account fields
15 rewrite->process();
16
17 # Obtain the $source_ip and $account string data from $raw_event
18 if ($raw_event =~ /Accepted publickey for\ (\S+)\ from\ (\S+)/)
19 {
20 $account = $1;
21 $source_ip = $2;
22 }
23
24 # Success and Authentication messages based on the $raw_event field
25 if ($raw_event =~ /authentication failure/) \
26 { $authentication_result = "FAILURE"; } \
27 else if ($raw_event =~ /successfully authenticated/) \
28 { $authentication_result = "SUCCESS"; } \
29 [...]
Example 396. Ingress Authentication UEF Event Samples in JSON
The following examples display the JSON output based on the NXLog configuration files above. It is
recommended to first test the input to determine that the fields are renamed and added. There is an
option to provide a $custom_message, as displayed in the Linux example.
Example 397. Full Ingress Authentication Event Sample Indexed in Rapid7
Once indexed, logs collected using NXLog can be further processed in Rapid7 InsightIDR.
Chapter 92. RSA NetWitness
RSA NetWitness Platform is a threat detection and incident response suite that leverages logs and other data
sources for monitoring, reporting, and investigations. NXLog is an officially supported RSA Ready certified
product and can be configured as the log collection agent for NetWitness.
1. Make sure Syslog collection is enabled. RSA NetWitness creates Syslog listeners by default for UDP on port
514, TCP on port 514, and SSL on port 6514. See Configure Syslog Event Sources for Remote Collector on RSA
Link for further setup notes.
2. Add a Log Decoder using the "Envision Config File" resource.
a. From the NetWitness menu, select Configure > Live Content.
b. In the Keywords field, enter Envision Config File.
c. In the Matching Resources pane, check the Envision Config File entry and click Deploy in the menu
bar.
f. In the Review pane, review the changes and click Deploy. Click Close after the deployment task has
finished.
3. Deploy the Common Event Format.
a. From the NetWitness menu, select Live > Search.
b. In the Keywords field, enter Common Event Format.
c. In the Matching Resources pane, check the Common Event Format entry and click Deploy in the
menu bar.
c. Enable the cef parser in the Service Parsers Configuration and click Apply.
5. Edit the CEF configuration to collect NXLog event times.
a. Connect via SFTP using WinSCP or another utility.
b. Locate and back up the XML file at /etc/netwitness/ng/envision/etc/devices/cef/cef.xml.
c. Edit the file, adding the following lines after the end of the preceding <MESSAGE … /> section:
<MESSAGE
id1="NXLog_NXLog"
id2="NXLog_NXLog"
eventcategory="1612000000"
functions="<@msg:*PARMVAL($MSG)><@event_time:*EVNTTIME($MSG,'%R %F
%Z',event_time_string)><@endtime:*EVNTTIME($MSG,'%W-%D-%G
%Z',param_endtime)><@starttime:*EVNTTIME($MSG,'%W-%G-%FT%Z',param_starttime)>"
content="<param_endtime><param_starttime><msghold>" />
6. If required, edit the CEF custom configuration to support custom fields as follows.
a. Connect via SFTP.
b. Locate and back up the XML file at /etc/netwitness/ng/envision/etc/devices/cef/cef-
custom.xml, if it exists.
c. Create the file with the following contents. Or if the file already exists, add only the required sections.
<VendorProducts>
<Vendor2Device vendor="NXlog" product="NXLog Enterprise Edition"
device="NXLog_NXLog" group="Analysis"/>
</VendorProducts>
<ExtensionKeys>
<ExtensionKey cefName="Keywords" metaName="Keywords"/>
<ExtensionKey cefName="Severity" metaName="Severity"/>
<ExtensionKey cefName="SeverityValue" metaName="SeverityValue"/>
<ExtensionKey cefName="SourceName" metaName="SourceName"/>
<ExtensionKey cefName="ProviderGuid" metaName="ProviderGuid"/>
<ExtensionKey cefName="TaskValue" metaName="TaskValue"/>
<ExtensionKey cefName="OpcodeValue" metaName="OpcodeValue"/>
<ExtensionKey cefName="RecordNumber" metaName="RecordNumber"/>
<ExtensionKey cefName="ExecutionProcessID" metaName="ExecutionProcessID"/>
<ExtensionKey cefName="ExecutionThreadID" metaName="ExecutionThreadID"/>
<ExtensionKey cefName="param2" metaName="param2"/>
<ExtensionKey cefName="SourceModuleName" metaName="SourceModuleName"/>
<ExtensionKey cefName="SourceModuleType" metaName="SourceModuleType"/>
<ExtensionKey cefName="EventReceivedTime" metaName="param_starttime"/>
d. Locate and back up the XML file at /etc/netwitness/ng/envision/etc/table-map-custom.xml, if it
exists.
e. Create the file with the following contents. Or if the file already exists, add the lines between <mappings>
and </mappings>.
<mappings>
<mapping envisionName="severity" nwName="severity" flags="None" format="Text"/>
<mapping envisionName="Keywords" nwName="Keywords" flags="None" format="Text"/>
<mapping envisionName="Severity" nwName="Severity" flags="None" format="Text"/>
<mapping envisionName="SeverityValue" nwName="SeverityValue" flags="None" format="Text"/>
<mapping envisionName="dvcpid" nwName="dvcpid" flags="None" format="Text"/>
<mapping envisionName="hardware_id" nwName="hardware.id" flags="None" format="Text"/>
<mapping envisionName="SourceName" nwName="SourceName" flags="None" format="Text"/>
<mapping envisionName="ProviderGuid" nwName="ProviderGuid" flags="None" format="Text"/>
<mapping envisionName="TaskValue" nwName="TaskValue" flags="None" format="Text"/>
<mapping envisionName="OpcodeValue" nwName="OpcodeValue" flags="None" format="Text"/>
<mapping envisionName="RecordNumber" nwName="RecordNumber" flags="None" format="Text"/>
<mapping envisionName="ExecProcID" nwName="ExecProcID" flags="None" format="Text"/>
<mapping envisionName="ExecThreadID" nwName="ExecThreadID" flags="None" format="Text"/>
<mapping envisionName="cs_devfacility" nwName="deviceFacility" flags="None" format="Text"/>
<mapping envisionName="info" nwName="info" flags="None" format="Text"/>
<mapping envisionName="param2" nwName="param2" flags="None" format="Text"/>
<mapping envisionName="SourceModuleName" nwName="SourceModuleName" flags="None"
format="Text"/>
<mapping envisionName="SourceModuleType" nwName="SourceModuleType" flags="None"
format="Text"/>
<mapping envisionName="param_endtime" nwName="end" flags="None" format="TimeT"/>
<mapping envisionName="param_starttime" nwName="start" flags="none" format="TimeT"/>
</mappings>
b. Click Start Capture to start the log collection.
Example 398. Converting and Forwarding EventLog Data in CEF
This example configuration reads from the Windows EventLog with im_msvistalog, converts the log data to
CEF, and forwards it to NetWitness via TCP.
The xm_cef extension module provides the to_cef() function, which generates the CEF format. The xm_syslog
extension module provides the to_syslog_bsd() procedure, which adds the BSD Syslog header.
nxlog.conf
1 <Extension _cef>
2 Module xm_cef
3 </Extension>
4
5 <Extension syslog>
6     Module  xm_syslog
7 </Extension>
8
9 <Input eventlog>
10 Module im_msvistalog
11 </Input>
12
13 <Output netwitness_tcp>
14 Module om_tcp
15 Host 127.0.0.1
16 Port 514
17 <Exec>
18 $Message = to_cef();
19 to_syslog_bsd();
20 </Exec>
21 </Output>
nxlog.conf
1 <Output netwitness_udp>
2 Module om_udp
3 Host 127.0.0.1
4 Port 514
5 <Exec>
6 $Message = to_cef();
7 to_syslog_bsd();
8 </Exec>
9 </Output>
Go to Admin and select the Log Decoder. In the Events area, select an event to view its details.
It is also possible to examine the raw log to verify that the output to NetWitness is in CEF.
Output Sample
Nov 13 12:34:17 test.test.com Service_Control_Manager: CEF:0|NXLog|NXLog|4.1.4016|0|-|7|end=2018-11-
13 12:34:17 dvchost=test.test.com Keywords=9259400833873739776 outcome=INFO SeverityValue=2
Severity=INFO externalId=7036 SourceName=Service Control Manager ProviderGuid={555908D1-A6D7-4695-
8E1E-26931D2012F4} Version=0 TaskValue=0 OpcodeValue=0 RecordNumber=3037 ExecutionProcessID=496
ExecutionThreadID=2136 deviceFacility=System msg=The Windows Installer service entered the stopped
state. param1=Windows Installer param2=stopped EventReceivedTime=2018-11-13 12:40:28
SourceModuleName=eventlog SourceModuleType=im_msvistalog↵
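The header of the raw CEF sample above can be decomposed with a short Python sketch. This is a simplified illustration: it splits on literal pipe characters and ignores CEF's backslash-escaping rules:

```python
def parse_cef_header(msg):
    """Split a CEF record into its seven pipe-delimited header fields
    plus the trailing key=value extension (escaping not handled)."""
    prefix = "CEF:"
    assert msg.startswith(prefix)
    parts = msg[len(prefix):].split("|", 7)
    keys = ["version", "device_vendor", "device_product", "device_version",
            "signature_id", "name", "severity"]
    header = dict(zip(keys, parts[:7]))
    header["extension"] = parts[7] if len(parts) > 7 else ""
    return header

# Abbreviated version of the output sample above
cef = ("CEF:0|NXLog|NXLog|4.1.4016|0|-|7|"
       "end=2018-11-13 12:34:17 dvchost=test.test.com")
header = parse_cef_header(cef)
```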
Chapter 93. SafeNet KeySecure
SafeNet KeySecure devices can send their logs to a remote Syslog destination via UDP or TCP.
KeySecure has four different logs: System, Audit, Activity, and Client Event. Each has a slightly different
format, and each can be configured with up to two Syslog servers. There is also an option to sign and encrypt
log messages before sending them to the remote destination; configuring that scenario is outside the scope of
this section.
In a cluster of two or more KeySecure devices, a configuration change on one member is replicated to the
other members, and each member sends its logs separately. For more details on logging configuration for
SafeNet KeySecure, refer to the KeySecure Appliance User Guide.
NOTE: This section covers configuration for sending logs via UDP. To use TCP instead, simply select it where
appropriate.
1. Configure NXLog for receiving Syslog logs (see the examples below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from the KeySecure device.
3. Configure Syslog logging on KeySecure using either the web interface or the command line. See the following
sections.
NOTE: The steps in the following sections have been tested on the KeySecure 460 and should work on other
models as well.
Example 399. Receiving Logs From KeySecure
This example shows a KeySecure Audit log message as received and processed by NXLog. Use the im_tcp
module instead of im_udp to receive Syslog messages via TCP instead.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/keysecure.log"
19 Exec to_json();
20 </Output>
Output Sample
{
"MessageSourceAddress": "192.168.5.20",
"EventReceivedTime": "2017-03-26 18:11:36",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 17,
"SyslogFacility": "LOCAL1",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "p-keysecure1",
"EventTime": "2017-03-26 18:12:26",
"SourceName": "IngrianAudit",
"Message": "2017-03-26 18:12:26 [admin] [Login] [CLI]: Logged in from 192.168.15.231 via SSH"
}
Example 400. Extracting Additional Fields
Additional field extraction can also be configured. Note that this depends on which particular log the
message is coming from, as each has a different format.
nxlog.conf
1 <Input in_syslog_udp>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 <Exec>
6 parse_syslog();
7 if $Message =~ /(?x)^\d{4}-\d{2}-\d{2}\ \d{2}:\d{2}:\d{2}\ \[([a-zA-Z]*)\]
8 \ \[([a-zA-Z]*)\]\ \[([a-zA-Z]*)\]:\ (.*)$/
9 {
10 $KSUsername = $1;
11 $KSEvent = $2;
12 $KSSubsys = $3;
13 $KSMessage = $4;
14 }
15 </Exec>
16 </Input>
Output Sample
{
"MessageSourceAddress": "192.168.5.20",
"EventReceivedTime": "2017-04-15 19:14:59",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 17,
"SyslogFacility": "LOCAL1",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "p-keysecure1",
"EventTime": "2017-04-15 19:16:29",
"SourceName": "IngrianAudit",
"Message": "2017-04-15 19:16:29 [admin] [Login] [CLI]: Logged in from 192.168.15.231 via
SSH",
"KSUsername": "admin",
"KSEvent": "Login",
"KSSubsys": "CLI",
"KSMessage": "Logged in from 192.168.15.231 via SSH"
}
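The field extraction above can be reproduced in Python to verify the capture groups against the sample Audit message; the pattern mirrors the regex in the Exec block:

```python
import re

# Equivalent of the NXLog regex used above for the KeySecure Audit log format
pattern = re.compile(
    r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} "
    r"\[([a-zA-Z]*)\] \[([a-zA-Z]*)\] \[([a-zA-Z]*)\]: (.*)$"
)

# $Message value from the output sample above
msg = "2017-04-15 19:16:29 [admin] [Login] [CLI]: Logged in from 192.168.15.231 via SSH"

m = pattern.match(msg)
if m:
    ks_username, ks_event, ks_subsys, ks_message = m.groups()
```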
5. Click [ Save ].
6. Repeat for the other log types as required.
# configure
# system syslog
# audit syslog
# activity syslog
# clientevent syslog
Example 401. Forwarding System Logs
The following commands enable sending System logs to 192.168.6.143 via UDP port 514.
p-keysecure1# configure
p-keysecure1 (config)# system syslog
Enable Syslog [y]:
Syslog Server #1 IP: 192.168.6.143
Syslog Server #1 Port [514]:
Server #1 Proto:
1: udp
2: tcp
Enter a number (1 - 2) [1]:
Syslog Server #2 IP:
Syslog Server #2 Port [514]:
Server #2 Proto:
1: udp
2: tcp
Enter a number (1 - 2) [1]:
Syslog Facility:
1: local0
2: local1
3: local2
4: local3
5: local4
6: local5
7: local6
8: local7
Enter a number (1 - 8) [2]:
System Log syslog settings successfully saved. Syslog is enabled.
Warning: The syslog protocol insecurely transfers logs in cleartext
Chapter 94. Salesforce
Salesforce provides customer relationship management (CRM) and other enterprise products.
NXLog can be set up to fetch Event Log Files from Salesforce using the REST API. For more information, see the
Salesforce add-on.
Chapter 95. Snare
The Snare Agent is popular log collection software for the Windows EventLog. The Snare format is supported by
many tools and SIEM vendors. It uses tab-delimited records and can use Syslog as the transport. NXLog can be
configured to collect or forward logs in the Snare format.
The Snare format can be used with or without the Syslog header.
Snare Format
HOSTNAME ⇥ MSWinEventLog ⇥ Criticality ⇥ EventLogSource ⇥ SnareCounter ⇥ SubmitTime ⇥ EventID ⇥
SourceName ⇥ UserName ⇥ SIDType ⇥ EventLogType ⇥ ComputerName ⇥ CategoryString ⇥ DataString ⇥
ExpandedString ⇥ OptionalMD5Checksum↵
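A Snare record can be split on its tab delimiters into the fields listed above. The following Python sketch is illustrative only; the field names follow the format description, and the sample record is hypothetical:

```python
# Field names as listed in the Snare format description above
FIELDS = ["Hostname", "MSWinEventLog", "Criticality", "EventLogSource",
          "SnareCounter", "SubmitTime", "EventID", "SourceName", "UserName",
          "SIDType", "EventLogType", "ComputerName", "CategoryString",
          "DataString", "ExpandedString", "OptionalMD5Checksum"]

def parse_snare(record):
    """Split one tab-delimited Snare record into a field dictionary.

    zip() silently drops trailing optional fields such as the checksum.
    """
    return dict(zip(FIELDS, record.rstrip("\n").split("\t")))

# Hypothetical record matching the input sample further below
record = parse_snare(
    "MAIN\tMSWinEventLog\t0\tSecurity\t32\tMon Nov 21 11:40:27 2016\t592\t"
    "Security\tAndy\tUser\tSuccess Audit\tMAIN\tDetailedTracking\t"
    "Process ended\tEnded process ID: 2455"
)
```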
With the following configuration, NXLog accepts Snare format logs via UDP, parses them, converts them to
JSON, and writes the result to a file. This configuration supports both "Snare over Syslog" and the regular
Snare format.
nxlog.conf (truncated)
1 <Extension snare>
2 Module xm_csv
3 Fields $MSWINEventLog, $Criticality, $EventLogSource, $SnareCounter, \
4 $SubmitTime, $EventID, $SourceName, $UserName, $SIDType, \
5 $EventLogType, $ComputerName, $Category, $Data, $Expanded, \
6 $MD5Checksum
7 FieldTypes string, integer, string, integer, datetime, integer, string, \
8 string, string, string, string, string, string, string, string
9 Delimiter \t
10 </Extension>
11
12 <Extension json>
13 Module xm_json
14 </Extension>
15
16 <Extension syslog>
17 Module xm_syslog
18 </Extension>
19
20 <Input in>
21 Module im_udp
22 Host 0.0.0.0
23 Port 6161
24 <Exec>
25 parse_syslog_bsd();
26 if $Message =~ /^((\w+)\t)?(MSWinEventLog.+)$/
27 {
28 if $2 != ''
29 [...]
Input Sample ("Snare Over Syslog")
<13>Nov 21 11:40:27 myserver MSWinEventLog ⇥ 0 ⇥ Security ⇥ 32 ⇥ Mon Nov 21 11:40:27 2016 ⇥
592 ⇥ Security ⇥ Andy ⇥ User ⇥ Success Audit ⇥ MAIN ⇥ DetailedTracking ⇥ Process ended ⇥
Ended process ID: 2455↵
Output Sample
{
"EventReceivedTime": "2016-11-21 11:40:28",
"SourceModuleName": "in",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 1,
"SyslogFacility": "USER",
"SyslogSeverityValue": 5,
"SyslogSeverity": "NOTICE",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "myserver",
"EventTime": "2016-11-21 11:40:27",
"Message": "Ended process ID: 2455",
"MSWINEventLog": "MSWinEventLog",
"Criticality": 0,
"EventLogSource": "Security",
"SnareCounter": 32,
"SubmitTime": "2016-11-21 11:40:27",
"EventID": 592,
"SourceName": "Security",
"UserName": "Andy",
"SIDType": "User",
"EventLogType": "SuccessAudit",
"ComputerName": "MAIN",
"CategoryString": "DetailedTracking",
"DataString": "Process ended",
"ExpandedString": "Ended process ID: 2455"
}
Example 403. Sending EventLog in Snare Format
With this configuration, NXLog will read the Windows EventLog, convert it to Snare format, and output it via
UDP. NXLog log messages are also included (via the im_internal module). Tabs and newline sequences are
replaced with spaces.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input internal>
6 Module im_internal
7 </Input>
8
9 <Input eventlog>
10 Module im_msvistalog
11 Exec $Message =~ s/(\t|\R)/ /g;
12 </Input>
13
14 <Output out>
15 Module om_udp
16 Host 192.168.1.1
17 Port 514
18 Exec to_syslog_snare();
19 </Output>
20
21 <Route r>
22 Path internal, eventlog => out
23 </Route>
Output Sample
<13>Nov 21 11:40:27 myserver MSWinEventLog ⇥ 0 ⇥ Security ⇥ 32 ⇥ Mon Nov 21 11:40:27 2016 ⇥
592 ⇥ Security ⇥ N/A ⇥ N/A ⇥ Success Audit ⇥ MAIN ⇥ DetailedTracking ⇥ Process ended ⇥ Ended
process ID: 2455↵
Chapter 96. Snort
NXLog can be used to capture and process logs from the Snort network intrusion prevention system.
Snort writes log entries to the /var/log/snort/alert file. Each entry contains the date and time of the event,
the packet header, a description of the type of breach that was detected, and a severity rating. Each log entry
spans multiple lines, and there is neither a fixed number of lines nor a separator.
Following are three example Snort rules and corresponding log messages.
Snort Rule
alert icmp any any -> any any (msg:"ICMP Packet"; sid:477; rev:3;)
Log Sample
[**] [1:477:3] ICMP Packet [**]↵
[Priority: 0]↵
04/30-07:54:41.759229 172.25.212.245 -> 172.25.212.153↵
ICMP TTL:64 TOS:0x0 ID:0 IpLen:20 DgmLen:96 DF↵
Type:8 Code:0 ID:16348 Seq:0 ECHO↵
Snort Rule
alert tcp any any -> any any (msg:"Exploit detected"; sid:1000001; content:"exploit";)
Log Sample
[**] [1:1000001:0] Exploit detected [**]↵
[Priority: 0]↵
04/30-07:54:38.312536 172.25.212.204:80 -> 192.168.255.110:46127↵
TCP TTL:64 TOS:0x0 ID:19844 IpLen:20 DgmLen:505 DF↵
***AP*** Seq: 0xF936BE12 Ack: 0x2C9A47D8 Win: 0x7B TcpLen: 20↵
Snort Rule
alert tcp any any -> any any (msg:"Advanced exploit detected"; \
sid:1000002; content:"backdoor"; reference:myserver,myrules; \
gid:1000001; rev:1; classtype:shellcode-detect; priority:100; \
metadata:meta data;)
Log Sample
[**] [1000001:1000002:1] Advanced exploit detected [**]↵
[Classification: Executable Code was Detected] [Priority: 100]↵
04/30-07:54:35.707783 192.168.255.110:46117 -> 172.25.212.204:80↵
TCP TTL:127 TOS:0x0 ID:14547 IpLen:20 DgmLen:435 DF↵
***AP*** Seq: 0x49649AA5 Ack: 0x5BC496C0 Win: 0x40 TcpLen: 20↵
[Xref => myserver myrules]↵
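The header-based grouping that xm_multiline performs on entries like these can be sketched in Python; the pattern mirrors the HeaderLine regular expression used in the configuration that follows, and the grouping logic is an illustration rather than NXLog's implementation:

```python
import re

# Equivalent of the xm_multiline HeaderLine directive below
HEADER = re.compile(r"^\[\*\*\] \[\S+\] (.*) \[\*\*\]")

def group_entries(lines):
    """Group raw Snort alert-file lines into multi-line entries.

    A new entry starts at each header line; blank separator lines are dropped.
    """
    entries, current = [], []
    for line in lines:
        if HEADER.match(line) and current:
            entries.append(current)
            current = []
        if line.strip():
            current.append(line)
    if current:
        entries.append(current)
    return entries

# Abbreviated versions of the log samples above
sample = [
    "[**] [1:477:3] ICMP Packet [**]",
    "[Priority: 0]",
    "",
    "[**] [1:1000001:0] Exploit detected [**]",
    "[Priority: 0]",
]
entries = group_entries(sample)
```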
Example 405. Parsing Snort Logs
This configuration uses an xm_multiline extension module instance with a HeaderLine regular expression
to parse the log entries. An Exec directive is also used to drop all empty lines.
In the Input module instance, another regular expression captures the parts of the message and adds
corresponding fields to the event record. Additional information, such as Xref data, could also be extracted
by adding (.*)\s+(.*)\s+\[Xref => (.*)\] to the expression and $Xref = $13; below it.
Finally, the log entries are formatted as JSON with the to_json() procedure.
nxlog.conf (truncated)
1 <Extension snort>
2 Module xm_multiline
3 HeaderLine /^\[\*\*\] \[\S+] (.*) \[\*\*\]/
4 Exec if $raw_event =~ /^\s+$/ drop();
5 </Extension>
6
7 <Extension _json>
8 Module xm_json
9 </Extension>
10
11 <Input in>
12 Module im_file
13 File "/var/log/snort/alert"
14 InputType snort
15 <Exec>
16 if $raw_event =~ /(?x)^\[\*\*\]\ \[\S+\]\ (.*)\ \[\*\*\]\s+
17 (?:\[Classification:\ ([^\]]+)\]\ )?
18 \[Priority:\ (\d+)\]\s+
19 (\d\d).(\d\d)\-(\d\d:\d\d:\d\d\.\d+)
20 \ (\d+.\d+.\d+.\d+):?(\d+)?\ ->
21 \ (\d+.\d+.\d+.\d+):?(\d+)?\s+\ /
22 {
23 $EventName = $1;
24 $Classification = $2;
25 $Priority = $3;
26 $EventTime = parsedate(year(now()) + "-" + $4 + "-" + $5 + " " + $6);
27 $SourceIPAddress = $7;
28 $SourcePort = $8;
29 [...]
Output Sample
{
"EventReceivedTime": "2014-05-05 09:08:58",
"SourceModuleName": "in",
"SourceModuleType": "im_file",
"EventName": "Advanced exploit detected",
"Classification": "Executable Code was Detected",
"Priority": "100",
"EventTime": "2014-04-30 07:54:35",
"SourceIPAddress": "192.168.255.110",
"SourcePort": "46117",
"DestinationIPAddress": "172.25.212.204",
"DestinationPort": "80"
}
Chapter 97. Splunk
Splunk is a software platform for data collection, indexing, searching, and visualization. NXLog can be configured
as an agent for Splunk, collecting and forwarding logs to the Splunk instance. Splunk can accept logs forwarded
via UDP, TCP, TLS, or HTTP.
For more information, see the Splunk Enterprise documentation. See also the Sending ETW Logs to Splunk with
NXLog post.
When planning a migration to NXLog, the various types of log sources being collected by Splunk universal
forwarders should be evaluated. Depending on the type of log source, migration could be as simple as creating
a new TCP data input port and following some of the examples in this chapter, such as forwarding BSD Syslog
events. As long as the log source provides data in a standard format that Splunk can easily index, and Splunk
retains the original field names, no special configuration needs to be written.
In the case of Windows Event Log providers, special NXLog configurations are required to emulate the event
fields and format sent by the Splunk universal forwarder since Splunk renames at least four Windows fields and
adds some new fields to the event schema. See the comparison table below.
* NXLog normalizes this field name across all modules and log sources.
NOTE: It should be emphasized that NXLog is capable of forwarding Windows events or any other kind of
structured logs to Splunk for indexing without any need to emulate the format or event schema used by the
Splunk universal forwarder. There is no technical requirement or advantage in using Splunk's proprietary
format for forwarding logs to Splunk, especially for new Splunk deployments which have no existing corpus of
Windows events.
The only purpose of emulating the Splunk universal forwarder format is to maintain continuity with
previously indexed Windows events that were forwarded with the Splunk universal forwarder. Forwarding
Windows Event Log data in JSON format over TCP to Splunk is the preferred method.
97.1.1. Forwarding Windows Events Using JSON
This section assumes that any preexisting Windows Event Log data currently indexed in Splunk will be managed
separately—due to some of its fields names being altered from the original Windows field names—until it ages
out of the system. However, if there is a need to maintain Splunk-specific field names of Windows events, see the
next section that provides a solution for using NXLog to forward Windows events as if they were sent by the
Splunk universal forwarder.
After defining a network data input port (see Adding a TCP or UDP Data Input in the next section for details), the
only NXLog configuration needed for forwarding events to Splunk is a simple, generic TCP (or UDP) output
module instance that converts the logs to JSON as they are being sent.
Example 406. Forwarding Windows DNS Server Events in JSON Format to Splunk
This example uses Windows ETW to collect Windows DNS Server events. The output instance defines the IP
address and port of the host where Splunk Enterprise is receiving data on TCP port 1527 which was defined
in Splunk to have a Source Type of _json.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Input dns_server>
6 Module im_etw
7 Provider Microsoft-Windows-DNSServer
8 </Input>
9
10 <Output splunk>
11 Module om_tcp
12 Host 192.168.1.21
13 Port 1527
14 Exec to_json();
15 </Output>
Output Sample (whitespace added)
{
"SourceName": "Microsoft-Windows-DNSServer",
"ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
"EventId": 515,
"Version": 0,
"ChannelID": 17,
"OpcodeValue": 0,
"TaskValue": 5,
"Keywords": "4611686018428436480",
"EventTime": "2020-05-19T10:42:06.313322-05:00",
"ExecutionProcessID": 1536,
"ExecutionThreadID": 3896,
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Domain": "WIN-R4QHULN6KLH",
"AccountName": "Administrator",
"UserID": "S-1-5-21-915329490-2962477901-227355065-500",
"AccountType": "User",
"Flags": "EXTENDED_INFO|IS_64_BIT_HEADER|PROCESSOR_INDEX (577)",
"Type": "5",
"NAME": "www.example.com",
"TTL": "3600",
"BufferSize": "17",
"RDATA": "0x106E73312E6578616D706C652E636F6D2E",
"Zone": "example.com",
"ZoneScope": "Default",
"VirtualizationID": ".",
"EventReceivedTime": "2020-05-19T10:42:07.313482-05:00",
"SourceModuleName": "dns_server",
"SourceModuleType": "im_etw",
"MessageSourceAddress": "192.168.1.61"
}
Since Splunk readily accepts formats like JSON and XML that support highly structured data, querying JSON-
formatted logs is easily accomplished with Splunk’s spath command.
procedures is imperative for Splunk to correctly ingest the logs being forwarded using this emulation technique.
When creating configurations with NXLog for maintaining backwards compatibility with events previously
collected by the universal forwarder, only a few general principles need to be observed:
• When creating a new TCP data input in Splunk, choose the right Source Type.
• In the NXLog configuration, rename event fields to the field names Splunk associates with that Source Type.
• In the NXLog configuration, make sure the data matches the format shown in Splunk as closely as possible,
unless Splunk is failing to parse specific fields.
• In the NXLog configuration, manually parse embedded structured data as new, full-fledged fields. A common cause of failed parsing with this technique is fields containing long strings of embedded subfields.
The following steps should be followed for each type of log source being forwarded:
1. Examine the events in Splunk and note which value is assigned to sourcetype= listed below each event. The
universal forwarder may list different values for sourcetype even when they are coming from the same
source. Try to determine which one is the best fit.
2. In Splunk, create a new TCP Data Input port for each log source type to be forwarded and set the Source
Type to the same one assigned to events that have been sent by the universal forwarder after they have
been ingested by Splunk.
3. Note which fields are being parsed and indexed after they have been received and processed by Splunk.
4. Create an NXLog configuration that will capture the log source data, rename the field names to those
associated with the Source Type, and format them to match the format that the Splunk universal forwarder
uses.
The actual format used by the Splunk universal forwarder is "cooked" data which has a binary header
component and a footer. A single line containing the date and time of the event marks the beginning of the event
data on the next line, which is generally formatted as key-value pairs, unquoted, separated by an equals sign (=),
with only one key-value pair per line. The header and footer parts are not needed for forwarding events to a TCP
Data Input port. Only the first line containing the event’s date/time and the subsequent lines containing the key-
value pairs are needed.
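As a rough illustration, the plain-text portion of this layout can be sketched in Python. The field names and the exact date format below are hypothetical; the real values depend on the Source Type being emulated.

```python
from datetime import datetime

def forwarder_layout(event_time: datetime, fields: dict) -> str:
    """Sketch of the universal forwarder's plain-text layout: a single
    date/time header line, then one unquoted key=value pair per line."""
    header = event_time.strftime("%m/%d/%Y %I:%M:%S %p")  # hypothetical format
    pairs = [f"{name}={value}" for name, value in fields.items()]
    return "\n".join([header] + pairs)

# Hypothetical event fields, for illustration only
print(forwarder_layout(
    datetime(2020, 5, 19, 10, 42, 6),
    {"EventCode": 515, "ComputerName": "WIN-R4QHULN6KLH"},
))
# 05/19/2020 10:42:06 AM
# EventCode=515
# ComputerName=WIN-R4QHULN6KLH
```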
Windows Event Log data can be forwarded to Splunk using NXLog in such a way that Splunk parses and indexes
them as if they were sent by the Splunk universal forwarder. Only three criteria need to be met:
1. The Splunk Add-on for Microsoft Windows has been installed where the forwarded events will be received.
See About installing Splunk add-ons on Splunk Docs for more details.
2. The NXLog configuration rewrites events to match the field names expected by the corresponding log source
in the Splunk Add-on for Microsoft Windows and formats the event to match the format of the Splunk
universal forwarder.
3. A unique TCP Data Input port is created for each type of Windows Event Provider by following the procedure
in Adding a TCP or UDP Data Input. When specifying the Source type it is imperative to choose the correct
name from the dropdown list that follows this naming convention: WinEventLog:Provider[/Channel].
NOTE When adding a new TCP Data Input, the desired Source type for Windows might not be present in the Select Source Type dropdown menu. If so, select or manually enter WinEventLog and create the TCP Data Input. Once created, go back to the list of TCP Data Inputs and edit it by clicking the TCP port number. Make sure Set source type is set to Manual, then enter the correct name in the Source type field.
NOTE The following examples have been tested with Splunk 8.0.0 and the "Splunk Add-on for Microsoft Windows" version 8.0.0.
Example 407. Forwarding Windows DNS Server Audit Events Using the Universal Forwarder Format
This example illustrates the method for emulating the Splunk Universal Forwarder for sending Windows
DNS Server Audit events to Splunk. First, a new TCP Input on port 1515 with a Source type of
WinEventLog:Microsoft-Windows-DNSServer/Audit is created for receiving the forwarded events.
This configuration uses the im_msvistalog module to collect and parse the log data. Since there is no need for filtering in this example, a simple File directive defines the location of the log source to be read; otherwise, a QueryXML block would have been used to define the filters and the Provider/Channel as the log source. The Exec block contains the necessary logic for converting the parsed data to the format used by the Splunk universal forwarder. Since each event is formatted and output as a multi-line record stored as a single string in the $raw_event field, the xm_rewrite module is used to delete the original fields. Once converted, events are then forwarded over TCP port 1515 to Splunk.
nxlog.conf (truncated)
1 <Extension Drop_Fields>
2 Module xm_rewrite
3 Keep # Remove all
4 </Extension>
5
6 <Input DNS_Server_Audit>
7 Module im_msvistalog
8 File %SystemRoot%\System32\Winevt\Logs\Microsoft-Windows-DNSServer%4Audit.evtx
9 <Exec>
10 # Create a header variable for storing the Splunk datetime string
11 create_var('timestamp_header');
12 create_var('event'); # The Splunk equivalent of a $raw_event
13 create_var('message'); # For preserving the $Message field
14 create_var('vip_fields'); # Message subfields converted to fields
15
16 # Get the Splunk datetime string needed for the Header Line
17 $dts = strftime($EventTime,'YYYY-MM-DD hh:mm:ss.sTZ');
18 $hr = ""; # Hours, 2-digit
19 $ap = ""; # For either "AM" or "PM";
20 if ($dts =~ /(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/ ) {
21 if (hour($EventTime) < 12) {
22 $ap = "AM";
23 $hr = $4;
24 if (hour($EventTime) == 0) $hr = "12";
25 }
26 if (hour($EventTime) > 11) {
27 $ap = "PM";
28 if (hour($EventTime) == 12) $hr = $4;
29 [...]
Events should be automatically parsed by Splunk upon being received.
Example 408. Forwarding Sysmon DNS Query Events Using the Universal Forwarder Format
This example illustrates the method for emulating the Splunk Universal Forwarder for sending Windows Sysmon DNS Query events to Splunk. First, a new TCP Input on port 1517 with a Source type of WinEventLog:Microsoft-Windows-Sysmon/Operational is created for receiving the forwarded events.
The configuration uses the im_msvistalog module to collect and parse the log data. The QueryXML block not only specifies the Provider/Channel, but also provides additional filtering for collecting only DNS Query events. The Exec block contains the necessary logic for converting the data to the format used by the Splunk universal forwarder. Since each event is formatted and output as a multi-line record stored as a single string in the $raw_event field, the xm_rewrite module is used to delete the original fields. Once converted, events are then forwarded over TCP port 1517 to Splunk.
nxlog.conf (truncated)
1 <Extension Drop_Fields>
2 Module xm_rewrite
3 Keep # Remove all
4 </Extension>
5
6 <Input DNS_Sysmon>
7 Module im_msvistalog
8 <QueryXML>
9 <QueryList>
10 <Query Id="0">
11 <Select Path="Microsoft-Windows-Sysmon/Operational">
12 *[System[(EventID=22)]]
13 </Select>
14 </Query>
15 </QueryList>
16 </QueryXML>
17 <Exec>
18 # Create a header variable for storing the Splunk datetime string
19 create_var('timestamp_header');
20 create_var('event'); # The Splunk equivalent of a $raw_event
21 create_var('message'); # For preserving the $Message field
22 create_var('message_fields'); # Message subfields converted to fields
23
24 # Get the Splunk datetime string needed for the Header Line
25 $dts = strftime($EventTime,'YYYY-MM-DD hh:mm:ss.sTZ');
26 $hr = ""; # Hours, 2-digit
27 $ap = ""; # For either "AM" or "PM";
28 if ($dts =~ /(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/ ) {
29 [...]
97.1.3. File and Directory-Based Forwarding
The only means available to the Splunk Universal Forwarder for selecting log sources to monitor is manually defining paths to files or directories on the local host. The same technique is available with NXLog. Since NXLog
is also designed to forward to other NXLog agents, this feature can be leveraged to reduce the number of open
network connections to a Splunk Enterprise server when events are forwarded from a single NXLog central
logging server.
Example 409. Forwarding File-Based Centralized Logs to Splunk
In the following example, a central NXLog server receives events for all log sources within the enterprise
and forwards each log source type via a TCP data input connection that has been preconfigured on the
Splunk Enterprise server for that Source type.
nxlog.conf (truncated)
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 # Receive Events from ALL Enterprise Servers
6 <Input syslog_in>
7 Module im_tcp
8 Host 0.0.0.0
9 Port 1514
10 </Input>
11
12 <Input dns_audit_in>
13 Module im_tcp
14 Host 0.0.0.0
15 Port 1515
16 </Input>
17
18 # Cache the Events to Disk in case of Splunk unavailability
19 <Output syslog_cache>
20 Module om_file
21 File '/opt/nxlog/var/log/cached/syslog.bin'
22 OutputType Binary
23 </Output>
24
25 <Output dns_audit_cache>
26 Module om_file
27 File '/opt/nxlog/var/log/cached/dns-audit.bin'
28 OutputType Binary
29 [...]
2. Configure the input settings.
a. Select the Source type appropriate for the logs to be sent. For more information, see the Sending
Generic Structured Logs and Sending Specific Log Types for Splunk to Parse sections below.
b. Choose an App context; for example, Search & Reporting (search).
c. Adjust the remaining default values, if required, and click Review.
1. To generate certificates, issue the following commands from the server’s console. The script will ask for a password to protect the key.
$ mkdir /opt/splunk/etc/certs
$ export OPENSSL_CONF=/opt/splunk/openssl/openssl.cnf
$ /opt/splunk/bin/genRootCA.sh -d /opt/splunk/etc/certs
$ /opt/splunk/bin/genSignedServerCert.sh -d /opt/splunk/etc/certs -n splunk -c splunk -p
2. Go to the app’s folder and edit the inputs file. For the Search & Reporting app, the path is
$SPLUNK_HOME/etc/apps/search/local/inputs.conf. Add [tcp-ssl] and [SSL] sections.
inputs.conf
[tcp-ssl://10514]
disabled = false
sourcetype = <optional>
[SSL]
serverCert = /opt/splunk/etc/certs/splunk.pem
sslPassword = <The password provided in step 1>
requireClientCert = false
server.conf
[sslConfig]
sslPassword = <Automatically generated>
sslRootCAPath = /opt/splunk/etc/certs/cacert.pem
5. The setup can be tested with netstat or a similar command, which should show Splunk listening on the configured TCP port (10514 in this example).
6. Copy the cacert.pem file from $SPLUNK_HOME/etc/certs to the NXLog certificate directory.
This configuration illustrates how to send a log file via a TLS-encrypted connection. The AllowUntrusted
setting is required in order to accept a self-signed certificate.
nxlog.conf
1 <Output out>
2 Module om_ssl
3 Host 127.0.0.1
4 Port 10514
5 CertFile %CERTDIR%/cacert.pem
6 AllowUntrusted TRUE
7 </Output>
For more information, see Set up and use HTTP Event Collector in Splunk Web, Format events for HTTP Event Collector, and the Input endpoint descriptions on Splunk Docs.
1. Open Settings > Data inputs and click on the HTTP Event Collector type.
2. Click the Global Settings button (in the upper-right corner).
3. For All Tokens, click the Enabled button.
4. Optionally, set the Default Source Type, Default Index, and Default Output Group settings.
5. Check Enable SSL to require events to be sent encrypted (recommended). See Configuring TLS Collection.
6. Change the HTTP Port Number if required, or leave it set to the default port 8088.
7. Click Save.
1. If not already on the HTTP Event Collector page, open Settings > Data inputs and click on the HTTP Event
Collector type.
2. Click New Token.
3. Enter a name for the token and modify any other settings if required; then click Next.
4. For the Source type, choose Automatic. The source type will be specified using an HTTP header as shown in
the examples in the following sections.
5. Choose an App context; for example, Search & Reporting (search).
6. Adjust the remaining default values, if required, and click Review.
7. Verify the information on the summary page and click Submit. The HEC token is created and its value is
presented.
8. The configuration can be tested with the following command (substitute the correct token):
$ curl -k https://<host>:8088/services/collector \
-H 'Authorization: Splunk <token>' -d '{"event":"test"}'
If configured correctly, Splunk will respond that the test event was delivered.
{"text":"Success","code":0}
The HEC uses a JSON event format, with event data in the event key and additional metadata sent in time, host,
source, sourcetype, index, and fields keys. For details about the format, see Format events for HTTP Event
Collector on Splunk Docs and in particular, the Event metadata section there. Because the source type is
specified in the event metadata, it is not necessary to set the source type on Splunk or to use separate tokens for
different source types.
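A payload following this format can be sketched in Python; the hostname, source, and message below are illustrative placeholders, not values from any specific deployment.

```python
import json
import time

# Minimal HEC payload: the event itself plus optional metadata keys.
payload = {
    "time": round(time.time(), 6),  # epoch seconds; a decimal fraction is allowed
    "host": "myserver2",            # placeholder hostname
    "source": "sshd",
    "sourcetype": "_json",
    "event": {"Message": "Failed password for invalid user"},
}
body = json.dumps(payload)
# The body would be POSTed to https://<host>:8088/services/collector
# with the header "Authorization: Splunk <token>".
print(body)
```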
Example 411. Forwarding Structured Data to HEC
This example shows an output instance that uses the xm_json and om_http modules to send the data to
the HEC. Events are formatted specifically for the HEC standard /services/collector endpoint.
nxlog.conf (truncated)
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension clean_splunk_fields>
6 Module xm_rewrite
7 Keep time, host, source, sourcetype, index, fields, event
8 </Extension>
9
10 <Output out>
11 Module om_http
12 URL https://127.0.0.1:8088/services/collector
13 AddHeader Authorization: Splunk c6580856-29e8-4abf-8bcb-ee07f06c80b3
14 HTTPSCAFile %CERTDIR%/cacert.pem
15 <Exec>
16 # Rename event fields to what Splunk uses
17 if $Severity rename_field($Severity, $vendor_severity);
18 if $SeverityValue rename_field($SeverityValue, $severity_id);
19
20 # Convert all fields to JSON and write to $event field
21 $event = to_json();
22
23 # Convert $EventTime to decimal seconds since epoch UTC
24 $time = string(integer($EventTime));
25 $time =~ /^(?<sec>\d+)(?<ms>\d{6})$/;
26 $time = $sec + "." + $ms;
27
28 # Specify the log source type
29 [...]
Output Sample
{
"event": {
"EventReceivedTime": "2019-10-18 19:58:19",
"SourceModuleName": "in",
"SourceModuleType": "im_file",
"SyslogFacility": "USER",
"vendor_severity": "INFO",
"severity_id": 2,
"EventTime": "2019-10-18 19:58:02",
"Hostname": "myserver2",
"ProcessID": 14533,
"SourceName": "sshd",
"Message": "Failed password for invalid user"
},
"time": "1571428682.218749",
"sourcetype": "_json",
"host": "myserver2",
"source": "sshd"
}
97.3.2. Sending Structured Logs via TCP/TLS
It is also possible to send JSON-formatted events to Splunk via TCP or TLS. To extract fields and index the event
timestamps as sent by the configuration below, add a new source type with the corresponding settings:
Name Value
TIME_PREFIX "time":"
TIME_FORMAT %s.%6N
Then select this new source type for the TCP data input, as described in Adding a TCP or UDP Data Input.
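The timestamp format these settings expect, epoch seconds followed by a dot and six fractional digits (matching TIME_FORMAT %s.%6N), can be sketched in Python:

```python
from datetime import datetime, timezone

def splunk_time(event_time: datetime) -> str:
    """Convert a timezone-aware datetime to decimal seconds since the
    epoch (UTC) with a six-digit fractional part."""
    return f"{int(event_time.timestamp())}.{event_time.microsecond:06d}"

# 2019-10-18 19:58:02.218749 UTC, as in the HEC output sample earlier
print(splunk_time(datetime(2019, 10, 18, 19, 58, 2, 218749,
                           tzinfo=timezone.utc)))  # 1571428682.218749
```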
Example 412. Forwarding Structured Data to Splunk via TCP
This configuration sets the $time field for Splunk, converts the event data to JSON with the xm_json
to_json() procedure, and forwards via TCP with the om_tcp module.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Output out>
6 Module om_tcp
7 Host 127.0.0.1
8 Port 514
9 <Exec>
10 # Convert $EventTime to decimal seconds since epoch UTC
11 $time = string(integer($EventTime));
12 $time =~ /^(?<sec>\d+)(?<ms>\d{6})$/;
13 $time = $sec + "." + $ms;
14 delete($sec);
15 delete($ms);
16
17 # Write to JSON
18 to_json();
19 </Exec>
20 </Output>
NOTE These instructions have been tested with Splunk 7.3.1.1 and the "Splunk Add-on for Microsoft Windows" version 6.0.0.
1. Install the Splunk Add-on for Microsoft Windows. See About installing Splunk add-ons on Splunk Docs for
more details.
2. Configure the log source type as XmlWinEventLog.
3. Optionally, add a configuration value to use the event SystemTime value as Splunk’s event _time during
indexing (otherwise Splunk will fall back to using the received time). This can be added to the specific event
source or to the XmlWinEventLog source type. To modify the XmlWinEventLog source type from the
Splunk web interface, follow these steps:
a. Open Settings > Source types.
b. Find the XmlWinEventLog source type (uncheck Show only popular) and click Edit.
c. Open the Advanced tab and add the following configuration value:
Name Value
4. Use the im_msvistalog CaptureEventXML directive to capture the XML-formatted event data from the Event
Log. Forward this value to Splunk.
Example 413. Forwarding EventLog XML to Splunk via the HEC
This example reads events from the Security channel. With the CaptureEventXML directive set to TRUE, the XML event data is stored in the $EventXML field. The contents of this field are then assigned to the $raw_event field, which is sent to Splunk by the splunk_hec output instance.
nxlog.conf
1 <Input eventxml>
2 Module im_msvistalog
3 Channel Security
4 CaptureEventXML TRUE
5 Exec $raw_event = $EventXML;
6 </Input>
7
8 <Output splunk_hec>
9 Module om_http
10 URL https://127.0.0.1:8088/services/collector/raw
11 AddHeader Authorization: Splunk c6580856-29e8-4abf-8bcb-ee07f06c80b3
12 </Output>
Example 414. Forwarding BSD Syslog to Splunk via TCP
In this example, events in Syslog format are read from file and sent to Splunk via TCP with no additional
processing. Because the source type is set to syslog, Splunk automatically parses the Syslog header
metadata.
nxlog.conf
1 <Input syslog>
2 Module im_file
3 File '/var/log/messages'
4 </Input>
5
6 <Output splunk>
7 Module om_tcp
8 Host 10.10.1.12
9 Port 514
10 </Output>
Chapter 98. Symantec Endpoint Protection
The Symantec Endpoint Protection security suite provides anti-malware, anti-virus, firewall, intrusion detection,
and other features for servers and desktop computers. The product includes two main components: the
Symantec Endpoint Protection client which runs on client systems requiring protection; and the Symantec
Endpoint Protection Manager (SEPM) which communicates with clients, maintains policies, provides an
administrative console, and stores log data. For more information, see What is Symantec Endpoint Protection?
on Symantec Support.
Symantec Endpoint Protection Manager (SEPM) stores log data in an MSSQL Server database or in an
embedded database. For more details, see Managing log data in the Symantec Endpoint Protection Manager
(SEPM) on Symantec Support.
NOTE The following steps and configurations were tested with SEPM 14.2; see Released versions of Symantec Endpoint Protection on Symantec Support.
1. Create a Windows/SQL account with read permissions for the SEPM database.
2. Configure an ODBC 32-bit System Data Source on the server running NXLog. For more information, consult
the relevant ODBC documentation: the Microsoft ODBC Data Source Administrator guide or the unixODBC
Project.
3. Set an appropriate firewall rule on the database server that accepts connections from the server running
NXLog. For more information, see Configure a Windows Firewall for Database Engine Access on Microsoft
Docs.
4. Configure NXLog to collect logs via ODBC with the im_odbc module.
TIP If a custom query is needed, it may be helpful to consult the Database schema reference for Endpoint Protection 14.x on Symantec Support.
Example 415. Collecting SEPM Logs from SQL Database
This example uses the im_odbc module to connect to the Symantec Endpoint Protection Manager server
via ODBC and collect logs from the MSSQL database. The first query below collects alerts and the second
(commented) query collects audit events.
nxlog.conf
1 <Input in>
2 Module im_odbc
3 ConnectionString DSN=SymantecEndpointSecurityDSN; \
4 database=sem5;uid=user;pwd=password;
5
6 # Query for Virus Alerts
7 SQL SELECT DATEADD(s,convert(bigint,TIME_STAMP)/1000,'01-01-1970 00:00:00') \
8 AS EventTime,IDX,ALERT_IDX,COMPUTER_IDX,SOURCE,VIRUSNAME_IDX, \
9 FILEPATH,ALERTDATETIME,USER_NAME FROM V_ALERTS
10
11 # Alternative query for the Audit log
12 #SQL SELECT DATEADD(s,convert(bigint,TIMESTAMP)/1000,'01-01-1970 00:00:00') \
13 # AS EventTime,METHOD,ARGUMENTS,IP_ADDR FROM V_AUDIT_LOG
14 </Input>
2. Configure NXLog to collect logs via ODBC with the im_odbc module. Specify SQL Anywhere as the ODBC
Driver in the ConnectionString directive.
TIP For more technical information about querying the embedded database, check How to query the SEPM embedded database on Symantec Support.
TIP If it becomes necessary to migrate the embedded database to an MSSQL database, consult Moving from the embedded database to Microsoft SQL Server on Symantec Support.
Example 416. Collecting SEPM Logs from Embedded Database
This example uses the im_odbc module to connect to the Symantec Endpoint Protection Manager
embedded database via ODBC with the SQL Anywhere driver. The first query below collects alerts and the
second (commented) query collects audit events.
nxlog.conf
1 <Input in>
2 Module im_odbc
3 ConnectionString Driver=SQL Anywhere 17;ENG=Host; \
4 UID=user;PWD=password;DBN=sem5;LINKS=ShMem;
5
6 # Query for Virus Alerts
7 SQL SELECT DATEADD(ss, TIME_STAMP/1000, '1970-01-01 00:00:00') AS EventTime, \
8 IDX,Alert_IDX,Computer_IDX,Source,Virusname_IDX,FilePath,AlertDateTime, \
9 User_Name,Last_Log_Session_Guid FROM V_ALERTS
10
11 # Alternative query for the Audit log
12 #SQL SELECT DATEADD(ss, TIMESTAMP/1000, '1970-01-01 00:00:00') AS EventTime, \
13 # Method,Arguments,IP_ADDR FROM V_AUDIT_LOG
14
15 Exec $EventTime = strftime($EventTime, 'YYYY-MM-DDThh:mm:ss.sTZ');
16 </Input>
Chapter 99. Synology DiskStation
The Synology DiskStation is a Linux-based network-attached storage (NAS) appliance. It runs syslog-ng and is capable of forwarding logs to a remote Syslog server via UDP or TCP, including an option for SSL. Configuration is performed via the web interface.
NOTE The steps below have been tested with DSM 5.2 and should work with newer versions as well.
1. Configure NXLog to receive log entries over the network and process them as Syslog (see the TCP example
below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from the DiskStation device being configured.
3. Log in to the DiskStation web interface.
4. Go to Log Center › Log Sending.
5. Under the Location tab, specify the Syslog server, port, protocol, and log format. Enable and configure SSL if
required.
6. Click [ Apply ].
Example 417. Receiving DiskStation Logs via TCP
This configuration uses the im_tcp module to collect the DiskStation logs via TCP. A JSON output sample
shows the resulting logs as received and processed by NXLog.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 Exec parse_syslog();
14 </Input>
15
16 <Output out>
17 Module om_file
18 File "/var/log/synology.log"
19 Exec to_json();
20 </Output>
Output Sample
{
"MessageSourceAddress": "192.168.4.20",
"EventReceivedTime": "2017-07-28 18:30:04",
"SourceModuleName": "in_syslog_tcp",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 1,
"SyslogFacility": "USER",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "DiskStation1",
"EventTime": "2017-07-28 18:30:02",
"Message": "Connection PWD\\sql_psqldw1:\tCIFS client [PWD\\sql_psqldw1] from
[192.168.15.138(IP:192.168.15.138)] accessed the shared folder [db_backup]."
}
{
"MessageSourceAddress": "192.168.4.20",
"EventReceivedTime": "2017-07-28 18:29:48",
"SourceModuleName": "in_syslog_tcp",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 1,
"SyslogFacility": "USER",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "DiskStation1",
"EventTime": "2017-07-28 18:29:56",
"Message": "System Test message from Synology Syslog Client from (0.240.175.244)"
}
Chapter 100. Syslog
NXLog can be configured to collect or generate log entries in the various Syslog formats. This section describes
the various Syslog protocols and discusses how to use them with NXLog.
Log Sample
<30>Nov 21 11:40:27 myserver sshd[26459]: Accepted publickey for john from 192.168.1.1 port 41193
ssh2↵
BSD Syslog defines both the log entry format and the transport. The message format is free-form, allowing for
the payload to be JSON or another structured data format.
NOTE While this is the common and recommended format for a BSD Syslog message, there are no set requirements and a device may send a BSD Syslog message containing only a free-form message, without PRI or HEADER parts.
The PRI part, or "priority", is calculated from the facility and severity codes. The facility code indicates the type of
program that generated the message, and the severity code indicates the severity of the message (see the Syslog
Facilities and Syslog Severities tables below). The priority code is calculated by multiplying the facility code by
eight and then adding the severity code.
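This calculation can be expressed directly in Python; the sample above carries PRI 30, which decodes to facility 3 (system daemons) and severity 6 (informational):

```python
def encode_pri(facility: int, severity: int) -> int:
    """PRI is the facility code multiplied by eight, plus the severity code."""
    return facility * 8 + severity

def decode_pri(pri: int) -> tuple:
    """Recover the (facility, severity) pair from a PRI value."""
    return divmod(pri, 8)

print(encode_pri(3, 6))  # 30, as in <30> above
print(decode_pri(30))    # (3, 6)
```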
NOTE The PRI part is not written to file by many Syslog loggers. In that case, each log entry begins with the HEADER.
The HEADER part contains two fields: TIMESTAMP and HOSTNAME. The TIMESTAMP provides the local time when
the message was generated in Mmm dd hh:mm:ss format, with no year or time zone specified; the HOSTNAME is
the name of the host where the message was generated.
The MSG part contains two fields: TAG and CONTENT. The TAG is the name of the program or process that generated the message, and contains only alphanumeric characters. Any other character marks the beginning of the CONTENT field. The CONTENT field often contains the process ID enclosed in brackets ([]), a colon (:), a space, and then the actual message. In the log sample above, the MSG part begins with sshd[26459]: Accepted publickey; in this case, the TAG is sshd and the CONTENT field begins with [26459]. The CONTENT field can contain only ASCII printable characters (32-126).
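A rough sketch of this parsing in Python; RFC 3164 is loose, so real parsers such as NXLog's parse_syslog_bsd() procedure tolerate many undocumented variations that this minimal regular expression does not.

```python
import re

BSD_SYSLOG = re.compile(
    r"^(?:<(?P<pri>\d{1,3})>)?"                                  # optional PRI
    r"(?P<timestamp>[A-Z][a-z]{2} [ \d]\d \d{2}:\d{2}:\d{2}) "   # TIMESTAMP
    r"(?P<hostname>\S+) "                                        # HOSTNAME
    r"(?P<tag>[A-Za-z0-9]+)"                                     # TAG: alphanumeric only
    r"(?P<content>.*)$"                                          # CONTENT: the rest
)

line = ("<30>Nov 21 11:40:27 myserver sshd[26459]: "
        "Accepted publickey for john from 192.168.1.1 port 41193 ssh2")
m = BSD_SYSLOG.match(line)
print(m.group("pri"), m.group("hostname"), m.group("tag"))  # 30 myserver sshd
```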
Facility Code Description
0 kernel messages
1 user-level messages
2 mail system
3 system daemons
4 security/authorization messages
5 messages generated internally by syslogd
6 line printer subsystem
7 network news subsystem
8 UUCP subsystem
9 clock daemon
10 security/authorization messages
11 FTP daemon
12 NTP subsystem
13 log audit
14 log alert
15 scheduling daemon
Severity Code Description
0 Emergency: system is unusable
1 Alert: action must be taken immediately
2 Critical: critical conditions
3 Error: error conditions
4 Warning: warning conditions
5 Notice: normal but significant condition
6 Informational: informational messages
7 Debug: debug-level messages
100.1.2. BSD Syslog Transport
According to RFC 3164, the BSD Syslog protocol uses UDP as its transport layer. Each UDP packet carries a single
log entry. BSD Syslog implementations often also support plain TCP and TLS transports, though these are not
covered by RFC 3164.
• The transport defined by RFC 3164 uses UDP and provides no mechanism to ensure reliable delivery,
integrity, or confidentiality of log messages.
• Many undocumented variations exist among implementations.
• The timestamp indicates neither the year nor the timezone, and does not provide precision greater than the
second.
• The PRI field (and therefore the facility and severity codes) are not retained by many Syslog loggers when
writing to log files.
• The entire length of the log entry is limited to 1024 bytes.
• Only ASCII characters 32-126 are allowed, no Unicode or line breaks.
Log Sample
<165>1 2003-10-11T22:14:15.003Z mymachine.example.com evntslog - ID47 [exampleSDID@32473 iut="3"
eventSource="Application" eventID="1011"] An application event log entry...↵
• HOSTNAME
• APP-NAME: device or application that generated the message
• PROCID: ID of the process that generated the message
• MSGID: message type (for example, "TCPIN" for incoming TCP traffic and "TCPOUT" for outgoing)
NOTE The PRI field is not written to file by many Syslog loggers. In that case, each log entry begins with the VERSION field.
The STRUCTURED-DATA part is optional. If it is omitted, then a hyphen acts as a placeholder. Otherwise, it is
surrounded by brackets. It contains an ID of the block and a list of space-separated "key=value" pairs.
The MSG part is optional and contains a free-form, single-line message. If the message is encoded in UTF-8, then
it may be preceded by a Unicode byte order mark (BOM).
RFC 5425 also documents the octet-framing method that is used for TLS transport and provides support for multi-line messages. Octet-framing can also be used with plain TCP; TLS is not required. The message length, in bytes, is prepended to the message, followed by a space, to form the raw data that is sent over TCP/TLS.
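The framing itself can be sketched in Python; for the IETF sample shown earlier, the prefix is the byte count of the whole message:

```python
def octet_frame(message: str) -> bytes:
    """Prepend the message's length in bytes, followed by a space
    (the octet-counting method described in RFC 5425)."""
    payload = message.encode("utf-8")
    return str(len(payload)).encode("ascii") + b" " + payload

msg = ('<165>1 2003-10-11T22:14:15.003Z mymachine.example.com evntslog - ID47 '
       '[exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] '
       'An application event log entry...')
print(octet_frame(msg)[:12])  # b'172 <165>1 2'
```

Note that the count is of bytes, not characters, which matters for UTF-8 payloads.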
In practice, IETF Syslog is commonly transferred without octet-framing over TCP or TLS. In this case, the newline (\n) character is used as the record separator, similarly to how BSD Syslog is transferred over TCP or TLS.
Make sure NXLog has permission to read log files in /var/log. See Reading Rsyslog Log Files for more
information.
Example 418. Reading Syslog From File
This configuration reads log messages from file and parses them using the parse_syslog() procedure.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File '/var/log/messages'
8 Exec parse_syslog();
9 </Input>
NOTE The parse_syslog() procedure parses the log entry as either BSD or IETF format (the parse_syslog_bsd() and parse_syslog_ietf() procedures can be used alternatively).
◦ modify the configuration to disable reading from /dev/log (for example, remove $ModLoad imuxsock
from /etc/rsyslog.conf and restart Rsyslog).
3. Restart NXLog.
Example 419. Reading From /dev/log
With this configuration, NXLog uses the im_uds module to read messages from /dev/log, and the
parse_syslog() procedure to parse them.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_uds
7 UDS /dev/log
8 FlowControl FALSE
9 Exec parse_syslog();
10 </Input>
1. Configure NXLog with im_udp, im_tcp, or im_ssl. See the examples below.
2. For NXLog to listen for messages on port 514, the local Syslog agent must not be listening on that port. It
may be necessary to either
◦ disable the service entirely (for example, systemctl disable rsyslogd) or
◦ modify the configuration to disable listening on port 514 (for example, remove input(type="imudp"
port="514") from /etc/rsyslog.conf and restart Rsyslog).
3. Restart NXLog.
This configuration accepts either BSD or IETF Syslog from the local system only, via UDP.
WARNING The UDP transport can lose log entries and is therefore not recommended for receiving logs over the network.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_udp
7 Host localhost
8 Port 514
9 Exec parse_syslog();
10 </Input>
Example 421. Receiving Syslog via TCP
This configuration accepts either BSD or IETF Syslog via TCP, without supporting octet-framing.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_tcp
7 Host 0.0.0.0
8 Port 1514
9 Exec parse_syslog();
10 </Input>
This configuration accepts IETF Syslog via TCP, with support for octet-framing.
NOTE Though this is for plain TCP, the Syslog_TLS directive is required because it refers to the octet-framing method described by RFC 5425.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_tcp
7 Host 0.0.0.0
8 Port 1514
9 InputType Syslog_TLS
10 Exec parse_syslog_ietf();
11 </Input>
This configuration accepts IETF Syslog via TLS, with support for octet-framing.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input in>
6 Module im_ssl
7 Host 0.0.0.0
8 Port 6514
9 CAFile %CERTDIR%/ca.pem
10 CertFile %CERTDIR%/client-cert.pem
11 CertKeyFile %CERTDIR%/client-key.pem
12 InputType Syslog_TLS
13 Exec parse_syslog_ietf();
14 </Input>
100.4. Filtering Syslog
Filtering Syslog messages means keeping or discarding messages based on their contents. Filtering can be
carried out using conditional statements and values from event record fields. For more details about fields, see
the Event Records and Fields section.
The configuration below reads user-space messages from the /dev/log socket using the im_uds module.
In the Exec block, messages are parsed using the parse_syslog_bsd() procedure from the xm_syslog
module.
Using a conditional statement, the value of the $SyslogSeverityValue field is checked, and messages with a severity level of 6 (informational) or higher are discarded using the drop() procedure.
The remaining messages are converted to JSON using the to_json() procedure from the xm_json module.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Input from_uds>
6 Module im_uds
7 UDS /dev/log
8 <Exec>
9 parse_syslog_bsd();
10 if NOT ($SyslogSeverityValue < 6)
11 {
12 drop();
13 }
14 to_json();
15 </Exec>
16 </Input>
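The drop condition above can be read as: keep only events whose severity value is below 6. Expressed as a plain predicate (severity numbering per the Syslog standard, where lower numbers mean higher importance):

```python
# Syslog severity values: 0=emerg, 1=alert, 2=crit, 3=err,
# 4=warning, 5=notice, 6=info, 7=debug.
def keep_event(syslog_severity_value):
    """Mirror the Exec filter above: keep an event only if its severity
    value is below 6; informational and debug events are dropped."""
    return syslog_severity_value < 6
```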
Example 425. Filtering by Various Values
The configuration below reads log messages from the /dev/log socket using the im_uds module. In the
Exec block, messages are parsed using the parse_syslog_bsd() procedure from the xm_syslog module.
Using a conditional statement, a message is kept only if it meets at least one of the following conditions: its severity level is below 6, its facility is one of AUTHPRIV, AUTH, MAIL, or CRON, or its source is apt, nxlog, or osqueryd.
If a message meets none of these conditions, it is discarded using the drop() procedure.
Otherwise, it is converted to JSON using the to_json() procedure from the xm_json module.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Input from_uds>
6 Module im_uds
7 UDS /dev/log
8 <Exec>
9 parse_syslog_bsd();
10 if NOT (
11 ($SyslogSeverityValue < 6) OR
12 ($SyslogFacility IN ('AUTHPRIV', 'AUTH', 'MAIL', 'CRON')) OR
13 ($SourceName IN ('apt','nxlog','osqueryd'))
14 )
15 {
16 drop();
17 }
18 to_json();
19 </Exec>
20 </Input>
Syslog messages can be filtered by the $Message field values using regular expressions.
Example 426. Filtering by Message Field
The configuration below reads log messages from the Linux kernel using the im_kernel module. In the Exec
block, messages are parsed using the parse_syslog_bsd() procedure from the xm_syslog module.
Using the conditional statement, messages without the mount options string in the $Message field are
discarded using the drop() procedure.
The remaining messages are converted to JSON using the to_json() procedure from the xm_json module.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Input from_kernel>
6 Module im_kernel
7 <Exec>
8 parse_syslog_bsd();
9 if NOT ($Message =~ /mount options/)
10 {
11 drop();
12 }
13 to_json();
14 </Exec>
15 </Input>
• write it to file,
• send it to the local syslog daemon via the /dev/log Unix domain socket, or
• forward it to another destination over the network (via UDP, TCP, or TLS).
In each case, the to_syslog_bsd() and to_syslog_ietf() procedures are used to generate the $raw_event field from
the corresponding fields in the event record.
Example 427. Writing BSD Syslog to File
This configuration writes logs to the specified file in the BSD Syslog format.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Output out>
6 Module om_file
7 File "/var/log/syslog"
8 Exec to_syslog_bsd();
9 </Output>
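For reference, a BSD Syslog line as produced by to_syslog_bsd() has the general shape <PRI>TIMESTAMP HOSTNAME TAG: MESSAGE. The sketch below illustrates that layout; it is an approximation for clarity, not the actual NXLog implementation:

```python
from datetime import datetime

def to_syslog_bsd_line(facility, severity, event_time, hostname, tag, message):
    """Illustrative only: build a line in the BSD (RFC 3164) Syslog
    layout from individual event fields."""
    pri = facility * 8 + severity
    # RFC 3164 timestamps use "Mmm dd hh:mm:ss" with a space-padded day
    ts = "%s %2d %s" % (event_time.strftime("%b"), event_time.day,
                        event_time.strftime("%H:%M:%S"))
    return "<%d>%s %s %s: %s" % (pri, ts, hostname, tag, message)

line = to_syslog_bsd_line(1, 5, datetime(2016, 10, 13, 14, 23, 11),
                          "myserver", "app", "This is a test message.")
# -> "<13>Oct 13 14:23:11 myserver app: This is a test message."
```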
NXLog can be configured to write BSD Syslog to a file without the PRI part, emulating traditional Syslog
implementations.
This configuration includes a regular expression for removing the PRI part from the $raw_event field after
it is generated by the to_syslog_bsd() procedure.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Output out>
6 Module om_file
7 File "/var/log/syslog"
8 Exec to_syslog_bsd(); $raw_event =~ s/^\<\d+\>//;
9 </Output>
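The substitution in the Exec directive is an ordinary regular-expression replace. The equivalent operation in another language looks like this (illustrative only):

```python
import re

def strip_pri(raw_event):
    """Same substitution as the Exec line above: remove a leading
    "<N>" PRI field from a formatted BSD Syslog line."""
    return re.sub(r"^<\d+>", "", raw_event)

# strip_pri("<13>Oct 11 22:14:15 myserver app: test") drops the "<13>"
```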
This configuration sends BSD Syslog to the Syslog daemon via /dev/log.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Output out>
6 Module om_uds
7 UDS /dev/log
8 Exec to_syslog_bsd();
9 </Output>
100.5.3. Sending Syslog to a Remote Logger via UDP, TCP, or TLS
The om_udp, om_tcp, and om_ssl modules can be used for sending Syslog over the network.
This configuration sends logs in BSD Syslog format to the specified host, via UDP port 514.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Output out>
6 Module om_udp
7 Host 192.168.1.1
8 Port 514
9 Exec to_syslog_bsd();
10 </Output>
This configuration sends logs in BSD format to the specified host, via TCP port 1514.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Output out>
6 Module om_tcp
7 Host 192.168.1.1
8 Port 1514
9 Exec to_syslog_bsd();
10 </Output>
Example 432. Forwarding IETF Syslog via TLS
With this configuration, NXLog sends logs in IETF format to the specified host, via port 6514.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Output out>
6 Module om_ssl
7 Host 192.168.1.1
8 Port 6514
9 CAFile %CERTDIR%/ca.pem
10 CertFile %CERTDIR%/client-cert.pem
11 CertKeyFile %CERTDIR%/client-key.pem
12 OutputType Syslog_TLS
13 Exec to_syslog_ietf();
14 </Output>
Log Sample
<13>1 2016-10-13T14:23:11.000000-06:00 myserver - - - [NXLOG@14506 Purpose="test"] This is a test
message.↵
NXLog can parse IETF Syslog with the parse_syslog() procedure provided by the xm_syslog extension module.
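When parsing, the facility and severity are recovered from the PRI value by integer division and remainder. A small illustration (the values match the sample above, where the <13> prefix yields facility 1/USER and severity 5/NOTICE):

```python
def decode_pri(pri):
    """Recover (facility, severity) from a Syslog PRI value, as
    parse_syslog() does when it fills the $SyslogFacilityValue and
    $SyslogSeverityValue fields."""
    return pri // 8, pri % 8

# The sample above starts with <13>: facility 1 (USER), severity 5 (NOTICE)
facility, severity = decode_pri(13)
```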
Example 433. Parsing IETF Syslog With Structured-Data
With this configuration, NXLog will parse the IETF Syslog input from file, convert it to JSON, and
output the result to file.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input in>
10 Module im_file
11 File '/var/log/messages'
12 Exec parse_syslog();
13 </Input>
14
15 <Output out>
16 Module om_file
17 File '/var/log/json'
18 Exec to_json();
19 </Output>
20
21 <Route r>
22 Path in => out
23 </Route>
Output Sample
{
"EventReceivedTime": "2016-10-13 15:23:12",
"SourceModuleName": "in",
"SourceModuleType": "im_file",
"SyslogFacilityValue": 1,
"SyslogFacility": "USER",
"SyslogSeverityValue": 5,
"SyslogSeverity": "NOTICE",
"SeverityValue": 2,
"Severity": "INFO",
"EventTime": "2016-10-13 15:23:11",
"Hostname": "myserver",
"Purpose": "test",
"Message": "This is a test log message."
}
NXLog can also generate IETF Syslog with a Structured-Data part, using the to_syslog_ietf() procedure provided by
the xm_syslog extension module.
Example 434. Generating IETF Syslog With Structured-Data
With the following configuration, NXLog will parse the input JSON from file, convert it to IETF Syslog format,
and output the result to file.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input in>
10 Module im_file
11 File '/var/log/json'
12 Exec parse_json();
13 </Input>
14
15 <Output out>
16 Module om_file
17 File '/var/log/ietf'
18 Exec to_syslog_ietf();
19 </Output>
20
21 <Route r>
22 Path in => out
23 </Route>
Input Sample
{
"EventTime": "2016-09-13 11:23:11",
"Hostname": "myserver",
"Purpose": "test",
"Message": "This is a test log message."
}
Output Sample
<13>1 2016-09-13T11:23:11.000000-05:00 myserver - - - [NXLOG@14506 EventReceivedTime="2016-09-13 11:23:12" SourceModuleName="in" SourceModuleType="im_file" Purpose="test"] This is a test log message.↵
Example 435. Generating JSON with Syslog Header
With the following configuration, NXLog will read the Windows Event Log, convert it to JSON format, add a
Syslog header, and send the logs via UDP to a Syslog agent. NXLog log messages are also included (via the
im_internal module).
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input internal>
10 Module im_internal
11 </Input>
12
13 <Input eventlog>
14 Module im_msvistalog
15 </Input>
16
17 <Output out>
18 Module om_udp
19 Host 192.168.1.1
20 Port 514
21 Exec $Message = to_json(); to_syslog_bsd();
22 </Output>
23
24 <Route r>
25 Path internal, eventlog => out
26 </Route>
NOTE: If Syslog compatibility is not a concern, JSON can be transported without the Syslog header (omit the to_syslog_bsd() procedure).
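The Exec line above first serializes the event record to JSON, stores the result in $Message, and then wraps it in a BSD Syslog header. The following sketch mimics that two-step pattern; the header fields here are simplified placeholders, not what NXLog actually emits:

```python
import json

def json_in_syslog(record, hostname="myhost", pri=13):
    """Sketch of the two-step Exec pattern above: serialize the event
    record to JSON, then use the JSON text as the message part of a
    BSD Syslog line (timestamp and tag are placeholder values)."""
    message = json.dumps(record, sort_keys=True)
    return "<%d>Jun 13 10:00:00 %s nxlog: %s" % (pri, hostname, message)

# json_in_syslog({"EventID": 4688, "Severity": "INFO"})
```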
Chapter 101. Sysmon
NXLog can be configured to capture and process audit logs generated by the Sysinternals Sysmon utility. Sysmon
is a Windows system service and device driver that logs system activity to the Windows EventLog. Supported
events include (but are not limited to) process creation, network connections, and changes to file creation time.
On Windows Vista and higher, Sysmon’s events are stored in the Microsoft-Windows-Sysmon/Operational event log.
On older systems, events are written to the System event log.
3. A complex configuration with filtering can be deployed by creating a custom XML configuration file for
Sysmon.
See SwiftOnSecurity Sysmon configuration, or IONStorm Sysmon configuration on GitHub. Both provide
good information for understanding what is possible with Sysmon and include many examples.
To uninstall Sysmon, run:
> sysmon -u
Example Sysmon EventLog Entry
<EventData>
<Data Name="UtcTime">2015.04.27. 13:23</Data>
<Data Name="ProcessGuid">{00000000-3862-553E-0000-001051D40527}</Data>
<Data Name="ProcessId">25848</Data>
<Data Name="Image">c:\Program Files (x86)\nxlog\nxlog.exe</Data>
<Data Name="CommandLine">"c:\Program Files (x86)\nxlog\nxlog.exe" -f</Data>
<Data Name="User">WIN-OUNNPISDHIG\Administrator</Data>
<Data Name="LogonGuid">{00000000-568E-5453-0000-0020D5ED0400}</Data>
<Data Name="LogonId">0x4edd5</Data>
<Data Name="TerminalSessionId">2</Data>
<Data Name="IntegrityLevel">High</Data>
<Data Name="HashType">SHA1</Data>
<Data Name="Hash">1DCE4B0F24C40473Ce7B2C57EB4F7E9E3E14BF94</Data>
<Data Name="ParentProcessGuid">{00000000-3862-553E-0000-001088D30527}</Data>
<Data Name="ParentProcessId">26544</Data>
<Data Name="ParentImage">C:\msys\1.0\bin\sh.exe</Data>
<Data Name="ParentCommandLine">C:\msys\1.0\bin\sh.exe</Data>
</EventData>
Sysmon audit log data can be collected with im_msvistalog (or other modules, see Windows Event Log). The Data
tags will be automatically parsed, and the values will be available as fields in the event records. The log data can
then be forwarded to a log analytics system to allow identification of malicious or anomalous activity.
Here, the im_msvistalog module will collect all Sysmon events from the EventLog. A sample event is shown
below.
nxlog.conf
1 <Input in>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
7 </Query>
8 </QueryList>
9 </QueryXML>
10 </Input>
Output Sample
{
"EventTime": "2015-04-27 15:23:46",
"Hostname": "WIN-OUNNPISDHIG",
"Keywords": -9223372036854776000,
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventID": 1,
"SourceName": "Microsoft-Windows-Sysmon",
"ProviderGuid": "{5770385F-C22A-43E0-BF4C-06F5698FFBD9}",
"Version": 3,
"Task": 1,
"OpcodeValue": 0,
"RecordNumber": 2335906,
"ProcessID": 1680,
"ThreadID": 1728,
"Channel": "Microsoft-Windows-Sysmon/Operational",
"Domain": "NT AUTHORITY",
"AccountName": "SYSTEM",
"UserID": "SYSTEM",
"AccountType": "Well Known Group",
"Message": "Process Create:\r\nUtcTime: 2015.04.27. 13:23\r\nProcessGuid: {00000000-3862-
553E-0000-001051D40527}\r\nProcessId: 25848\r\nImage: c:\\Program Files (x86)\\nxlog\\
nxlog.exe\r\nCommandLine: \"c:\\Program Files (x86)\\nxlog\\nxlog.exe\" -f\r\nUser: WIN-
OUNNPISDHIG\\Administrator\r\nLogonGuid: {00000000-568E-5453-0000-0020D5ED0400}\r\nLogonId:
0x4edd5\r\nTerminalSessionId: 2\r\nIntegrityLevel: High\r\nHashType: SHA1\r\nHash:
1DCE4B0F24C40473CE7B2C57EB4F7E9E3E14BF94\r\nParentProcessGuid: {00000000-3862-553E-0000-
001088D30527}\r\nParentProcessId: 26544\r\nParentImage: C:\\msys\\1.0\\bin\\sh.exe
\r\nParentCommandLine: C:\\msys\\1.0\\bin\\sh.exe",
"Opcode": "Info",
"UtcTime": "2015.04.27. 13:23",
"ProcessGuid": "{00000000-3862-553E-0000-001051D40527}",
"Image": "c:\\Program Files (x86)\\nxlog\\nxlog.exe",
"CommandLine": "\"c:\\Program Files (x86)\\nxlog\\nxlog.exe\" -f",
"User": "WIN-OUNNPISDHIG\\Administrator",
"LogonGuid": "{00000000-568E-5453-0000-0020D5ED0400}",
"LogonId": "0x4edd5",
"TerminalSessionId": "2",
"IntegrityLevel": "High",
"HashType": "SHA1",
"Hash": "1DCE4B0F24C40473CE7B2C57EB4F7E9E3E14BF94",
"ParentProcessGuid": "{00000000-3862-553E-0000-001088D30527}",
"ParentProcessId": "26544",
"ParentImage": "C:\\msys\\1.0\\bin\\sh.exe",
"ParentCommandLine": "C:\\msys\\1.0\\bin\\sh.exe",
"EventReceivedTime": "2015-04-27 15:23:47",
"SourceModuleName": "in",
"SourceModuleType": "im_msvistalog"
}
Sysmon configuration
Sysmon supports filtering tags that can be used to avoid logging unwanted events. See Setting up Sysmon above and the Sysmon page for details about the available tags. This method is the most efficient because it avoids creating the unwanted log entries in the first place.
The following example shows a query that collects only events that have an event ID of 1 (process
creation).
nxlog.conf
1 <Input in>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Microsoft-Windows-Sysmon/Operational">
7 *[System[(EventID='1')]]
8 </Select>
9 </Query>
10 </QueryList>
11 </QueryXML>
12 </Input>
NXLog language
Finally, the built-in filtering capabilities of NXLog can be used, which may be easier to write than the XML
query syntax provided by the EventLog API.
This example discards all network connection events (event ID 3) regarding HTTP network connections
to a particular server and port, and all process creation and termination events (event IDs 1 and 5) for
conhost.exe.
nxlog.conf
1 <Input in>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
7 </Query>
8 </QueryList>
9 </QueryXML>
10 <Exec>
11 if ($EventID in (1, 5) and
12 $Image == "C:\\Windows\\System32\\conhost.exe") or
13 ($EventID == 3 and
14 $DestinationPort == 80 and
15 $DestinationIp == 10.0.0.1)
16 drop();
17 </Exec>
18 </Input>
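The same drop condition can be written as an ordinary predicate over an event record. The sketch below mirrors the Exec block's logic; the event dict and field names are taken from the example and are purely illustrative:

```python
CONHOST = r"C:\Windows\System32\conhost.exe"

def should_drop(event):
    """Mirror the Exec condition above: drop conhost.exe process
    create/terminate events (IDs 1 and 5) and HTTP network connection
    events (ID 3) to the server 10.0.0.1 on port 80."""
    conhost = (event.get("EventID") in (1, 5)
               and event.get("Image") == CONHOST)
    http = (event.get("EventID") == 3
            and event.get("DestinationPort") == 80
            and event.get("DestinationIp") == "10.0.0.1")
    return conhost or http
```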
Chapter 102. Ubiquiti UniFi
Ubiquiti UniFi is an enterprise solution for managing wireless networks. The UniFi infrastructure is managed by
the UniFi Controller, which can be configured to send logs to a remote Syslog server via UDP. As a central
management point, it will make sure that logs from all access points, including client authentication messages,
are logged to the Syslog server.
More information about configuring the UniFi Controller can be found in the corresponding user guide.
NOTE: The steps below have been tested with UniFi Controller v4 and should also work for other versions.
1. Configure NXLog for receiving Syslog log entries via UDP (see the examples below). Then restart NXLog.
2. Make sure the NXLog agent is accessible from the server with the Controller software.
3. Log in to the Controller’s web interface.
4. Go to Settings › Site.
5. Select Enable remote syslog server and specify the IP address and UDP port that the NXLog agent is
listening on. If necessary, also select Enable debug level syslog. Then click [ Apply ].
By default, the UniFi Controller sends a lot of low-level information, which may complicate field extraction if
additional intelligence is required. The Syslog level can be adjusted individually for each access point from the
Controller server by changing the syslog.level value in the system.cfg file. The location of this file varies
depending on the host operating system. If the Controller software is running on Windows, the file can be found
under C:\Ubiquiti UniFi\data\devices\uap\<AP_MAC_ADDRESS>.
Unfortunately, once configured with a remote Syslog address, the Controller only sends log messages that
originate from access points. The Controller's own log is located on the server where it is installed. The location
of this file depends on the host operating system; on Windows, it can be found at C:\Ubiquiti
UniFi\logs\server.log. If needed, this file can be read and parsed with the im_file module.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in_syslog_udp>
10 Module im_udp
11 Host 0.0.0.0
12 Port 514
13 Exec parse_syslog();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/unifi.log"
19 Exec to_json();
20 </Output>
Output Sample
{
"MessageSourceAddress": "192.168.10.147",
"EventReceivedTime": "2017-04-27 19:38:55",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 3,
"SyslogFacility": "DAEMON",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "192.168.10.147",
"EventTime": "2017-04-27 19:40:44",
"Message": "(\"U7P,0418d6809ce2,v3.7.11.5131\") hostapd: ath4: STA 34:02:86:45:8e:e0 IEEE
802.11: disassociated"
}
Example 440. Extracting Additional Fields
Additional fields can be extracted from the Syslog messages with a configuration like the one below.
nxlog.conf
1 <Input in_syslog_udp>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 <Exec>
6 parse_syslog();
7 if $Message =~ / ([a-z]*): (.*)$/
8 {
9 $UFProcess = $1;
10 $UFMessage = $2;
11 if $UFMessage =~ /^([a-z0-9]*): (.*)$/
12 {
13 $UFSubsys = $1;
14 $UFMessage = $2;
15 if $UFMessage =~ /^STA (.*) ([A-Z0-9. ]*): (.*)$/
16 {
17 $UFMac = $1;
18 $UFProto = $2;
19 $UFMessage = $3;
20 }
21 }
22 }
23 </Exec>
24 </Input>
Output Sample
{
"MessageSourceAddress": "192.168.10.149",
"EventReceivedTime": "2017-05-01 20:30:13",
"SourceModuleName": "in_syslog_udp",
"SourceModuleType": "im_udp",
"SyslogFacilityValue": 3,
"SyslogFacility": "DAEMON",
"SyslogSeverityValue": 6,
"SyslogSeverity": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "192.168.10.149",
"EventTime": "2017-05-01 20:32:11",
"Message": "(\"U7P,0418d6809b78,v3.7.11.5131\") hostapd: ath2: STA 80:19:34:97:62:a6 RADIUS:
stopped accounting session 5907CFDD-00000002",
"UFProcess": "hostapd",
"UFSubsys": "ath2",
"UFMac": "80:19:34:97:62:a6",
"UFProto": "RADIUS",
"UFMessage": "stopped accounting session 5907CFDD-00000002"
}
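The nested regular expressions above can be traced outside NXLog as well. This sketch applies the same three patterns to a parsed $Message string and returns the extracted fields (an illustration of the logic, not NXLog code):

```python
import re

def extract_unifi_fields(message):
    """Replicate the cascade of regular expressions from the Exec block
    above: process name, then subsystem, then STA MAC/protocol, with
    UFMessage narrowed at each successful match."""
    fields = {}
    m = re.search(r" ([a-z]*): (.*)$", message)
    if m:
        fields["UFProcess"] = m.group(1)
        fields["UFMessage"] = m.group(2)
        m = re.match(r"^([a-z0-9]*): (.*)$", fields["UFMessage"])
        if m:
            fields["UFSubsys"] = m.group(1)
            fields["UFMessage"] = m.group(2)
            m = re.match(r"^STA (.*) ([A-Z0-9. ]*): (.*)$",
                         fields["UFMessage"])
            if m:
                fields["UFMac"] = m.group(1)
                fields["UFProto"] = m.group(2)
                fields["UFMessage"] = m.group(3)
    return fields
```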
Chapter 103. VMware vCenter
NXLog can be used to capture and process logs from VMware vCenter. This guide explains how to do this with
vCenter 5.5 installed on Windows Server 2008 R2.
• NXLog can be installed directly on the vCenter host machine and configured to collect all logs locally. This
method provides more feedback and more detailed logs, and is the recommended method. See Local
vCenter Logging.
• Alternatively, vCenter logs can be collected remotely using the vSphere Perl SDK. This option is less flexible,
but may be the only feasible option in some environments due to security restrictions. See Remote vCenter
Logging.
4. Click [ OK ] to save your changes. vCenter will now start writing detailed logs. The location of the logs
depends on the version of vCenter you are running.
◦ vCenter Server 5.x and earlier versions on Windows XP, 2000, and 2003:
%ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\Logs\
◦ vCenter Server 5.x and earlier versions on Windows Vista, 7, and 2008:
C:\ProgramData\VMware\VMware VirtualCenter\Logs\
NOTE: If vCenter is running under a specific user account, then the logs may be located in the profile directory of that user, instead of %ALLUSERSPROFILE%.
5. Determine which log files you want to parse and collect.
• vpxd-profiler.log, profiler.log: Profiled metrics for operations performed in vCenter Server. Used by the VPX Operational Dashboard (VOD) accessible at https://VCHost/vod/index.html.
• vpxd-alert.log: Non-fatal information logged about the vpxd process.
• cim-diag.log and vws.log: Common Information Model monitoring information, including communication between vCenter Server and managed hosts' CIM interface.
• drmdump (directory): Actions proposed and taken by VMware Distributed Resource Scheduler (DRS), grouped by the DRS-enabled cluster managed by vCenter Server. These logs are compressed.
• ls.log: Health reports for the Licensing Services extension, connectivity logs to vCenter Server.
• vimtool.log: Dump of string used during the installation of vCenter Server with hashed information for DNS, username and output for JDBC creation.
• stats.log: Provides information about the historical performance data collection from the ESXi/ESX hosts.
• sms.log: Health reports for the Storage Monitoring Service extension, connectivity logs to vCenter Server, the vCenter Server database and the xDB for the vCenter Inventory Service.
• eam.log: Health reports for the ESX Agent Monitor extension, connectivity logs to vCenter Server.
• catalina.<date>.log and localhost.<date>.log: Connectivity information and status of the VMware Web management Services.
• jointool.log: Health status of the VMwareVCMSDS service and individual ADAM database objects, internal tasks and events, and replication logs between linked-mode vCenter Servers.
NOTE: The various log files use different formats. You must examine your chosen file in order to determine how to parse its entries.
The main log file, vpxd.log, contains all login and management information. This file will be used as an
example. The file has the general format of timestamp [tag-1] [optional-tag-2] message, and the
message part might contain a multi-line trace.
vpxd.log Sample
2014-06-13T22:44:46.878-07:00 [04372 info 'Default' opID=DACDA564-00000004-7c] [Auth]: User
Administrator↵
2014-06-13T23:15:07.222-07:00 [04136 error 'vpxdvpxdMain'] [Vpxd::ServerApp::Init] Init failed:
VpxdVdb::Init(VpxdVdb::GetVcVdbInstId(), false, false, NULL)↵
--> Backtrace:↵
--> backtrace[00] rip 000000018018a8ca↵
--> backtrace[01] rip 0000000180102f28↵
--> backtrace[02] rip 000000018010423e↵
--> backtrace[03] rip 000000018008e00b↵
--> backtrace[04] rip 00000000003c5c2c↵
-->↵
6. Configure and restart NXLog.
In the configuration below, the xm_multiline extension module is used with the HeaderLine directive to
parse log entries even when they span multiple lines. An Exec directive is used to drop all empty lines. A
regular expression with matching groups adds fields to the event record from each log message, and the
resulting log entries are sent to another host via TCP in JSON format.
nxlog.conf
1 <Extension vcenter>
2 Module xm_multiline
3 HeaderLine /(?x)(\d+-\d+-\d+T\d+:\d+:\d+).\d+-\d+:\d+\s+\[(.*?)\]\s+ \
4 (?:\[(.*?)\]\s+)?(.*)/
5 Exec if $raw_event =~ /^\s+$/ drop();
6 </Extension>
7
8 <Extension _json>
9 Module xm_json
10 </Extension>
11
12 <Input in>
13 Module im_file
14 File "C:\ProgramData\VMware\VMware VirtualCenter\Logs\vpxd*.log"
15 InputType vcenter
16 <Exec>
17 if $raw_event =~ /(?x)(\d+-\d+-\d+T\d+:\d+:\d+.\d+-\d+:\d+)\s+\[(.*?)\]\s+
18 (?:\[(.*?)\]\s+)?((.*\s*)*)/
19 {
20 $EventTime = parsedate($1);
21 $Tag1 = $2;
22 $Tag2 = $3;
23 $Message = $4;
24 }
25 </Exec>
26 </Input>
27
28 <Output out>
29 Module om_tcp
30 Host 192.168.1.1
31 Port 1514
32 Exec to_json();
33 </Output>
Output Sample
{
"EventReceivedTime": "2017-04-29 13:46:49",
"SourceModuleName": "vcenter_in1",
"SourceModuleType": "im_file",
"EventTime": "2014-06-14 07:44:46",
"Tag1": "04372 info 'Default' opID=DACDA564-00000004-7c",
"Tag2": "",
"Message": "[Auth]: User Administrator"
}
{
"EventReceivedTime": "2017-04-29 13:46:49",
"SourceModuleName": "vcenter_in1",
"SourceModuleType": "im_file",
"EventTime": "2014-06-14 08:15:07",
"Tag1": "04136 error 'vpxdvpxdMain'",
"Tag2": "Vpxd::ServerApp::Init",
"Message": "Init failed: VpxdVdb::Init(VpxdVdb::GetVcVdbInstId(), false, false, NULL)\n-->
Backtrace:\n--> backtrace[00] rip 000000018018a8ca\n--> backtrace[01] rip 0000000180102f28\n-->
backtrace[02] rip 000000018010423e\n--> backtrace[03] rip 000000018008e00b\n--> backtrace[04]
rip 00000000003c5c2c\n-->\n"
}
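The header pattern used in the HeaderLine and Exec directives can be checked against the vpxd.log sample independently. The sketch below uses a simplified single-pass version of the same expression; note that the second bracketed tag is optional, which is why Tag2 is empty for the first sample entry ("[Auth]:" is not followed by whitespace after the bracket, so it stays part of the message):

```python
import re

# Simplified form of the vpxd.log header pattern from the Exec block
HEADER = re.compile(
    r"(?s)^(\d+-\d+-\d+T\d+:\d+:\d+\.\d+-\d+:\d+)\s+"
    r"\[(.*?)\]\s+(?:\[(.*?)\]\s+)?(.*)$")

entry = ("2014-06-13T22:44:46.878-07:00 "
         "[04372 info 'Default' opID=DACDA564-00000004-7c] "
         "[Auth]: User Administrator")
m = HEADER.match(entry)
# group(1) = timestamp, group(2) = first tag,
# group(3) = optional second tag (None here), group(4) = message
```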
1. Download and install the latest Perl runtime and the vSphere SDK for Perl. For Windows, the vSphere CLI is
recommended instead, because it includes the required Perl runtime environment and VIPerl libraries.
2. The script will use a timestamp file to store the timestamp of the most recently downloaded log entry. The
timestamp ensures that even if the vCenter server is restarted, NXLog can correctly resume log collection.
The timestamp file will be created automatically. However, to specify a timestamp manually, create a file with
a timestamp in yyyy-mm-ddThh-mm format (for example, 2017-01-19T18:00). Then use the -r option to
specify the location of the timestamp file. Any logs with earlier timestamps will be skipped.
3. To test the vcenter.pl script with the vCenter host, run the script as shown below. Substitute the correct
server IP address and credentials for the vCenter server. The -t argument is optional and can be used to
adjust the time between polls (the default of 60 seconds is the minimum recommended). The -r argument is
also optional and can be used to specify a custom location for the timestamp file. Events such as connection
or authentication errors are logged to standard output.
NOTE: Because the script connects to vCenter remotely, we recommend setting up a dedicated user in vCenter as a security measure.
This configuration uses the im_exec module to run the Perl script and accept logs from its standard output.
The xm_json module is used to parse the JSON event data. The $EventTime field is converted to a datetime
value.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input in>
6 Module im_exec
7 # For users who have the VMware CLI installed:
8 Command "C:\Program Files (x86)\VMware\VMware vSphere CLI\Perl\bin\perl.exe"
9 # For Linux and regular Perl users this would be sufficient:
10 #Command perl
11 Arg "C:\scripts\vcenter.pl"
12 Arg -u
13 Arg <username>
14 Arg -p
15 Arg <password>
16 Arg -s
17 Arg <server_ip_addr>
18 <Exec>
19 # Parse JSON into fields for later processing if required
20 parse_json();
21
22 # Parse EventTime field as timestamp
23 $EventTime = parsedate($EventTime);
24 </Exec>
25 </Input>
Event Samples
{
"EventTime": "2014-06-20T18:00:00.163Z",
"Message": "User Administrator@192.168.71.1 logged in as VI Perl",
"UserName": "Administrator"
}
{
"EventTime": "2014-06-20T10:56:23",
"Message": "Error: Cannot complete login due to an incorrect user name or password.",
"UserName": "Administrator"
}
vcenter.pl (truncated)
#!/usr/bin/perl -w
use Encode;
use VMware::VIRuntime;
use VMware::VILib;
use Getopt::Long;
use IO::File;
use POSIX;
my $startTime;
my $stopTime;
my $server;
my $sleepTime = 60;
my $userName;
my $passWord;
my $timeStamp;
my $timeStampFile = "timestamp.txt";
my $timeNow;
[...]
Chapter 104. Windows AppLocker
Windows AppLocker allows administrators to create rules restricting which executables, scripts, and other files
users are allowed to run. For more information, see What Is AppLocker? on Microsoft Docs.
AppLocker logs events to the Windows Event Log. There are four logs available, shown in the Event Viewer under
Applications and Services Logs > Microsoft > Windows > AppLocker: MSI and Script, EXE and DLL, Packaged app-Deployment, and Packaged app-Execution.
NXLog can collect these events with the im_msvistalog module or other Windows Event Log modules.
The following configuration uses the im_msvistalog module to collect AppLocker events from the four
EventLog logs listed above. The xm_xml parse_xml() procedure is used to further parse the UserData XML
portion of the event.
nxlog.conf
1 <Extension _xml>
2 Module xm_xml
3 </Extension>
4
5 <Input in>
6 Module im_msvistalog
7 <QueryXML>
8 <QueryList>
9 <Query Id="0">
10 <Select Path="Microsoft-Windows-AppLocker/MSI and Script">
11 *</Select>
12 <Select Path="Microsoft-Windows-AppLocker/EXE and DLL">
13 *</Select>
14 <Select Path="Microsoft-Windows-AppLocker/Packaged app-Deployment">
15 *</Select>
16 <Select Path="Microsoft-Windows-AppLocker/Packaged app-Execution">
17 *</Select>
18 </Query>
19 </QueryList>
20 </QueryXML>
21 Exec if $UserData parse_xml($UserData);
22 </Input>
Output Sample
{
"EventTime": "2019-01-09T22:34:44.164099+01:00",
"Hostname": "Host.DOMAIN.local",
"Keywords": "9223372036854775808",
"EventType": "ERROR",
"SeverityValue": 4,
"Severity": "ERROR",
"EventID": 8004,
"SourceName": "Microsoft-Windows-AppLocker",
"ProviderGuid": "{CBDA4DBF-8D5D-4F69-9578-BE14AA540D22}",
"Version": 0,
"TaskValue": 0,
"OpcodeValue": 0,
"RecordNumber": 40,
"ExecutionProcessID": 5612,
"ExecutionThreadID": 5220,
"Channel": "Microsoft-Windows-AppLocker/EXE and DLL",
"Domain": "DOMAIN",
"AccountName": "admin",
"UserID": "S-1-5-21-314323950-2314161084-4234690932-1002",
"AccountType": "User",
"Message": "%PROGRAMFILES%\\WINDOWS NT\\ACCESSORIES\\WORDPAD.EXE was prevented from
running.",
"Opcode": "Info",
"UserData": "<RuleAndFileData
xmlns='http://schemas.microsoft.com/schemas/event/Microsoft.Windows/1.0.0.0'><PolicyNameLength>
3</PolicyNameLength><PolicyName>EXE</PolicyName><RuleId>{4C8E638D-3DE8-4DCB-B0E4-
B0597074D06B}</RuleId><RuleNameLength>113</RuleNameLength><RuleName>WORDPAD.EXE, in MICROSOFT®
WINDOWS® OPERATING SYSTEM, from O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON,
C=US</RuleName><RuleSddlLength>179</RuleSddlLength><RuleSddl>D:(XD;;FX;;;S-1-1-0;((Exists
APPID://FQBN) && ((APPID://FQBN) >= ({\"O=MICROSOFT CORPORATION, L=REDMOND,
S=WASHINGTON, C=US\\MICROSOFT® WINDOWS® OPERATING SYSTEM\\WORDPAD.EXE
\",0}))))</RuleSddl><TargetUser>S-1-5-21-314323950-2314161084-4234690932-
1002</TargetUser><TargetProcessId>7964</TargetProcessId><FilePathLength>49</FilePathLength><Fil
ePath>%PROGRAMFILES%\\WINDOWS NT\\ACCESSORIES
\\WORDPAD.EXE</FilePath><FileHashLength>0</FileHashLength><FileHash></FileHash><FqbnLength>118<
/FqbnLength><Fqbn>O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON, C=US\\MICROSOFT® WINDOWS®
OPERATING SYSTEM\\WORDPAD.EXE\\6.3.9600.19060</Fqbn></RuleAndFileData>",
"EventReceivedTime": "2019-01-09T22:34:45.773240+01:00",
"SourceModuleName": "in",
"SourceModuleType": "im_msvistalog",
"RuleAndFileData.PolicyNameLength": "3",
"RuleAndFileData.PolicyName": "EXE",
"RuleAndFileData.RuleId": "{4C8E638D-3DE8-4DCB-B0E4-B0597074D06B}",
"RuleAndFileData.RuleNameLength": "113",
"RuleAndFileData.RuleName": "WORDPAD.EXE, in MICROSOFT® WINDOWS® OPERATING SYSTEM, from
O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON, C=US",
"RuleAndFileData.RuleSddlLength": "179",
"RuleAndFileData.RuleSddl": "D:(XD;;FX;;;S-1-1-0;((Exists APPID://FQBN) && ((APPID://FQBN) >=
({\"O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON, C=US\\MICROSOFT® WINDOWS® OPERATING
SYSTEM\\WORDPAD.EXE\",0}))))",
"RuleAndFileData.TargetUser": "S-1-5-21-314323950-2314161084-4234690932-1002",
"RuleAndFileData.TargetProcessId": "7964",
"RuleAndFileData.FilePathLength": "49",
"RuleAndFileData.FilePath": "%PROGRAMFILES%\\WINDOWS NT\\ACCESSORIES\\WORDPAD.EXE",
"RuleAndFileData.FileHashLength": "0",
"RuleAndFileData.FqbnLength": "118",
"RuleAndFileData.Fqbn": "O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON, C=US\\MICROSOFT®
WINDOWS® OPERATING SYSTEM\\WORDPAD.EXE\\6.3.9600.19060"
}
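parse_xml() flattens the UserData XML into dotted field names such as RuleAndFileData.PolicyName, as seen in the sample above. A rough, one-level-deep approximation of that flattening (illustrative only, not the NXLog implementation):

```python
import xml.etree.ElementTree as ET

def flatten_userdata(xml_string):
    """Approximate what parse_xml() does to the UserData portion:
    flatten child elements into dotted field names keyed by the
    root element's name."""
    root = ET.fromstring(xml_string)
    tag = root.tag.split("}")[-1]          # drop any XML namespace prefix
    return {"%s.%s" % (tag, child.tag.split("}")[-1]): (child.text or "")
            for child in root}

sample = ("<RuleAndFileData><PolicyName>EXE</PolicyName>"
          "<TargetProcessId>7964</TargetProcessId></RuleAndFileData>")
# flatten_userdata(sample) yields keys like "RuleAndFileData.PolicyName"
```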
Chapter 105. Windows Command Line Auditing
Command line auditing means monitoring process creation on Windows operating systems: each new process generates an event with the message A new process has been created.
This monitoring feature is available starting from Windows Server 2012 R2; see the Command line process
auditing section on the Microsoft website. For more information about security, see also the Security Monitoring
Recommendations on the Microsoft website.
NXLog can be configured to collect and parse command line auditing logs.
NOTE: Monitoring of process creation with command line information is also available using Sysmon, although the native command line auditing solution may be preferable since it does not require installing any third-party software.
Command line process auditing writes events to the Windows Event Log, which can be monitored by
capturing event entries with Event ID 4688.
2. To enable audit process creation, go to Computer Configuration > Windows Settings > Security Settings >
Advanced Audit Policy Configuration > System Audit Policies > Detailed Tracking and open the Audit
Process Creation setting, then check the Configure the following audit events and Success checkboxes.
3. To enable command line process creation, go to Computer Configuration > Administrative Templates >
System > Audit Process Creation, click the Include command line in process creation event setting, then
select the Enabled radio button.
For more information about enabling command line auditing, see the How to Determine What Just Ran on a
Windows Console section on the Microsoft website.
The configuration below demonstrates how to collect Windows Event Log entries with ID 4688 from the
Security channel to log the activity of the C:\Windows\System32\ftp.exe application. First, it drops entries
without the ftp.exe substring in the NewProcessName field. Then the Message field is deleted from the selected
entries to make the example output shorter. Finally, the logs are converted to JSON format.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Input from_eventlog>
6 Module im_msvistalog
7 <QueryXML>
8 <QueryList>
9 <Query Id="0">
10 <Select Path="Security">
11 *[System[Level=0 and (EventID=4688)]]
12 </Select>
13 </Query>
14 </QueryList>
15 </QueryXML>
16 <Exec>
17 if not ($NewProcessName =~ /.*ftp.exe/) drop();
18 delete($Message);
19 json->to_json();
20 </Exec>
21 </Input>
Output Sample
{
"EventTime": "2020-04-18T16:26:48.737490+03:00",
"Hostname": "WIN-IVR26CIVSF6",
"Keywords": "9232379236109516800",
"EventType": "AUDIT_SUCCESS",
"SeverityValue": 2,
"Severity": "INFO",
"EventID": 4688,
"SourceName": "Microsoft-Windows-Security-Auditing",
"ProviderGuid": "{54849625-5478-4994-A5BA-3E3B0328C30D}",
"Version": 2,
"TaskValue": 13312,
"OpcodeValue": 0,
"RecordNumber": 18112,
"ExecutionProcessID": 4,
"ExecutionThreadID": 3720,
"Channel": "Security",
"Category": "Process Creation",
"Opcode": "Info",
"SubjectUserSid": "S-1-5-21-2751412651-3826291291-1936999150-500",
"SubjectUserName": "Administrator",
"SubjectDomainName": "WIN-IVR26CIVSF6",
"SubjectLogonId": "0x23d19",
"NewProcessId": "0xa24",
"NewProcessName": "C:\\Windows\\System32\\ftp.exe",
"TokenElevationType": "%%1936",
"ProcessId": "0x2a8",
"CommandLine": "ftp -s:ftp.txt",
"TargetUserSid": "S-1-0-0",
"TargetUserName": "-",
"TargetDomainName": "-",
"TargetLogonId": "0x0",
"ParentProcessName": "C:\\Windows\\System32\\cmd.exe",
"MandatoryLabel": "S-1-16-12288",
"EventReceivedTime": "2020-04-18T16:26:50.674636+03:00",
"SourceModuleName": "from_eventlog",
"SourceModuleType": "im_msvistalog"
}
Chapter 106. Windows Event Log
This section discusses the various details of Windows Event Logs.
Unlike other event logs, such as the UNIX Syslog, Windows Event Log is not stored as a plain text file, but in a
proprietary binary format. It is not possible to view Windows Event Log in a text editor, nor is it possible to send
it as a Syslog event while retaining its original format. However, the raw event data can be translated into XML
using the Windows Event Log API and forwarded in that format.
The EVTX format includes many new features and enhancements: a number of new event properties, the use of
channels to publish events, a new Event Viewer, a rewritten Windows Event Log service, and support for the
Extensible Markup Language (XML) format. From a log processing perspective, the added support for XML is the
most important addition, as it provides the possibility to share or further process the event data in a structured
format.
For the built-in channels, Windows automatically saves the corresponding EVTX file into the
C:\Windows\System32\winevt\Logs\ directory. Events can also be saved manually from the Event Viewer MMC
snap-in, in four different formats: EVTX, XML, TXT, and CSV.
NXLog can directly read EVTX and EVT files using the im_msvistalog File directive. In addition, the
CaptureEventXML directive of the same module can be used to store and send raw XML-formatted event data in
the $EventXML field.
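As an illustration, a minimal input sketch for reading a saved EVTX file while capturing the raw XML might look like the following (the file path and instance name here are assumptions, not part of the original example):

```
<Input saved_evtx>
    # Read events from a saved log file instead of a live channel
    Module           im_msvistalog
    File             'C:\logs\Security.evtx'
    # Store the raw XML-formatted event data in the $EventXML field
    CaptureEventXML  TRUE
</Input>
```

Reading from a file in this way is intended primarily for processing saved logs rather than live collection.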
The Event Viewer includes three views for displaying the data for a selected event. These are shown on the
preview pane or in the Event Properties window when an event is opened.
• The general view is shown by default. It includes the full message rendered from the template and the "System"
set of key/value pairs.
• The Friendly View is available on the Details tab. It shows a hierarchical view of the System properties and
additional EventData properties defined by the event provider. It does not show a rendered message.
• The XML View can be selected under the Details tab. It shows the event properties in XML format. It does
not show a rendered message.
A Windows Event Log event in XML format
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-Windows-Security-Auditing"
Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" />
<EventID>4624</EventID>
[...]
<Channel>Security</Channel>
<Computer>USER-WORKSTATION</Computer>
<Security />
</System>
<EventData>
<Data Name="SubjectUserSid">S-1-5-18</Data>
[...]
</EventData>
</Event>
Events can be accessed through the Event Log API (see Windows Event Log Functions on Microsoft Docs). In
particular:
• EvtQuery() fetches events from a given channel or log file that match a given query—see Querying for
Events.
• EvtFormatMessage() generates a message string for an event using the event properties and the localized
message template—see Formatting Event Messages.
• The Windows Logs group contains a set of exactly five channels, which are used for Windows system events.
• The Applications and Services Logs group contains channels created for individual applications or
components. These channels are further organized in a folder hierarchy.
There are two channel types indicating how the events are handled:
• Serviced channels offer relatively low volume, reliable delivery of events. Events in these channels may be
forwarded to another system, and these channels may be subscribed to.
• Direct channels are for high-performance collection of events. It is not possible to subscribe to a direct
channel. By default, these channels are disabled. To see these channels in the Event Viewer, check Show
Analytic and Debug Logs in the View menu. To enable logging for one of these channels, select the channel,
open the Action menu, click Properties, and check Enable logging on the General tab.
Each of the above is subdivided into two more channel types according to the intended audience for the
events collected by that channel:
• Administrative channels collect events for end users, administrators, and support. This is a serviced
channel type.
• Operational channels collect events used for diagnosing problems. This is a serviced channel type.
• Analytic channels are for events that describe program operation. These channels often collect a high
volume of events. This is a direct channel type.
• Debug channels are intended to be used by developers only. This is a direct channel type.
Channel Group    Channel        Channel Type
Windows Logs     Application    Administrative (serviced)
The im_msvistalog module can be configured to collect events from a specific channel with the Channel directive.
For more information about event channels, see these two pages on Microsoft Docs: Event Logs and Event Logs
and Channels in Windows Event Log.
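For example, a minimal sketch collecting everything written to a single channel might look like this (the instance name and channel choice are illustrative):

```
<Input system_events>
    Module   im_msvistalog
    # Collect all events from the System channel only
    Channel  System
</Input>
```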
106.1.4. Providers
Event log providers write events to event logs. An event log provider can be a service, driver, or program that runs
on the computer and has the necessary instrumentation to write to the event log.
For more information on providers, see the Providers section in the Microsoft Windows documentation.
• The im_msvistalog module is available on Windows only, and captures event log data from Windows
2008/Vista and later. It can be configured to collect event log data from the local system or from a remote
system via MSRPC (MSRPC is supported by NXLog Enterprise Edition only). See Local Collection With
im_msvistalog and Remote Collection With im_msvistalog.
• The im_wseventing module is available on both Linux and Windows (NXLog Enterprise Edition only). With it,
event log data can be received from remote Windows systems using Windows Event Forwarding. This is the
recommended module for most cases where remote capturing is required, because it is not necessary to
specify each host that EventLog data will be captured from. See Remote Collection With im_wseventing.
• The im_mseventlog module is available on Windows only, and captures event log data locally from Windows
XP, Windows 2000, and Windows 2003. See Local Collection With im_mseventlog.
106.2.2. Local Collection With im_msvistalog
The im_msvistalog module can capture EventLog data from the local system running Windows 2008/Vista or
later.
NOTE: Because the Windows Event Log subsystem does not support subscriptions to the Debug and
Analytic channels, these types of events cannot be collected with the im_msvistalog module.
In this example, NXLog reads all events from the local Windows EventLog. The data is converted to JSON
format and written to a local file.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input eventlog>
6 Module im_msvistalog
7 </Input>
8
9 <Output file>
10 Module om_file
11 File 'C:\test\sysmon.json'
12 Exec to_json();
13 </Output>
For information about filtering events, particularly when using im_msvistalog, see Filtering Events.
NOTE: Because the Windows EventLog subsystem does not support subscriptions to the Debug and
Analytic channels, these types of events cannot be collected with the im_msvistalog module.
Example 446. Receiving EventLog Data over MSRPC
In this example configuration, the im_msvistalog module is used to get events from a remote server named
mywindowsbox using MSRPC.
To replicate this example in your environment, modify the RemoteServer, RemoteUser, RemoteDomain, and
RemotePassword directives to reflect the access credentials for the target machine.
nxlog.conf
1 <Input in>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id='1'>
6 <Select Path='Application'>*</Select>
7 <Select Path='Security'>*[System/Level=4]</Select>
8 <Select Path='System'>*</Select>
9 </Query>
10 </QueryList>
11 </QueryXML>
12 RemoteServer mywindowsbox
13 RemoteUser Administrator
14 RemoteDomain Workgroup
15 RemotePassword secret
16 </Input>
The module looks up the available EventLog sources stored under the registry key
SYSTEM\CurrentControlSet\Services\Eventlog and polls logs from each of them, or only from the sources
defined with the Sources directive of the NXLog configuration.
This example shows the most basic configuration of the im_mseventlog module. This configuration
forwards all EventLog sources listed in the Windows registry over the network to a remote NXLog instance
at the IP address 192.168.1.1.
nxlog.conf
1 <Input eventlog>
2 Module im_mseventlog
3 </Input>
4
5 <Output tcp>
6 Module om_tcp
7 Host 192.168.1.1
8 Port 514
9 </Output>
Example 448. Receiving Windows EventLog Data using the im_wseventing Module
This configuration listens on port 5985 for connections from all source computers. It also shows a
configured HTTPS certificate, used to secure the transfer of EventLog data.
nxlog.conf
1 <Input in>
2 Module im_wseventing
3 ListenAddr 0.0.0.0
4 Port 5985
5 Address https://linux.corp.domain.com:5985/wsman
6 HTTPSCertFile %CERTDIR%/server-cert.pem
7 HTTPSCertKeyFile %CERTDIR%/server-key.pem
8 HTTPSCAFile %CERTDIR%/ca.pem
9 <QueryXML>
10 <QueryList>
11 <Query Id="0" Path="Application">
12 <Select Path="Application">*</Select>
13 <Select Path="Microsoft-Windows-Winsock-AFD/Operational">*</Select>
14 <Select Path="Microsoft-Windows-Wired-AutoConfig/Operational">
15 *
16 </Select>
17 <Select Path="Microsoft-Windows-Wordpad/Admin">*</Select>
18 <Select Path="Windows PowerShell">*</Select>
19 </Query>
20 </QueryList>
21 </QueryXML>
22 </Input>
A query for specific hosts can be set by adding an additional QueryXML block with a <Computer> tag. This
tag contains a pattern that NXLog matches against the name of the connecting Windows client. Computer
names not matching the pattern will use the default QueryXML block (containing no <Computer> tag). The
following QueryXML block, if added to the above configuration, would provide an alternate query for
computer names matching the pattern foo*.
nxlog.conf
1 <QueryXML>
2 <QueryList>
3 <Computer>foo*</Computer>
4 <Query Id="0" Path="Application">
5 <Select Path="Application">*</Select>
6 </Query>
7 </QueryList>
8 </QueryXML>
• A specific channel can be specified with the Channel directive to collect all the events written to a single
channel.
• An XPath query can be given with the QueryXML block (or Query directive). The specified query is then used
to subscribe to events. An XPath query can be used to subscribe to multiple channels and/or limit events by
various attributes. However, XPath queries have a maximum length, limiting the possibilities for detailed
event subscriptions. See XPath Filtering below.
• A log file can be read by setting the File directive, in which case im_msvistalog will read all events from the
file (for example, Security.evtx). This is intended primarily for forensics purposes, such as with nxlog-
processor.
• After being read from the source, events can be discarded by matching events in an Exec block and
discarding them selectively with the drop() procedure.
Subscribing to a restricted set of events with an XPath query can offer a performance advantage because the
filtered-out events are never received by NXLog. However, XPath queries have a maximum length and limited
filtering capabilities, so in some cases it is necessary to combine an XPath query with Exec block filtering in an
im_msvistalog configuration. For examples, see Event IDs to Monitor.
The Event Viewer offers the most practical way to write and test query strings. An XPath query can be generated
and/or tested by filtering the current log or creating a custom view.
1. In the Event Viewer, click an event channel to open it, then right-click the channel and choose Filter Current
Log from the context menu. Or, click Create Custom View in the context menu. Either way, a dialog box will
open and options for basic filtering will be shown in the Filter tab.
2. Specify the desired criteria. The corresponding XPath query on the XML tab will be updated automatically.
3. To view the query string, switch to the XML tab. This string can be copied into the im_msvistalog QueryXML
directive.
4. If required, advanced filtering can be done by selecting the Edit query manually checkbox and editing the
query. The query can then be tested to be sure it matches the correct events and finally copied to the NXLog
configuration with the QueryXML block.
Figure 5. A Custom View Querying the Application Channel for Events With ID 1008
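A query like the one generated for such a custom view might look as follows (this is a sketch of the typical Event Viewer output for the Application channel and event ID 1008; the exact string may vary by Windows version):

```
<QueryList>
  <Query Id="0" Path="Application">
    <Select Path="Application">*[System[(EventID=1008)]]</Select>
  </Query>
</QueryList>
```

The contents of this XML tab can be pasted directly between the QueryXML tags of an im_msvistalog input instance.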
Sometimes it is helpful to use a query with sources that may not be available. In this case, set the
TolerateQueryErrors directive to TRUE to ensure that the module will continue to collect logs.
Here, NXLog queries the local Windows EventLog for operational events only.
nxlog.conf
1 <Input in>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
7 </Query>
8 </QueryList>
9 </QueryXML>
10 </Input>
This query collects System channel events with levels below 4 (Critical, Error, and Warning).
nxlog.conf
1 <Input in>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path='System'>*[System/Level<4]</Select>
7 </Query>
8 </QueryList>
9 </QueryXML>
10 </Input>
106.3.2. Exec Block Filtering
NXLog’s built-in filtering capabilities can also be used to filter events, by matching events and using the drop()
procedure. Events can be matched against any of the im_msvistalog fields.
This example discards all Sysmon network connection events (event ID 3) regarding HTTP network
connections to a particular server and port, and all process creation and termination events (event IDs 1
and 5) for conhost.exe.
nxlog.conf
1 <Input in>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Microsoft-Windows-Sysmon/Operational">*</Select>
7 </Query>
8 </QueryList>
9 </QueryXML>
10 <Exec>
11 if ($EventID in (1, 5) and
12 $Image == "C:\\Windows\\System32\\conhost.exe") or
13 ($EventID == 3 and
14 $DestinationPort == 80 and
15 $DestinationIp == 10.0.0.1)
16 drop();
17 </Exec>
18 </Input>
NOTE: Event IDs are unique per source but are not globally unique. The same event ID may be used by
different sources to identify unrelated occurrences.
• The Microsoft Events and Errors page on Microsoft Docs provides a directory of events grouped by area.
Start by navigating through the areas listed in the Available Documentation section.
• Palantir has published a Windows Event Forwarding Guidance repository, which contains a comprehensive WEF
Event Mappings table with categorized event IDs and details.
• The NSA Spotting the Adversary with Windows Event Log Monitoring paper provides event IDs for security
monitoring. See the example configuration here.
• The JPCERT/CC Detecting Lateral Movements Tool Analysis resource provides a collection of event codes that are
observed to indicate lateral movements. See the example configuration here.
• See the NXLog User Guide on Active Directory Domain Services for a list and configuration sample of security
event IDs relevant to Active Directory.
The table below displays a small sample of important events to monitor in the Windows Server Security Log for a
local server. See the Security-focused Event IDs to Monitor section for the configuration file holding these event
IDs.
Event ID  Description
1102      The audit log was cleared.
4946      A change has been made to Windows Firewall exception list. A rule was added.
6424      The installation of this device was allowed, after having previously been forbidden by policy.
NOTE: The example configurations in this section are likely to require further modifications to suit each
individual deployment.
NOTE: Due to a bug or limitation of the Windows Event Log API, 23 or more clauses in a query will
result in a failure with the following error message: ERROR failed to subscribe to
msvistalog events, the Query is invalid: This operator is unsupported by this
implementation of the filter.; [error code: 15001]
NOTE: Event IDs are applied globally to all providers matched by a given XPath expression, so any event
carrying one of these IDs will be collected. Tweak your chosen dashboard or alerting system to
ensure that event IDs are appropriately associated with their providers.
Example 452. Basic Configuration Example of Security-focused Event IDs to Monitor
This configuration provides a basic example of Windows Security events to monitor. Since only a small
number of IDs are presented, this configuration explicitly provides the actual event IDs to be collected.
nxlog.conf
1 <Input MonitorWindowsSecurityEvents>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0">
6 <Select Path="Security">*[System[(Level=1 or Level=2 or Level=3 or Level=4
or Level=0) and (EventID=1102 or EventID=4719 or EventID=4704 or EventID=4717 or EventID=4738
or EventID=4798 or EventID=4705 or EventID=4674 or EventID=4697 or EventID=4648 or EventID=4723
or EventID=4946 or EventID=4950 or EventID=6416 or EventID=6424 or EventID=4732)]]</Select>
7 </Query>
8 </QueryList>
9 </QueryXML>
10 </Input>
This extended configuration provides a much wider scope of log collection. This approach first defines the
event IDs in groups, then adds the channel paths to the QueryXML block in bulk. The Exec block then filters
for the defined event IDs within the specified paths and drops events whose IDs are not defined.
nxlog.conf (truncated)
1 # define Account Usage Events
2 define AccountUsage 4740, 4648, 4781, 4733, 1518, 4776, 5376, 5377, \
3 4625, 300, 4634, 4672, 4720, 4722, 4782, 4793, \
4 4731, 4735, 4766, 4765, 4624, 1511, 4726, 4725, \
5 4767, 4728, 4732, 4756, 4704
6
7 # define Application Crash Events
8 define AppCrashes 1000, 1002, 1001
9
10 # define Application Whitelisting Events
11 define AppWhitelisting 8023, 8020, 8002, 8003, 8004, 8006, 8007, 4688, \
12 4689, 8005, 865, 866, 867, 868, 882
13
14 # define Boot Events
15 define BootEvents 13, 12
16
17 # define Certificate Services Events
18 define CertServices 95, 4886, 4890, 4874, 4873, 4870, 4887, 4885, \
19 4899, 4896, 1006, 1004, 1007, 1003, 1001, 1002
20
21 # define Clearing Event Logs Events
22 define ClearingLogs 1100, 104, 1102
23
24 # define DNS and Directory Services Events
25 define DNSDirectoryServ 5137, 5141, 5136, 5139, 5138, 3008, 3020
26
27 # define External Media Detection events
28 [...]
Example 454. Configuration Example of Event IDs Corresponding to Lateral Movements
This configuration, similar to the extended configuration above, lists event IDs associated with the detection
of malicious lateral movements. It is based on security research conducted by JPCERT/CC (Japan Computer
Emergency Response Team Coordination Center), published as Detecting Lateral Movement through Tracking
Event Logs.
nxlog.conf (truncated)
1 # define Security Events
2 define SecurityEvents 4624, 4634, 4648, 4656, 4658, 4660, 4663, 4672, \
3 4673, 4688, 4689, 4698, 4720, 4768, 4769, 4946, \
4 5140, 5142, 5144, 5145, 5154, 5156, 5447, 8222
5
6 # define Sysmon Events
7 define SysmonEvents 1, 2, 5, 8, 9
8
9 # define Application Management event
10 define ApplicationMgmt 104
11
12 # define Windows Remote Management Events
13 define WRMEvents 80, 132, 143, 166, 81
14
15 # define Task Scheduler - Operational Events
16 define TaskSchedEvents 106, 129, 200, 201
17
18 # define Local Session Manager - Operational Events
19 define LocalSessionMgrEvents 21, 24
20
21 #define BitsClient Events
22 define BitsClientsEvents 60
23
24 <Input LateralMovementEvents>
25 Module im_msvistalog
26 TolerateQueryErrors TRUE
27 <QueryXML>
28 <QueryList>
29 [...]
Event descriptions in EventLog data may contain tabs and newlines, but these are not supported by some
formats like BSD Syslog. In this case, a regular expression can be used to remove them.
This input instance is configured to modify the $Message field (the event description) by replacing all tab
characters and newline sequences with spaces.
nxlog.conf
1 <Input in>
2 Module im_mseventlog
3 Exec $Message =~ s/(\t|\R)/ /g;
4 </Input>
106.5.1. Forwarding EventLog in BSD Syslog Format
EventLog data is commonly sent in the BSD Syslog format. This can be generated with the to_syslog_bsd()
procedure provided by the xm_syslog module. For more information, see Sending Syslog to a Remote Logger via
UDP, TCP, or TLS.
This example configuration removes tab characters and newline sequences from the $Message field,
converts the event record to BSD Syslog format, and forwards the event via UDP.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input eventlog>
6 Module im_msvistalog
7 Exec $Message =~ s/(\t|\R)/ /g; to_syslog_bsd();
8 </Input>
9
10 <Output udp>
11 Module om_udp
12 Host 10.10.1.1
13 Port 514
14 </Output>
NOTE The to_syslog_bsd() procedure will use only a subset of the EventLog fields.
Output Sample
<14>Jan 2 10:21:16 win7host Service_Control_Manager[448]: The Computer Browser service entered
the running state.↵
Example 457. Sending EventLog in JSON Format
This example configuration converts the event record to JSON format and forwards the event via TCP.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input eventlog>
6 Module im_msvistalog
7 Exec to_json();
8 </Input>
9
10 <Output tcp>
11 Module om_tcp
12 Host 192.168.10.1
13 Port 1514
14 </Output>
Output Sample
{
"EventTime": "2017-01-02 10:21:16",
"Hostname": "win7host",
"Keywords": -9187343239835812000,
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventID": 7036,
"SourceName": "Service Control Manager",
"ProviderGuid": "{525908D1-A6D5-5695-8E2E-26921D2011F3}",
"Version": 0,
"Task": 0,
"OpcodeValue": 0,
"RecordNumber": 2629,
"ProcessID": 448,
"ThreadID": 2872,
"Channel": "System",
"Message": "The Computer Browser service entered the running state.",
"param1": "Computer Browser",
"param2": "running",
"EventReceivedTime": "2017-01-02 10:21:17",
"SourceModuleName": "eventlog",
"SourceModuleType": "im_msvistalog"
}
For compatibility with logging systems that require BSD Syslog, the JSON format can be used with a BSD Syslog
header.
Example 458. Encapsulating JSON EventLog in BSD Syslog
This example configuration converts the event record to JSON, adds a BSD Syslog header, and forwards the
event via UDP.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input eventlog>
10 Module im_msvistalog
11 Exec $Message = to_json(); to_syslog_bsd();
12 </Input>
13
14 <Output udp>
15 Module om_udp
16 Host 192.168.2.1
17 Port 514
18 </Output>
Output Sample
<14>Jan 2 10:21:16 win7host Service_Control_Manager[448]: {"EventTime":"2017-01-02
10:21:16","Hostname":"win7host","Keywords":-
9187343239835811840,"EventType":"INFO","SeverityValue":2,"Severity":"INFO","EventID":7036,"Sour
ceName":"Service Control Manager","ProviderGuid":"{525908D1-A6D5-5695-8E2E-
26921D2011F3}","Version":0,"Task":0,"OpcodeValue":0,"RecordNumber":2629,"ProcessID":448,"Thread
ID":2872,"Channel":"System","Message":"The Computer Browser service entered the running
state.","param1":"Computer Browser","param2":"running","EventReceivedTime":"2017-01-02
10:21:17","SourceModuleName":"eventlog","SourceModuleType":"im_msvistalog"}↵
Example 459. Sending EventLog in Snare Format
This example configuration removes tab characters and newline sequences from the $Message field,
converts the event record to the Snare over Syslog format, and forwards the event via UDP.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input eventlog>
6 Module im_msvistalog
7 Exec $Message =~ s/(\t|\R)/ /g; to_syslog_snare();
8 </Input>
9
10 <Output snare>
11 Module om_udp
12 Host 192.168.1.1
13 Port 514
14 </Output>
Output Sample
<14>Jan 2 10:21:16 win7host MSWinEventLog ⇥ 1 ⇥ System ⇥ 193 ⇥ Mon Jan 02 10:21:16 2017 ⇥
7036 ⇥ Service Control Manager ⇥ N/A ⇥ N/A ⇥ Information ⇥ win7host ⇥ N/A ⇥ ⇥ The Computer
Browser service entered the running state. ⇥ 2773↵
Chapter 107. Windows Firewall
Windows Firewall provides local protection from network attacks that might pass through your perimeter
network or originate inside your organization. It also provides computer-to-computer connection security by
allowing you to require authentication and data protection for communications.
Log Sample
#Software: Microsoft Windows Firewall↵
#Time Format: Local↵
#Fields: date time action protocol src-ip dst-ip src-port dst-port size tcpflags tcpsyn tcpack
tcpwin icmptype icmpcode info path↵
↵
2018-10-16 08:20:36 ALLOW UDP 127.0.0.1 127.0.0.1 54348 53 0 - - - - - - - SEND↵
2018-10-16 08:20:36 ALLOW UDP 127.0.0.1 127.0.0.1 54348 53 0 - - - - - - - RECEIVE↵
2018-10-16 08:20:36 ALLOW 250 127.0.0.1 127.0.0.1 - - 0 - - - - - - - SEND↵
There are several different actions that can be logged in the action field: DROP for dropping a connection, OPEN
for opening a connection, CLOSE for closing a connection, OPEN-INBOUND for an inbound session opened to the
local computer, and INFO-EVENTS-LOST for events processed by the Windows Firewall but which were not
recorded in the Security Log.
For information about configuring the Windows Firewall Security log, please refer to Configure the Windows
Defender Firewall with Advanced Security Log on Microsoft Docs.
This example configuration collects and parses firewall logs using the im_file and xm_w3c modules.
nxlog.conf
1 define EMPTY_EVENT_REGEX /(^$|^\s+$)/
2
3 <Extension w3c_parser>
4 Module xm_w3c
5 </Extension>
6
7 <Input pfirewall>
8 Module im_file
9 File 'C:\Windows\system32\LogFiles\Firewall\pfirewall.log'
10 InputType w3c_parser
11 Exec if $raw_event =~ %EMPTY_EVENT_REGEX% drop();
12 </Input>
There are several ways to enable Windows Firewall audit logging.
With auditpol.exe
The auditpol.exe command line tool can be used to enable Windows Firewall audit logs from the command line.
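As an illustrative invocation (the subcategory shown is one of the Windows Filtering Platform audit subcategories; adjust it to the events you need to collect):

```
auditpol.exe /set /subcategory:"Filtering Platform Connection" /success:enable /failure:enable
```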
After audit logging is enabled, audit events can be viewed in the Security event log or collected with NXLog. For a
full list of Windows Security Audit events, download the Windows security audit events spreadsheet from the
Microsoft Download Center.
Example 461. Collecting Windows Firewall and Advanced Security Events from the EventLog
This example collects Windows Firewall events from the EventLog using the im_msvistalog module.
1 <Input WinFirewallEventLog>
2   Module im_msvistalog
3   <QueryXML>
4     <QueryList>
5       <Query Id="0">
6         <Select Path="Microsoft-Windows-Windows Firewall With Advanced Security/ConnectionSecurity">*</Select>
7         <Select Path="Microsoft-Windows-Windows Firewall With Advanced Security/ConnectionSecurityVerbose">*</Select>
8         <Select Path="Microsoft-Windows-Windows Firewall With Advanced Security/Firewall">*</Select>
9         <Select Path="Microsoft-Windows-Windows Firewall With Advanced Security/FirewallVerbose">*</Select>
10        <Select Path="Network Isolation Operational">*</Select>
11      </Query>
12    </QueryList>
13  </QueryXML>
14 </Input>
For more information about ETW, see Event Tracing on Microsoft Docs.
Example 462. Collecting Windows Firewall and Advanced Security Traces from ETW
This configuration uses the im_etw module to collect Windows Firewall related traces from Event Tracing for
Windows.
nxlog.conf
1 <Input etw>
2 Module im_etw
3 Provider Microsoft-Windows-Firewall
4 </Input>
5
6 <Input etw2>
7 Module im_etw
8 Provider Microsoft-Windows-Windows Firewall With Advanced Security
9 </Input>
Chapter 108. Windows Group Policy
Windows Group Policy allows the centralized management and administration of user and computer accounts in
a Microsoft Active Directory environment.
There are several ways Group Policy related logs can be acquired.
• Group Policy Operational logs and Security logs from Windows Event Log
• Event Tracing for Windows (ETW)
• File-based logs found in the file system
This topic covers the methods that can be used to collect these logs with NXLog.
The Group Policy Operational logs are displayed in the Operational object under the Applications and Services
> Microsoft > Windows > GroupPolicy directory in Event Viewer.
Group Policy stores some events in the Security channel of the Windows Event Log. These events are related to
the access, deletion, modification and creation of objects.
Example 463. Collecting Group Policy Logs from Windows Event Log
The following configuration uses the im_msvistalog module to collect Group Policy logs from the Security
channel. It includes a custom query that will filter for events based on specified EventIDs.
nxlog.conf
1 <Input in>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0" Path="Security">
6 <Select Path="Security">
7 *[System[(EventID=4663 or EventID=5136 or
8 EventID=5137 or EventID=5141)]]
9 </Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 </Input>
The Microsoft-Windows-GroupPolicy provider supplies Group Policy related logs via an event tracing session
that can be collected via ETW. The logs come from the same source that Windows Event Log reads in the
previous example; however, the im_etw module can collect ETW trace data and forward it without saving the
data to disk, which results in improved efficiency. There are also slight differences in the level of
verbosity, so it is worth considering both options and picking the one that best suits your environment.
The following configuration uses the im_etw module to collect Group Policy logs from an ETW provider.
nxlog.conf
1 <Input in>
2 Module im_etw
3 Provider Microsoft-Windows-GroupPolicy
4 </Input>
Group Policy stores Group Policy Client Service (GPSVC) and Group Policy Management Console (GPMC) logs in
the %windir%\debug\usermode directory.
The following configuration uses the im_file module to collect GPMC and GPSVC logs from the above-mentioned
%windir%\debug\usermode directory. Since these logs are encoded in UTF-16LE, they need to be converted
to UTF-8 using the xm_charconv extension module.
nxlog.conf (truncated)
1 <Extension _charconv>
2 Module xm_charconv
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 define GroupPolicy /(?x)\w+\((?<PID>[\w\d]{3,4}). \
10 (?<TID>[\w\d]{3,4})\)\s+ \
11 (?<time>[\d\:]+)\s+ \
12 (?<Message>.*)/
13
14 <Input in>
15 Module im_file
16 File 'C:\Windows\debug\usermode\gpsvc.log'
17 File 'C:\Windows\debug\usermode\gpmc.log'
18 <Exec>
19 #Query the current filename
20 if file_name() =~ /^.*\\(.*)$/ $FileName = $1;
21
22 # Convert character encoding from UTF-16LE to UTF-8
23 $raw_event = convert($raw_event, 'UTF-16LE', 'UTF-8');
24
25 #Parse $raw_event
26 if $raw_event =~ %GroupPolicy%
27
28 #Query year, month and day details from the current system
29 [...]
Chapter 109. Windows Management Instrumentation (WMI)
The Windows Management Instrumentation (WMI) system is an implementation of the Web-Based Enterprise
Management (WBEM) and Common Information Model (CIM) standards. It provides an infrastructure for
managing remote systems and providing management data. For more information about WMI, see Windows
Management Instrumentation on Microsoft Docs.
WMI event logging uses Event Tracing for Windows (ETW). These logs can be collected via the Windows Event Log
or directly via ETW. For Windows versions prior to Windows Vista and Windows Server 2008, it is also possible to
read from WMI log files.
• 5857: Operation_Started
• 5858: Operation_ClientFailure
• 5859: Operation_EssStarted
• 5860: Operation_TemporaryEssStarted
• 5861: Operation_ESStoConsumerBinding
The following configuration will collect and parse these events from Microsoft-Windows-WMI-
Activity/Operational using the im_msvistalog module. The xm_xml module is used to further parse the
XML data in the $UserData field.
nxlog.conf
1 <Extension _xml>
2 Module xm_xml
3 </Extension>
4
5 <Input in>
6 Module im_msvistalog
7 <QueryXML>
8 <QueryList>
9 <Query Id="0">
10 <Select Path="Microsoft-Windows-WMI-Activity/Operational">*</Select>
11 </Query>
12 </QueryList>
13 </QueryXML>
14 Exec if $UserData parse_xml($UserData);
15 </Input>
Output Sample
{
"EventTime": "2019-02-24T21:19:36.603548+01:00",
"Hostname": "Host.DOMAIN.local",
"Keywords": "4611686018427387904",
"EventType": "ERROR",
"SeverityValue": 4,
"Severity": "ERROR",
"EventID": 5858,
"SourceName": "Microsoft-Windows-WMI-Activity",
"ProviderGuid": "{1418EF04-B0B4-4623-BF7E-D74AB47BBDAA}",
"Version": 0,
"TaskValue": 0,
"OpcodeValue": 0,
"RecordNumber": 7314,
"ActivityID": "{3459A8FD-CC70-0000-47C6-593470CCD401}",
"ExecutionProcessID": 1020,
"ExecutionThreadID": 8840,
"Channel": "Microsoft-Windows-WMI-Activity/Operational",
"Domain": "NT AUTHORITY",
"AccountName": "SYSTEM",
"UserID": "S-1-5-18",
"AccountType": "User",
"Message": "Id = {3459A8FD-CC70-0000-47C6-593470CCD401}; ClientMachine = HOST; User = NT
AUTHORITY\\SYSTEM; ClientProcessId = 3640; Component = Unknown; Operation = Start
IWbemServices::ExecQuery - root\\cimv2 : Select * from Win32_Service Where Name = 'MpsSvc';
ResultCode = 0x80041032; PossibleCause = Unknown",
"Opcode": "Info",
"UserData": "<Operation_ClientFailure
xmlns='http://manifests.microsoft.com/win/2006/windows/WMI'><Id>{3459A8FD-CC70-0000-47C6-
593470CCD401}</Id><ClientMachine>HOST</ClientMachine><User>NT AUTHORITY
\\SYSTEM</User><ClientProcessId>3640</ClientProcessId><Component>Unknown</Component><Operation>
Start IWbemServices::ExecQuery - root\\cimv2 : Select * from Win32_Service Where Name =
'MpsSvc'</Operation><ResultCode>0x80041032</ResultCode><PossibleCause>Unknown</PossibleCause></
Operation_ClientFailure>",
"EventReceivedTime": "2019-02-24T21:19:38.104568+01:00",
"SourceModuleName": "in",
"SourceModuleType": "im_msvistalog",
"Operation_ClientFailure.Id": "{3459A8FD-CC70-0000-47C6-593470CCD401}",
"Operation_ClientFailure.ClientMachine": "HOST",
"Operation_ClientFailure.User": "NT AUTHORITY\\SYSTEM",
"Operation_ClientFailure.ClientProcessId": "3640",
"Operation_ClientFailure.Component": "Unknown",
"Operation_ClientFailure.Operation": "Start IWbemServices::ExecQuery - root\\cimv2 : Select *
from Win32_Service Where Name = 'MpsSvc'",
"Operation_ClientFailure.ResultCode": "0x80041032",
"Operation_ClientFailure.PossibleCause": "Unknown"
}
Example 467. Collecting WMI Logs With im_etw
The following configuration uses the im_etw module to collect ETW logs from the Microsoft-Windows-
WMI-Activity provider.
nxlog.conf
1 <Input etw_in>
2 Module im_etw
3 Provider Microsoft-Windows-WMI-Activity
4 </Input>
Output Sample
{
"SourceName": "Microsoft-Windows-WMI-Activity",
"ProviderGuid": "{1418EF04-B0B4-4623-BF7E-D74AB47BBDAA}",
"EventId": 100,
"Version": 0,
"Channel": 18,
"OpcodeValue": 0,
"TaskValue": 0,
"Keywords": "2305843009213693952",
"EventTime": "2019-03-04T19:48:48.842576+01:00",
"ExecutionProcessID": 1500,
"ExecutionThreadID": 8104,
"ActivityID": "{AF4CFCDC-66C1-4A9A-B7D7-13ECD1AAE01A}",
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Domain": "NT AUTHORITY",
"AccountName": "SYSTEM",
"UserID": "S-1-5-18",
"AccountType": "User",
"ComponentName": "MI_Client",
"MessageDetail": "Operation Enumerate Instances: session=0000008F1C752638,
operation=0000008F1D03DCF0, internal-operation=0000008F1D63ED90, namespace=root\\Microsoft
\\Windows\\Storage\\SM, classname=MSFT_SMStorageVolume",
"FileName": "admin\\wmi\\wmiv2\\client\\api\\operation.c:2008",
"EventReceivedTime": "2019-03-04T19:48:49.888767+01:00",
"SourceModuleName": "etw_in",
"SourceModuleType": "im_etw"
}
Example 468. Collecting and Parsing WMI Provider Log Files
This configuration collects and parses events from the three WMI log files.
nxlog.conf
1 <Input in>
2 Module im_file
3 File 'C:\WINDOWS\system32\wbem\Logs\wmiprov.log'
4 File 'C:\WINDOWS\system32\wbem\Logs\ntevt.log'
5 File 'C:\WINDOWS\system32\wbem\Logs\dsprovider.log'
6 <Exec>
7 file_name() =~ /(?<Filename>[^\\]+)$/;
8 if $raw_event =~ /^\((?<EventTime>.+)\.\d{7}\) : (?<Message>.+)$/
9 $EventTime = strptime($EventTime, "%a %b %d %H:%M:%S %Y");
10 </Exec>
11 </Input>
Chapter 110. Windows PowerShell
PowerShell is a command-line shell based on the .NET Framework.
By default, Windows services run under the NT AUTHORITY\SYSTEM user account. Depending on the purpose of
a PowerShell script, its operations may require additional permissions. In this case, either change the NXLog
service account (see Running Under a Custom Account on Windows) or add permissions as required to the
SYSTEM account.
This configuration uses the im_exec module to execute powershell.exe with the specified arguments,
including the path to the script. The script creates an event and writes it to standard output in JSON format.
The xm_json parse_json() procedure is used to parse the JSON so all the fields are available in the event
record.
The script shows example headers for running it under a different architecture than the NXLog agent. A
simple file-based position cache is also included to demonstrate how a script can resume from the previous
position when the agent or module instance is stopped and started again.
Because the end value of one poll and the start value of the next poll are equal, an actual source read
should not include exact matches for both start and end values (to prevent reading duplicate events). For
example, either the start value should be excluded ($start < $event ≤ $end) or the end value ($start ≤
$event < $end).
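A minimal PowerShell sketch of this rule (the variable names here are hypothetical, not part of the shipped script):

```powershell
# Select events in the half-open interval ($start, $end]: an event whose
# timestamp equals $start (the end value of the previous poll) is skipped,
# so it is not read twice across polls.
$selected = $events | Where-Object { $_.Time -gt $start -and $_.Time -le $end }
```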
NOTE This example requires PowerShell 3 or later to transport structured data in JSON format. If structured
data is required with an earlier version of PowerShell, CSV format could be used instead; see the next example.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 envvar systemroot
6 <Input powershell>
7 Module im_exec
8 Command "%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe"
9 # Use "-Version" to select a specific PowerShell version.
10 #Arg "-Version"
11 #Arg "3"
12 # Bypass the system execution policy for this session only.
13 Arg "-ExecutionPolicy"
14 Arg "Bypass"
15 # Skip loading the local PowerShell profile.
16 Arg "-NoProfile"
17 # This specifies the path to the PowerShell script.
18 Arg "-File"
19 Arg "C:\ps_input.ps1"
20 # Any additional arguments are passed to the PowerShell script.
21 Arg "arg1"
22 <Exec>
23 # Parse JSON
24 parse_json();
25
26 # Convert $EventTime field to datetime
27 $EventTime = parsedate($EventTime);
28 </Exec>
29 </Input>
ps_input.ps1 (truncated)
#Requires -Version 3
# Use this if you need 64-bit PowerShell (has no effect on 32-bit systems).
#if ($env:PROCESSOR_ARCHITEW6432 -eq "AMD64") {
# Write-Debug "Running 64-bit PowerShell."
# &"$env:SYSTEMROOT\SysNative\WindowsPowerShell\v1.0\powershell.exe" `
# -NonInteractive -NoProfile -ExecutionPolicy Bypass `
# -File "$($myInvocation.InvocationName)" $args
# exit $LASTEXITCODE
#}
PowerShell 2 does not support JSON. Instead, events can be formatted as CSV and parsed with an xm_csv
module instance.
Example 470. Using PowerShell 2 as Input
In this example, the PowerShell script generates output strings in CSV format. The xm_csv parse_csv()
procedure is used to parse the CSV strings into fields in the event record. Note that the fields must be
provided, sorted by name, in the xm_csv Fields directive (and corresponding types should be provided via
the FieldTypes directive).
WARNING For best results with structured data, use JSON with PowerShell 3 or later (see the previous example).
nxlog.conf
1 <Extension csv_parser>
2 Module xm_csv
3 Fields Arguments, EventTime, Message
4 FieldTypes string, datetime, string
5 </Extension>
6
7 envvar systemroot
8 <Input powershell>
9 Module im_exec
10 Command "%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe"
11 Arg "-Version"
12 Arg "2"
13 Arg "-ExecutionPolicy"
14 Arg "Bypass"
15 Arg "-NoProfile"
16 Arg "-File"
17 Arg "C:\ps2_input.ps1"
18 Exec csv_parser->parse_csv();
19 </Input>
ps2_input.ps1 (truncated)
#Requires -Version 2
$count = 0
while($true) {
$count += 1
$now = [System.DateTime]::UtcNow
Example 471. Using PowerShell to Forward Logs
This configuration uses om_exec to execute powershell.exe with the specified arguments, including the
path to the script. The script reads events on standard input.
NOTE This configuration requires PowerShell 3 or later for its JSON support and to correctly read lines from
standard input.
TIP See the Using PowerShell to Generate Logs example above for more details about powershell.exe
arguments and PowerShell code for explicitly specifying a 32-bit or 64-bit environment.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 envvar systemroot
6 <Output powershell>
7 Module om_exec
8 Command "%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe"
9 Arg "-ExecutionPolicy"
10 Arg "Bypass"
11 Arg "-NoProfile"
12 Arg "-File"
13 Arg "C:\ps_output.ps1"
14 Exec to_json();
15 </Output>
ps_output.ps1
#Requires -Version 3
while($line = [Console]::In.ReadLine()) {
# Convert from JSON
    $record = $line | ConvertFrom-Json
}
Because include_stdout does not support arguments, it is simplest to use a batch/PowerShell polyglot script for
this purpose. For another example, see Automatic Retrieval of IIS Site Log Locations.
TIP The Command Prompt may print '@' is not recognized if a Unicode byte order mark (BOM) is included in
the batch file. To fix this, use Notepad and select the ANSI encoding when saving the file.
Example 472. Using PowerShell and include_stdout
This configuration uses PowerShell code to generate the File directive for the Input instance.
nxlog.conf
1 <Input in>
2 Module im_file
3 include_stdout C:\include.cmd
4 </Input>
include.cmd (truncated)
@( Set "_= (
REM " ) <#
)
@Echo Off
SetLocal EnableExtensions DisableDelayedExpansion
set powershell=powershell.exe
REM Use this if you need 64-bit PowerShell (has no effect on 32-bit systems).
REM if defined PROCESSOR_ARCHITEW6432 (
REM set powershell=%SystemRoot%\SysNative\WindowsPowerShell\v1.0\powershell.exe
REM )
In addition to the sections below, see Securing PowerShell in the Enterprise, Greater Visibility Through
PowerShell Logging, and PowerShell ♥ the Blue Team. Also see the Command line process auditing article on
Microsoft Docs, as well as the Windows Command Line Auditing and Sysmon chapters, which can be used to
generate events for command line process creation (but not for commands executed through the PowerShell engine).
Module logging can be enabled by setting a module's LogPipelineExecutionDetails property to True.
Alternatively, this property can be enabled for selected modules through Group Policy as follows.
3. Select Enabled. Then click the [ Show… ] button and enter the modules for which to enable logging. Use an
asterisk (*) to enable logging for all modules.
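As a sketch, the property can also be set for a single module from a PowerShell session (the module name below is only an example; the setting lasts until the session ends unless applied through Group Policy):

```powershell
# Enable pipeline execution logging for one module in the current session
Import-Module Microsoft.PowerShell.Utility
$module = Get-Module Microsoft.PowerShell.Utility
$module.LogPipelineExecutionDetails = $true
```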
This configuration collects all events with ID 4103 from the Windows PowerShell Operational channel. First,
the \r\n and \n sequences in the $ContextInfo field are replaced with commas so its key-value pairs can
be parsed, and the parsed fields are given a ContextInfo_ prefix for clarity. The original $Message and
$ContextInfo fields are then removed, since their content is available elsewhere in the output. Finally, the
logs are converted to JSON.
nxlog.conf
1 <Extension kvp>
2 Module xm_kvp
3 KVPDelimiter ,
4 KVDelimiter =
5 </Extension>
6
7 <Extension json>
8 Module xm_json
9 </Extension>
10
11 <Input in>
12 Module im_msvistalog
13 <QueryXML>
14 <QueryList>
15 <Query Id="0" Path="Microsoft-Windows-PowerShell/Operational">
16 <Select Path="Microsoft-Windows-PowerShell/Operational">
17 *[System[EventID=4103]]</Select>
18 </Query>
19 </QueryList>
20 </QueryXML>
21 <Exec>
22 if defined($ContextInfo)
23 {
24 $ContextInfo = replace($ContextInfo, "\r\n", ",");
25 $ContextInfo = replace($ContextInfo, "\n", ",");
26 $ContextInfo = replace($ContextInfo, " ", "");
27 kvp->parse_kvp($ContextInfo, "ContextInfo_");
28 delete($ContextInfo);
29 delete($Message);
30 }
31 json->to_json();
32 </Exec>
33 </Input>
Output Sample
{
"EventTime": "2020-01-29T05:30:45.727799-08:00",
"Hostname": "NXLog-Server",
"Keywords": "0",
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventID": 4103,
"SourceName": "Microsoft-Windows-PowerShell",
"ProviderGuid": "{A0C1853B-5C40-4B15-8766-3CF1C58F985A}",
"Version": 1,
"TaskValue": 106,
"OpcodeValue": 20,
"RecordNumber": 170,
"ActivityID": "{9C1FE60B-D6F2-0000-3316-209CF2D6D501}",
"ExecutionProcessID": 3648,
"ExecutionThreadID": 1060,
"Channel": "Microsoft-Windows-PowerShell/Operational",
"Domain": "NXLog-Server",
"AccountName": "Administrator",
"UserID": "S-1-5-21-2463765617-934790487-2583750676-500",
"AccountType": "User",
"Category": "Executing Pipeline",
"Opcode": "To be used when operation is just executing a method",
"Payload": "Update-Help has completed successfully.",
"EventReceivedTime": "2020-01-29T05:30:47.161585-08:00",
"SourceModuleName": "in",
"SourceModuleType": "im_msvistalog",
"ContextInfo_Severity": "Informational",
"ContextInfo_Host Name": "ConsoleHost",
"ContextInfo_Host Version": "5.1.17763.592",
"ContextInfo_Host ID": "67d049eb-f3d6-4718-8cd2-b9dae30c4c7b",
"ContextInfo_Host Application": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
"ContextInfo_Engine Version": "5.1.17763.592",
"ContextInfo_Runspace ID": "3145a9e1-18e3-4fa1-8700-fc78c783684b",
"ContextInfo_Pipeline ID": 6,
"ContextInfo_Command Name": "Update-Help",
"ContextInfo_Command Type": "Cmdlet",
"ContextInfo_Script Name": null,
"ContextInfo_Command Path": null,
"ContextInfo_Sequence Number": 79,
"ContextInfo_User": "NXLog-Server\\Administrator",
"ContextInfo_Connected User": null,
"ContextInfo_Shell ID": "Microsoft.PowerShell"
}
2. Go to Computer Configuration › Administrative Templates › Windows Components › Windows
PowerShell and open the Turn on PowerShell Script Block Logging setting.
3. Select Enabled. Optionally, check the Log script block invocation start/stop events option (this will
generate a high volume of event logs).
Example 474. Collecting Script Block Logging Events
The following configuration collects events with IDs 4104, 4105, and 4106 from the Windows PowerShell
Operational channel. Verbose level events are excluded.
nxlog.conf
1 <Input script_block_logging>
2 Module im_msvistalog
3 <QueryXML>
4 <QueryList>
5 <Query Id="0" Path="Microsoft-Windows-PowerShell/Operational">
6 <Select Path="Microsoft-Windows-PowerShell/Operational">
7 *[System[(Level=0 or Level=1 or Level=2 or Level=3 or Level=4)
8 and ((EventID >= 4104 and EventID <= 4106))]]
9 </Select>
10 </Query>
11 </QueryList>
12 </QueryXML>
13 </Input>
110.2.3. Transcription
PowerShell provides "over-the-shoulder" transcription of PowerShell sessions with the Start-Transcript cmdlet.
With PowerShell 5, system-wide transcription can be enabled via Group Policy; this is equivalent to calling the
Start-Transcript cmdlet on each PowerShell session. Transcriptions are written to the current user’s Documents
directory unless a system-level output directory is set in the policy settings.
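For illustration, a session-level transcript can be started as follows (the output path and the use of -IncludeInvocationHeader are assumptions; invocation headers add the Command start time lines used for command timestamps):

```powershell
# Start a transcript for the current session with invocation headers
Start-Transcript -Path 'C:\powershell\transcript.txt' -IncludeInvocationHeader
Get-Date
Stop-Transcript
```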
System-wide transcription can be enabled through Group Policy as follows.
This configuration reads and parses transcript files written to the TRANSCRIPTS_DIR directory (which
should be set appropriately). Headers, footers, and commands are parsed as separate events. $File and
$EventTime fields are set for each event (invocation headers must be enabled for command timestamps).
$Command and $Output fields are added for command events. Fields from the header entries are parsed
with xm_kvp and added to the event record. Finally, the logs are converted to JSON format and forwarded
via TCP.
NOTE The HeaderLine below must be changed if invocation headers are not enabled. See the comment in the
configuration.
nxlog.conf (truncated)
1 define TRANSCRIPTS_DIR C:\powershell
2
3 <Extension transcript_parser>
4 Module xm_multiline
5 # Use this if invocation headers are ON (recommended)
6 HeaderLine /^\*{22}$/
7 # Use this if invocation headers are OFF (not recommended)
8 #HeaderLine /^(\*{22}$|PS[^>]*>)/
9 <Exec>
10 $raw_event =~ s/^\xEF\xBB\xBF//;
11 if get_var('include_next_record') and $raw_event =~ /^\*{22}$/
12 {
13 set_var('include_next_record', FALSE);
14 $raw_event =~ s/^\*//;
15 }
16 else if $raw_event =~ /^Command start time: \d{14}$/
17 set_var('include_next_record', TRUE);
18 </Exec>
19 </Extension>
20
21 <Extension transcript_header_parser>
22 Module xm_kvp
23 KVPDelimiter \n
24 </Extension>
25
26 <Input transcription>
27 Module im_file
28 File '%TRANSCRIPTS_DIR%\\*PowerShell_transcript.*'
29 [...]
The following output shows the first two events of the log sample above.
Output Sample
{
"EventReceivedTime": "2017-10-30 22:32:49",
"SourceModuleName": "transcription",
"SourceModuleType": "im_file",
"File": "C:\\powershell\\\\20171030\\PowerShell_transcript.WIN-
FT17VBNL4B2.LcxuCZbr.20171030223248.txt",
"Message": "Windows PowerShell transcript start\r\nStart time: 20171030223248\r\nUsername:
WIN-FT17VBNL4B2\\Administrator\r\nRunAs User: WIN-FT17VBNL4B2\\Administrator\r\nMachine: WIN-
FT17VBNL4B2 (Microsoft Windows NT 10.0.14393.0)\r\nHost Application: C:\\Windows\\system32
\\WindowsPowerShell\\v1.0\\PowerShell.exe\r\nProcess ID: 4268\r\nPSVersion: 5.1.14393.1770
\r\nPSEdition: Desktop\r\nPSCompatibleVersions: 1.0, 2.0, 3.0, 4.0, 5.0, 5.1.14393.1770
\r\nBuildVersion: 10.0.14393.1770\r\nCLRVersion: 4.0.30319.42000\r\nWSManStackVersion: 3.0
\r\nPSRemotingProtocolVersion: 2.3\r\nSerializationVersion: 1.1.0.1",
"Start time": "20171030223248",
"Username": "WIN-FT17VBNL4B2\\Administrator",
"RunAs User": "WIN-FT17VBNL4B2\\Administrator",
"Machine": "WIN-FT17VBNL4B2 (Microsoft Windows NT 10.0.14393.0)",
"Host Application": "C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\PowerShell.exe",
"Process ID": "4268",
"PSVersion": "5.1.14393.1770",
"PSEdition": "Desktop",
"PSCompatibleVersions": "1.0, 2.0, 3.0, 4.0, 5.0, 5.1.14393.1770",
"BuildVersion": "10.0.14393.1770",
"CLRVersion": "4.0.30319.42000",
"WSManStackVersion": "3.0",
"PSRemotingProtocolVersion": "2.3",
"SerializationVersion": "1.1.0.1",
"EventTime": "2017-10-30 22:32:48"
}
{
"EventReceivedTime": "2017-10-30 22:32:56",
"SourceModuleName": "transcription",
"SourceModuleType": "im_file",
"File": "C:\\powershell\\\\20171030\\PowerShell_transcript.WIN-
FT17VBNL4B2.LcxuCZbr.20171030223248.txt",
"Command": "echo test",
"EventTime": "2017-10-30 22:32:55",
"Output": "test",
"Message": "Command start time: 20171030223255\r\n**********************\r\nPS C:\\Users
\\Administrator> echo test\r\ntest"
}
Chapter 111. Microsoft Windows Update
Windows Update is a Windows system service that manages updates for the Windows operating system.
Updates and patches are scheduled for release through Windows Update on the second Tuesday of each
month.
The event logs related to Windows Update are accessible in two ways depending on the version of your
operating system:
• Via Event Tracing for Windows (ETW), for Windows 10, Windows Server 2016 and Windows Server 2019.
• Via the file system, in earlier versions of Windows.
The following configuration collects Windows Update logs using the im_etw module. The collected logs are
then converted to JSON using the xm_json extension module.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input in_etw>
6 Module im_etw
7 Provider Microsoft-Windows-WindowsUpdateClient
8 Exec to_json();
9 </Input>
Output Sample
{
"SourceName": "Microsoft-Windows-WindowsUpdateClient",
"ProviderGuid": "{945A8954-C147-4ACD-923F-40C45405A658}",
"EventId": 38,
"Version": 0,
"Channel": 16,
"OpcodeValue": 17,
"TaskValue": 1,
"Keywords": "4611686018427388544",
"EventTime": "2019-06-06T15:08:01.098200+02:00",
"ExecutionProcessID": 820,
"ExecutionThreadID": 2440,
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Domain": "NT AUTHORITY",
"AccountName": "SYSTEM",
"UserID": "S-1-5-18",
"AccountType": "User",
"EventReceivedTime": "2019-06-06T15:08:01.847001+02:00",
"SourceModuleName": "in_etw",
"SourceModuleType": "im_etw"
}
File-based Log Collection
Prior to the release of Windows Server 2016 and Windows 10, all Windows Update logs were stored in the
WindowsUpdate.log file under the %SystemRoot% directory.
NOTE Although this log file is deprecated, it can still be generated as described in the Generating
WindowsUpdate.log Microsoft article.
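For example, on Windows 10 and Windows Server 2016 or later the following cmdlet merges the ETW trace files into a WindowsUpdate.log file (the output path is an assumption, chosen to match the im_file configuration below):

```powershell
# Regenerate the deprecated WindowsUpdate.log from the ETW trace files
Get-WindowsUpdateLog -LogPath 'C:\Windows\WindowsUpdate.log'
```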
Example 477. Collecting Windows Update Logs from Microsoft Windows Server 2008 and 2012
The following configuration collects and parses logs using the im_file module. The parser is based on the
Windows Update log files section of the Microsoft documentation.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 define windowsupdate /(?x)(?<Date>([\d\-]+))\s+ \
6 (?<Time>([\d\:]+))\s+ \
7 (?<PID>\d{3,5})\s+ \
8 (?<TID>([\d\w]+))\s+ \
9 (?<Category>(\w+))\s+ \
10 (?<Message>(.*)) /
11
12 <Input windowsupdate>
13 Module im_file
14 File 'C:\Windows\WindowsUpdate.log'
15 <Exec>
16 $raw_event =~ %windowsupdate%;
17 $EventTime = ($Date + ' ' + $Time);
18 to_json();
19 </Exec>
20 </Input>
Input Sample
2019-06-06 18:22:14:390 1012 1080 DnldMgr PurgeContentForPatchUpdates removing unused
directory "b7c04a03c3650087ddea456a018dba62"
Output Sample
{
"EventReceivedTime": "2019-06-06T18:22:14.843037+02:00",
"SourceModuleName": "windowsupdate",
"SourceModuleType": "im_file",
"Category": "DnldMgr",
"Date": "2019-06-06",
"Message": "PurgeContentForPatchUpdates removing unused directory
\"b7c04a03c3650087ddea456a018dba62\"",
"PID": "1012",
"TID": "1080",
"Time": "18:22:14:390",
"EventTime": "2019-06-06 18:22:14:390"
}
Chapter 112. Windows USB Auditing
Portable devices give users easy access to company data in a corporate environment. As the usage of USB
devices increases, so do the risks associated with them.
This section discusses the possibilities of collecting USB related events in a Microsoft Windows environment
using NXLog.
There are four ways USB-related events can be tracked.
They are generated every time a device is plugged in. Tracking these USB-related events is useful for audit
purposes.
They can be used to monitor object manipulation, such as creation and deletion, as well as other changes. This
can be useful for monitoring for possible data leaks.
These two events can be enabled in the Local Security Policy or with the auditpol tool, using the command
below in Windows PowerShell.
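A typical auditpol invocation for this purpose is shown below (the Removable Storage subcategory is an assumption based on the USB auditing context, not taken from the original text):

```powershell
# Enable success and failure auditing for the Removable Storage subcategory,
# which generates object access events such as 4663 for USB storage devices
auditpol /set /subcategory:"Removable Storage" /success:enable /failure:enable
```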
The following command could be used to check the status of subcategories if necessary.
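For instance (the category name is an assumption):

```powershell
# List the current audit settings for all Object Access subcategories,
# including Removable Storage
auditpol /get /category:"Object Access"
```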
Event 4663 is the most useful: it records exactly what happened to the object, what was accessed, which
process performed the access, and what kind of operation it was.
DriverFrameworks-UserMode (not enabled by default):
Connection 1003, 1004, 2000, 2001, 2003, 2004, 2005, 2006, 2010, 2100, 2101, 2105, 2016
Ejection 1006, 1008, 2100, 2101, 2102, 2105, 2106, 2900, 2901
In Event Viewer (eventvwr) under Applications and Services Logs › Microsoft › Windows › DriverFrameworks-
UserMode\Operational, right-click on Operational and select Enable Log.
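The same channel can also be enabled from the command line (this wevtutil invocation is an assumption, not shown in the original text):

```powershell
# Enable the DriverFrameworks-UserMode Operational channel
wevtutil set-log Microsoft-Windows-DriverFrameworks-UserMode/Operational /e:true
```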
1. Enable a Remote Administration exception on the firewall of the client computers via a GPO. The following
needs to be enabled. [Computer Configuration\Administrative Templates\Network\Network
Connections\Windows Firewall\Domain Profile\Windows Firewall: Allow inbound remote
administration exception]
2. Prepare a text file for the client computer names. For example, c:\computers.txt.
The following PowerShell command checks the status of logging:
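One way to perform such a check (the exact cmdlet and parameters are assumptions) is to query each computer from the prepared list:

```powershell
# Query whether the Operational channel is enabled on each listed computer
Get-Content 'c:\computers.txt' | ForEach-Object {
    Get-WinEvent -ListLog 'Microsoft-Windows-DriverFrameworks-UserMode/Operational' `
        -ComputerName $_ | Select-Object LogName, IsEnabled
}
```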
This configuration uses the im_msvistalog module to collect USB events. EventIDs that are useful from the
audit perspective are listed in the configuration define lines.
nxlog.conf (truncated)
1 <Extension _xml>
2 Module xm_xml
3 </Extension>
4
5 # StorSvc Diagnostic
6 define ID1 1001
7 # PnP detailed tracking
8 define ID2 6416
9 # Partition Diagnostic
10 define ID3 1006
11 # NTFS
12 define ID4 142
13 # DriverFw preconnection
14 define ID5 1003
15 # DriverFw connection-related
16 define ID6 2003
17 # DriverFw removal-related
18 define ID7 1008
19 # System: DriverFramework-Usermode
20 define ID8 10000
21 # System: UserPNP
22 define ID9 20001
23 #Object Access Audit
24 define ID10 4656
25
26 <Input in>
27 Module im_msvistalog
28 # For Windows 2003 and earlier, use the im_mseventlog module.
29 [...]
Output Sample
{
"EventTime": "2019-10-19T20:41:06.700337+02:00",
"Hostname": "Host",
"Keywords": "9223372036854775808",
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"EventID": 1008,
"SourceName": "Microsoft-Windows-DriverFrameworks-UserMode",
"ProviderGuid": "{2E35AAEB-857F-4BEB-A418-2E6C0E54D988}",
"Version": 1,
"TaskValue": 18,
"OpcodeValue": 2,
"RecordNumber": 42756,
"ExecutionProcessID": 908,
"ExecutionThreadID": 504,
"Channel": "Microsoft-Windows-DriverFrameworks-UserMode/Operational",
"Domain": "NT AUTHORITY",
"AccountName": "SYSTEM",
"UserID": "S-1-5-18",
"AccountType": "User",
"Message": "The host process ({1208e11e-4339-4c06-86bb-7430fd254ee6}) has been shutdown.",
"Category": "Shutdown of a driver host process.",
"Opcode": "Stop",
"UserData": "<UMDFDriverManagerHostShutdown
xmlns='http://www.microsoft.com/DriverFrameworks/UserMode/Event'><LifetimeId>{1208e11e-4339-
4c06-86bb-
7430fd254ee6}</LifetimeId><TerminateStatus>0</TerminateStatus><ExitCode>0</ExitCode></UMDFDrive
rManagerHostShutdown>",
"EventReceivedTime": "2019-10-19T20:41:08.115696+02:00",
"SourceModuleName": "in",
"SourceModuleType": "im_msvistalog",
"UMDFDriverManagerHostShutdown.LifetimeId": "{1208e11e-4339-4c06-86bb-7430fd254ee6}",
"UMDFDriverManagerHostShutdown.TerminateStatus": "0",
"UMDFDriverManagerHostShutdown.ExitCode": "0"
}
Provider Details
Microsoft-Windows-USB-USBHUB Provides USB2 hub events
Microsoft-Windows-USB-USBHUB3 Provides USB3 hub events
Microsoft-Windows-USB-USBXHCI Provides USB XHCI events
Microsoft-Windows-USB-CCID Monitors Smart Card readers using USB to connect to the computer
Microsoft-Windows-Smartcard-Trigger Triggers a log when inserting and removing a USB smart card reader
This configuration uses the im_etw module to collect logs when a USB Smart Card reader is inserted.
nxlog.conf
1 <Input etw>
2 Module im_etw
3 Provider Microsoft-Windows-Smartcard-Trigger
4 </Input>
Output Sample
{
"SourceName": "Microsoft-Windows-Smartcard-Trigger",
"ProviderGuid": "{AEDD909F-41C6-401A-9E41-DFC33006AF5D}",
"EventId": 1000,
"Version": 0,
"ChannelID": 0,
"OpcodeValue": 0,
"TaskValue": 0,
"Keywords": "0",
"EventTime": "2019-12-05T14:12:11.453805+01:00",
"ExecutionProcessID": 13180,
"ExecutionThreadID": 7608,
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Domain": "NT AUTHORITY",
"AccountName": "LOCAL SERVICE",
"UserID": "S-1-5-19",
"AccountType": "Well Known Group",
"Flags": "EXTENDED_INFO|IS_64_BIT_HEADER|PROCESSOR_INDEX (577)",
"ScDeviceEnumGuid": "{5a236687-d307-44e2-9241-e1c6c27ceb28}",
"EventReceivedTime": "2019-12-05T14:12:13.457624+01:00",
"SourceModuleName": "etw",
"SourceModuleType": "im_etw"
}
This information is stored in the registry keys under the following three registry paths.
• "HKLM\SYSTEM\CurrentControlSet\Enum\USB\"
• "HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR\"
• "HKLM\SYSTEM\CurrentControlSet\Control\DeviceClasses\"
The first two store information about the plugged-in USB devices. The third stores additional information, as
USB drives are recognized as disks and mounted as drive volumes in the system. For more information, see the
USB Device Registry Entries documentation from Microsoft.
TIP These events could be correlated based on the serial numbers of the USB devices.
This configuration uses the im_regmon module to collect USB-related events from the Windows Registry. It
scans the registry every 60 seconds.
nxlog.conf
1 <Input in>
2 Module im_regmon
3 RegValue 'HKLM\SYSTEM\CurrentControlSet\Control\DeviceClasses\*'
4 RegValue 'HKLM\SYSTEM\CurrentControlSet\Enum\USB\*'
5 RegValue 'HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR\*'
6 Recursive TRUE
7 ScanInterval 60
8 </Input>
Output Sample
{
"EventTime": "2019-10-20T11:07:56.473658+02:00",
"Hostname": "Host",
"EventType": "CHANGE",
"RegistryValueName": "HKLM\\SYSTEM\\CurrentControlSet\\Enum\\USBSTOR
\\Disk&Ven_Kingston&Prod_DataTraveler_3.0&Rev_\\60A44C413A8CF320B9110053&0\\Properties\\{83da63
26-97a6-4088-9453-a1923f573b29}\\0066\\",
"PrevValueSize": 8,
"ValueSize": 8,
"DigestName": "SHA1",
"PrevDigest": "a477f34abec7da133ad5ff2dcf67b3b7e089d2d6",
"Digest": "e47f5d5668fa31237f198a2e4cb9bc78003f3cc8",
"Severity": "WARNING",
"SeverityValue": 3,
"EventReceivedTime": "2019-10-20T11:07:56.473658+02:00",
"SourceModuleName": "in",
"SourceModuleType": "im_regmon"
}
The SetupAPI.dev.log file is located in the C:\Windows\INF directory. NXLog can read, parse, and forward the
logs contained in this file.
This configuration uses the im_file module to read the events from the SetupAPI.dev.log file.
nxlog.conf
1 <Input in>
2 Module im_file
3 File 'C:\Windows\INF\SetupAPI.dev.log'
4 </Input>
Chapter 113. Zeek (formerly Bro) Network Security Monitor
NXLog can be configured to collect events generated by Zeek, formerly known as the Bro Network Security
Monitor, a powerful open source intrusion detection system (IDS) and network traffic analysis framework. The
Zeek engine captures traffic and converts it to a series of high-level events. These events are then analyzed
according to customizable policies. Zeek supports real-time alerts, data logging for further investigation, and
automatic program execution for detected anomalies. Zeek can analyze various protocols, including HTTP,
FTP, SMTP, and DNS, as well as detect host and port scans, signatures, and SYN floods.
Zeek produces human-readable logs in a format similar to W3C. Each log file uses a different set of fields.
File Description
conn.log TCP/UDP/ICMP connections
dns.log Sample
#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path dns
#open 2020-05-27-22-00-01
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto
trans_id rtt query qclass qclass_name qtype qtype_name rcode rcode_name
AA TC RD RA Z answers TTLs rejected
#types time string addr port addr port enum count interval string
count string count string count string bool bool bool bool count
vector[string] vector[interval] bool
1590634800.248362 C1ggH7liCnwAfLjw9 192.168.1.7 53743 192.168.1.1 53 udp
18876 - 250.255.255.239.in-addr.arpa 1 C_INTERNET 12 PTR 3
NXDOMAIN F F T F 0 - - F
1590634800.259227 C1ggH7liCnwAfLjw9 192.168.1.7 53743 192.168.1.1 53 udp
18876 - 250.255.255.239.in-addr.arpa 1 C_INTERNET 12 PTR 3
NXDOMAIN F F T F 0 - - F
1590634800.274483 CTQxOg2sSOuUO5AZy8 192.168.1.7 47182 192.168.1.1 53 udp
48442 - 7.1.168.192.in-addr.arpa 1 C_INTERNET 12 PTR 3
NXDOMAIN F F T F 0 - - F
For more information about Zeek logging, see the Zeek Manual.
NOTE The following configurations have been tested with Zeek version 3.0.6 LTS.
Example 480. Using xm_w3c to Parse Zeek Logs
This configuration reads Zeek logs from a directory, parses with xm_w3c, and writes out events in JSON
format.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension w3c_parser>
6 Module xm_w3c
7 </Extension>
8
9 <Input zeek>
10 Module im_file
11 File '/opt/zeek/logs/current/*.log'
12 InputType w3c_parser
13 </Input>
14
15 <Output zeek_json>
16 Module om_file
17 File '/tmp/zeek_logs.json'
18 Exec to_json();
19 </Output>
The following output from this configuration represents a sample event logged by Zeek after being parsed
by NXLog and converted to JSON format. Spacing and line breaks have been added for readability.
Output sample
{
"ts": "1590636144.680688",
"uid": "C1InwK3K6fhY6YdvRe",
"id.orig_h": "192.168.1.7",
"id.orig_p": "45500",
"id.resp_h": "35.222.85.5",
"id.resp_p": "80",
"version": "1",
"cipher": "GET",
"curve": "connectivity-check.ubuntu.com",
"server_name": "/",
"resumed": null,
"last_alert": "1.1",
"next_protocol": null,
"established": null,
"cert_chain_fuids": "0",
"client_cert_chain_fuids": "0",
"subject": "204",
"issuer": "No Content",
"client_subject": null,
"client_issuer": null,
"validation_status": "(empty)",
"EventReceivedTime": "2020-05-27T22:22:26.917647-05:00",
"SourceModuleName": "zeek",
"SourceModuleType": "im_file"
}
The xm_w3c module is recommended because it supports reading the field list from the W3C-style log file
header. For NXLog Community Edition, the xm_csv module could be used instead to parse Zeek logs. A separate
instance of xm_csv must be configured for each log type.
This example has separate xm_csv module instances for the DNS and DHCP log types. Additional CSV
parsers could be added for the remaining Zeek log types.
nxlog.conf (truncated)
1 <Extension csv_parser_dns>
2 Module xm_csv
3 Fields ts, uid, id.orig_h, id.orig_p, id.resp_h, id.resp_p, proto, \
4 trans_id, rtt, query, qclass, qclass_name, qtype, qtype_name, \
5 rcode, rcode_name, AA, TC, RD, RA, Z, answers, TTLs, rejected
6 Delimiter \t
7 </Extension>
8
9 <Extension csv_parser_dhcp>
10 Module xm_csv
11 Fields ts, uid, id.orig_h, id.orig_p, id.resp_h, id.resp_p, mac, \
12 assigned_ip, lease_time, trans_id
13 Delimiter \t
14 </Extension>
15
16 # xm_fileop provides the `file_basename()` function
17 <Extension _fileop>
18 Module xm_fileop
19 </Extension>
20
21 <Extension json>
22 Module xm_json
23 </Extension>
24
25 <Input zeek>
26 Module im_file
27 File '/opt/zeek/spool/zeek/*.log'
28 <Exec>
29 [...]
The following output from this configuration represents a sample event logged by Zeek after being parsed
by NXLog and converted to JSON format. Spacing and line breaks have been added for readability.
Output sample
{
"EventReceivedTime": "2020-05-29 10:55:51",
"SourceModuleName": "zeek",
"SourceModuleType": "im_file",
"ts": "1590767749.877652",
"uid": "CAhAIX1Dl5KFfnhKbi",
"id.orig_h": "192.168.1.7",
"id.orig_p": "42157",
"id.resp_h": "192.168.1.1",
"id.resp_p": "53",
"proto": "udp",
"trans_id": "56765",
"rtt": "0.051801",
"query": "zeek.org",
"qclass": "1",
"qclass_name": "C_INTERNET",
"qtype": "1",
"qtype_name": "A",
"rcode": "0",
"rcode_name": "NOERROR",
"AA": "F",
"TC": "F",
"RD": "T",
"RA": "T",
"Z": "0",
"answers": "192.0.78.212,192.0.78.150",
"TTLs": "60.000000,60.000000",
"rejected": "F"
}
Troubleshooting
Chapter 114. Internal Logs
When issues arise while configuring or maintaining an NXLog instance, a stepwise troubleshooting approach
(moving from the most likely and simple cases to the more complex and rare ones) generally yields favorable
results. The first step is always to inspect the internal log which NXLog generates.
These internal messages are written to the file defined in the LogFile directive in nxlog.conf. On Windows that
file is C:\Program Files\nxlog\data\nxlog.log; on Linux, /opt/nxlog/var/log/nxlog/nxlog.log. If this
directive is not specified, internal logging is disabled.
NOTE Some Windows applications (WordPad, for example) cannot open the log file while the NXLog process is running because of exclusive file locking. Use a viewer that does not lock the file, like Notepad.
• On all systems, set the LogLevel directive to DEBUG, then restart NXLog.
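Taken together, the relevant directives might look like the following sketch in the global section of nxlog.conf (the log file path assumes a default Linux installation):

```
LogFile  /opt/nxlog/var/log/nxlog/nxlog.log
LogLevel DEBUG
```

Remember to set LogLevel back to INFO once troubleshooting is complete, as DEBUG output grows quickly.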
Example 482. Writing Specific Fields and Values to the Internal Log
This configuration uses the log_info() procedure to send values to the internal log. Log messages are accepted over UDP on port 514. If the string keyword is found in the unparsed message, an INFO level message will be generated.
nxlog.conf
1 <Input in>
2 Module im_udp
3 Port 514
4 <Exec>
5 if $raw_event =~ /keyword/
6 log_info("FOUND KEYWORD IN MSG: [" + $raw_event + "]");
7 </Exec>
8 </Input>
Example 483. Send All Fields to the Internal Log
In this configuration, the to_json() procedure from the xm_json module is used to send all the fields to the
internal log.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input in>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 <Exec>
14 parse_syslog_bsd();
15
16 # Dump $raw_event
17 log_info("raw_event is: " + $raw_event);
18
19 # Dump fields in JSON
20 log_info("Other fields are: " + to_json());
21 </Exec>
22 </Input>
Output Sample
{
"MessageSourceAddress": "127.0.0.1",
"EventReceivedTime": "2012-05-18 13:11:35",
"SourceModuleName": "in",
"SourceModuleType": "im_tcp",
"SyslogFacilityValue": 3,
"SyslogFacility": "DAEMON",
"SyslogSeverityValue": 3,
"SyslogSeverity": "ERR",
"SeverityValue": 4,
"Severity": "ERROR",
"Hostname": "host",
"EventTime": "2010-10-12 12:49:06",
"SourceName": "app",
"ProcessID": "12345",
"Message": "test message"
}
On Windows, send the service control command "200" to the application:
NOTE The status is the most important piece of information in the dumped log entries. A status of PAUSED means the input module is not able to send because the output module queue is full. In such a case, the queuesize for the corresponding output(s) would be over 99. A status of STOPPED means the module is fully stopped, usually due to an error.
TIP Local logging is more fault-tolerant than routed logging, and is therefore recommended for troubleshooting.
NOTE It is not possible to use a log level higher than INFO with the im_internal module. DEBUG level messages can only be written to the local log file.
In this example configuration, the file_write() procedure (from the xm_fileop module) is used to dump
information to an external file.
nxlog.conf
1 <Extension _fileop>
2 Module xm_fileop
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input in>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 <Exec>
14 parse_syslog_bsd();
15
16 # Debug $SyslogSeverity and $Hostname fields
17 file_write("/tmp/debug.txt",
18 "Severity: " + $SyslogSeverity +
19 ", Hostname: " + $Hostname + "\n");
20 </Exec>
21 </Input>
Chapter 115. Common Issues
Common issues can often be resolved by using the internal logs to identify typical symptoms, finding the corresponding description of the symptom below, and then following the suggested remediation steps.
This issue occurs because the NXLog configuration file has been saved in either UTF-16 text encoding, or UTF-8
text encoding with a BOM header.
Open the configuration file in a text editor and save it using ASCII encoding or plain UTF-8.
TIP On Windows, you can use Notepad to correct the text encoding of this file.
This error occurs because NXLog does not have permission to read the file with the configured User and Group.
See the Reading Rsyslog Log Files section for more information about using NXLog to read files from the /var/log directory.
2013-01-10 13:43:30 WARNING ignoring source as it cannot be subscribed to (error code: 5)↵
If this occurs, use the wevtutil utility to grant the new user access to the Windows Event Log. See this TechNet article for more details about the procedure: Giving Non Administrators permission to read Event Logs Windows 2003 and Windows 2008.
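As a rough sketch, the channel's current security descriptor can be displayed with wevtutil gl, and a modified descriptor applied with wevtutil sl; the SDDL value below is a placeholder, not a working descriptor:

```
> wevtutil gl Security
> wevtutil sl Security /ca:<SDDL-string-granting-read-access>
```

Consult the referenced article for how to construct the SDDL string for the account in question.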
• Check that no firewall, gateway, or other network issue is blocking the connection
• Verify that the system can resolve the host name used in the Host directive of the configuration file
115.4. Log Format Errors
115.4.1. Log Entries are Concatenated With Logstash
If you are using Logstash and find that log entries are concatenated, make sure that you are using the
json_lines codec in your Logstash server configuration.
The default json codec in Logstash sometimes fails to parse log entries passed from NXLog. Switch to the
json_lines codec for better reliability.
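A minimal Logstash input sketch using the json_lines codec might look like this (the port number is an assumption, not taken from this guide):

```
input {
  tcp {
    port  => 3515
    codec => json_lines
  }
}
```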
This error occurs when attempting to access a field from the Exec directive of a Schedule block. The log data is
not available in the current context. Log data is never available to a scheduled Exec directive because its
execution is not triggered by a log message.
An attempt to access a field can occur directly with a field assignment, or indirectly by calling a function or
procedure that accesses log data.
In some cases, it is preferred for NXLog to continue sending logs to the remaining active output and discard logs
for the failed output. The simplest solution is to disable flow control. This can be done globally with the global
FlowControl directive, or for the corresponding Input (and Processor, if any) modules only, with the module
FlowControl directive.
NOTE With flow control disabled, an Input or Processor module will continue to process logs even if the next module’s buffers are full (and the logs will be dropped).
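For example, flow control could be disabled for a single input instance as follows (the instance name and port are illustrative only):

```
<Input in>
    Module      im_udp
    Port        514
    FlowControl FALSE
</Input>
```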
To retain the improved message durability provided by flow control, it is possible to instead explicitly specify
when to drop logs by using a separate route for each output that may fail. Add a pm_buffer module instance to
that route, and configure the buffer to drop logs when it reaches a certain size. The output that follows the buffer
can fail without causing any Input or Processor module before the buffer to pause processing. For examples, see
the Using Buffers section.
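A sketch of such a route is shown below; the instance names, buffer sizes, and the assumption that a memory buffer with flow control disabled drops records once full are illustrative, not taken from this guide:

```
<Processor buffer>
    Module      pm_buffer
    Type        Mem
    # warn at ~50 MB, start dropping at ~100 MB (sizes in KB)
    WarnLimit   51200
    MaxSize     102400
    FlowControl FALSE
</Processor>

<Route unreliable_output>
    Path        in => buffer => out_remote
</Route>
```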
On Linux, run the following command to see, for example, which files NXLog has open:
$ lsof -u nxlog
On systems not using /proc, check the system’s open file limit with:
$ sysctl kern.maxfiles
or with:
$ sysctl fs.file-max
NOTE There is no Windows equivalent to lsof. You can use Windows Process Explorer from Microsoft’s Windows Sysinternals to inspect which program has files or directories open.
This scenario requires edits to the service file or an override. To check NXLog’s system limits, use the following command:
On systems not using /proc, check the system’s open file limit:
$ sysctl kern.maxfiles
1 [Service]
2 LimitNOFILE=100000
$ systemctl daemon-reload
• Open the log file with an application that does not use exclusive locking (such as Notepad)
or
Chapter 116. Debugging NXLog
When other troubleshooting fails to identify (or resolve) an issue, inspecting the NXLog agent itself can prove
useful. Some techniques are outlined below.
1. Remove the User and Group directives from the configuration. NXLog needs to be running as root:root to
produce a core dump.
2. Use ulimit to remove the core file size limit.
# ulimit -c unlimited
# /opt/nxlog/bin/nxlog -f
4. Find the NXLog process and kill it with the SIGABRT signal.
# ls -l /opt/nxlog/var/spool/nxlog/
total 26708
-rw------- 1 root root 27348992 Oct 30 08:51 core
6. If the core dump file was created successfully, run NXLog again as root in order to catch the next crash.
# /opt/nxlog/bin/nxlog -f
NOTE ProcDump runs on Windows Vista and higher, and Windows Server 2008 and higher.
For example, run the following to write a full dump of the nxlog process when its handle count exceeds 10,000:
116.2.1. Inspecting Memory Leaks on Linux
We recommend using Valgrind on GNU/Linux to debug memory leaks.
NOTE The NXLog debug symbols package is currently only available for Linux. This package is not included with NXLog by default, but can be provided on request.
2. Install Valgrind.
3. Set the NoFreeOnExit directive to TRUE in the NXLog configuration file. This directive ensures that modules
are not unloaded when NXLog is stopped, which allows Valgrind to properly resolve backtraces into modules.
4. Start NXLog under Valgrind with the following command. If User is set to nxlog in the configuration, then the
command must be executed with su, otherwise Valgrind will not be able to create the massif.out file at the
end of the sampling process.
# cd /tmp
# su -lc "valgrind --tool=massif --pages-as-heap=yes /opt/nxlog/bin/nxlog -f" nxlog
5. Let NXLog run for a while until the Valgrind process shows the memory increase, then interrupt it with
Ctrl+C. The output is written to /tmp/massif.out.xxxx.
6. Send the massif.out.xxxx file with a bug report.
7. Optionally, create a report from the massif.out.xxxx file with the ms_print command:
# ms_print massif.out.xxxx
The output of the ms_print report contains an ASCII chart at the top showing the increase in memory usage.
The chart shows the sample number with the highest memory usage, marked with (peak). This is normally
at the end of the chart (the last sample). The backtrace from this sample indicates where the most memory is
allocated.
Once a potential source of excessive memory use has been determined, use DebugView from Microsoft
Sysinternals to inspect the application’s debug output.
Enterprise Edition Reference Manual
Chapter 117. Man Pages
117.1. nxlog(8)
NAME
nxlog - collects, processes, converts, and forwards event logs in many different formats
SYNOPSIS
nxlog [-c conffile] [-f]
DESCRIPTION
NXLog can process high volumes of event logs from many different sources. Supported types of log processing
include rewriting, correlating, alerting, filtering, and pattern matching. Additional features include scheduling, log
file rotation, buffering, and prioritized processing. After processing, NXLog can store or forward event logs in any
of many supported formats. Inputs, outputs, and processing are implemented with a modular architecture and a
powerful configuration language.
While the details provided here apply to NXLog installations on Linux and other UNIX-style operating systems in
particular, a few Windows-specific notes are included.
OPTIONS
-c conffile, --conf conffile
Specify an alternate configuration file conffile. To change the configuration file used by the NXLog service on
Windows, modify the service parameters.
-f, --foreground
Run in foreground, do not daemonize.
-q, --quiet
Suppress output to STDOUT/STDERR.
-h, --help
Print help.
-r, --reload
Reload configuration of a running instance.
-s, --stop
Send stop signal to a running instance.
-v, --verify
Verify configuration file syntax.
SIGNALS
Various signals can be used to control the NXLog process. Some corresponding Windows control codes are also
available; these are shown in parentheses where applicable.
SIGHUP
This signal causes NXLog to reload the configuration and restart the modules. On Windows, "sc stop nxlog"
and "sc start nxlog" can be used instead.
SIGUSR1 (200)
This signal generates an internal log message with information about the current state of NXLog and its
configured module instances. The message will be generated with INFO log level, written to the log file (if
configured with LogFile), and available via the im_internal module.
SIGUSR2 (201)
This signal causes NXLog to switch to the DEBUG log level. This is equivalent to setting the LogLevel directive
to DEBUG but does not require NXLog to be restarted.
SIGINT/SIGQUIT/SIGTERM
NXLog will exit if it receives one of these signals. On Windows, "sc stop nxlog" can be used instead.
On Linux/UNIX, a signal can be sent with the kill command. The following, for example, sends the SIGUSR1
signal:
On Windows, a signal can be sent with the sc command. The following, for example, sends the 200 signal:
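These commands might resemble the following sketch (the PID file path assumes a default Linux installation):

```
# Linux/UNIX: send SIGUSR1 to the running NXLog process
kill -USR1 "$(cat /opt/nxlog/var/run/nxlog/nxlog.pid)"

# Windows: send control code 200 to the nxlog service
sc control nxlog 200
```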
FILES
/opt/nxlog/bin/nxlog
The main NXLog executable
/opt/nxlog/bin/nxlog-stmnt-verifier
This tool can be used to check NXLog Language statements. All statements are read from standard input and
then validated. If a statement is invalid, the tool prints an error to standard error and exits non-zero.
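A hypothetical invocation might pipe a statement to the tool and check its exit status:

```
$ echo 'log_info("hello");' | /opt/nxlog/bin/nxlog-stmnt-verifier && echo "statement OK"
```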
/opt/nxlog/etc/nxlog.conf
The default configuration file
/opt/nxlog/lib/nxlog/modules
The NXLog modules are located in this directory, by default. See the ModuleDir directive.
/opt/nxlog/spool/nxlog
If PersistLogqueue is set to TRUE, module queues are stored in this directory. See also LogqueueDir and
SyncLogqueue.
/opt/nxlog/spool/nxlog/configcache.dat
This is the position cache file where positions are saved. See the NoCache directive, in addition to CacheDir,
CacheFlushInterval, and CacheSync.
/opt/nxlog/var/run/nxlog/nxlog.pid
The process ID (PID) of the currently running NXLog process is written to this file. See the PidFile directive.
ENVIRONMENT
To access environment variables in the NXLog configuration, use the envvar directive.
SEE ALSO
nxlog-processor(8)
COPYRIGHT
Copyright © NXLog Ltd. 2020
The NXLog Community Edition is licensed under the NXLog Public License. The NXLog Enterprise Edition is not
free and has a commercial license.
117.2. nxlog-processor(8)
NAME
nxlog-processor - performs batch log processing
SYNOPSIS
nxlog-processor [-c conffile] [-v]
DESCRIPTION
The nxlog-processor tool is similar to the NXLog daemon and uses the same configuration file. However, it runs
in the foreground and exits after all input log data has been processed. Common input sources are files and
databases. This tool is useful for log processing tasks such as:
While the details provided here apply to NXLog installations on Linux and other UNIX-style operating systems in
particular, a few Windows-specific notes are included.
OPTIONS
-c conffile, --conf conffile
Specify an alternate configuration file conffile.
-h, --help
Print help.
-v, --verify
Verify configuration file syntax.
FILES
/opt/nxlog/bin/nxlog-processor
The main NXLog-processor executable
/opt/nxlog/bin/nxlog-stmnt-verifier
This tool can be used to check NXLog Language statements. All statements are read from standard input and
then validated. If a statement is invalid, the tool prints an error to standard error and exits non-zero.
/opt/nxlog/etc/nxlog.conf
The default configuration file
/opt/nxlog/spool/nxlog/configcache.dat
This is the position cache file where positions are saved. To disable position caching, as may be desirable
when using nxlog-processor, set the NoCache directive to TRUE.
ENVIRONMENT
To access environment variables in the NXLog configuration, use the envvar directive.
SEE ALSO
nxlog(8)
COPYRIGHT
Copyright © NXLog Ltd. 2020
The NXLog Community Edition is licensed under the NXLog Public License. The NXLog Enterprise Edition is not
free and has a commercial license.
Chapter 118. Configuration
An NXLog configuration consists of global directives, module instances, and routes. The following sections list the
core NXLog directives provided. Additional directives are provided at the module level.
A configuration is valid without any module instances specified; however, for NXLog to process data, the configuration should contain at least one input module instance and at least one output module instance. If no route is specified, a route will be automatically generated; this route will connect all input module instances and all output module instances in a single path.
A module instance name may contain letters, digits, periods (.), and underscores (_). The first character in a module instance name must be a letter or an underscore. The corresponding regular expression is [a-zA-Z_][a-zA-Z0-9._]*.
A route instance name may contain letters, digits, periods (.), and underscores (_). The first character in a route instance name must be a letter, a digit, or an underscore. The corresponding regular expression is [a-zA-Z0-9_][a-zA-Z0-9._]*.
Inserting comments within a configuration is accomplished exactly as it is in shell scripts; any text written on the
line after the hash mark (#) is ignored and treated as a comment, including the backslash (\). Multi-line
comments need a # on each line.
define
Use this directive to configure a constant or macro to be used later. Refer to a define by surrounding the
name with percent signs (%). Enclose a group of statements with curly braces ({}).
This configuration shows three example defines: BASEDIR is a constant, IMPORTANT is a statement, and
WARN_DROP is a group of statements.
nxlog.conf
1 define BASEDIR /var/log
2 define IMPORTANT if $raw_event =~ /important/ \
3 $Message = 'IMPORTANT ' + $raw_event;
4 define WARN_DROP { log_warning("dropping message"); drop(); }
5
6 <Input messages>
7 Module im_file
8 File '%BASEDIR%/messages'
9 </Input>
10
11 <Input proftpd>
12 Module im_file
13 File '%BASEDIR%/proftpd.log'
14 <Exec>
15 %IMPORTANT%
16 if $raw_event =~ /dropme/ %WARN_DROP%
17 </Exec>
18 </Input>
envvar
This directive works like define, except that the value is retrieved from the environment.
This example is like the previous one, but BASEDIR is fetched from the environment instead.
nxlog.conf
1 envvar BASEDIR
2
3 <Input in>
4 Module im_file
5 File '%BASEDIR%/messages'
6 </Input>
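For this to work, BASEDIR must be present in NXLog’s environment at startup; for example, when starting the agent manually in the foreground:

```
$ export BASEDIR=/var/log
$ /opt/nxlog/bin/nxlog -f
```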
include
This directive allows a specified file or files to be included in the current NXLog configuration. Wildcarded
filenames are supported.
The SpoolDir directive only takes effect after the configuration is parsed, so relative paths
NOTE specified with the include directive must be relative to the working directory NXLog was
started from.
The examples below provide various ways of using the include directive.
nxlog.conf
1 include modules/module1.conf
When multiple .conf files are to be defined, they can be saved in the nxlog.d directory and then automatically included in the NXLog configuration along with the nxlog.conf file. Adding .conf files to the nxlog.d directory extends the NXLog configuration without requiring any modification of the nxlog.conf file itself.
This example includes all matching files from the nxlog.d directory and uses absolute paths on Unix-like systems and Windows.
nxlog.conf
1 include /etc/nxlog.d/*.conf
nxlog.conf
1 include C:\Program Files\nxlog\conf\nxlog.d\*.conf
include_stdout
This directive accepts the name of an external command or script. Configuration content will be read from
the command’s standard output. Command arguments are not supported.
This directive executes the custom script, which fetches the configuration.
nxlog.conf
1 include_stdout /opt/nxset/etc/fetch_conf.sh
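Such a script simply prints configuration text on standard output, which NXLog reads as if it appeared inline in nxlog.conf. A minimal sketch (the directive it emits is purely illustrative):

```shell
#!/bin/sh
# emit configuration directives on standard output;
# NXLog consumes this output as inline configuration content
emit_conf() {
    echo "define BASEDIR /var/log"
}
emit_conf
```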
BatchFlushInterval
This directive specifies the timeout, in seconds, before a record-batch will be forwarded to the next module in the route, even if the batch has accumulated fewer than the maximum number of records given by the BatchSize directive. If this directive is not specified, it defaults to 0.1 (100 milliseconds). It can also be overridden per-module by the BatchFlushInterval module level directive.
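For example, to flush partially filled batches after half a second, the global section could contain something like the following (the values are illustrative only):

```
BatchSize          100
BatchFlushInterval 0.5
```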
CacheDir
This directive specifies a directory where the cache file (configcache.dat) should be written. This directive
has a compiled-in value which is used by default.
CacheFlushInterval
This directive specifies how often the in-memory position cache should be flushed to the cache file. A value of 0 indicates that the cache should only be flushed to the file when the agent shuts down; if the server or agent crashes, the current position cache will be lost. A positive integer indicates the length of the interval between flushes of the cache, in seconds. The special value always specifies that the cache should be flushed to the file immediately when a module sets a value. If this directive is not specified, the default value of 5 seconds is used. See also the CacheSync directive below.
CacheInvalidationTime
NXLog persists saved positions in a cache that is written to the disk. To prevent the cache from growing indefinitely, an invalidation period is used. This directive defines the invalidation period. If the last modification time of an entry exceeds the value set with this directive, the entry is discarded when the cache is read from disk. This directive accepts a positive integer value. If the directive is not specified, the default value of 864000 seconds (10 days) is used.
CacheSync
When the in-memory position cache is flushed to the cache file, the cache may not be immediately written to the disk due to file system buffering. When this directive is set to TRUE, the cache file is synced to disk immediately when it is written. The default is FALSE. CacheSync has no effect if CacheFlushInterval is set to 0. Setting this to TRUE when CacheFlushInterval is set to always greatly reduces performance, though only this guarantees crash-safe operation.
DateFormat
This directive can be used to change the default date format as it appears in the LogFile, in the $raw_event
generated by the modules, and when a datetime type value is converted to a string. The following values are
accepted (corresponding to the formats accepted by the NXLog strftime() function):
• YYYY-MM-DD hh:mm:ss (the default)
• YYYY-MM-DDThh:mm:ssTZ
• YYYY-MM-DDThh:mm:ss.sTZ
• YYYY-MM-DD hh:mm:ssTZ
• YYYY-MM-DD hh:mm:ss.sTZ
• YYYY-MM-DDThh:mm:ssUTC
• YYYY-MM-DDThh:mm:ss.sUTC
• YYYY-MM-DD hh:mm:ssUTC
• YYYY-MM-DD hh:mm:ss.sUTC
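For instance, to produce ISO-8601-style UTC timestamps, the global section could contain the following sketch:

```
DateFormat YYYY-MM-DDThh:mm:ssUTC
```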
EscapeGlobPatterns
This boolean directive specifies whether the backslash (\) character in glob patterns or wildcarded entries should be enabled as an escape sequence. If set to TRUE, this directive implies that the backslash character (\) needs to be escaped by another backslash character (\\). File and directory patterns on Windows do not require escaping and are processed as non-escaped even if this directive is set to TRUE. The default is FALSE. This directive is used in the im_file, im_fim, and im_regmon modules.
FlowControl
This optional boolean directive specifies the flow control default for input and processor module instances.
Output module instances do not inherit from this directive. By default, the global FlowControl value is TRUE.
See the description of the module level FlowControl directive for more information.
FlowControlFIFO
This boolean directive, when set to TRUE (the default), enables FIFO mode for modules that have flow control disabled. In this mode, when the log queue of a module is full, older records will be dropped in order to make room for newer ones. When set to FALSE, the old behavior is in effect: while the log queue is full, no records will be dropped, but new incoming records will be discarded instead.
GenerateDateInUTC
If set to TRUE, this boolean directive specifies that UTC should be used when generating dates in the format
YYYY-MM-DD hh:mm:ss. If set to FALSE, local time will be used when generating dates in this format. The
default is FALSE. See also ParseDateInUTC.
Group
Similar to User, NXLog will set the group ID to run under. The group can be specified by name or numeric ID.
This directive has no effect when running on the Windows platform or with nxlog-processor(8).
IgnoreErrors
If set to FALSE, NXLog will stop when it encounters a problem with the configuration file (such as an invalid
module directive) or if there is any other problem which would prevent all modules functioning correctly. If
set to TRUE, NXLog will start after logging the problem. The default value is TRUE.
LogFile
NXLog will write its internal log to this file. If this directive is not specified, self logging is disabled. Note that the im_internal module can also be used to direct internal log messages to files or different output destinations, but it does not support log levels below INFO. The LogFile directive is especially useful for debugging.
LogLevel
This directive has five possible values: CRITICAL, ERROR, WARNING, INFO, and DEBUG. It will set both the logging
level used for LogFile and the standard output if NXLog is started in the foreground. The default LogLevel is
INFO. This directive can also be used at the module level.
LogqueueDir
This directive specifies the directory where the files of the persistent queues are stored, for Processor and
Output module instances. Even if PersistLogqueue is set to FALSE, NXLog will persist in-memory queues to
the LogqueueDir on shutdown. If not specified, the default is the value of CacheDir. This directive can also be
used at the module level to specify a log queue directory for a specific module instance.
LogqueueSize
This directive controls the size of the log queue for all Processor and Output module instances. The default is 100 record batches. See the module-level LogqueueSize directive for more information.
ModuleDir
By default the NXLog binaries have a compiled-in value for the directory to search for loadable modules. This
can be overridden with this directive. The module directory contains sub-directories for each module type
(extension, input, output, and processor), and the module binaries are located in those.
NoCache
Some modules save data to a cache file which is persisted across a shutdown/restart. Modules such as im_file
will save the file position in order to continue reading from the same position after a restart as before. This
caching mechanism can be explicitly turned off with this directive. This is mostly useful with nxlog-processor(8) in offline mode. If this boolean directive is not specified, it defaults to FALSE (caching is enabled).
Note that many input modules, such as im_file, provide a SavePos directive that can be used to disable the
position cache for a specific module instance. SavePos has no effect if the cache is disabled globally with
NoCache TRUE.
NoFreeOnExit
This directive is for debugging. When set to TRUE, NXLog will not free module resources on exit, allowing
valgrind to show proper stack trace locations in module function calls. The default value is FALSE.
Panic
A panic condition is a critical state which usually indicates a bug. Assertions are used in NXLog code for checking conditions where the code will not work unless the asserted condition is satisfied, and for security. Failing assertions result in a panic and suggest a bug in the code. A typical case is checking for NULL pointers before pointer dereference. This directive can take three different values: HARD, SOFT, or OFF. HARD will cause an abort if the assertion fails; this is how most C-based programs work. SOFT will cause an exception to be thrown at the place of the panic/assertion. In the case of NULL pointer checks, this is identical to a NullPointerException in Java. It is possible that NXLog can recover from exceptions and can continue to process log messages, or at least the other modules can. In case of assertion failure, the location and the condition are printed at CRITICAL log level in HARD mode and at ERROR log level in SOFT mode. If Panic is set to OFF, the failing condition is printed in the logs but execution continues on the normal code path. Most of the time this will result in a segmentation fault or other undefined behavior, though in some cases turning off a buggy assertion or panic will solve the problems caused by it in HARD/SOFT mode. The default value for Panic is SOFT.
ParseDateInUTC
If set to TRUE, this boolean directive specifies that dates in the format YYYY-MM-DD hh:mm:ss should be
parsed as UTC. If set to FALSE, dates in this format are assumed to be in local time. The default is FALSE. See
also GenerateDateInUTC.
PersistLogqueue
This boolean directive specifies that log queues of Processor and Output module instances should be disk-
based. See the module level PersistLogqueue directive for more information.
PidFile
Under Unix operating systems, NXLog writes a PID file as other system daemons do. The default PID file can
be overridden with this directive in case multiple daemon instances need to be running. This directive has no
effect when running on the Windows platform or with nxlog-processor(8).
ReadTimeout
This directive is specific to nxlog-processor(8). It controls the exit condition of nxlog-processor(8). Its value is a
timeout in seconds. If nxlog-processor(8) receives no data to process during this period, it stops waiting for
more data and exits. The default value is 0.05 seconds; any non-negative value lower than 0.05 is treated as
0.05. If nxlog-processor(8) is configured to read data from the network, it is recommended to set this to a
higher value.
RootDir
NXLog will set its root directory to the value specified with this directive. If SpoolDir is also set, this will be
relative to the value of RootDir (chroot() is called first). This directive has no effect when running on the
Windows platform or with the nxlog-processor(8).
SpoolDir
NXLog will change its working directory to the value specified with this directive. This is useful with files
created through relative filenames (for example, with om_file) and in case of core dumps. This directive has
no effect with the nxlog-processor(8).
StringLimit
To protect against memory exhaustion (and possibly a denial-of-service) caused by over-sized strings, there is
a limit on the length of each string (in bytes). The default value is 5242880 bytes (strings will be truncated at 5
MiB).
SuppressRepeatingLogs
Under some circumstances it is possible for NXLog to generate an extreme amount of internal logs consisting
of the same message due to an incorrect configuration or a software bug. In this case, the LogFile can quickly
consume the available disk space. With this directive, NXLog will write at most 2 lines per second if the same
message is generated successively, by logging "last message repeated n times" messages. If this boolean
directive is not specified, it defaults to TRUE (suppression of repeating messages is enabled).
SyncLogqueue
When this directive is set to TRUE and PersistLogqueue is enabled, the disk-based queues of Processor and
Output module instances will be immediately synced after each new entry is added to the queue. This greatly
reduces performance but makes the queue more reliable and crash-safe. This directive can also be used at
the module level.
Threads
This directive specifies the number of worker threads to use. If this directive is not defined, the number of
worker threads is calculated and set to an optimal value automatically. Do not set this unless you know what
you are doing.
User
NXLog will drop its root privileges and run as the user specified with this directive. This is useful if NXLog needs privileged access to
some system resources (such as kernel messages or to bind a port below 1024). On Linux systems NXLog will
use capabilities to access these resources. In this case NXLog must be started as root. The user can be
specified by name or numeric ID. This directive has no effect when running on the Windows platform or with
nxlog-processor(8).
Module
This mandatory directive specifies which binary should be loaded. The module binary has a .so extension on
Unix and a .dll on Windows platforms and resides under the ModuleDir location. Each module binary name
is prefixed with im_, pm_, om_, or xm_ (for input, processor, output, and extension, respectively). It is possible for
multiple instances to use the same loadable binary. In this case the binary is only loaded once but
instantiated multiple times. Different module instances may have different configurations.
BatchSize
For input and processor modules, specifies how many records will be collected by the module, and forwarded
together as a batch to the next module in the route. See the description of the global BatchSize directive for
more information.
BatchFlushInterval
For input and processor modules, specifies the timeout before a record-batch is forwarded to the next
module in the route. See the description of the global BatchFlushInterval directive for more information.
BufferSize
This directive specifies the size of the read or write buffer (in bytes) used by the Input or Output module,
respectively. The BufferSize directive is only valid for Input and Output module instances. The default buffer
size is 65000 bytes.
FlowControl
This optional boolean directive specifies whether the module instance should use flow control. FlowControl
is only valid for input, processor, and output modules. For input and processor modules, the FlowControl
default is inherited from the global FlowControl directive (which defaults to TRUE). To maintain backward
compatibility, the FlowControl default for output modules is TRUE regardless of the global FlowControl value.
Under normal conditions, when all module instances are operating normally and buffers are not full, flow
control has no effect. However, if a module becomes blocked and is unable to process events, flow control
will automatically suspend module instances as necessary to prevent events from being lost. For example,
consider a route with im_file and om_tcp. If a network error blocks the output, NXLog will stop reading events
from file until the network error is resolved. If flow control is disabled, NXLog will continue reading from file
and processing events; the events will be discarded when passed to the output module.
In most cases, flow control should be left enabled, but it may be necessary to disable it in certain scenarios. It
is recommended to leave flow control enabled globally and only specify the FlowControl directive in two
cases. First, set FlowControl FALSE on any input module instance that should never be suspended. Second,
if a route contains multiple output instances, it may be desirable to continue sending events to other outputs
even if one output becomes blocked—set FlowControl FALSE on the output(s) where events can be
discarded to prevent the route from being blocked.
Internally, flow control takes effect when the log queue of the next module instance in the route becomes full.
If flow control is enabled, the instance suspends. If flow control is disabled, events are discarded. If the next
module in the route is an output module, both FlowControl values are consulted—flow control is enabled
only if both are TRUE.
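As a sketch of the second case described above, FlowControl can be disabled on a non-critical output so that a blocked destination does not suspend the whole route. The instance names, hosts, and the input instance `in` below are illustrative:

```
<Output primary>
    Module      om_tcp
    Host        logserver.example.com
    Port        1514
</Output>

<Output best_effort>
    Module      om_udp
    Host        metrics.example.com
    Port        1514
    # If this destination blocks, discard its events instead of
    # suspending the modules feeding the route
    FlowControl FALSE
</Output>

<Route r>
    Path in => primary, best_effort
</Route>
```

With this layout, a network error on `best_effort` only discards its own events, while `primary` continues to exert flow control as usual.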
InputType
This directive specifies the name of the registered input reader function to be used for parsing raw events
from input data. Names are treated case insensitively. This directive is only available for stream oriented
input modules: im_file, im_exec, im_ssl, im_tcp, im_udp, im_uds, and im_pipe. These modules work by filling
an input buffer with data read from the source. If the read operation was successful (there was data coming
from the source), the module calls the specified callback function. If this is not explicitly specified, the module
default will be used. Note that im_udp may only work properly if log messages do not span multiple packets
and are within the UDP message size limit. Otherwise the loss of a packet may lead to parsing errors.
Modules may provide custom input reader functions. Once these are registered into the NXLog core, the
modules listed above will be capable of using these. This makes it easier to implement custom protocols
because these can be developed without concern for the transport layer.
The following input reader functions are provided by the NXLog core:
Binary
The input is parsed in the NXLog binary format, which preserves the parsed fields of the event records.
The LineBased reader will automatically detect event records in the binary NXLog format, so it is only
recommended to configure InputType to Binary if compatibility with other logging software is not
required.
Dgram
Once the buffer is filled with data, it is considered to be one event record. This is the default for the
im_udp input module, since UDP Syslog messages arrive in separate packets.
LineBased
The input is assumed to contain event records separated by newlines. It can handle both CRLF (Windows)
and LF (Unix) line-breaks. Thus if an LF (\n) or CRLF (\r\n) is found, the function assumes that it has
reached the end of the event record. If the input begins with the UTF-8 byte order mark (BOM) sequence
(0xEF,0xBB,0xBF), that sequence is automatically skipped.
nxlog.conf
1 <Input tcp>
2 Module im_tcp
3 Port 2345
4 InputType Binary
5 </Input>
With the im_file module, this directive also supports one or several stream processors to process input data
before reading.
The input log data is processed from the left-most to the right-most processor like stream-name-one → …
→ stream-name-n. The example syntax of the InputType directive with stream processors is shown below.
• The module-name is the name of an extension module instance which implements stream processing.
Currently, this is supported by the xm_crypto and xm_zlib modules.
• The stream-name is the name of a stream processor which is implemented by the module-name. If not
specified, the first available stream will be used by the module-name instance.
• The InputType directive can contain several instances of the same extension module.
For more details, see the documentation of the xm_crypto and xm_zlib modules.
Example 490. Decompression and Decryption of Data
This configuration contains one instance of the xm_zlib module to decompress the input data and one
instance of the xm_crypto module to decrypt it.
The result is read by the im_file module using the LineBased function.
nxlog.conf
1 <Extension crypto>
2 Module xm_crypto
3 UseSalt TRUE
4 PasswordFile /tmp/passwordfile
5 </Extension>
6
7 <Extension zlib>
8 Module xm_zlib
9 Format gzip
10 CompressionLevel 9
11 CompBufSize 16384
12 DecompBufSize 16384
13 </Extension>
14
15 <Input from_file>
16 Module im_file
17 File '/tmp/input'
18 InputType crypto.aes_decrypt, zlib.decompress, LineBased
19 </Input>
LogLevel
This directive can be used to override the value of the global LogLevel. This can be useful for debugging
purposes, when DEBUG is set at the module level; conversely, a noisy module can be silenced by setting this
to CRITICAL or ERROR.
1 <Input fim>
2 Module im_fim
3 LogLevel Debug
4 ...
5 </Input>
LogqueueDir
This directive specifies the directory where the files of the persistent queue are stored. LogqueueDir is only
valid for Processor and Output module instances. See the description of the global LogqueueDir for more
information.
LogqueueSize
Every Processor and Output instance has an input log queue for events waiting to be processed by that
module. The size of the queue is measured in batches of event records, and can be set with this
directive—the default is 100 batches. When the log queue of a module instance is full and FlowControl is
enabled for the preceding module, the preceding module will be suspended. If flow control is not enabled
for the preceding module, events will be discarded. This directive is only valid for Processor and Output
module instances. This directive can be used at the global level to affect all modules.
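For example, a hypothetical output instance could raise its queue size above the default 100 batches like this:

```
<Output to_tcp>
    Module       om_tcp
    Host         logserver.example.com
    Port         1514
    # Queue up to 500 batches before flow control suspends (or events
    # are discarded from) the preceding module
    LogqueueSize 500
</Output>
```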
OutputType
This directive specifies the name of the registered output writer function to be used for formatting raw events
when storing or forwarding output. Names are treated case insensitively. This directive is only available for
stream oriented output modules: om_exec, om_file, om_pipe, om_redis, om_ssl, om_tcp, om_udp,
om_udpspoof, om_uds, and om_zmq. These modules work by filling the output buffer with data to be written
to the destination. The specified callback function is called before the write operation. If this is not explicitly
specified, the module default will be used.
Modules may provide custom output formatter functions. Once these are registered into the NXLog core, the
modules listed above will be capable of using these. This makes it easier to implement custom protocols
because these can be developed without concern for the transport layer.
The following output writer functions are provided by the NXLog core:
Binary
The output is written in the NXLog binary format which preserves parsed fields of the event records.
Dgram
Once the buffer is filled with data, it is considered to be one event record. This is the default for the
om_udp, om_udpspoof, om_redis, and om_zmq output modules.
LineBased
The output will contain event records separated by newlines. The record terminator is CRLF (\r\n) on
Windows and LF (\n) on Unix.
LineBased_CRLF
The output will contain event records separated by Windows style newlines where the record terminator is
CRLF (\r\n).
LineBased_LF
The output will contain event records separated by Unix style newlines where the record terminator is LF
(\n).
nxlog.conf
1 <Output tcp>
2 Module om_tcp
3 Port 2345
4 Host localhost
5 OutputType Binary
6 </Output>
With the om_file module, this directive also supports one or several stream processors to process output data
after writing.
The output log data is processed from the left-most to the right-most processor like stream-name-one → …
→ stream-name-n. The example syntax of the OutputType directive with stream processors is displayed
below.
• The module-name is the name of an extension module instance which implements stream processing.
Currently, this is supported by the xm_crypto and xm_zlib modules.
• The stream-name is the name of a stream processor which is implemented by the module-name. If not
specified, the first available stream will be used by the module-name instance.
• The OutputType directive can contain several instances of the same extension module.
NOTE Rotation of files is done automatically when encrypting log data with the xm_crypto module.
For more details, see the documentation of the xm_crypto and xm_zlib modules.
This configuration writes the data using the LineBased function of the om_file module. It also contains
one instance of the xm_zlib module to compress the output data and one instance of the xm_crypto
module to encrypt them.
nxlog.conf
1 <Extension zlib>
2 Module xm_zlib
3 Format gzip
4 CompressionLevel 9
5 CompBufSize 16384
6 DecompBufSize 16384
7 </Extension>
8
9 <Extension crypto>
10 Module xm_crypto
11 UseSalt TRUE
12 PasswordFile /tmp/passwordfile
13 </Extension>
14
15 <Input from_tcp>
16 Module im_tcp
17 Host 192.168.31.11
18 Port 10500
19 </Input>
20
21 <Output to_file>
22 Module om_file
23 File '/tmp/output'
24 OutputType LineBased, zlib.compress, crypto.aes_encrypt
25 </Output>
PersistLogqueue
When a module passes an event to the next module along the route, it puts it into the next module’s queue.
This queue can be either a memory-based or disk-based (persistent) queue. When this directive is set to
TRUE, the module will use a persistent (disk-based) queue. With the default value of FALSE, the module’s
incoming log queue will not be persistent (will be memory-based); however, in-memory log queues will still be
persisted to disk on shutdown. This directive is only valid for Processor and Output module instances. This
directive can also be used at the global level.
SyncLogqueue
When this directive is set to TRUE and PersistLogqueue is enabled, the disk-based queue will be immediately
synced after each new entry is added to the queue. This greatly reduces performance but makes the queue
more reliable and crash-safe. This directive is only valid for Processor and Output module instances. This
directive can be used at the global level to affect all modules.
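A minimal sketch combining the two module-level directives (the file path is illustrative):

```
<Output to_file>
    Module          om_file
    File            '/var/log/out.log'
    # Use a disk-based queue and sync it after every entry;
    # slower, but crash-safe
    PersistLogqueue TRUE
    SyncLogqueue    TRUE
</Output>
```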
118.3.1. Exec
The Exec directive/block contains statements in the NXLog language which are executed when a module receives
a log message. This directive is available in all input, processor, and output modules. It is not available in most
extension modules because these do not handle log messages directly (the xm_multiline and xm_rewrite
modules do provide Exec directives).
This statement assigns a value to the $Hostname field in the event record.
nxlog.conf
1 Exec $Hostname = 'myhost';
Each directive must be on one line unless it contains a trailing backslash (\) character.
nxlog.conf
1 Exec if $Message =~ /something interesting/ \
2 log_info("found something interesting"); \
3 else \
4 log_debug("found nothing interesting");
More than one Exec directive or block may be specified. They are executed in the order of appearance. Each
Exec directive must contain a full statement. Therefore it is not possible to split the lines in the previous example
into multiple Exec directives. It is only possible to split the Exec directive if it contains multiple statements.
nxlog.conf
1 Exec log_info("first"); \
2 log_info("second");
nxlog.conf
1 Exec log_info("first");
2 Exec log_info("second");
The Exec directive can also be used as a block. To use multiple statements spanning more than one line, it is
recommended to use the <Exec> block instead. When using a block, it is not necessary to use the backslash (\)
character for line continuation.
Example 497. Using the Exec Block
This example shows two equivalent uses of Exec, first as a directive, then as a block.
nxlog.conf
1 Exec log_info("first"); \
2 log_info("second");
The following Exec block is equivalent. Notice the backslash (\) is omitted.
nxlog.conf
1 <Exec>
2 log_info("first");
3 log_info("second");
4 </Exec>
118.3.2. Schedule
The Schedule block can be used to execute periodic jobs, such as log rotation or any other task. Scheduled jobs
have the same priority as the module. The Schedule block has the following directives:
Every
In addition to the crontab format it is possible to schedule execution at periodic intervals. With the crontab
format it is not possible, for example, to run a job every five days; this directive enables such schedules in a
simple way.
It takes a positive integer value with an optional unit. The unit can be one of the following: sec, min, hour,
day, or week. If the unit is not specified, the value is assumed to be in seconds. The Every directive cannot be
used in combination with RunCount 1.
Exec
The mandatory Exec directive takes one or more NXLog statements. This is the code which is actually being
scheduled. Multiple Exec directives can be specified within one Schedule block. See the module-level Exec
directive; this behaves the same way. Note that it is not possible to use fields in statements here because
execution is not triggered by log messages.
First
This directive sets the first execution time. If the value is in the past, the next execution time is calculated as
if NXLog had been running since then; jobs will not be run to make up for missed events in the past. The directive
takes a datetime literal value.
RunCount
This optional directive can be used to specify a maximum number of times that the corresponding Exec
statement(s) should be executed. For example, with RunCount 1 the statement(s) will only be executed once.
When
This directive takes a value similar to a crontab entry: five space-separated definitions for minute, hour, day,
month, and weekday. See the crontab(5) manual for the field definitions. It supports lists as comma
separated values and/or ranges. Step values are also supported with the slash. Month and weekday names are
not supported; these must be defined with numeric values. The following extensions are also supported:
@startup Run once when NXLog starts.
@reboot (Same as @startup)
@yearly Run once a year, "0 0 1 1 *".
@annually (Same as @yearly)
@monthly Run once a month, "0 0 1 * *".
@weekly Run once a week, "0 0 * * 0".
@daily Run once a day, "0 0 * * *".
@midnight (Same as @daily)
@hourly Run once an hour, "0 * * * *".
This example shows two scheduled Exec statements in an im_tcp module instance. The first is executed
every second, while the second uses a crontab(5) style value.
nxlog.conf
1 <Input in>
2 Module im_tcp
3 Port 2345
4
5 <Schedule>
6 Every 1 sec
7 First 2010-12-17 00:19:06
8 Exec log_info("scheduled execution at " + now());
9 </Schedule>
10
11 <Schedule>
12 When 1 */2 2-4 * *
13 Exec log_info("scheduled execution at " + now());
14 </Schedule>
15 </Input>
Path
The data flow is defined by the Path directive. First the instance names of Input modules are specified. If
more than one Input reads log messages which feed data into the route, then these must be separated by
commas. The list of Input modules is followed by an arrow (=>). Either processor modules or output modules
follow. Processor modules must be separated by arrows, not commas, because they operate in series, unlike
Input and Output modules which work in parallel. Output modules are separated by commas. The Path must
specify at least an Input and an Output. The syntax is illustrated by the following:
Path INPUT1[, INPUT2...] => [PROCESSOR1 [=> PROCESSOR2...] =>] OUTPUT1[, OUTPUT2...]
Example 499. Specifying Routes
nxlog.conf
1 <Input in1>
2 Module im_null
3 </Input>
4
5 <Input in2>
6 Module im_null
7 </Input>
8
9 <Processor p1>
10 Module pm_null
11 </Processor>
12
13 <Processor p2>
14 Module pm_null
15 </Processor>
16
17 <Output out1>
18 Module om_null
19 </Output>
20
21 <Output out2>
22 Module om_null
23 </Output>
24
25 <Route 1>
26 # Basic route
27 Path in1 => out1
28 </Route>
29
30 <Route 2>
31 # Complex route with multiple input/output/processor modules
32 Path in1, in2 => p1 => p2 => out1, out2
33 </Route>
Priority
This directive takes an integer value in the range of 1-100 as a parameter, and the default is 10. Log messages
in routes with a lower Priority value will be processed before others. Internally, this value is assigned to each
module that is part of the route, and the NXLog engine processes module events in priority order: modules of
a route with a lower Priority value (higher priority) will process log messages first.
Example 500. Prioritized Processing
This configuration prioritizes the UDP route over the TCP route in order to minimize loss of UDP Syslog
messages when the system is busy.
nxlog.conf
1 <Input tcpin>
2 Module im_tcp
3 Host localhost
4 Port 514
5 </Input>
6
7 <Input udpin>
8 Module im_udp
9 Host localhost
10 Port 514
11 </Input>
12
13 <Output tcpfile>
14 Module om_file
15 File "/var/log/tcp.log"
16 </Output>
17
18 <Output udpfile>
19 Module om_file
20 File "/var/log/udp.log"
21 </Output>
22
23 <Route udp>
24 Priority 1
25 Path udpin => udpfile
26 </Route>
27
28 <Route tcp>
29 Priority 2
30 Path tcpin => tcpfile
31 </Route>
Chapter 119. Language
119.1. Types
The following types are provided by the NXLog language.
Unknown
This is a special type for values where the type cannot be determined at compile time and for uninitialized
values. The undef literal and fields without a value also have an unknown type. The unknown type can also be
thought of as "any" in case of function and procedure API declarations.
Boolean
A boolean value is TRUE, FALSE or undefined. Note that an undefined value is not the same as a FALSE value.
Integer
An integer can hold a signed 64 bit value in addition to the undefined value. Floating point values are not
supported.
String
A string is an array of characters in any character set. The binary type should be used for values where the
NUL byte can also occur. An undefined string is not the same as an empty string. Strings have a limited length
to prevent resource exhaustion problems; this limit is a compile-time value currently set to 1M.
Datetime
A datetime holds a microsecond value of time elapsed since the Epoch. It is always stored in UTC/GMT.
IP Address
The ipaddr type can store IP addresses in an internal format. This type is used to store both dotted-quad IPv4
addresses (for example, 192.168.1.1) and IPv6 addresses (for example,
2001:0db8:85a3:0000:0000:8a2e:0370:7334).
Regular expression
A regular expression type can only be used with the =~ or !~ operators.
Binary
This type can hold an array of bytes.
Variadic arguments
This is a special type only used in function and procedure API declarations to indicate variadic arguments.
119.2. Expressions
119.2.1. Literals
Undef
The undef literal has an unknown type. It can be also used in an assignment to unset the value of a field.
1 $ProcessID = undef;
Boolean
A boolean literal is either TRUE or FALSE. It is case-insensitive, so True, False, true, and false are also valid.
Integer
An integer starts with a minus (-) sign if it is negative. A "0x" or "0X" prefix indicates hexadecimal
notation. The "K", "M" and "G" modifiers are also supported; when appended, these mean Kilo (1024), Mega
(1024^2), or Giga (1024^3) respectively.
This statement uses a modifier to set the $Limit field to 44040192 (42×1024^2).
1 $Limit = 42M;
String
String literals are quoted characters using either single or double quotes. String literals specified with double
quotes can contain the following escape sequences.
\\
The backslash (\) character.
\"
The double quote (") character.
\n
Line feed (LF).
\r
Carriage return (CR).
\t
Horizontal tab.
\b
Audible bell.
\xXX
A single byte in the form of a two digit hexadecimal number. For example the line-feed character can also
be expressed as \x0A.
NOTE String literals in single quotes do not process the escape sequences: "\n" is a single character (LF)
while '\n' is two characters. The following comparison is FALSE for this reason: "\n" == '\n'.
NOTE Extra care should be taken with the backslash when using double-quoted string literals to specify
file paths on Windows. For more information about the possible complications, see this note for the im_file
File directive.
Datetime
A datetime literal is an unquoted representation of a time value expressing local time in the format
YYYY-MM-DD hh:mm:ss.
This statement sets the $EventTime field to the specified datetime value.
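Such a statement might look like the following sketch (the value is illustrative):

```
$EventTime = 2010-12-17 00:19:06;
```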
IP Address
An IP address literal can be expressed in the form of a dotted quad notation for IPv4 (192.168.1.1) or by
using 8 colon-separated (:) groups of 16-bit hexadecimal values for IPv6
(2001:0db8:85a3:0000:0000:8a2e:0370:7334).
119.2.2. Regular Expressions
Regular expressions must be used with the =~ or !~ operators, and must be quoted with slashes (/) as in
Perl. Captured sub-strings are accessible through numeric reference, and the full subject string is placed into $0.
If the regular expression matches the $Message field, the log_info() procedure is executed. The captured
sub-string is used in the logged string ($1).
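A sketch of such a statement, with a hypothetical pattern:

```
if $Message =~ /error code (\d+)/ log_info("Error code found: " + $1);
```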
It is also possible to use named capturing such that the resulting field name is defined in the regular expression.
This statement causes the same behavior as the previous example, but it uses named capturing instead.
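With named capturing, an equivalent sketch could be (the pattern is again hypothetical):

```
if $Message =~ /error code (?<ErrorCode>\d+)/ log_info("Error code found: " + $ErrorCode);
```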
Substitution is supported with the s/// operator. Variables and captured sub-string references cannot be used
inside the regular expression or the substitution operator (they will be treated literally).
g
The /g modifier can be used for global replacement.
Example 507. Replace Whitespace Occurrences
If any whitespace is found in the $SourceName field, it is replaced with underscores (_) and a log
message is generated.
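A minimal sketch of such a substitution:

```
if $SourceName =~ s/\s/_/g log_info("Whitespace replaced in " + $SourceName);
```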
s
The dot (.) normally matches any character except newline. The /s modifier causes the dot to match all
characters including line terminator characters (LF and CRLF).
The regular expression in this statement will match a message that begins and ends with the given
keywords, even if the message spans multiple lines.
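A sketch with hypothetical keywords:

```
if $Message =~ /^BEGIN TRANSACTION.*END TRANSACTION$/s log_info("matched a multi-line message");
```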
m
The /m modifier can be used to treat the string as multiple lines (^ and $ match newlines within data).
i
The /i modifier does case insensitive matching.
119.2.3. Fields
See Fields for a list of fields provided by the NXLog core. Additional fields are available through input modules.
Fields are referenced in the NXLog language by prepending a dollar sign ($) to the field name.
Normally, a field name may contain letters, digits, the period (.), and the underscore (_). Additionally, field names
must begin with a letter or an underscore. The corresponding regular expression is:
[a-zA-Z_][a-zA-Z0-9._]*
However, those restrictions are relaxed if the field name is specified with curly braces ({}). In this case, the field
name may also contain hyphens (-), parentheses (()), and spaces. The field name may also begin with any one
of the allowed characters. The regular expression in this case is:
[a-zA-Z0-9._() -]+
This statement generates an internal log message indicating the time when the message was received by
NXLog.
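Using the core $EventReceivedTime field, such a statement might read:

```
log_info("Message received at " + $EventReceivedTime);
```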
This statement uses curly braces ({}) to refer to a field with a hyphenated name.
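A sketch with a hypothetical hyphenated field name:

```
log_info("Value: " + ${some-hyphenated-field});
```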
119.2.4. Operations
not
The not operator expects a boolean value. It will evaluate to undef if the value is undefined. If it receives an
unknown value which evaluates to a non-boolean, it will result in a run-time execution error.
defined
The defined operator will evaluate to TRUE if the operand is defined, otherwise FALSE.
If the $EventTime field has not been set (due perhaps to failed parsing), it will be set to the current time.
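That check can be sketched as:

```
if not defined $EventTime $EventTime = now();
```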
=~
This is the regular expression match operation as in Perl. This operation takes a string and a regular
expression operand and evaluates to a boolean value which will be TRUE if the regular expression matches
the subject string. Captured sub-strings are accessible through numeric reference (such as $1) and the full
subject string is placed into $0. Regular expression based string substitution is supported with the s///
operator. For more details, see Regular Expressions.
A log message will be generated if the $Message field matches the regular expression.
!~
This is the opposite of =~: the expression will evaluate to TRUE if the regular expression does not match on
the subject string. It can be also written as not LEFT_OPERAND =~ RIGHT_OPERAND. The s/// substitution
operator is supported.
A log message will be generated if the $Message field does not match the regular expression.
==
This operator compares two values for equality. Comparing a defined value with an undefined results in
undef.
!=
This operator compares two values for inequality. Comparing a defined value with an undefined results in
undef.
Example 515. Inequality
<
This operation will evaluate to TRUE if the left operand is less than the right operand, and FALSE otherwise.
Comparing a defined value with an undefined results in undef.
<=
This operation will evaluate to TRUE if the left operand is less than or equal to the right operand, and FALSE
otherwise. Comparing a defined value with an undefined results in undef.
>
This operation will evaluate to TRUE if the left operand is greater than the right operand, and FALSE
otherwise. Comparing a defined value with an undefined results in undef.
>=
This operation will evaluate to TRUE if the left operand is greater than or equal to the right operand, and
FALSE otherwise. Comparing a defined value with an undefined results in undef.
• datetime >= datetime = boolean
and
This operation evaluates to TRUE if and only if both operands are TRUE. The operation will evaluate to undef
if either operand is undefined.
A log message will be generated only if $SeverityValue equals 1 and $FacilityValue equals 2.
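A sketch of such a statement:

```
if $SeverityValue == 1 and $FacilityValue == 2 log_info("severity 1, facility 2");
```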
or
This operation evaluates to TRUE if either operand is TRUE. The operation will evaluate to undef if both
operands are undefined.
+
This operation will result in an integer if both operands are integers. If either operand is a string, the result
will be a string where non-string typed values are converted to strings. In this case it acts as a concatenation
operator, like the dot (.) operator in Perl. Adding an undefined value to a non-string will result in undef.
• datetime + integer = datetime (Add the number of seconds in the right value to the datetime stored
in the left value.)
• integer + datetime = datetime (Add the number of seconds in the left value to the datetime stored
in the right value.)
Example 522. Concatenation
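A sketch of string concatenation with the + operator (the field choice is illustrative):

```
$raw_event = $Hostname + " " + $Message;
```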
-
Subtraction. The result will be undef if either operand is undefined.
• datetime - datetime = integer (Subtract two datetime values. The result is the difference between the two, expressed in microseconds.)
• datetime - integer = datetime (Subtract the number of seconds from the datetime stored in the left
value.)
*
Multiply one integer by another. The result will be undef if either operand is undefined.
/
Divide one integer by another. The result will be undef if either operand is undefined. Since the result is an integer, any fractional part is lost.
%
The modulo operation divides one integer by another and returns the remainder. The result will be undef if either operand is undefined.
Example 526. Modulo
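A minimal sketch of the modulo operator in a condition (the field name is illustrative):

```
if $EventID % 2 == 0 log_info("even event identifier");
```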
IN
This operation will evaluate to TRUE if the left operand is equal to any of the expressions in the list on the right, and FALSE otherwise. Comparing an undefined value results in undef.
Example 527. IN
A log message will be generated if $EventID is equal to any one of the values in the list.
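Such a test might look like the following (the event identifiers are illustrative):

```
if $EventID IN (1000, 1001, 4624) log_info("got an interesting event");
```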
NOT IN
This operation is equivalent to NOT expr IN expr_list.
A log message will be generated if $EventID is not equal to any of the values in the list.
The $Important field is set to TRUE if $SeverityValue is greater than 2, or FALSE otherwise.
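One way to express this with an if-else statement (a sketch, not the original listing):

```
if $SeverityValue > 2 $Important = TRUE;
else $Important = FALSE;
```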
119.2.5. Functions
See Functions for a list of functions provided by the NXLog core. Additional functions are available through
modules.
This statement uses the now() function to set the field to the current time.
1 $EventTime = now();
Example 531. Calling a Function of a Specific Module Instance
This statement calls the file_name() and file_size() functions of a defined om_file instance named out in
order to log the name and size of its currently open output file.
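Such a statement could be sketched as follows, assuming the om_file instance is named out as described above:

```
log_info("Size of output file " + out->file_name() + " is " + out->file_size());
```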
119.3. Statements
The following elements can be used in statements. There is no loop operation (for or while) in the NXLog
language.
119.3.1. Assignment
The assignment operation is declared with an equal sign (=). It loads the value from the expression evaluated on
the right into a field on the left.
This statement sets the $EventReceivedTime field to the value returned by the now() function.
1 $EventReceivedTime = now();
119.3.2. Block
A block consists of one or more statements within curly braces ({}). This is typically used with conditional
statements as in the example below.
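A block used with a conditional statement might look like this (the field name and threshold are illustrative):

```
if $SeverityValue >= 4
{
    $Important = TRUE;
    log_info("important event received");
}
```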
119.3.3. Procedures
See Procedures for a list of procedures provided by the NXLog core. Additional procedures are available through
modules.
Example 535. Calling a Procedure of a Specific Module Instance
This statement calls the parse_csv() procedure of a defined xm_csv module instance named csv_parser.
1 csv_parser->parse_csv();
119.3.4. If-Else
A conditional statement starts with the if keyword followed by a boolean expression and a statement. The else
keyword, followed by another statement, is optional. Brackets around the expression are also optional.
Like Perl, the NXLog language does not have a switch statement. Instead, this can be accomplished by using
conditional if-else statements.
The generated log message varies based on the value of the $value field.
1 if ( $value == 1 )
2 log_info("1");
3 else if ( $value == 2 )
4 log_info("2");
5 else if ( $value == 3 )
6 log_info("3");
7 else
8 log_info("default");
NOTE The Perl elsif and unless keywords are not supported.
119.4. Variables
A module variable can only be accessed from the same module instance where it was created. A variable is
referenced by a string value and can store a value of any type.
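As a sketch, assuming the core set_var() procedure and get_var() function (not listed in this excerpt), a module variable could keep a running count:

```
if not defined get_var("count")
{
    create_var("count");
    set_var("count", 0);
}
set_var("count", get_var("count") + 1);
```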
COUNT
Added values are aggregated; as long as only positive integers are added, the value of the counter increases until the counter is destroyed, or indefinitely if the counter has no expiry.
COUNTMIN
This calculates the minimum value of the counter.
COUNTMAX
This calculates the maximum value of the counter.
AVG
This algorithm calculates the average over the specified interval.
AVGMIN
This algorithm calculates the average over the specified interval, and the value of the counter is always the
lowest which was ever calculated during the lifetime of the counter.
AVGMAX
Like AVGMIN, but this returns the highest value calculated during the lifetime of the counter.
RATE
This calculates the rate of the counter over the specified interval. It can be used to calculate events per second (EPS) values.
RATEMIN
This calculates the rate over the specified interval and returns the lowest rate calculated during the lifetime of the counter.
RATEMAX
Like RATEMIN, but this returns the highest rate calculated during the lifetime of the counter.
GRAD
This calculates the change of the rate of the counter over the specified interval, which is the gradient.
GRADMIN
This calculates the gradient and returns the lowest gradient calculated during the lifetime of the counter.
GRADMAX
Like GRADMIN, but this returns the highest gradient calculated during the lifetime of the counter.
119.6. Fields
The following fields are used by core.
$raw_event (type: string)
The data received from stream modules (im_file, im_tcp, etc.).
119.7. Functions
The following functions are exported by core.
boolean dropped()
Return TRUE if the currently processed event has already been dropped.
datetime fix_year(datetime datetime)
Return a corrected datetime value for a datetime which was parsed with a missing year, such as BSD Syslog or Cisco timestamps. The current year is used unless it would result in a timestamp that is more than 30 days in the future, in which case the previous year is used instead. A timestamp up to 30 days in the future is tolerated on the assumption that the source device's clock may be slightly ahead, so the returned datetime value can be up to 30 days in the future.
integer get_rand()
Return a random integer value.
string get_uuid()
Return a UUID string.
ipaddr host_ip()
Return the first non-loopback IP address the hostname resolves to.
string hostname()
Return the hostname (short form).
string hostname_fqdn()
Return the FQDN hostname. This function will return the short form if the FQDN hostname cannot be
determined.
integer integer(unknown arg)
Parse and convert the string argument to an integer. For a datetime argument, it returns the number of microseconds since the epoch.
datetime now()
Return the current time.
string nxlog_version()
Return the NXLog version string.
Nov 6 08:49:37
Nov 6 08:49:37
Nov 06 08:49:37
Nov 3 14:50:30.403
Nov 3 14:50:30.403
Nov 03 14:50:30.403
Nov 3 2005 14:50:30
Nov 3 2005 14:50:30
Nov 03 2005 14:50:30
Nov 3 2005 14:50:30.403
Nov 3 2005 14:50:30.403
Nov 03 2005 14:50:30.403
Nov 3 14:50:30 2005
Nov 3 14:50:30 2005
Nov 03 14:50:30 2005
RFC 1123
RFC 1123 compliant dates are also supported, along with several similar formats such as those defined in RFC 822, RFC 850, and RFC 1036.
Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123
Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036
Sun Nov 6 08:49:37 1994 ; ANSI C's asctime() format
Sun, 6 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123
Sun, 06 Nov 94 08:49:37 GMT ; RFC 822
Sun, 6 Nov 94 08:49:37 GMT ; RFC 822
Sun, 6 Nov 94 08:49:37 GMT ; RFC 822
Sun, 06 Nov 94 08:49 GMT ; Unknown
Sun, 6 Nov 94 08:49 GMT ; Unknown
Sun, 06 Nov 94 8:49:37 GMT ; Unknown [Elm 70.85]
Sun, 6 Nov 94 8:49:37 GMT ; Unknown [Elm 70.85]
Mon, 7 Jan 2002 07:21:22 GMT ; Unknown [Postfix]
Sun, 06-Nov-1994 08:49:37 GMT ; RFC 850 with four digit years
The above formats are also recognized when the leading day of week and/or the timezone are omitted.
Apache/NCSA date
This format can be found in Apache access logs and other sources.
24/Aug/2009:16:08:57 +0200
1977-09-06 01:02:03
1977-09-06 01:02:03.004
1977-09-06T01:02:03.004Z
1977-09-06T01:02:03.004+02:00
2011-5-29 0:3:21
2011-5-29 0:3:21+02:00
2011-5-29 0:3:21.004
2011-5-29 0:3:21.004+02:00
Windows timestamps
20100426151354.537875
20100426151354.537875-000
20100426151354.537875000
3/13/2017 8:42:07 AM ; Microsoft DNS Server
Integer timestamp
This format is XXXXXXXXXX.USEC. The value is an integer giving the number of seconds elapsed since the epoch (UTC). The fractional microsecond part is optional.
1258531221.650359
1258531221
BIND9 timestamps
23-Mar-2017 06:38:30.143
23-Mar-2017 06:38:30
2017-Mar-23 06:38:30.143
2017-Mar-23 06:38:30
• YYYY-MM-DD hh:mm:ss,
• YYYY-MM-DDThh:mm:ssTZ,
• YYYY-MM-DDThh:mm:ss.sTZ,
• YYYY-MM-DD hh:mm:ssTZ,
• YYYY-MM-DD hh:mm:ss.sTZ,
• YYYY-MM-DDThh:mm:ssUTC,
• YYYY-MM-DDThh:mm:ss.sUTC,
• YYYY-MM-DD hh:mm:ssUTC,
• YYYY-MM-DD hh:mm:ss.sUTC, or
• a format string accepted by the C strftime() function (see the strftime(3) manual or the Windows strftime
reference for the format specification).
119.8. Procedures
The following procedures are exported by core.
add_to_route(string routename);
Copy the currently processed event data to the route specified. This procedure makes a copy of the data. The
original will be processed normally. Note that flow control is explicitly disabled when copying data with add_to_route(), and the data will not be added if the queue of the target module(s) is full.
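For example (a sketch; the route name alerts is illustrative and must match a route defined in the configuration):

```
if $SeverityValue >= 4 add_to_route("alerts");
```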
create_stat(string statname, string type);
Create a module statistical counter with the specified name and type. This two-parameter form can only be used with the COUNT type; otherwise the interval parameter must be specified (see below). This procedure will do nothing if a counter with the specified name already exists.
create_stat(string statname, string type, integer interval, datetime time, integer lifetime);
Create a module statistical counter with the specified name to be calculated over interval seconds and the
time value specified in the time argument. The statistical counter will expire after lifetime seconds.
create_stat(string statname, string type, integer interval, datetime time, datetime expiry);
Create a module statistical counter with the specified name to be calculated over interval seconds and the
time value specified in the time argument. The statistical counter will expire at expiry.
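As a sketch, assuming the add_stat() procedure and get_stat() function that accompany these counters (they are not listed in this excerpt), an events-per-second counter could be maintained like this:

```
create_stat("eps", "RATE", 10, now(), 3600);
add_stat("eps", 1);
if defined get_stat("eps") $EventsPerSec = get_stat("eps");
```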
create_var(string varname);
Create a module variable with the specified name. The variable will be created with an infinite lifetime.
delete(unknown arg);
Delete the field from the event. For example, delete($field). Note that $field = undef is not the same,
though after both operations the field will be undefined.
delete(string arg);
Delete the field from the event. For example, delete("field").
delete_all();
Delete all of the fields from the event except the $raw_event field.
delete_stat(string statname);
Delete a module statistical counter with the specified name. This procedure will do nothing if a counter with
the specified name does not exist (e.g. was already deleted).
delete_var(string varname);
Delete the module variable with the specified name if it exists.
drop();
Drop the event record that is currently being processed. Any further action on the event record will result in a
"missing record" error.
duplicate_guard();
Guard against event duplication.
module_restart();
Issue a module_stop and then a module_start event for the calling module.
module_start();
Issue a module_start event for the calling module.
module_stop();
Issue a module_stop event for the calling module.
reroute(string routename);
Move the currently processed event data to the route specified. The event data will enter the route as if it had been
received by an input module there. Note that flow control is explicitly disabled when moving data with
reroute() and the data will be dropped if the queue of the target module(s) is full.
sleep(integer interval);
Sleep the specified number of microseconds. This procedure is provided for testing purposes primarily. It can
be used as a poor man’s rate limiting tool, though this use is not recommended.
Chapter 120. Extension Modules
Extension modules do not process log messages directly, and for this reason their instances cannot be part of a
route. These modules enhance the features of NXLog in various ways, such as exporting new functions and
procedures or registering additional I/O reader and writer functions (to be used with modules supporting the
InputType and OutputType directives). There are many ways to hook an extension module into the NXLog
engine, as the following modules illustrate.
Note that though the module can both initiate and accept connections, the direction of the HTTP requests is
always the same: requests are sent to the module and it returns HTTP responses.
See the list of installer packages that provide the xm_admin module in the Available Modules chapter of the
NXLog User Guide.
120.1.1. Configuration
The xm_admin module accepts the following directives in addition to the common module directives.
Connect
This directive instructs the module to connect to a remote socket. The argument must be an IP address when SocketType is set to TCP or SSL; otherwise it must be the name of a socket for UDS. Connect cannot be used
together with the Listen directive. Multiple xm_admin instances may be configured if multiple administration
ports are required.
Listen
This directive instructs the module to accept connections. The argument must be an IP address when
SocketType is TCP or SSL. Otherwise it must be the name of a socket for UDS. Listen cannot be used together
with the Connect directive. Multiple xm_admin instances may be configured if multiple administration ports
are required. If neither Listen nor Connect are specified, the module will listen with SocketType TCP on
127.0.0.1.
Port
This specifies the port number used with the Listen or Connect modes. The default port is 8080.
ACL
This block defines directories which can be used with the GetFile and PutFile web service requests. The name
of the ACL is used in these requests together with the filename. The filename can contain only the characters [a-zA-Z0-9-._], so these file operations will only work within the directory. An example of its usage is shown in the Examples section.
AllowRead
If set to TRUE, GetFile requests are allowed.
AllowWrite
If set to TRUE, PutFile requests are allowed.
Directory
The name of the directory where the files are saved to or loaded from.
AllowIP
This optional directive can be used to specify an IP address or a network that is allowed to connect. The
directive can be specified more than once to add different IPs or networks to the whitelist. The following
formats can be used:
AllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE the remote will be able to connect with unknown and self-signed certificates. The default value
is FALSE: all connections must present a trusted certificate. This directive is only valid if SocketType is set to
SSL.
CADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote socket. The certificate filenames in this directory must be in the OpenSSL
hashed format. This directive is only valid if SocketType is set to SSL. A remote’s self-signed certificate (which
is not signed by a CA) can also be trusted by including a copy of the certificate in this directory.
CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote socket. This directive is only valid if SocketType is set to SSL. To trust a self-signed certificate
presented by the remote (which is not signed by a CA), provide that certificate instead.
CAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed.
This directive and the CADir and CAFile directives are mutually exclusive.
CertFile
This specifies the path of the certificate file to be used for SSL connections. This directive is only valid if
SocketType is set to SSL.
CertKeyFile
This specifies the path of the certificate key file to be used for SSL connections. This directive is only valid if
SocketType is set to SSL.
CertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed. This directive is only supported on Windows. This directive and the CertFile and
CertKeyFile directives are mutually exclusive.
CRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote socket. The certificate filenames in this directory must be in the
OpenSSL hashed format. This directive is only valid if SocketType is set to SSL.
CRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote socket. This directive is only valid if SocketType is set to SSL.
KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is only valid if SocketType is set to SSL. This directive is not needed for password-less private keys.
Labels
This directive allows custom key value pairs to be defined with static or dynamic values. The directive is very
useful for providing supplementary details about agents (hostname, operating system, local contact
information, etc.). Labels are returned as part of the ServerInfo response from agents.
Label values can be set in several ways: statically, with a literal string in the <labels> block, a defined string, or an environment variable; or dynamically, either at start-up with a script run via the include_stdout directive or at run-time before each response is sent. Setting labels is demonstrated in the Examples section.
Reconnect
This directive has been deprecated as of version 2.4. The module will try to reconnect automatically at
increasing intervals on all errors.
RequireCert
This boolean directive specifies that the remote must present a certificate. If set to TRUE and there is no
certificate presented during the connection handshake, the connection will be refused. The default value is
TRUE: each connection must use a certificate. This directive is only valid if SocketType is set to SSL.
SocketType
This directive sets the connection type. It can be one of the following:
SSL
TLS/SSL for secure network connections
TCP
TCP, the default if SocketType is not explicitly specified
UDS
Unix domain socket
SSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
SSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the SSLProtocol directive is
set to TLSv1.3. Use the same format as in the SSLCipher directive.
SSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (compression is disabled).
NOTE Some Linux packages (for example, Debian) use the OpenSSL library and may not support the zlib compression mechanism. The module will emit a warning on startup if the compression support is missing. The generic deb/RPM packages are bundled with a zlib-enabled libssl library.
SSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions
may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
GetFile
Download a file from the NXLog agent. This will only work if the specified ACL exists.
GetLog
Download the log file from the NXLog agent.
ModuleInfo
Request information about a module instance.
ModuleRestart
Restart a module instance.
ModuleStart
Start a module instance.
ModuleStop
Stop a module instance.
PutFile
Upload a file to the NXLog agent. This will only work if the specified ACL exists. A file can be a configuration
file, certificate or certificate key, pattern database, correlation rule file, etc. Using this method enables NXLog
to be reconfigured from a remote host.
ServerInfo
Request information about the server. This will also return info about all module instances.
ServerRestart
Restart the server.
ServerStart
Start all modules of the server, the opposite of ServerStop.
ServerStop
Stop all modules of the server. Note that the NXLog process will not exit, otherwise it would be impossible to
make it come back online remotely. Extension modules are not stopped for the same reason.
The same SOAP methods were used to create an equivalent JSON Schema, so JSON objects can be used instead of SOAP methods. This is illustrated in the following examples.
120.1.3. Request - Response Examples
This is a typical SOAP ServerInfo request and its response.
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header/>
<SOAP-ENV:Body>
<adm:serverInfo xmlns:adm="http://log4ensics.com/2010/AdminInterface"/>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
<SOAP-ENV:Envelope xmlns:SOAP-ENV='http://schemas.xmlsoap.org/soap/envelope/'>
<SOAP-ENV:Header/>
<SOAP-ENV:Body>
<adm:serverInfoReply xmlns:adm='http://log4ensics.com/2010/AdminInterface'>
<started>1508401312424622</started>
<load>0.2000</load>
<pid>15519</pid>
<mem>12709888</mem>
<version>3.99.2802</version>
<os>Linux</os>
<systeminfo>OS: Linux, Hostname: voyager, Release: 4.4.0-96-generic, Version: #119-Ubuntu SMP
Tue Sep 12 14:59:54 UTC 2017, Arch: x86_64, 4 CPU(s), 15.7Gb memory</systeminfo>
<hostname>voyager</hostname>
<servertime>1508405764586118</servertime>
</adm:serverInfoReply>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
{
"msg": {
"command": "serverInfo"
}
}
{
"response": "serverInfoReply",
"status": "success",
"data": {
"server-info": {
"started": 1508401312424622,
"load": 0.05999999865889549,
"pid": 15519,
"mem": 12742656,
"os": "Linux",
"version": "3.99.2802",
"systeminfo": "OS: Linux, Hostname: voyager, Release: 4.4.0-96-generic, Version: #119-Ubuntu
SMP Tue Sep 12 14:59:54 UTC 2017, Arch: x86_64, 4 CPU(s), 15.7Gb memory",
"hostname": "voyager",
"servertime": 1508406647673758,
"modules": {}
}
}
}
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header/>
<SOAP-ENV:Body>
<adm:putFile xmlns:adm="http://log4ensics.com/2010/AdminInterface">
<filetype>tmp</filetype>
<filename>test.tmp</filename>
<file>File Content
A newline
</file>
</adm:putFile>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
<SOAP-ENV:Envelope xmlns:SOAP-ENV='http://schemas.xmlsoap.org/soap/envelope/'>
<SOAP-ENV:Header/>
<SOAP-ENV:Body>
<adm:putFileReply xmlns:adm='http://log4ensics.com/2010/AdminInterface'/>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
{
"msg": {
"command": "putFile",
"params": {
"filetype": "tmp",
"filename": "test.tmp",
"file": "File content\nA newline\n"
}
}
}
{
"response": "putFileReply",
"status": "success",
"data": {}
}
120.1.4. Examples
Example 538. ACL Block Allowing Read and Write on Files in the Directory
This ACL is named "conf" and allows both GetFile and PutFile requests on the specified directory.
nxlog.conf
1 <ACL conf>
2 Directory /var/run/nxlog/configs
3 AllowRead TRUE
4 AllowWrite TRUE
5 </ACL>
Example 539. Setting Values for Labels
Static label values are set with the define directive, the envvar directive, and literal key-value pairs inside the <labels> block. Dynamic values are set via a start-up script run with the include_stdout directive and via a run-time function call for the host label.
nxlog.conf
1 define BASE /opt/nxlog_new
2 envvar NXLOG_OS
3
4 <Extension admin>
5 Module xm_admin
6 ...
7 <labels>
8 os_name "Debian"
9 agent_base %BASE%
10 os %NXLOG_OS%
11 include_stdout /path/to/labels.sh
12 host hostname_fqdn()
13 </labels>
14 </Extension>
Example 540. Configuration for Multiple Admin Ports
nxlog.conf (truncated)
1 <Extension ssl_connect>
2 Module xm_admin
3 Connect 192.168.1.1
4 Port 4041
5 SocketType SSL
6 CAFile %CERTDIR%/ca.pem
7 CertFile %CERTDIR%/client-cert.pem
8 CertKeyFile %CERTDIR%/client-key.pem
9 KeyPass secret
10 AllowUntrusted FALSE
11 RequireCert TRUE
12 Reconnect 60
13 <ACL conf>
14 Directory %CONFDIR%
15 AllowRead TRUE
16 AllowWrite TRUE
17 </ACL>
18 <ACL cert>
19 Directory %CERTDIR%
20 AllowRead TRUE
21 AllowWrite TRUE
22 </ACL>
23 </Extension>
24
25 <Extension tcp_listen>
26 Module xm_admin
27 Listen localhost
28 Port 8080
29 [...]
See the list of installer packages that provide the xm_aixaudit module in the Available Modules chapter of the
NXLog User Guide.
120.2.1. Configuration
The xm_aixaudit module accepts the following directive in addition to the common module directives.
EventsConfigFile
This optional directive contains the path to the file with a list of audit events. This file should contain events in
AuditEvent = FormatCommand format. The AuditEvent is a reference to the audit object which is defined
under the /etc/security/audit/objects path. The FormatCommand defines the auditpr output for the
object. For more information, see the The Audit Subsystem in AIX section on the IBM website.
120.2.2. Fields
The following fields are used by xm_aixaudit.
120.2.3. Examples
Example 541. Parsing AIX Audit Events
This configuration reads AIX audit logs from file and parses them with the InputType registered by
xm_aixaudit.
nxlog.conf
1 <Extension aixaudit>
2 Module xm_aixaudit
3 EventsConfigFile modules/extension/aixaudit/events
4 </Extension>
5
6 <Input in>
7 Module im_file
8 File "/audit/audit3.bin"
9 InputType aixaudit
10 </Input>
See the list of installer packages that provide the xm_asl module in the Available Modules chapter of the NXLog
User Guide.
120.3.1. Configuration
The xm_asl module accepts only the common module directives.
120.3.2. Fields
The following fields are used by xm_asl.
$Sender (type: string)
The name of the process that sent the message.
ASL $Level    NXLog $SeverityValue
1/ALERT       5/CRITICAL
2/CRITICAL    5/CRITICAL
3/ERROR       4/ERROR
4/WARNING     3/WARNING
5/NOTICE      2/INFO
6/INFO        2/INFO
7/DEBUG       1/DEBUG
120.3.3. Examples
Example 542. Parsing Apple System Logs With xm_asl
This example uses an im_file module instance to read an ASL log file and the InputType provided by xm_asl
to parse the events. The various Fields are added to the event record.
nxlog.conf
1 <Extension asl_parser>
2 Module xm_asl
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File "tmp/input.asl"
8 InputType asl_parser
9 </Input>
On Solaris, the device file is not available and the BSM log files must be read and parsed with im_file and
xm_bsm as shown in the example below.
WARNING To properly read BSM audit logs from a device file, such as /dev/auditpipe, the im_bsm module must be used. Do not use the xm_bsm module in combination with im_file to read BSM logs from a device file.
See the list of installer packages that provide the xm_bsm module in the Available Modules chapter of the NXLog
User Guide.
120.4.1. Setup
For information about setting up BSM Auditing, see the corresponding documentation:
120.4.2. Configuration
The xm_bsm module accepts the following directives in addition to the common module directives.
EventFile
This optional directive can be used to specify the path to the audit event database containing a mapping
between event names and numeric identifiers. The default location is /etc/security/audit_event which is
used when the directive is not specified.
120.4.3. Fields
The following fields are used by xm_bsm.
$Arbitrary (type: string)
Arbitrary data token associated with the event, if any
$ExitErrno (type: string)
The exit status as passed to the exit() system call
$IPCPermMode (type: string)
The IPC access mode
$ProcessPID (type: string)
The process ID (PID) in the Process section
$SubjectSID (type: string)
The session ID (SID) in the Subject section
120.4.4. Examples
Example 543. Parsing BSM Events With xm_bsm
This configuration reads BSM audit logs from file and parses them with the InputType registered by
xm_bsm.
nxlog.conf
1 <Extension bsm_parser>
2 Module xm_bsm
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File '/var/audit/*'
8 InputType bsm_parser
9 </Input>
NOTE CEF uses Syslog as a transport. For this reason the xm_syslog module must be used in conjunction with xm_cef in order to parse or generate the additional Syslog header, unless the CEF data is used without Syslog. See examples for both cases below.
See the list of installer packages that provide the xm_cef module in the Available Modules chapter of the NXLog
User Guide.
120.5.1. Configuration
The xm_cef module accepts the following directive in addition to the common module directives.
IncludeHiddenFields
This boolean directive specifies that the to_cef() function or the to_cef() procedure should include fields having
a leading dot (.) or underscore (_) in their names. The default is TRUE. If IncludeHiddenFields is set to TRUE,
then generated CEF text will contain these otherwise excluded fields as extension fields.
120.5.2. Functions
The following functions are exported by xm_cef.
string to_cef()
Convert the specified fields to a single CEF formatted string.
Note that the IncludeHiddenFields directive has an effect on extension fields in the output.
120.5.3. Procedures
The following procedures are exported by xm_cef.
parse_cef();
Parse the $raw_event field as CEF input.
parse_cef(string source);
Parse the given string as CEF format.
to_cef();
Format the specified fields as CEF and put this into the $raw_event field. The CEF header fields can be
overridden by values contained in the following NXLog fields: $CEFVersion, $CEFDeviceVendor,
$CEFDeviceProduct, $CEFDeviceVersion, $CEFSignatureID, $CEFName, and $CEFSeverity.
Note that the IncludeHiddenFields directive has an effect on extension fields in the output.
120.5.4. Fields
The following fields are used by xm_cef.
In addition to the fields listed below, the parse_cef() procedure will create a field for every key-value pair
contained in the Extension CEF field, such as $act, $cnt, $dhost, etc.
$CEFDeviceVendor (type: string)
The vendor or manufacturer of the device that sent the CEF-formatted event log. This field takes the value of the Device Vendor CEF header field.
$CEFSeverity (type: string)
This field takes the value of the Severity CEF header field.
120.5.5. Examples
Example 544. Sending Windows EventLog as CEF over UDP
This configuration collects both Windows EventLog and NXLog internal messages, converts to CEF with
Syslog headers, and forwards via UDP.
nxlog.conf
1 <Extension cef>
2 Module xm_cef
3 </Extension>
4
5 <Extension syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input internal>
10 Module im_internal
11 </Input>
12
13 <Input eventlog>
14 Module im_msvistalog
15 </Input>
16
17 <Output udp>
18 Module om_udp
19 Host 192.168.168.2
20 Port 1514
21 Exec $Message = to_cef(); to_syslog_bsd();
22 </Output>
23
24 <Route arcsight>
25 Path internal, eventlog => udp
26 </Route>
Example 545. Parsing CEF
The following configuration receives CEF over UDP and converts the parsed data into JSON.
nxlog.conf
1 <Extension cef>
2 Module xm_cef
3 </Extension>
4
5 <Extension syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Extension json>
10 Module xm_json
11 </Extension>
12
13 <Input udp>
14 Module im_udp
15 Host 0.0.0.0
16 Exec parse_syslog(); parse_cef($Message);
17 </Input>
18
19 <Output file>
20 Module om_file
21 File "cef2json.log"
22 Exec to_json();
23 </Output>
24
25 <Route cef2json>
26 Path udp => file
27 </Route>
See the list of installer packages that provide the xm_charconv module in the Available Modules chapter of the
NXLog User Guide.
120.6.1. Configuration
The xm_charconv module accepts the following directives in addition to the common module directives.
AutodetectCharsets
This optional directive accepts a comma-separated list of character set names. When auto is specified as the
source encoding for convert() or convert_fields(), these character sets will be tried for conversion. This
directive is not related to the LineReader directive or the corresponding InputType registered by the module.
BigEndian
This optional boolean directive specifies the endianness to use during the encoding conversion. If this
directive is not specified, it defaults to the host’s endianness. This directive only affects the registered
InputType and is only applicable if the LineReader directive is set to a non-Unicode encoding and CharBytes is
set to 2 or 4.
CharBytes
This optional integer directive specifies the byte-width of the encoding to use during conversion. Acceptable
values are 1 (the default), 2, and 4. Most variable width encodings will work with the default value. This
directive only affects the registered InputType and is only applicable if the LineReader directive is set to a
non-Unicode encoding.
LineReader
If this optional directive is specified with an encoding, an InputType will be registered using the name of the
xm_charconv module instance. The following Unicode encodings are supported: UTF-8, UCS-2, UCS-2BE,
UCS-2LE, UCS-4, UCS-4BE, UCS-4LE, UTF-16, UTF-16BE, UTF-16LE, UTF-32, UTF-32BE, UTF-32LE, and UTF-7.
For other encodings, it may be necessary to also set BigEndian and/or CharBytes.
120.6.2. Functions
The following functions are exported by xm_charconv.
120.6.3. Procedures
The following procedures are exported by xm_charconv.
120.6.4. Examples
Example 546. Character set auto-detection of various input encodings
This configuration shows an example of character set auto-detection. The input file can contain differently
encoded lines, and the module normalizes output to UTF-8.
nxlog.conf
1 <Extension charconv>
2 Module xm_charconv
3 AutodetectCharsets utf-8, euc-jp, utf-16, utf-32, iso8859-2
4 </Extension>
5
6 <Input filein>
7 Module im_file
8 File "tmp/input"
9 Exec convert_fields("auto", "utf-8");
10 </Input>
11
12 <Output fileout>
13 Module om_file
14 File "tmp/output"
15 </Output>
16
17 <Route r>
18 Path filein => fileout
19 </Route>
Example 547. Registering and Using an InputType
This configuration uses the InputType registered via the LineReader directive to read a file with the
ISO-8859-2 encoding.
nxlog.conf
1 <Extension charconv>
2 Module xm_charconv
3 LineReader ISO-8859-2
4 </Extension>
5
6 <Input in>
7 Module im_file
8 File 'modules/extension/charconv/iso-8859-2.in'
9 InputType charconv
10 </Input>
The pm_transformer module provides a simple interface to parse and generate CSV format, but the xm_csv
module exports an API that can be used to solve more complex tasks involving CSV formatted data.
NOTE It is possible to use more than one xm_csv module instance with different options in order to
support different CSV formats at the same time. For this reason, functions and procedures
exported by the module are public and must be referenced by the module instance name.
See the list of installer packages that provide the xm_csv module in the Available Modules chapter of the NXLog
User Guide.
120.7.1. Configuration
The xm_csv module accepts the following directives in addition to the common module directives. The Fields
directive is required.
Fields
This mandatory directive accepts a comma-separated list of fields which will be filled from the parsed input.
Field names with or without the dollar sign ($) are accepted. The fields will be stored as strings unless their
types are explicitly specified with the FieldTypes directive.
Delimiter
This optional directive takes a single character (see below) as argument to specify the delimiter character
used to separate fields. The default delimiter character is the comma (,). Note that there is no delimiter after
the last field.
EscapeChar
This optional directive takes a single character (see below) as argument to specify the escape character used
to escape special characters. The escape character is used to prefix the following characters: the escape
character itself, the quote character, and the delimiter character. If EscapeControl is TRUE, the newline (\n),
carriage return (\r), tab (\t), and backspace (\b) control characters are also escaped. The default escape
character is the backslash character (\).
EscapeControl
If this optional boolean directive is set to TRUE, control characters are also escaped. See the EscapeChar
directive for details. The default is TRUE: control characters are escaped. Note that this is necessary to allow
single line CSV field lists which contain line-breaks.
FieldTypes
This optional directive specifies the list of types corresponding to the field names defined in Fields. If
specified, the number of types must match the number of field names specified with Fields. If this directive is
omitted, all fields will be stored as strings. This directive has no effect on the fields-to-CSV conversion.
QuoteChar
This optional directive takes a single character (see below) as argument to specify the quote character used to
enclose fields. If QuoteOptional is TRUE, then only string type fields are quoted. The default is the double-
quote character (").
QuoteMethod
This optional directive can take the following values:
All
All fields will be quoted.
None
Nothing will be quoted. This can be problematic if a field value (typically text that can contain any
character) contains the delimiter character. Make sure that this is escaped or replaced with something
else.
String
Only string type fields will be quoted. This has the same effect as QuoteOptional set to TRUE and is the
default behavior if the QuoteMethod directive is not specified.
Note that this directive only affects CSV generation when using to_csv(). The CSV parser automatically
detects the quoting.
QuoteOptional
This directive has been deprecated in favor of QuoteMethod, which should be used instead.
StrictMode
If this optional boolean directive is set to TRUE, the CSV parser will fail to parse CSV lines that do not contain
the required number of fields. When this is set to FALSE and the input contains fewer fields than specified in
Fields, the remaining fields will be unset. The default value is FALSE.
UndefValue
This optional directive specifies a string which will be treated as an undefined value. This is particularly useful
when parsing the W3C format where the dash (-) marks an omitted field.
The QuoteChar, EscapeChar, and Delimiter directives accept a single character, which can be specified in
several ways. An unquoted printable character can be given directly:

Delimiter ;

Control characters
The following non-printable characters can be specified with escape sequences:

\a
audible alert (bell)
\b
backspace
\t
horizontal tab
\n
newline
\v
vertical tab
\f
formfeed
\r
carriage return

For example, to use tab-delimited input:

Delimiter \t

A character can also be enclosed in single or double quotes:

Delimiter ';'

The backslash must be enclosed in quotes if used as the delimiter:

Delimiter '\'
Delimiter "\"

Finally, a character can be specified by its hexadecimal ASCII code; for example, the space character:

Delimiter 0x20
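Putting these directives together, the following is a minimal sketch of an xm_csv instance for parsing W3C-style space-delimited records, where the dash marks an omitted field (the field names shown are illustrative, not taken from any particular log source):

```
<Extension w3c>
    Module      xm_csv
    Fields      $date, $time, $ip, $method, $status
    Delimiter   0x20
    UndefValue  -
</Extension>
```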
120.7.2. Functions
The following functions are exported by xm_csv.
string to_csv()
Convert the specified fields to a single CSV formatted string.
120.7.3. Procedures
The following procedures are exported by xm_csv.
parse_csv();
Parse the $raw_event field as CSV input.
parse_csv(string source);
Parse the given string as CSV format.
to_csv();
Format the specified fields as CSV and put this into the $raw_event field.
120.7.4. Examples
Example 548. Complex CSV Format Conversion
This example shows that the xm_csv module can not only parse and create CSV formatted input and output,
but with multiple xm_csv modules it is also possible to reorder, add, remove, or modify fields before
outputting to a different CSV format.
nxlog.conf
1 <Extension csv1>
2 Module xm_csv
3 Fields $id, $name, $number
4 FieldTypes integer, string, integer
5 Delimiter ,
6 </Extension>
7
8 <Extension csv2>
9 Module xm_csv
10 Fields $id, $number, $name, $date
11 Delimiter ;
12 </Extension>
13
14 <Input in>
15 Module im_file
16 File "tmp/input"
17 <Exec>
18 csv1->parse_csv();
19 $date = now();
20 if not defined $number $number = 0;
21 csv2->to_csv();
22 </Exec>
23 </Input>
24
25 <Output out>
26 Module om_file
27 File "tmp/output"
28 </Output>
Input Sample
1, "John K.", 42
2, "Joe F.", 43
Output Sample
1;42;"John K.";2011-01-15 23:45:20
2;43;"Joe F.";2011-01-15 23:45:20
120.8.1. Configuration
The xm_crypto module accepts the following directives in addition to the common module directives.
Password
This optional directive defines the password which can be used by the aes_encrypt and aes_decrypt stream
processors while processing data. This directive is mutually exclusive with the PasswordFile directive.
PasswordFile
This optional directive specifies the file to read the password from. This directive is mutually exclusive with
the Password directive.
UseSalt
This optional boolean directive specifies whether a randomly generated salt (a random combination of
characters) should be used during encryption and decryption. The default value is TRUE.
Iter
This optional directive enhances security by enabling the PBKDF2 algorithm and setting the iteration count
during the key generation. For more details, see the EVP_BytesToKey section on the OpenSSL website.
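As a hedged sketch, an instance that enables PBKDF2 key derivation might look like the following (the password and iteration count are illustrative values, not recommendations):

```
<Extension crypto>
    Module    xm_crypto
    Password  ThePassword123!!
    Iter      10000
</Extension>
```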
120.8.2. Stream Processors
aes_encrypt
This stream processor implements encryption of the log data. It is specified in the OutputType directive after
the specification of the output writer function. The encryption result is similar to running the following
OpenSSL command:
openssl enc -aes256 -md sha256 -pass pass:password -in input_filename -out output_encrypted_filename
NOTE Rotation of files is done automatically when encrypting log data with the aes_encrypt processor.
The rotation pattern is original_file -> original_file.1 -> original_file.2 -> original_file.n.
There is no built-in removal or cleanup.
aes_decrypt
This stream processor implements decryption of the log data. It is specified in the InputType directive before
the specification of the input reader function. The decryption result is similar to running the following
OpenSSL command:
openssl enc -aes256 -d -md sha256 -pass pass:password -in encrypted_filename -out
output_decrypted_filename
120.8.3. Examples
The examples below describe various ways for processing logs with the xm_crypto module.
Example 549. Encryption of Logs
The following configuration uses the im_file module to read log messages. The aes_encrypt stream processor
encrypts data at the output. The result is saved to a file.
nxlog.conf
1 <Extension crypto>
2 Module xm_crypto
3 Password ThePassword123!!
4 </Extension>
5
6 <Input in>
7 Module im_file
8 File 'tmp/input'
9 </Input>
10
11 <Output out>
12 Module om_file
13 File 'tmp/output'
14 OutputType LineBased, crypto.aes_encrypt
15 </Output>
Example 550. Decryption of Logs
The following configuration uses the aes_decrypt stream processor to decrypt log messages at the input.
The result is saved to a file.
nxlog.conf
1 <Extension crypto>
2 Module xm_crypto
3 UseSalt TRUE
4 PasswordFile /tmp/passwordfile
5 </Extension>
6
7 <Input in>
8 Module im_file
9 File '/tmp/input'
10 InputType crypto.aes_decrypt, LineBased
11 </Input>
12
13 <Output out>
14 Module om_file
15 File '/tmp/output'
16 </Output>
Example 551. Decryption and Encryption of Logs
The configuration below uses the aes_decrypt stream processor to decrypt input data. The Exec directive
runs the to_syslog_ietf() procedure to convert messages to the IETF Syslog format. At the output,
the result is encrypted with the aes_encrypt processor and saved to a file.
<Extension crypto>
Module xm_crypto
UseSalt TRUE
PasswordFile /tmp/passwordfile
</Extension>
<Extension syslog>
Module xm_syslog
</Extension>
<Input in>
Module im_file
File 'tmp/input'
InputType crypto.aes_decrypt, LineBased
Exec to_syslog_ietf();
</Input>
<Output out>
Module om_file
File 'tmp/output'
OutputType LineBased, crypto.aes_encrypt
</Output>
The InputType and OutputType directives provide sequential usage of multiple stream processors to create
workflows. For example, the xm_zlib module functionality can be combined with the xm_crypto module to
provide compression and encryption of logs at the same time.
While configuring stream processors, compression should always precede encryption. In the opposite process,
decryption should occur before decompression.
Example 552. Processing Data With Various Stream Processors
The configuration below utilizes the aes_decrypt processor of the xm_crypto module to decrypt log
messages and the decompress stream processor of the xm_zlib module to decompress the data. Using the
Exec directive, messages with the info string in their body are selected. Then the selected messages are
compressed and encrypted using the compress and aes_encrypt stream processors. The result is saved to a
file.
<Extension zlib>
Module xm_zlib
Format zlib
CompressionLevel 9
CompBufSize 16384
DecompBufSize 16384
</Extension>
<Extension crypto>
Module xm_crypto
UseSalt TRUE
Password ThePassword123!!
</Extension>
<Input in>
Module im_file
File '/tmp/input'
InputType crypto.aes_decrypt, zlib.decompress, LineBased
Exec if not ($raw_event =~ /info/) drop();
</Input>
<Output out>
Module om_file
File 'tmp/output'
OutputType LineBased, zlib.compress, crypto.aes_encrypt
</Output>
NOTE The im_exec and om_exec modules also provide support for running external programs, though
the purpose of these is to pipe data to and read data from programs. The procedures provided
by the xm_exec module do not pipe log message data, but are intended for multiple invocations
(though data can still be passed to the executed script/program as command line arguments).
See the list of installer packages that provide the xm_exec module in the Available Modules chapter of the NXLog
User Guide.
120.9.1. Configuration
The xm_exec module accepts only the common module directives.
120.9.2. Functions
The following functions are exported by xm_exec.
string exec(string command, varargs args)
Execute command, passing it the supplied arguments, and wait for it to terminate. The command is executed
in the caller module’s context. Returns a string from stdout. Note that the module calling this function will
block for at most 10 seconds or until the process terminates. Use the exec_async() procedure to avoid this
problem. All output written to standard error by the spawned process is discarded.
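As an illustrative sketch (the command and field name are hypothetical, not from the original), the blocking exec() function can be used inline in an Exec directive to enrich events with a command's output:

```
<Input in>
    Module  im_file
    File    "/var/log/app.log"
    Exec    $ScriptOutput = exec("/bin/echo", "hello");
</Input>
```

Because exec() blocks the calling module, long-running commands are better invoked with exec_async().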
120.9.3. Procedures
The following procedures are exported by xm_exec.
120.9.4. Examples
Example 553. NXLog Acting as a Cron Daemon
This xm_exec module instance will run the command every second without waiting for it to terminate.
nxlog.conf
1 <Extension exec>
2 Module xm_exec
3 <Schedule>
4 Every 1 sec
5 Exec exec_async("/bin/true");
6 </Schedule>
7 </Extension>
Example 554. Sending Email Alerts
If the $raw_event field matches the regular expression, an email will be sent.
nxlog.conf
1 <Extension exec>
2 Module xm_exec
3 </Extension>
4
5 <Input tcp>
6 Module im_tcp
7 Host 0.0.0.0
8 Port 1514
9 <Exec>
10 if $raw_event =~ /alertcondition/
11 {
12 exec_async("/bin/sh", "-c", 'echo "' + $Hostname +
13 '\n\nRawEvent:\n' + $raw_event +
14 '"|/usr/bin/mail -a "Content-Type: text/plain; charset=UTF-8" -s "ALERT" ' +
15 'user@domain.com');
16 }
17 </Exec>
18 </Input>
19
20 <Output file>
21 Module om_file
22 File "/var/log/messages"
23 </Output>
24
25 <Route tcp_to_file>
26 Path tcp => file
27 </Route>
See the list of installer packages that provide the xm_filelist module in the Available Modules chapter of the
NXLog User Guide.
120.10.1. Configuration
The xm_filelist module accepts the following directives in addition to the common module directives. The File
directive is required.
File
The mandatory File directive specifies the name of the file that will be read into memory. This directive may
be specified more than once if multiple files need to be operated on.
CheckInterval
This optional directive specifies the frequency with which the files are checked for modifications, in seconds.
The default value is 5 seconds. File checks are disabled if CheckInterval is set to 0.
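For example, the following is a minimal sketch of an xm_filelist instance that monitors two files, checking for modifications every 10 seconds (the file paths are illustrative):

```
<Extension filelist>
    Module         xm_filelist
    File           "/etc/nxlog.conf"
    File           "/etc/hosts"
    CheckInterval  10
</Extension>
```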
120.10.2. Functions
The following functions are exported by xm_filelist.
NOTE Rotating, renaming, or removing the file written by om_file is also supported with the help of the
om_file reopen() procedure.
See the list of installer packages that provide the xm_fileop module in the Available Modules chapter of the NXLog
User Guide.
120.11.1. Configuration
The xm_fileop module accepts only the common module directives.
120.11.2. Functions
The following functions are exported by xm_fileop.
string dir_temp_get()
Return the name of a directory suitable as a temporary storage location.
datetime file_ctime(string file)
Return the creation or inode-changed time of file. On error undef is returned and an error is logged.
120.11.3. Procedures
The following procedures are exported by xm_fileop.
dir_make(string path);
Create a directory recursively (like mkdir -p). It succeeds if the directory already exists. An error is logged if
the operation fails.
dir_remove(string file);
Remove the directory from the filesystem.
file_chown(string file, string user, string group);
Change the ownership of file. This function is only implemented on POSIX systems where chown() is available
in the underlying operating system. An error is logged if the operation fails.
file_cycle(string file);
Do a cyclic rotation on file. The file will be moved to "file.1". If "file.1" already exists it will be moved to "file.2",
and so on. Wildcards are supported in the file path and filename. The backslash (\) must be escaped if used
as the directory separator with wildcards (for example, C:\\test\\*.log). This procedure will reopen the
LogFile if it is cycled. An error is logged if the operation fails.
file_remove(string file);
Remove file. It is possible to specify a wildcard in the filename (but not in the path). The backslash (\) must be
escaped if used as the directory separator with wildcards (for example, C:\\test\\*.log). This procedure
will reopen the LogFile if it is removed. An error is logged if the operation fails.
file_touch(string file);
Update the last modification time of file or create the file if it does not exist. An error is logged if the operation
fails.
file_truncate(string file);
Truncate file to zero length. If the file does not exist, it will be created. An error is logged if the operation fails.
120.11.4. Examples
Example 555. Rotation of the Internal LogFile
In this example, the internal log file is rotated based on time and size.
nxlog.conf
1 #define LOGFILE C:\Program Files (x86)\nxlog\data\nxlog.log
2 define LOGFILE /var/log/nxlog/nxlog.log
3
4 <Extension fileop>
5 Module xm_fileop
6
7 # Check the log file size every hour and rotate if larger than 1 MB
8 <Schedule>
9 Every 1 hour
10 Exec if (file_size('%LOGFILE%') >= 1M) \
11 file_cycle('%LOGFILE%', 2);
12 </Schedule>
13
14 # Rotate log file every week on Sunday at midnight
15 <Schedule>
16 When @weekly
17 Exec file_cycle('%LOGFILE%', 2);
18 </Schedule>
19 </Extension>
Unlike Syslog format (with Snare Agent, for example), the GELF format contains structured data in JSON so that
the fields are available for analysis. This is especially convenient with sources such as the Windows EventLog
which already generate logs in a structured format.
The xm_gelf module provides the following reader and writer functions.
InputType GELF_TCP
This input reader generates GELF for use with TCP (use with the im_tcp input module).
InputType GELF_UDP
This input reader generates GELF for use with UDP (use with the im_udp input module).
InputType GELF
This type is equivalent to the GELF_UDP reader.
OutputType GELF_TCP
This output writer generates GELF for use with TCP (use with the om_tcp output module).
OutputType GELF_UDP
This output writer generates GELF for use with UDP (use with the om_udp output module).
OutputType GELF
This type is equivalent to the GELF_UDP writer.
Configuring NXLog to process GELF input or output requires loading the xm_gelf extension module and then
setting the corresponding InputType or OutputType in the Input or Output module instance. See the examples
below.
The GELF output generated by this module includes all fields, except for the $raw_event field and any field
having a leading dot (.) or underscore (_).
See the list of installer packages that provide the xm_gelf module in the Available Modules chapter of the NXLog
User Guide.
120.12.1. Configuration
The xm_gelf module accepts the following directives in addition to the common module directives.
IncludeHiddenFields
This boolean directive specifies that the GELF output should include fields having a leading dot (.) or
underscore (_) in their names. The default is TRUE. If IncludeHiddenFields is set to TRUE, then the generated
GELF JSON will contain these otherwise excluded fields. In this case field name _fld1 will become __fld1 and
.fld2 will become _.fld2 in GELF JSON.
ShortMessageLength
This optional directive can be used to specify the length of the short_message field for the GELF output writers.
This defaults to 64 if the directive is not explicitly specified. If the field short_message or ShortMessage is
present, it will not be truncated.
UseNullDelimiter
If this optional boolean directive is TRUE, the GELF_TCP output writer will use the NUL delimiter. If this
directive is FALSE, it will use the newline delimiter. The default is TRUE.
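A sketch combining these directives is shown below (the values are illustrative, chosen to differ from the defaults, not recommendations):

```
<Extension gelf>
    Module              xm_gelf
    ShortMessageLength  128
    UseNullDelimiter    FALSE
</Extension>
```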
120.12.2. Fields
The following fields are used by xm_gelf.
In addition to the fields listed below, if the GELF input contains custom user fields (those prefixed with the
underscore (_) character), those fields will be available without the prefix. For example, the GELF record
{"_foo": "bar"} will generate the field $foo containing the value "bar".
$SourceLine (type: integer)
The line in a file that caused the event. This is called line in the GELF specification.
120.12.3. Examples
Example 556. Parsing GELF Logs Collected via TCP
This configuration uses the im_tcp module to collect logs over TCP port 12201 and the xm_gelf module to
parse them.
nxlog.conf
1 <Extension gelf>
2 Module xm_gelf
3 </Extension>
4
5 <Input tcpin>
6 Module im_tcp
7 Host 0.0.0.0
8 Port 12201
9 InputType GELF_TCP
10 </Input>
Example 557. Sending Windows EventLog to Graylog2 in GELF
The following configuration reads the Windows EventLog and sends it to a Graylog2 server in GELF format.
nxlog.conf
1 <Extension gelf>
2 Module xm_gelf
3 </Extension>
4
5 <Input eventlog>
6 # Use 'im_mseventlog' for Windows XP, 2000 and 2003
7 Module im_msvistalog
8 # Uncomment the following to collect specific event logs only
9 # but make sure not to leave any `#` as only <!-- --> style comments
10 # are supported inside the XML.
11 #Query <QueryList>\
12 # <Query Id="0">\
13 # <Select Path="Application">*</Select>\
14 # <Select Path="System">*</Select>\
15 # <Select Path="Security">*</Select>\
16 # </Query>\
17 # </QueryList>
18 </Input>
19
20 <Output udp>
21 Module om_udp
22 Host 192.168.1.1
23 Port 12201
24 OutputType GELF_UDP
25 </Output>
26
27 <Route eventlog_to_udp>
28 Path eventlog => udp
29 </Route>
Example 558. Forwarding Custom Log Files to Graylog2 in GELF
In this example, custom application logs are collected and sent out in GELF, with custom fields set to make
the data more useful for the receiver.
nxlog.conf (truncated)
1 <Extension gelf>
2 Module xm_gelf
3 </Extension>
4
5 <Input file>
6 Module im_file
7 File "/var/log/app*.log"
8
9 <Exec>
10 # Set the $EventTime field usually found in the logs by
11 # extracting it with a regexp. If this is not set, the current
12 # system time will be used which might be a little off.
13 if $raw_event =~ /(\d\d\d\d\-\d\d-\d\d \d\d:\d\d:\d\d)/
14 $EventTime = parsedate($1);
15
16 # Explicitly set the Hostname. This defaults to the system's
17 # hostname if unset.
18 $Hostname = 'myhost';
19
20 # Now set the severity level to something custom. This defaults
21 # to 'INFO' if unset. We can use the following numeric values
22 # here which are the standard Syslog values: ALERT: 1, CRITICAL:
23 # 2, ERROR: 3, WARNING: 4, NOTICE: 5, INFO: 6, DEBUG: 7
24 if $raw_event =~ /ERROR/ $SyslogSeverityValue = 3;
25 else $SyslogSeverityValue = 6;
26
27 # Set a field to contain the name of the source file
28 $FileName = file_name();
29 [...]
Example 559. Parsing a CSV File and Sending it to Graylog2 in GELF
With this configuration, NXLog will read a CSV file containing three fields and forward the data in GELF so
that the fields will be available on the server.
nxlog.conf
1 <Extension gelf>
2 Module xm_gelf
3 </Extension>
4
5 <Extension csv>
6 Module xm_csv
7 Fields $name, $number, $location
8 FieldTypes string, integer, string
9 Delimiter ,
10 </Extension>
11
12 <Input file>
13 Module im_file
14 File "/var/log/app/csv.log"
15 Exec csv->parse_csv();
16 </Input>
17
18 <Output udp>
19 Module om_udp
20 Host 192.168.1.1
21 Port 12201
22 OutputType GELF_UDP
23 </Output>
24
25 <Route csv_to_gelf>
26 Path file => udp
27 </Route>
120.13. Go (xm_go)
This module provides support for processing NXLog log data with methods written in the Go language. The file
specified by the ImportLib directive should contain one or more methods which can be called from the Exec
directive of any module. See also the im_go and om_go modules.
For the system requirements, installation details and environmental configuration requirements
NOTE of Go, see the Getting Started section in the Go documentation. The Go environment is only
needed for compiling the Go file. NXLog does not need the Go environment for its operation.
The Go script imports the NXLog module, and will have access to the following classes and functions.
class nxModule
This class is instantiated by NXLog and can be accessed via the nxLogdata.module attribute. This can be used
to set or access variables associated with the module (see the example below).
nxmodule.NxLogdataNew(*nxLogdata)
This function creates a new log data record.
nxmodule.Post(ld *nxLogdata)
This function submits the log data struct for further processing.
nxmodule.AddEvent()
This function adds a READ event to NXLog so that it can be called later.
nxmodule.AddEventDelayed(mSec C.int)
This function adds a delayed READ event to NXLog so that it can be called later.
class nxLogdata
This class represents an event. It is instantiated by NXLog and passed to the method specified by the go_call()
function.
nxlogdata.Delete(field string)
This function removes the field from logdata.
nxlogdata.Fields() []string
This function returns an array of fields names in the logdata record.
module
This attribute is set to the module object associated with the event.
See the list of installer packages that provide the xm_go module in the Available Modules chapter of the NXLog
User Guide.
120.13.3. Configuration
The xm_go module accepts the following directives in addition to the common module directives.
ImportLib
This mandatory directive specifies the file containing the Go code compiled into a shared library .so file.
Exec
This mandatory directive invokes go_call(function), where the named Go function must accept an
nxLogdata object as its only argument. Any number of go_call(function) invocations may be defined, as
displayed below.
In this Go file template, a simple function is called via the go_call("process"); argument using the Exec
directive.
In this Go file template, a multi-argument function is called via the go_call("process", "arg1",
"arg2", …, "argN") argument using the Exec directive.
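As a hedged sketch, such a multi-argument call might appear in a configuration as follows (the function and argument names are illustrative, not from the original):

```
<Output out>
    Module  om_file
    File    "file/output.txt"
    Exec    go_call("process", "mask", "ipv4");
</Output>
```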
120.13.5. Examples
Example 560. Using xm_go for Log Processing
This configuration calls the process function, defined in the compiled Go shared object file, via go_call()
to mask the IPv4 addresses in the input file, so that all of them appear as x.x.x.x in the output file.
nxlog.conf
1 <Extension ext>
2 Module xm_go
3 ImportLib "file/process.so"
4 </Extension>
5
6 <Input in1>
7 Module im_file
8 File "file/input.txt"
9 </Input>
10
11 <Output out>
12 Module om_file
13 File "file/output.txt"
14 Exec go_call("process");
15 </Output>
Input Sample
Sep 30 14:20:24 mephisto vmnet-dhcpd: Configured subnet: 192.168.169.0↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Setting vmnet-dhcp IP address: 192.168.169.254↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Recving on VNet/vmnet1/192.168.169.0↵
Sep 30 14:20:24 mephisto kernel: /dev/vmnet: open called by PID 3243 (vmnet-dhcpd)↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Sending on VNet/vmnet1/192.168.169.0↵
Output Sample
Sep 30 14:20:24 mephisto vmnet-dhcpd: Configured subnet: x.x.x.x↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Setting vmnet-dhcp IP address: x.x.x.x↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Recving on VNet/vmnet1/x.x.x.x↵
Sep 30 14:20:24 mephisto kernel: /dev/vmnet: open called by PID 3243 (vmnet-dhcpd)↵
Sep 30 14:20:24 mephisto vmnet-dhcpd: Sending on VNet/vmnet1/x.x.x.x↵
See the list of installer packages that provide the xm_grok module in the Available Modules chapter of the NXLog
User Guide.
120.14.1. Configuration
The xm_grok module accepts the following directives in addition to the common module directives.
Pattern
This mandatory directive specifies a directory or file containing Grok patterns. Wildcards may be used to
specify multiple directories or files. This directive may be used more than once.
120.14.2. Functions
The following functions are exported by xm_grok.
120.14.3. Procedures
The following procedures are exported by xm_grok.
match_grok(string pattern);
Attempt to match and parse the $raw_event field of the current event with the specified pattern.
120.14.4. Examples
Example 561. Using Grok Patterns for Parsing
This configuration reads Syslog events from file and parses them with the parse_syslog() procedure (this
sets the $Message field). Then the match_grok() function is used to attempt a series of matches on the
$Message field until one is successful. If no patterns match, an internal message is logged.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension grok>
6 Module xm_grok
7 Pattern modules/extension/grok/patterns2.txt
8 </Extension>
9
10 <Input in>
11 Module im_file
12 File 'test2.log'
13 <Exec>
14 parse_syslog();
15 if match_grok($Message, "%{SSH_AUTHFAIL_WRONGUSER}") {}
16 else if match_grok($Message, "%{SSH_AUTHFAIL_WRONGCREDS}") {}
17 else if match_grok($Message, "%{SSH_AUTH_SUCCESS}") {}
18 else if match_grok($Message, "%{SSH_DISCONNECT}") {}
19 else
20 {
21 log_info('Event did not match any pattern');
22 }
23 </Exec>
24 </Input>
patterns2.txt
USERNAME [a-zA-Z0-9_-]+
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
WORD \b\w+\b
GREEDYDATA .*
IP (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-
9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-
9]{1,2}))(?![0-9])
120.15. Java (xm_java)
This module provides support for processing NXLog log data with methods written in Java. These methods
must be declared with the public and static modifiers in the Java code to be accessible from NXLog, and the
first parameter must be of NXLog.Logdata type. See also the im_java and om_java modules.
NOTE For the system requirements, installation details, and environmental configuration requirements
of Java, see the Installing Java section in the Java documentation.
The NXLog Java class provides access to the NXLog functionality in the Java code. This class contains nested
classes Logdata and Module with log processing methods, as well as methods for sending messages to the
internal logger.
class NXLog.Logdata
This Java class provides the methods to interact with an NXLog event record object:
getField(name)
This method returns the value of the field name in the event.
setField(name, value)
This method sets the value of field name to value.
deleteField(name)
This method removes the field name from the event record.
getFieldnames()
This method returns an array with the names of all the fields currently in the event record.
getFieldtype(name)
This method returns the type of the field name.
class NXLog.Module
The methods below allow setting and accessing variables associated with the module instance.
saveCtx(key,value)
This method saves user data in the module’s data storage under the specified key.
loadCtx(key)
This method retrieves the data stored in the module’s data storage under the specified key.
Below is the list of methods for sending messages to the internal logger.
NXLog.logInfo(msg)
This method sends the message msg to the internal logger at INFO log level. It does the same as the core
log_info() procedure.
NXLog.logDebug(msg)
This method sends the message msg to the internal logger at DEBUG log level. It does the same as the core
log_debug() procedure.
NXLog.logWarning(msg)
This method sends the message msg to the internal logger at WARNING log level. It does the same as the
core log_warning() procedure.
NXLog.logError(msg)
This method sends the message msg to the internal logger at ERROR log level. It does the same as the core
log_error() procedure.
809
120.15.1. Configuration
The NXLog process maintains a single JVM instance shared by all running xm_java, im_java, and om_java module instances. This means that all Java classes loaded by the ClassPath directive are available to all running instances.
The xm_java module accepts the following directives in addition to the common module directives.
ClassPath
This mandatory directive defines the path to the .class files or a .jar file. This directive should be defined at
least once within a module block.
VMOption
This optional directive defines a single Java Virtual Machine (JVM) option.
VMOptions
This optional block directive serves the same purpose as the VMOption directive, but allows specifying
multiple Java Virtual Machine (JVM) options, one per line.
JavaHome
This optional directive defines the path to the Java Runtime Environment (JRE). The path is used to search for
the libjvm shared library. If this directive is not defined, the Java home directory is set to the build-time
value. Only one JRE can be defined for all NXLog Java module instances; defining multiple JREs causes an
error.
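To illustrate, the directives above might be combined as follows. This is a sketch only: the paths and JVM options shown are placeholders, not defaults.

```
<Extension java>
    Module     xm_java
    # Path to the compiled classes (placeholder path)
    ClassPath  /opt/nxlog/java/Processor.jar
    # Multiple JVM options, one per line
    <VMOptions>
        -Xms128m
        -Xmx512m
    </VMOptions>
    # JRE location used to locate the libjvm shared library (placeholder path)
    JavaHome   /usr/lib/jvm/default-java
</Extension>
```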
120.15.2. Procedures
The following procedures are exported by xm_java.
Below is an example of module usage. The process1 and process2 methods of the Extension Java class
split log data into key-value pairs and add an additional field to each entry. The results are then converted
to JSON format.
810
nxlog.conf
1 <Extension ext>
2 Module xm_java
3 # Path to the compiled Java class
4 ClassPath /tmp/Extension.jar
5 </Extension>
6
7 <Output fileout>
8 Module om_file
9 File '/tmp/output'
10 # Calling the first method to split data into key-value pairs
11 Exec java_call("Extension.process1");
12 # Calling the second method and passing the additional parameter
13 Exec ext->call("Extension.process2", "test");
14 Exec to_json();
15 </Output>
811
Extension.java (abridged)
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// The original listing is abridged here; the class and method declarations
// below are reconstructed for readability. Per the xm_java convention, the
// method is public static and takes an NXLog.Logdata first parameter.
public class Extension {
    public static void process(NXLog.Logdata ld) {
        String type = (String) ld.getField("type");
        if (type.equals("CWD")) {
            try {
                NXLog.logInfo(String.format("type: %s", type));
                Files.write(
                    Paths.get("tmp/processed"),
                    ((String) ld.getField("raw_event") + "\n").getBytes(),
                    StandardOpenOption.APPEND,
                    StandardOpenOption.CREATE
                );
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
812
Input sample
type=CWD msg=audit(1489999368.711:35724): cwd="/root/nxlog"↵
type=PATH msg=audit(1489999368.711:35724): item=0 name="/root/test" inode=528869 dev=08:01
mode=040755 ouid=0 ogid=0 rdev=00:00↵
type=SYSCALL msg=audit(1489999368.711:35725): arch=c000003e syscall=2 success=yes exit=3
a0=12dcc40 a1=90800 a2=0 a3=0 items=1 ppid=15391 pid=12309 auid=0 uid=0 gid=0 euid=0 suid=0
fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts4 ses=583 comm="ls" exe="/bin/ls" key=(null)↵
Output Sample
{
"type":"CWD",
"msg":"audit(1489999368.711:35724):",
"cwd":"\"/root/nxlog\"",
"Stage":"test"
}
{
"type":"PATH",
"msg":"audit(1489999368.711:35724):",
"item":"0",
"name":"\"/root/test\"",
"inode":"528869",
"dev":"08:01",
"mode":"040755",
"ouid":"0",
"ogid":"0",
"rdev":"00:00",
"Stage":"test"
}
{
"type":"SYSCALL",
"msg":"audit(1489999368.711:35725):",
"arch":"c000003e",
"syscall":"2",
"success":"yes",
"exit":"3",
"a0":"12dcc40",
"a1":"90800",
"a2":"0",
"a3":"0",
"items":"1",
"ppid":"15391",
"pid":"12309",
"auid":"0",
"uid":"0",
"gid":"0",
"euid":"0",
"suid":"0",
"fsuid":"0",
"egid":"0",
"sgid":"0",
"fsgid":"0",
"tty":"pts4",
"ses":"583",
"comm":"\"ls\"",
"exe":"\"/bin/ls\"",
"key":"(null)",
"Stage":"test"
}
813
120.16. JSON (xm_json)
This module provides functions and procedures for processing data formatted as JSON. JSON can be generated
from log data, or JSON can be parsed into fields. Unfortunately, the JSON specification does not define a type for
datetime values, so these are represented as JSON strings. The JSON parser in xm_json can automatically detect
datetime values, so it is not necessary to use parsedate() explicitly.
See the list of installer packages that provide the xm_json module in the Available Modules chapter of the NXLog
User Guide.
120.16.1. Configuration
The xm_json module accepts the following directives in addition to the common module directives.
DateFormat
This optional directive can be used to set the format of the datetime strings in the generated JSON. This
directive is similar to the global DateFormat, but is independent of it: this directive is defined separately and
has its own default. If this directive is not specified, the default is YYYY-MM-DDThh:mm:ss.sTZ.
DetectNestedJSON
This optional directive can be used to disable the autodetection of nested JSON strings when calling the
to_json() function or the to_json() procedure. For example, consider a field $key which contains the string
value of {"subkey":42}. If DetectNestedJSON is set to FALSE, to_json() will produce
{"key":"{\"subkey\":42}"}. If DetectNestedJSON is set to TRUE (the default), the result is
{"key":{"subkey":42}}—a valid nested JSON record.
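As a sketch, disabling the autodetection looks like this (the instance name is arbitrary):

```
<Extension json>
    Module            xm_json
    # Treat string values that happen to contain JSON as plain strings
    DetectNestedJSON  FALSE
</Extension>
```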
Flatten
This optional boolean directive specifies that the parse_json() procedure should flatten nested JSON, creating
field names with dot notation. The default is FALSE. If Flatten is set to TRUE, the following JSON will populate
the fields $event.time and $event.severity:
{"event":{"time":"2015-01-01T00:00:00.000Z","severity":"ERROR"}}
ForceUTF8
This optional boolean directive specifies whether the generated JSON should be valid UTF-8. The JSON
specification requires JSON records to be UTF-8 encoded, and some tools fail to parse JSON if it is not valid
UTF-8. If ForceUTF8 is set to TRUE, the generated JSON will be validated and any invalid character will be
replaced with a question mark (?). The default is FALSE.
IncludeHiddenFields
This boolean directive specifies that the to_json() function or the to_json() procedure should include fields
having a leading dot (.) or underscore (_) in their names. The default is TRUE. If IncludeHiddenFields is set to
TRUE, then generated JSON will contain these otherwise excluded fields.
ParseDate
If this boolean directive is set to TRUE, xm_json will attempt to parse as a timestamp any string that appears
to begin with a 4-digit year (as a regular expression, ^[12][0-9]{3}-). If this directive is set to FALSE, xm_json
will not attempt to parse these strings. The default is TRUE.
PrettyPrint
If set to TRUE, this optional boolean directive specifies that the generated JSON should be pretty-printed,
where each key-value is printed on a new indented line. Note that this adds line-breaks to the JSON records,
which can cause parser errors in some tools that expect single-line JSON. If this directive is not specified, the
default is FALSE.
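For example, a hypothetical instance that emits pretty-printed JSON with a custom timestamp format could be declared as follows; the format string is illustrative:

```
<Extension json>
    Module       xm_json
    # Emit datetime fields as, e.g., 2015-01-01 00:00:00
    DateFormat   YYYY-MM-DD hh:mm:ss
    # One indented key-value per line; avoid for tools expecting single-line JSON
    PrettyPrint  TRUE
</Extension>
```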
814
UnFlatten
This optional boolean directive specifies that the to_json() procedure should generate nested JSON when field
names exist containing the dot (.). For example, if UnFlatten is set to TRUE, the two fields $event.time and
$event.severity will be converted to JSON as follows:
{"event":{"time":"2015-01-01T00:00:00.000Z","severity":"ERROR"}}
When UnFlatten is set to FALSE (the default if not specified), the following JSON would result:
{"event.time":"2015-01-01T00:00:00.000Z","event.severity":"ERROR"}
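A minimal sketch combining Flatten on the parsing side with UnFlatten on the generating side, assuming a file input (the path is a placeholder):

```
<Extension json>
    Module     xm_json
    # Parse nested JSON into dotted field names ($event.time, $event.severity)
    Flatten    TRUE
    # Re-nest dotted field names when generating JSON
    UnFlatten  TRUE
</Extension>

<Input filein>
    Module  im_file
    File    '/var/log/input.json'
    Exec    json->parse_json(); json->to_json();
</Input>
```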
120.16.2. Functions
The following functions are exported by xm_json.
string to_json()
Convert the fields to JSON and return this as a string value. The $raw_event field and any field having a
leading dot (.) or underscore (_) will be automatically excluded unless the IncludeHiddenFields directive is set to
TRUE.
120.16.3. Procedures
The following procedures are exported by xm_json.
parse_json();
Parse the $raw_event field as JSON input.
parse_json(string source);
Parse the given string as JSON format.
to_json();
Convert the fields to JSON and put this into the $raw_event field. The $raw_event field and any field having a
leading dot (.) or underscore (_) will be automatically excluded unless the IncludeHiddenFields directive is set to
TRUE.
120.16.4. Examples
815
Example 563. Syslog to JSON Format Conversion
The following configuration accepts Syslog (both BSD and IETF) via TCP and converts it to JSON.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension json>
6 Module xm_json
7 </Extension>
8
9 <Input tcp>
10 Module im_tcp
11 Port 1514
12 Host 0.0.0.0
13 Exec parse_syslog(); to_json();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/json.txt"
19 </Output>
20
21 <Route tcp_to_file>
22 Path tcp => file
23 </Route>
Input Sample
<30>Sep 30 15:45:43 host44.localdomain.hu acpid: 1 client rule loaded↵
Output Sample
{
"MessageSourceAddress":"127.0.0.1",
"EventReceivedTime":"2011-03-08 14:22:41",
"SyslogFacilityValue":1,
"SyslogFacility":"DAEMON",
"SyslogSeverityValue":5,
"SyslogSeverity":"INFO",
"SeverityValue":2,
"Severity":"INFO",
"Hostname":"host44.localdomain.hu",
"EventTime":"2011-09-30 14:45:43",
"SourceName":"acpid",
"Message":"1 client rule loaded "
}
816
Example 564. Converting Windows EventLog to Syslog-Encapsulated JSON
The following configuration reads the Windows EventLog and converts it to the BSD Syslog format, with the
message part containing the fields in JSON.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension json>
6 Module xm_json
7 </Extension>
8
9 <Input eventlog>
10 Module im_msvistalog
11 Exec $Message = to_json(); to_syslog_bsd();
12 </Input>
13
14 <Output tcp>
15 Module om_tcp
16 Host 192.168.1.1
17 Port 1514
18 </Output>
19
20 <Route eventlog_json_tcp>
21 Path eventlog => tcp
22 </Route>
Output Sample
<14>Mar 8 14:40:11 WIN-OUNNPISDHIG Service_Control_Manager: {"EventTime":"2012-03-08
14:40:11","EventTimeWritten":"2012-03-08 14:40:11","Hostname":"WIN-
OUNNPISDHIG","EventType":"INFO","SeverityValue":2,"Severity":"INFO","SourceName":"Service
Control
Manager","FileName":"System","EventID":7036,"CategoryNumber":0,"RecordNumber":6788,"Message":"T
he nxlog service entered the running state. ","EventReceivedTime":"2012-03-08 14:40:12"}↵
120.17. Key-Value Pairs (xm_kvp)
It is quite common to have a different set of keys in each log line when accepting key-value formatted input
messages. Extracting values from such logs using regular expressions can be quite cumbersome. The xm_kvp
extension module automates this process.
Log messages containing key-value pairs typically look like one of the following:
Keys are usually separated from the value using an equal sign (=) or a colon (:); and the key-value pairs are
delimited with a comma (,), a semicolon (;), or a space. In addition, values and keys may be quoted and may
contain escaping. The module will try to guess the format, or the format can be explicitly specified using the
configuration directives below.
NOTE
It is possible to use more than one xm_kvp module instance with different options in order to support
different KVP formats at the same time. For this reason, functions and procedures exported by the module
are public and must be referenced by the module instance name.
See the list of installer packages that provide the xm_kvp module in the Available Modules chapter of the NXLog
User Guide.
120.17.1. Configuration
The xm_kvp module accepts the following directives in addition to the common module directives.
DetectNumericValues
If this optional boolean directive is set to TRUE, the parse_kvp() procedure will try to parse numeric values as
integers first. The default is TRUE (numeric values will be parsed as integers and unquoted in the output).
Note that floating-point numbers will not be handled.
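If numeric-looking values should remain quoted strings in the output, the detection can be turned off; a sketch:

```
<Extension kvp>
    Module               xm_kvp
    # Keep values such as 42 as the string "42" rather than the integer 42
    DetectNumericValues  FALSE
</Extension>
```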
EscapeChar
This optional directive takes a single character (see below) as argument. It specifies the character used for
escaping special characters. The escape character is used to prefix the following characters: the EscapeChar
itself, the KeyQuoteChar, and the ValueQuoteChar. If EscapeControl is TRUE, the newline (\n), carriage return
(\r), tab (\t), and backspace (\b) control characters are also escaped. The default escape character is the
backslash (\).
EscapeControl
If this optional boolean directive is set to TRUE, control characters are also escaped. See the EscapeChar
directive for details. The default is TRUE (control characters are escaped). Note that this is necessary in order
to support single-line KVP field lists containing line-breaks.
IncludeHiddenFields
This boolean directive specifies that the to_kvp() function or the to_kvp() procedure should include fields
having a leading dot (.) or underscore (_) in their names. The default is TRUE. If IncludeHiddenFields is set to
TRUE, then generated text will contain these otherwise excluded fields.
KeyQuoteChar
This optional directive takes a single character (see below) as argument. It specifies the quote character for
enclosing key names. If this directive is not specified, the module will accept single-quoted keys, double-
quoted keys, and unquoted keys.
KVDelimiter
This optional directive takes a single character (see below) as argument. It specifies the delimiter character
used to separate the key from the value. If this directive is not set and the parse_kvp() procedure is used, the
module will try to guess the delimiter from the following: the colon (:) or the equal-sign (=).
KVPDelimiter
This optional directive takes a single character (see below) as argument. It specifies the delimiter character
used to separate the key-value pairs. If this directive is not set and the parse_kvp() procedure is used, the
module will try to guess the delimiter from the following: the comma (,), the semicolon (;), or the space.
QuoteMethod
This directive can be used to specify the quote method used for the values by to_kvp().
All
The values will be always quoted. This is the default.
818
Delimiter
The value will only be enclosed in quotes if it contains the delimiter character.
None
The values will not be quoted.
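As an illustration, an instance that only quotes values containing the delimiter might look like this (the instance name and delimiters are arbitrary):

```
<Extension kvp_out>
    Module        xm_kvp
    KVPDelimiter  ;
    KVDelimiter   =
    # Quote a value only when it contains the ; delimiter
    QuoteMethod   Delimiter
</Extension>
```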
ValueQuoteChar
This optional directive takes a single character (see below) as argument. It specifies the quote character for
enclosing key values. If this directive is not specified, the module will accept single-quoted values, double-
quoted values, and unquoted values. Normally, quotation is used when the value contains a space or the
KVDelimiter character.
Delimiter ;
Control characters
The following non-printable characters can be specified with escape sequences:
\a
audible alert (bell)
\b
backspace
\t
horizontal tab
\n
newline
\v
vertical tab
\f
formfeed
\r
carriage return
Delimiter \t
819
Delimiter ';'
Delimiter '\'
Delimiter "\"
Delimiter 0x20
120.17.2. Functions
The following functions are exported by xm_kvp.
string to_kvp()
Convert the internal fields to a single key-value pair formatted string.
120.17.3. Procedures
The following procedures are exported by xm_kvp.
parse_kvp();
Parse the $raw_event field as key-value pairs and populate the internal fields using the key names.
parse_kvp(string source);
Parse the given string as key-value pairs and populate the internal fields using the key names.
reset_kvp();
Reset the KVP parser so that the autodetected KeyQuoteChar, ValueQuoteChar, KVDelimiter, and
KVPDelimiter characters can be detected again.
to_kvp();
Format the internal fields as key-value pairs and put this into the $raw_event field.
Note that the IncludeHiddenFields directive affects which fields are included in the output.
820
120.17.4. Examples
The following examples illustrate various scenarios for parsing KVPs, whether embedded, encapsulated (in
Syslog, for example), or alone. In each case, the logs are converted from KVP input files to JSON output files,
though obviously there are many other possibilities.
The following two lines of input are in a simple KVP format where each line consists of various keys with
values assigned to them.
Input Sample
Name=John, Age=42, Weight=84, Height=142
Name=Mike, Weight=64, Age=24, Pet=dog, Height=172
This input can be parsed with the following configuration. The parsed fields can be used in NXLog
expressions: a new field named $Overweight is added and set to TRUE if the condition is met. Finally, a
few automatically added fields are removed, and the log is then converted to JSON.
nxlog.conf (truncated)
1 <Extension kvp>
2 Module xm_kvp
3 KVPDelimiter ,
4 KVDelimiter =
5 EscapeChar \\
6 </Extension>
7
8 <Extension json>
9 Module xm_json
10 </Extension>
11
12 <Input filein>
13 Module im_file
14 File "modules/extension/kvp/xm_kvp5.in"
15 <Exec>
16 if $raw_event =~ /^#/ drop();
17 else
18 {
19 kvp->parse_kvp();
20 delete($EventReceivedTime);
21 delete($SourceModuleName);
22 delete($SourceModuleType);
23 if ( integer($Weight) > integer($Height) - 100 ) $Overweight = TRUE;
24 to_json();
25 }
26 </Exec>
27 </Input>
28 [...]
Output Sample
{"Name":"John","Age":42,"Weight":84,"Height":142,"Overweight":true}
{"Name":"Mike","Weight":64,"Age":24,"Pet":"dog","Height":172}
821
Example 566. Parsing KVPs in Cisco ACS Syslog
Input Sample
<38>2010-10-12 21:01:29 10.0.1.1 CisACS_02_FailedAuth 1k1fg93nk 1 0 Message-Type=Authen
failed,User-Name=John,NAS-IP-Address=10.0.1.2,AAA Server=acs01↵
<38>2010-10-12 21:01:31 10.0.1.1 CisACS_02_FailedAuth 2k1fg63nk 1 0 Message-Type=Authen
failed,User-Name=Foo,NAS-IP-Address=10.0.1.2,AAA Server=acs01↵
These logs are in Syslog format with a set of values present in each record and an additional set of KVPs.
The following configuration can be used to process this and convert it to JSON.
nxlog.conf (truncated)
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Extension syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Extension kvp>
10 Module xm_kvp
11 KVDelimiter =
12 KVPDelimiter ,
13 </Extension>
14
15 <Input cisco>
16 Module im_file
17 File "modules/extension/kvp/cisco_acs.in"
18 <Exec>
19 parse_syslog_bsd();
20 if ( $Message =~ /^CisACS_(\d\d)_(\S+) (\S+) (\d+) (\d+) (.*)$/ )
21 {
22 $ACSCategoryNumber = $1;
23 $ACSCategoryName = $2;
24 $ACSMessageId = $3;
25 $ACSTotalSegments = $4;
26 $ACSSegmentNumber = $5;
27 $Message = $6;
28 kvp->parse_kvp($Message);
29 [...]
Output Sample
{"SourceModuleName":"cisco","SourceModuleType":"im_file","SyslogFacilityValue":4,"SyslogFacilit
y":"AUTH","SyslogSeverityValue":6,"SyslogSeverity":"INFO","SeverityValue":2,"Severity":"INFO","
Hostname":"10.0.1.1","EventTime":"2010-10-12 21:01:29","Message":"Message-Type=Authen
failed,User-Name=John,NAS-IP-Address=10.0.1.2,AAA Server=acs01","ACSCategoryNumber":"02"
,"ACSCategoryName":"FailedAuth","ACSMessageId":"1k1fg93nk","ACSTotalSegments":"1","ACSSegmentNu
mber":"0","Message-Type":"Authen failed","User-Name":"John","NAS-IP-Address":"10.0.1.2","AAA
Server":"acs01"}
{"SourceModuleName":"cisco","SourceModuleType":"im_file","SyslogFacilityValue":4,"SyslogFacilit
y":"AUTH","SyslogSeverityValue":6,"SyslogSeverity":"INFO","SeverityValue":2,"Severity":"INFO","
Hostname":"10.0.1.1","EventTime":"2010-10-12 21:01:31","Message":"Message-Type=Authen
failed,User-Name=Foo,NAS-IP-Address=10.0.1.2,AAA Server=acs01","ACSCategoryNumber":"02"
,"ACSCategoryName":"FailedAuth","ACSMessageId":"2k1fg63nk","ACSTotalSegments":"1","ACSSegmentNu
mber":"0","Message-Type":"Authen failed","User-Name":"Foo","NAS-IP-Address":"10.0.1.2","AAA
Server":"acs01"}
822
Example 567. Parsing KVPs in Sidewinder Logs
Input Sample
date="May 5 14:34:40 2009
MDT",fac=f_mail_filter,area=a_kmvfilter,type=t_mimevirus_reject,pri=p_major,pid=10174,ruid=0,eu
id=0,pgid=10174,logid=0,cmd=kmvfilter,domain=MMF1,edomain=MMF1,message_id=(null),srcip=66.74.18
4.9,mail_sender=<habuzeid6@…>,virus_name=W32/Netsky.c@MM!zip,reason="Message scan detected a
Virus in msg Unknown, message being Discarded, and not quarantined"↵
This can be parsed and converted to JSON with the following configuration.
nxlog.conf
1 <Extension kvp>
2 Module xm_kvp
3 KVPDelimiter ,
4 KVDelimiter =
5 EscapeChar \\
6 ValueQuoteChar "
7 </Extension>
8
9 <Extension json>
10 Module xm_json
11 </Extension>
12
13 <Input sidewinder>
14 Module im_file
15 File "modules/extension/kvp/sidewinder.in"
16 Exec kvp->parse_kvp(); delete($EventReceivedTime); to_json();
17 </Input>
18
19 <Output file>
20 Module om_file
21 File 'tmp/output'
22 </Output>
23
24 <Route sidewinder_to_file>
25 Path sidewinder => file
26 </Route>
Output Sample
{"SourceModuleName":"sidewinder","SourceModuleType":"im_file","date":"May 5 14:34:40 2009 MDT"
,"fac":"f_mail_filter","area":"a_kmvfilter","type":"t_mimevirus_reject","pri":"p_major","pid":1
0174,"ruid":0,"euid":0,"pgid":10174,"logid":0,"cmd":"kmvfilter","domain":"MMF1","edomain":"MMF1
","message_id":"(null)","srcip":"66.74.184.9","mail_sender":"<habuzeid6@…>","virus_name":"W32/N
etsky.c@MM!zip","reason":"Message scan detected a Virus in msg Unknown, message being
Discarded, and not quarantined"}
URLs in HTTP requests frequently contain URL parameters which are a special kind of key-value pairs
delimited by the ampersand (&). Here is an example of two HTTP requests logged by the Apache web server
in the Combined Log Format.
823
Input Sample
192.168.1.1 - foo [11/Jun/2013:15:44:34 +0200] "GET /do?action=view&obj_id=2 HTTP/1.1" 200 1514
"https://localhost" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Firefox/17.0"↵
192.168.1.1 - - [11/Jun/2013:15:44:44 +0200] "GET /do?action=delete&obj_id=42 HTTP/1.1" 401 788
"https://localhost" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Firefox/17.0"↵
The following configuration file parses the access log and extracts all the fields. The request parameters are
extracted into the $HTTPParams field using a regular expression, and then this field is further parsed using
the KVP parser. At the end of the processing all fields are converted to KVP format using the to_kvp()
procedure of the kvp2 instance.
nxlog.conf (truncated)
1 <Extension kvp>
2 Module xm_kvp
3 KVPDelimiter &
4 KVDelimiter =
5 </Extension>
6
7 <Extension kvp2>
8 Module xm_kvp
9 KVPDelimiter ;
10 KVDelimiter =
11 #QuoteMethod None
12 </Extension>
13
14 <Input apache>
15 Module im_file
16 File "modules/extension/kvp/apache_url.in"
17 <Exec>
18 if $raw_event =~ /(?x)^(\S+)\ (\S+)\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
19 \ HTTP.\d\.\d\"\ (\d+)\ (\d+)\ \"([^\"]+)\"\ \"([^\"]+)\"/
20 {
21 $Hostname = $1;
22 if $3 != '-' $AccountName = $3;
23 $EventTime = parsedate($4);
24 $HTTPMethod = $5;
25 $HTTPURL = $6;
26 $HTTPResponseStatus = $7;
27 $FileSize = $8;
28 $HTTPReferer = $9;
29 [...]
The two request parameters action and obj_id then appear at the end of the KVP formatted lines.
Output Sample
SourceModuleName=apache;SourceModuleType=im_file;Hostname=192.168.1.1;AccountName=foo;EventTime
=2013-06-11
15:44:34;HTTPMethod=GET;HTTPURL=/do?action=view&obj_id=2;HTTPResponseStatus=200;FileSize=1514;H
TTPReferer=https://localhost;HTTPUserAgent='Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0
Firefox/17.0';HTTPParams=action=view&obj_id=2;action=view;obj_id=2;↵
SourceModuleName=apache;SourceModuleType=im_file;Hostname=192.168.1.1;EventTime=2013-06-11
15:44:44;HTTPMethod=GET;HTTPURL=/do?action=delete&obj_id=42;HTTPResponseStatus=401;FileSize=788
;HTTPReferer=https://localhost;HTTPUserAgent='Mozilla/5.0 (X11; Linux x86_64; rv:17.0)
Gecko/17.0 Firefox/17.0';HTTPParams=action=delete&obj_id=42;action=delete;obj_id=42;↵
824
120.18. LEEF (xm_leef)
This module provides two functions to generate and parse data in the Log Event Extended Format (LEEF), which
is used by IBM Security QRadar products. For more information about the format see the Log Event Extended
Format (LEEF) Version 2 specification.
See the list of installer packages that provide the xm_leef module in the Available Modules chapter of the NXLog
User Guide.
120.18.1. Configuration
The xm_leef module accepts the following directives in addition to the common module directives.
AddSyslogHeader
This optional boolean directive specifies whether a RFC 3164 (BSD-style) Syslog header should be prepended
to the output. This defaults to TRUE (a Syslog header will be added by the to_leef() procedure).
IncludeHiddenFields
This boolean directive specifies that the to_leef() function or the to_leef() procedure should include fields
having a leading dot (.) or underscore (_) in their names. The default is TRUE. If IncludeHiddenFields is set to
TRUE, then generated LEEF text will contain these otherwise excluded fields.
LEEFHeader
This optional directive takes a string type expression and only has an effect on how to_leef() formats the
result. It should evaluate to the following format:
LEEF:1.0|Microsoft|MSExchange|2013 SP1|15345|
When this directive is not specified, the LEEF header is constructed using the $Vendor, $SourceName (or
$SourceModuleName), $Version, and $EventID fields.
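A hedged sketch of both output-related directives; the vendor and product names are placeholders, and the LEEFHeader expression simply concatenates a static prefix with the $EventID field:

```
<Extension leef>
    Module           xm_leef
    # Emit bare LEEF without the BSD Syslog header
    AddSyslogHeader  FALSE
    # Static header instead of one built from $Vendor, $SourceName, etc.
    LEEFHeader       'LEEF:1.0|ExampleVendor|ExampleProduct|1.0|' + $EventID + '|'
</Extension>
```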
120.18.2. Functions
The following functions are exported by xm_leef.
string to_leef()
Convert the internal fields to a single LEEF formatted string.
Note that the IncludeHiddenFields directive affects which fields are included in the output.
120.18.3. Procedures
The following procedures are exported by xm_leef.
parse_leef();
Parse the $raw_event field as key-value pairs and create the following NXLog fields (if possible): $Category,
$AccountName, $AccountType, $Domain, $EventTime, $Hostname, $MessageSourceAddress, $SeverityValue
(mapped from the sev attribute), $SourceName, $devTimeFormat, $LEEFVersion, $Vendor, $Version,
$EventID, $DelimiterCharacter.
parse_leef(string source);
Parse the given string as key-value pairs and create the following NXLog fields (if possible): $Category,
$AccountName, $AccountType, $Domain, $EventTime, $Hostname, $MessageSourceAddress, $SeverityValue
(mapped from the sev attribute), $SourceName, $devTimeFormat, $LEEFVersion, $Vendor, $Version,
$EventID, $DelimiterCharacter.
to_leef();
Format the internal fields as LEEF and put this into the $raw_event field. to_leef() will automatically map the
following fields to event attributes, if available:
NXLog field            LEEF attribute
$AccountName           accountName
$AccountType           role
$Category              cat
$Domain                domain
$EventTime             devTime
$Hostname              identHostName
$MessageSourceAddress  src
$SourceName            vSrcName
120.18.4. Fields
The following fields are used by xm_leef.
In addition to the fields listed below, the parse_leef() procedure will create a field for every LEEF attribute
contained in the source LEEF message such as $srcPort, $cat, $identHostName, etc.
826
$EventTime (type: datetime)
The time when the event occurred. This field takes the value of the devTime LEEF attribute.
The LEEF sev attribute is mapped to the NXLog $SeverityValue field as follows:
sev   $SeverityValue
3     1
4     2
5     2
6     3
7     3
8     4
9     4
≥10   5
120.18.5. Examples
827
Example 569. Sending Windows EventLog as LEEF over UDP
This configuration will collect Windows EventLog and NXLog internal messages, convert them to LEEF, and
forward via UDP.
nxlog.conf
1 <Extension leef>
2 Module xm_leef
3 </Extension>
4
5 <Input internal>
6 Module im_internal
7 </Input>
8
9 <Input eventlog>
10 Module im_msvistalog
11 </Input>
12
13 <Output udp>
14 Module om_udp
15 Host 192.168.168.2
16 Port 1514
17 Exec to_leef();
18 </Output>
19
20 <Route qradar>
21 Path internal, eventlog => udp
22 </Route>
120.19. Microsoft DNS Server (xm_msdns)
WARNING
The xm_msdns module does not support the detailed format enabled via the Details option in the DNS
Server Debug Logging configuration. NXLog could be configured to parse this format with the xm_multiline
module.
See the list of installer packages that provide the xm_msdns module in the Available Modules chapter of the
NXLog User Guide.
120.19.1. Configuration
The xm_msdns module accepts the following directives in addition to the common module directives.
DateFormat
This optional directive allows you to define the format of the date field when parsing DNS Server logs. The
directive’s argument must be a format string compatible with the C strptime(3) function. This directive works
similarly to the global DateFormat directive, and if not specified, the default format [D|DD]/[M|MM]/YYYY
[H|HH]:MM:SS [AM|PM] is used.
EventLine
This boolean directive specifies that EVENT lines in the input should be parsed. If set to FALSE, EVENT lines will be
discarded. The default is TRUE.
828
NoteLine
This boolean directive specifies that Note: lines in the input should be parsed. If set to FALSE, Note: lines will
be discarded. The default is TRUE.
PacketLine
This boolean directive specifies that PACKET lines in the input should be parsed. If set to FALSE, PACKET lines
will be discarded. The default is TRUE.
120.19.2. Procedures
The following procedures are exported by xm_msdns.
parse_msdns();
Parse the $raw_event field and populate the DNS log fields.
parse_msdns(string source);
Parse the given string and populate the DNS log fields.
120.19.3. Fields
The following fields are used by xm_msdns.
829
$ParseFailure (type: string)
The remaining unparsed portion of a log message which does not match an expected format.
120.19.4. Examples
830
Example 570. Parsing DNS Logs With InputType
In this configuration, the DNS log file at C:\dns.log is parsed using the InputType provided by the
xm_msdns module. Any Note: lines in the input are discarded (the NoteLine directive is set to FALSE).
nxlog.conf
1 <Extension dns_parser>
2 Module xm_msdns
3 EventLine TRUE
4 PacketLine TRUE
5 NoteLine FALSE
6 </Extension>
7
8 <Input in>
9 Module im_file
10 File 'modules/extension/msdns/xm_msdns1.in'
11 InputType dns_parser
12 </Input>
For cases where parsing via InputType is not possible, individual events can be parsed with the
parse_msdns() procedure.
nxlog.conf
1 <Extension dns_parser>
2 Module xm_msdns
3 </Extension>
4
5 <Input in>
6 Module im_file
7 File 'modules/extension/msdns/xm_msdns1.out'
8 Exec dns_parser->parse_msdns();
9 </Input>
120.20. Multi-Line Parser (xm_multiline)
The module maintains a separate context for each input source, allowing multi-line messages to be processed
correctly even when coming from multiple sources (specifically, multiple files or multiple network connections).
WARNING
UDP is treated as a single source, and all logs are processed under the same context. It is therefore not
recommended to use this module with im_udp if messages will be received from multiple UDP senders (such
as Syslog).
See the list of installer packages that provide the xm_multiline module in the Available Modules chapter of the
NXLog User Guide.
120.20.1. Configuration
The xm_multiline module accepts the following directives in addition to the common module directives. One of
FixedLineCount and HeaderLine must be specified.
FixedLineCount
This directive takes a positive integer defining the number of lines to concatenate. This is useful
when receiving log messages spanning a fixed number of lines. When this number is defined, the module
knows where the event message ends and will not hold a message in the buffers until the next message
arrives.
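For instance, if every event is known to span exactly four lines, a sketch might look like this (the file path is a placeholder):

```
<Extension four_lines>
    Module          xm_multiline
    # Every event occupies exactly 4 lines; no buffering until the next message
    FixedLineCount  4
</Extension>

<Input filein>
    Module     im_file
    File       '/var/log/app.log'
    InputType  four_lines
</Input>
```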
HeaderLine
This directive takes a string or a regular expression literal. This will be matched against each line. When the
match is successful, the successive lines are appended until the next header line is read. This directive is
mandatory unless FixedLineCount is used.
NOTE
Until a new message arrives with its associated header, the previous message is stored in the buffers
because the module does not know where the message ends. The im_file module will forcibly flush this
buffer after the configured PollInterval timeout. If this behavior is unacceptable, disable AutoFlush, use
an end marker with EndLine, or switch to an encapsulation method (such as JSON).
NOTE
The /s and /m regular expression modifiers may be used here, but they have no meaning, because
HeaderLine is only checked against one input line at a time.
AutoFlush
If set to TRUE, this boolean directive specifies that the corresponding im_file module should forcibly flush the
buffer after its configured PollInterval timeout. The default is TRUE. If EndLine is used, AutoFlush is
automatically set to FALSE to disable this behavior. AutoFlush has no effect if xm_multiline is used with an
input module other than im_file.
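As a minimal sketch of disabling the flush behavior explicitly (the header regex is illustrative, not taken from the manual):

```
<Extension multiline>
    Module     xm_multiline
    HeaderLine /^Start:/
    AutoFlush  FALSE
</Extension>
```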
EndLine
This is similar to the HeaderLine directive. This optional directive also takes a string or a regular expression
literal to be matched against each line. When the match is successful the message is considered complete.
Exec
This directive is almost identical to the behavior of the Exec directive used by the other modules with the
following differences:
• each line is passed in $raw_event as it is read, and the line terminator is included; and
• other fields cannot be used, and captured strings cannot be stored as separate fields.
This is mostly useful for rewriting lines or filtering out certain lines with the drop() procedure.
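For instance (a minimal sketch; the separator pattern is an assumption for illustration), separator lines could be discarded before they are appended to the message:

```
<Extension multiline>
    Module         xm_multiline
    FixedLineCount 4
    Exec           if $raw_event =~ /^-+$/ drop();
</Extension>
```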
120.20.2. Examples
Example 572. Parsing multi-line XML logs and converting to JSON
XML is commonly formatted as indented multi-line text to make it more readable. In the following configuration file, the HeaderLine and EndLine directives are used to parse the events. The events are then converted to JSON after some timestamp normalization.
nxlog.conf (truncated)
1 <Extension multiline>
2 Module xm_multiline
3 HeaderLine /^<event>/
4 EndLine /^<\/event>/
5 </Extension>
6
7 <Extension xmlparser>
8 Module xm_xml
9 </Extension>
10
11 <Extension json>
12 Module xm_json
13 </Extension>
14
15 <Input filein>
16 Module im_file
17 File "modules/extension/multiline/xm_multiline5.in"
18 InputType multiline
19 <Exec>
20 # Discard everything that doesn't seem to be an xml event
21 if $raw_event !~ /^<event>/ drop();
22
23 # Parse the xml event
24 parse_xml();
25
26 # Rewrite some fields
27 $EventTime = parsedate($timestamp);
28 delete($timestamp);
29 [...]
Input Sample
<?xml version="1.0" encoding="UTF-8"?>
<event>
<timestamp>2012-11-23 23:00:00</timestamp>
<severity>ERROR</severity>
<message>
Something bad happened.
Please check the system.
</message>
</event>
<event>
<timestamp>2012-11-23 23:00:12</timestamp>
<severity>INFO</severity>
<message>
System state is now back to normal.
</message>
</event>
Output Sample
{"SourceModuleName":"filein","SourceModuleType":"im_file","severity":"ERROR","message":"\n
Something bad happened.\n Please check the system.\n ","EventTime":"2012-11-23 23:00:00"}
{"SourceModuleName":"filein","SourceModuleType":"im_file","severity":"INFO","message":"\n
System state is now back to normal.\n ","EventTime":"2012-11-23 23:00:12"}
Example 573. Processing DICOM Logs
Each log message has a header (TIMESTAMP INTEGER SEVERITY) which is used as the message boundary. A regular expression is defined for this with the HeaderLine directive. Each log message is prepended with an additional line containing dashes and is written to a file.
nxlog.conf
1 <Extension dicom_multi>
2 Module xm_multiline
3 HeaderLine /^\d\d\d\d-\d\d-\d\d\d\d:\d\d:\d\d\.\d+\s+\d+\s+\S+\s+/
4 </Extension>
5
6 <Input filein>
7 Module im_file
8 File "modules/extension/multiline/xm_multiline4.in"
9 InputType dicom_multi
10 </Input>
11
12 <Output fileout>
13 Module om_file
14 File 'tmp/output'
15 Exec $raw_event = "--------------------------------------\n" + $raw_event;
16 </Output>
17
18 <Route parse_dicom>
19 Path filein => fileout
20 </Route>
Input Sample
2011-12-1512:22:51.000000 4296 INFO Association Request Parameters:↵
Our Implementation Class UID: 2.16.124.113543.6021.2↵
Our Implementation Version Name: RZDCX_2_0_1_8↵
Their Implementation Class UID:↵
Their Implementation Version Name:↵
Application Context Name: 1.2.840.10008.3.1.1.1↵
Requested Extended Negotiation: none↵
Accepted Extended Negotiation: none↵
2011-12-1512:22:51.000000 4296 DEBUG Constructing Associate RQ PDU↵
2011-12-1512:22:51.000000 4296 DEBUG WriteToConnection, length: 310, bytes written: 310,
loop no: 1↵
2011-12-1512:22:51.015000 4296 DEBUG PDU Type: Associate Accept, PDU Length: 216 + 6 bytes
PDU header↵
02 00 00 00 00 d8 00 01 00 00 50 41 43 53 20 20↵
20 20 20 20 20 20 20 20 20 20 52 5a 44 43 58 20↵
20 20 20 20 20 20 20 20 20 20 00 00 00 00 00 00↵
2011-12-1512:22:51.031000 4296 DEBUG DIMSE sendDcmDataset: sending 146 bytes↵
Output Sample
--------------------------------------↵
2011-12-1512:22:51.000000 4296 INFO Association Request Parameters:↵
Our Implementation Class UID: 2.16.124.113543.6021.2↵
Our Implementation Version Name: RZDCX_2_0_1_8↵
Their Implementation Class UID:↵
Their Implementation Version Name:↵
Application Context Name: 1.2.840.10008.3.1.1.1↵
Requested Extended Negotiation: none↵
Accepted Extended Negotiation: none↵
--------------------------------------↵
2011-12-1512:22:51.000000 4296 DEBUG Constructing Associate RQ PDU↵
--------------------------------------↵
2011-12-1512:22:51.000000 4296 DEBUG WriteToConnection, length: 310, bytes written: 310,
loop no: 1↵
--------------------------------------↵
2011-12-1512:22:51.015000 4296 DEBUG PDU Type: Associate Accept, PDU Length: 216 + 6 bytes
PDU header↵
02 00 00 00 00 d8 00 01 00 00 50 41 43 53 20 20↵
20 20 20 20 20 20 20 20 20 20 52 5a 44 43 58 20↵
20 20 20 20 20 20 20 20 20 20 00 00 00 00 00 00↵
--------------------------------------↵
2011-12-1512:22:51.031000 4296 DEBUG DIMSE sendDcmDataset: sending 146 bytes↵
Example 574. Multi-line messages with a fixed string header
The following configuration will process messages having a fixed string header containing dashes. Each
event is then prepended with a hash mark (#) and written to a file.
nxlog.conf
1 <Extension multiline>
2 Module xm_multiline
3 HeaderLine "---------------"
4 </Extension>
5
6 <Input filein>
7 Module im_file
8 File "modules/extension/multiline/xm_multiline1.in"
9 InputType multiline
10 Exec $raw_event = "#" + $raw_event;
11 </Input>
12
13 <Output fileout>
14 Module om_file
15 File 'tmp/output'
16 </Output>
17
18 <Route parse_multiline>
19 Path filein => fileout
20 </Route>
Input Sample
---------------↵
1↵
---------------↵
1↵
2↵
---------------↵
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa↵
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb↵
ccccccccccccccccccccccccccccccccccccc↵
dddd↵
---------------↵
Output Sample
#---------------↵
1↵
#---------------↵
1↵
2↵
#---------------↵
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa↵
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb↵
ccccccccccccccccccccccccccccccccccccc↵
dddd↵
#---------------↵
Example 575. Multi-line messages with fixed line count
The following configuration will process messages having a fixed line count of four. Lines containing only
whitespace are ignored and removed. Each event is then prepended with a hash mark (#) and written to a
file.
nxlog.conf
1 <Extension multiline>
2 Module xm_multiline
3 FixedLineCount 4
4 Exec if $raw_event =~ /^\s*$/ drop();
5 </Extension>
6
7 <Input filein>
8 Module im_file
9 File "modules/extension/multiline/xm_multiline2.in"
10 InputType multiline
11 </Input>
12
13 <Output fileout>
14 Module om_file
15 File 'tmp/output'
16 Exec $raw_event = "#" + $raw_event;
17 </Output>
18
19 <Route parse_multiline>
20 Path filein => fileout
21 </Route>
Input Sample
1↵
2↵
3↵
4↵
1asd↵
↵
2asdassad↵
3ewrwerew↵
4xcbccvbc↵
↵
1dsfsdfsd↵
2sfsdfsdrewrwe↵
↵
3sdfsdfsew↵
4werwerwrwe↵
Output Sample
#1↵
2↵
3↵
4↵
#1asd↵
2asdassad↵
3ewrwerew↵
4xcbccvbc↵
#1dsfsdfsd↵
2sfsdfsdrewrwe↵
3sdfsdfsew↵
4werwerwrwe↵
Example 576. Multi-line messages with a Syslog header
Often, multi-line messages are logged over Syslog and each line is processed as an event, with its own
Syslog header. It is commonly necessary to merge these back into a single event message.
Input Sample
Nov 21 11:40:27 hostname app[26459]: Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-
ERR TX-DRP TX-OVR Flg↵
Nov 21 11:40:27 hostname app[26459]: eth2 1500 0 16936814 0 0 0 30486067
0 8 0 BMRU↵
Nov 21 11:40:27 hostname app[26459]: lo 16436 0 277217234 0 0 0
277217234 0 0 0 LRU↵
Nov 21 11:40:27 hostname app[26459]: tun0 1500 0 316943 0 0 0 368642
0 0 0 MOPRU↵
Nov 21 11:40:28 hostname app[26459]: Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-
ERR TX-DRP TX-OVR Flg↵
Nov 21 11:40:28 hostname app[26459]: eth2 1500 0 16945117 0 0 0 30493583
0 8 0 BMRU↵
Nov 21 11:40:28 hostname app[26459]: lo 16436 0 277217234 0 0 0
277217234 0 0 0 LRU↵
Nov 21 11:40:28 hostname app[26459]: tun0 1500 0 316943 0 0 0 368642
0 0 0 MOPRU↵
Nov 21 11:40:29 hostname app[26459]: Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-
ERR TX-DRP TX-OVR Flg↵
Nov 21 11:40:29 hostname app[26459]: eth2 1500 0 16945270 0 0 0 30493735
0 8 0 BMRU↵
Nov 21 11:40:29 hostname app[26459]: lo 16436 0 277217234 0 0 0
277217234 0 0 0 LRU↵
Nov 21 11:40:29 hostname app[26459]: tun0 1500 0 316943 0 0 0 368642
0 0 0 MOPRU↵
The following configuration strips the Syslog header from the netstat output stored in a traditional Syslog formatted file, and each message is then printed again with a line of dashes used as a separator.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension netstat>
6 Module xm_multiline
7 FixedLineCount 4
8 <Exec>
9 parse_syslog_bsd();
10 $raw_event = $Message + "\n";
11 </Exec>
12 </Extension>
13
14 <Input filein>
15 Module im_file
16 File "modules/extension/multiline/xm_multiline3.in"
17 InputType netstat
18 </Input>
19
20 <Output fileout>
21 Module om_file
22 File 'tmp/output'
23 <Exec>
24 $raw_event = "-------------------------------------------------------" +
25 "-----------------------------\n" + $raw_event;
26 </Exec>
27 </Output>
28
29 <Route parse_multiline>
30 Path filein => fileout
31 </Route>
Output Sample
------------------------------------------------------------------------------------↵
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg↵
eth2 1500 0 16936814 0 0 0 30486067 0 8 0 BMRU↵
lo 16436 0 277217234 0 0 0 277217234 0 0 0 LRU↵
tun0 1500 0 316943 0 0 0 368642 0 0 0 MOPRU↵
------------------------------------------------------------------------------------↵
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg↵
eth2 1500 0 16945117 0 0 0 30493583 0 8 0 BMRU↵
lo 16436 0 277217234 0 0 0 277217234 0 0 0 LRU↵
tun0 1500 0 316943 0 0 0 368642 0 0 0 MOPRU↵
------------------------------------------------------------------------------------↵
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg↵
eth2 1500 0 16945270 0 0 0 30493735 0 8 0 BMRU↵
lo 16436 0 277217234 0 0 0 277217234 0 0 0 LRU↵
tun0 1500 0 316943 0 0 0 368642 0 0 0 MOPRU↵
120.21. NetFlow (xm_netflow)
NOTE: This module only supports parsing NetFlow data received as UDP datagrams and does not support TCP.
839
NOTE: xm_netflow uses the IP address of the exporter device to distinguish between different devices so that templates with the same name do not conflict.
The module exports an input parser which can be referenced in the UDP input instance with the InputType
directive:
InputType netflow
This input reader function parses the payload and extracts NetFlow specific fields.
See the list of installer packages that provide the xm_netflow module in the Available Modules chapter of the
NXLog User Guide.
120.21.1. Configuration
The xm_netflow module accepts only the common module directives.
120.21.2. Fields
The fields generated by xm_netflow are provided separately. Please refer to the documentation available online or
in the NXLog package.
120.21.3. Examples
Example 577. Parsing UDP NetFlow Data
The following configuration receives NetFlow data over UDP and converts the parsed data into JSON.
nxlog.conf
1 <Extension netflow>
2 Module xm_netflow
3 </Extension>
4
5 <Extension json>
6 Module xm_json
7 </Extension>
8
9 <Input udpin>
10 Module im_udp
11 Host 0.0.0.0
12 Port 2162
13 InputType netflow
14 </Input>
15
16 <Output out>
17 Module om_file
18 File "netflow.log"
19 Exec to_json();
20 </Output>
21
22 <Route nf>
23 Path udpin => out
24 </Route>
120.22. Radius NPS (xm_nps)
This module provides functions and procedures for processing data in NPS Database Format stored in files by Microsoft RADIUS services. Internet Authentication Service (IAS) is the Microsoft implementation of a RADIUS server and proxy; it was renamed to Network Policy Server (NPS) starting with Windows Server 2008. This module is capable of parsing both IAS and NPS formatted data.
"RasBox","RAS",10/22/2006,09:13:09,1,"DOMAIN\user","DOMAIN\user",,,,,,"192.168.132.45",12,,"192.168.
132.45",,,,0,"CONNECT 24000",1,2,4,,0,"311 1 192.168.132.45 07/31/2006 21:35:14
749",,,,,,,,,,,,,,,,,,,,,,,,,,,,"MSRASV5.00",311,,,,
"RasBox","RAS",10/22/2006,09:13:09,3,,"DOMAIN\user",,,,,,,,,,,,,,,,,4,,36,"311 1 192.168.132.45
07/31/2006 21:35:14 749",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,"0x00453D36393120523D3020563D33",,,
"RasBox","RAS",10/22/2006,09:13:13,1,"DOMAIN\user","DOMAIN\user",,,,,,"192.168.132.45",12,,"192.168.
132.45",,,,0,"CONNECT 24000",1,2,4,,0,"311 1 192.168.132.45 07/31/2006 21:35:14
750",,,,,,,,,,,,,,,,,,,,,,,,,,,,"MSRASV5.00",311,,,,
For more information on the NPS format see the Interpret NPS Database Format Log Files article on Microsoft
TechNet.
See the list of installer packages that provide the xm_nps module in the Available Modules chapter of the NXLog
User Guide.
120.22.1. Configuration
The xm_nps module accepts only the common module directives.
120.22.2. Procedures
The following procedures are exported by xm_nps.
parse_nps();
Parse the $raw_event field as NPS input.
parse_nps(string source);
Parse the given string as NPS format.
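As an illustration, the one-argument form can parse a string other than $raw_event (the $Message field here is an assumed example, not from the manual):

```
Exec parse_nps($Message);
```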
120.22.3. Examples
Example 578. Parsing NPS Data
The following configuration reads NPS formatted files and converts the parsed data into JSON.
nxlog.conf
1 <Extension nps>
2 Module xm_nps
3 </Extension>
4
5 <Extension json>
6 Module xm_json
7 </Extension>
8
9 <Input filein>
10 Module im_file
11 File 'C:\logs\IAS.log'
12 Exec parse_nps();
13 </Input>
14
15 <Output fileout>
16 Module om_file
17 File 'C:\out.json'
18 Exec to_json();
19 </Output>
20
21 <Route nps_to_json>
22 Path filein => fileout
23 </Route>
120.23. Pattern Matcher (xm_pattern)
There are other techniques, such as the radix tree, which solve the linearity problem; the drawback is that these usually require the user to learn a special syntax for specifying patterns. If the log message is already parsed and is not treated as a single string, it is possible to check only a subset of the patterns, which partially solves the linearity problem. With the other performance improvements employed within the xm_pattern module, its speed is comparable to that of these techniques, yet the xm_pattern module uses regular expressions, which are familiar to users and can easily be migrated from other tools.
Traditionally, pattern matching on log messages has employed a technique where the log message was one
string and the pattern (regular expression or radix tree based pattern) was executed against it. To match patterns
against logs which contain structured data (such as the Windows EventLog), this structured data (the fields of the
log) must be converted to a single string. This is a simple but inefficient method used by many tools.
The NXLog patterns defined in the XML pattern database file can contain more than one field. This allows multi-
dimensional pattern matching. Thus with NXLog’s xm_pattern module there is no need to convert all fields into a
single string as it can work with multiple fields.
Patterns can be grouped together under pattern groups. Pattern groups serve an optimization purpose. The
group can have an optional matchfield block which can check a condition. If the condition (such as $SourceName
matches sshd) is satisfied, the xm_pattern module will descend into the group and check each pattern against the
log. If the pattern group’s condition did not match ($SourceName was not sshd), the module can skip all patterns
in the group without having to check each pattern individually.
When the xm_pattern module finds a matching pattern, the $PatternID and $PatternName fields are set on the
log message. These can be used later in conditional processing and correlation rules of the pm_evcorr module,
for example.
NOTE: The xm_pattern module does not process all patterns: it exits after the first matching pattern is found, so at most one pattern can match a log message. Multiple patterns that can match the same subset of logs should be avoided. For example, with the two regular expression patterns ^\d+ and ^\d\d, only one will be matched, but not consistently, because the internal order of patterns and pattern groups is changed dynamically by xm_pattern (patterns with the highest match count are placed and tried first). For a strictly linearly executing pattern matcher, see the Exec directive.
See the list of installer packages that provide the xm_pattern module in the Available Modules chapter of the
NXLog User Guide.
120.23.1. Configuration
The xm_pattern module accepts the following directives in addition to the common module directives.
PatternFile
This mandatory directive specifies the name of the pattern database file.
120.23.2. Functions
The following functions are exported by xm_pattern.
boolean match_pattern()
Execute the same logic as the match_pattern() procedure. If the event is successfully matched, return TRUE; otherwise return FALSE.
120.23.3. Procedures
The following procedures are exported by xm_pattern.
match_pattern();
Attempt to match the current event according to the PatternFile. Execute statements and add fields as
specified.
120.23.4. Fields
The following fields are used by xm_pattern: when a pattern matches, the $PatternID and $PatternName fields are set on the event record, as described above.
120.23.5. Examples
Example 579. Using the match_pattern() Procedure
This configuration reads Syslog messages from file and parses them with parse_syslog(). The events are
then further processed with a pattern file and the corresponding match_pattern() procedure to add
additional fields to SSH authentication success or failure events. The matching is done against the
$SourceName and $Message fields, so the Syslog parsing must be performed before the pattern matching
will work.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension pattern>
6 Module xm_pattern
7 PatternFile 'modules/extension/pattern/patterndb2-3.xml'
8 </Extension>
9
10 <Input in>
11 Module im_file
12 File 'test2.log'
13 <Exec>
14 parse_syslog();
15 match_pattern();
16 </Exec>
17 </Input>
The following pattern database contains two patterns to match SSH authentication messages. The patterns
are under a group named ssh which checks whether the $SourceName field is sshd and only tries to match
the patterns if the logs are indeed from sshd. The patterns both extract $AuthMethod, $AccountName, and
$SourceIP4Address fields from the log message when the pattern matches the log. Additionally
$TaxonomyStatus and $TaxonomyAction are set. The second pattern shows an Exec block example, which
is evaluated when the pattern matches.
patterndb2-3.xml
<?xml version='1.0' encoding='UTF-8'?>
<patterndb>
<created>2018-01-01 01:02:03</created>
<version>4</version>
<group>
<name>ssh</name>
<id>1</id>
<matchfield>
<name>SourceName</name>
<type>exact</type>
<value>sshd</value>
</matchfield>
<pattern>
<id>1</id>
<name>ssh auth success</name>
<matchfield>
<name>Message</name>
<type>regexp</type>
<value>^Accepted (\S+) for (\S+) from (\S+) port \d+ ssh2</value>
<capturedfield>
<name>AuthMethod</name>
<type>string</type>
</capturedfield>
<capturedfield>
<name>AccountName</name>
<type>string</type>
</capturedfield>
<capturedfield>
<name>SourceIP4Address</name>
<type>ipaddr</type>
</capturedfield>
</matchfield>
<set>
<field>
<name>TaxonomyStatus</name>
<value>success</value>
<type>string</type>
</field>
<field>
<name>TaxonomyAction</name>
<value>authenticate</value>
<type>string</type>
</field>
</set>
</pattern>
<pattern>
<id>2</id>
<name>ssh auth failure</name>
<matchfield>
<name>Message</name>
<type>regexp</type>
<value>^Failed (\S+) for invalid user (\S+) from (\S+) port \d+ ssh2</value>
<capturedfield>
<name>AuthMethod</name>
<type>string</type>
</capturedfield>
<capturedfield>
<name>AccountName</name>
<type>string</type>
</capturedfield>
<capturedfield>
<name>SourceIP4Address</name>
<type>ipaddr</type>
</capturedfield>
</matchfield>
<set>
<field>
<name>TaxonomyStatus</name>
<value>failure</value>
<type>string</type>
</field>
<field>
<name>TaxonomyAction</name>
<value>authenticate</value>
<type>string</type>
</field>
</set>
<exec>
$TestField = 'test';
$TestField = $TestField + 'value';
</exec>
</pattern>
</group>
</patterndb>
Example 580. Using the match_pattern() Function
This example is the same as the previous one, and uses the same pattern file, but it uses the match_pattern() function to discard any event that is not matched by the pattern file.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension pattern>
6 Module xm_pattern
7 PatternFile modules/extension/pattern/patterndb2-3.xml
8 </Extension>
9
10 <Input in>
11 Module im_file
12 File 'test2.log'
13 <Exec>
14 parse_syslog();
15 if not match_pattern() drop();
16 </Exec>
17 </Input>
120.24. Perl (xm_perl)
While the NXLog language is already a powerful framework, it is not intended to be a fully featured programming language and does not provide lists, arrays, hashes, and other features available in many high-level languages. With this module, Perl can be used to process event data via a built-in Perl interpreter. See also the im_perl and om_perl modules.
The Perl interpreter is only loaded if the module is declared in the configuration. The module will parse the file
specified in the PerlCode directive when NXLog starts the module. This file should contain one or more methods
which can be called from the Exec directive of any module that will use Perl for log processing. See the example
below.
WARNING: Perl code defined via this module must not be called from the im_perl and om_perl modules, as that would involve two Perl interpreters and would likely result in a crash.
NOTE: To use the xm_perl module on Windows, a separate Perl environment must be installed, such as Strawberry Perl. Currently, the xm_perl module on Windows requires Strawberry Perl 5.28.0.1.
To access event data, the Log::Nxlog module must be included, which provides the following methods.
log_debug(msg)
Send the message msg to the internal logger on DEBUG log level. This method does the same as the
log_debug() procedure in NXLog.
log_info(msg)
Send the message msg to the internal logger on INFO log level. This method does the same as the log_info()
procedure in NXLog.
log_warning(msg)
Send the message msg to the internal logger on WARNING log level. This method does the same as the
log_warning() procedure in NXLog.
log_error(msg)
Send the message msg to the internal logger on ERROR log level. This method does the same as the
log_error() procedure in NXLog.
delete_field(event, key)
Delete the value associated with the field named key.
field_names(event)
Return a list of the field names contained in the event data. This method can be used to iterate over all of the
fields.
field_type(event, key)
Return a string representing the type of the value associated with the field named key.
get_field(event, key)
Retrieve the value associated with the field named key. This method returns a scalar value if the key exists
and the value is defined, otherwise it returns undef.
For the full NXLog Perl API, see the POD documentation in Nxlog.pm. The documentation can be read with
perldoc Log::Nxlog.
See the list of installer packages that provide the xm_perl module in the Available Modules chapter of the NXLog
User Guide.
120.24.1. Configuration
The xm_perl module accepts the following directives in addition to the common module directives.
PerlCode
This mandatory directive expects a file containing valid Perl code. This file is read and parsed by the Perl
interpreter. Methods defined in this file can be called with the call() procedure.
NOTE: On Windows, the Perl script invoked by the PerlCode directive must define the Perl library paths at the beginning of the script to provide access to the Perl modules.
nxlog-windows.pl
use lib 'c:\Strawberry\perl\lib';
use lib 'c:\Strawberry\perl\vendor\lib';
use lib 'c:\Strawberry\perl\site\lib';
use lib 'c:\Program Files\nxlog\data';
Config
This optional directive allows you to pass configuration strings to the script file defined by the PerlCode
directive. This is a block directive and any text enclosed within <Config></Config> is submitted as a single
string literal to the Perl code.
NOTE: If you pass several values using this directive (for example, separated by the \n delimiter), be sure to parse the string correspondingly inside the Perl code.
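A sketch of passing settings through a Config block (the keys and the file path are hypothetical; the Perl code would receive the enclosed text as one string and split it on \n):

```
<Extension perl>
    Module   xm_perl
    PerlCode modules/processlogs.pl
    <Config>
        geoipdb=/usr/share/GeoIP/GeoIP.dat
        debug=1
    </Config>
</Extension>
```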
120.24.2. Procedures
The following procedures are exported by xm_perl.
call(string subroutine);
Call the given Perl subroutine.
perl_call(string subroutine);
Call the given Perl subroutine (as used in the example below).
120.24.3. Examples
Example 581. Using the built-in Perl interpreter
In this example, logs are parsed as Syslog and then are passed to a Perl method which does a GeoIP lookup
on the source address of the incoming message.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension perl>
6 Module xm_perl
7 PerlCode modules/extension/perl/processlogs.pl
8 </Extension>
9
10 <Output fileout>
11 Module om_file
12 File 'tmp/output'
13
14 # First we parse the input natively from nxlog
15 Exec parse_syslog_bsd();
16
17 # Now call the 'process' subroutine defined in 'processlogs.pl'
18 Exec perl_call("process");
19
20 # You can also invoke this public procedure 'call' in case
21 # of multiple xm_perl instances like this:
22 # Exec perl->call("process");
23 </Output>
processlogs.pl (truncated)
use lib "$FindBin::Bin/../../../../src/modules/extension/perl";
use strict;
use warnings;
# Without Log::Nxlog you cannot access (read or modify) the event data
use Log::Nxlog;
use Geo::IP;
my $geoip;
BEGIN
{
# This will be called once when nxlog starts so you can use this to
# initialize stuff here
#$geoip = Geo::IP->new(GEOIP_MEMORY_CACHE);
$geoip = Geo::IP->open('modules/extension/perl/GeoIP.dat', GEOIP_MEMORY_CACHE);
[...]
120.25. Python (xm_python)
The Python script should import the nxlog module, and will have access to the following classes and functions.
nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This function does the same as the core
log_debug() procedure.
nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This function does the same as the core
log_info() procedure.
nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This function does the same as the core
log_warning() procedure.
nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This function does the same as the core
log_error() procedure.
class nxlog.Module
This class is instantiated by NXLog and can be accessed via the LogData.module attribute. This can be used to
set or access variables associated with the module (see the example below).
class nxlog.LogData
This class represents an event. It is instantiated by NXLog and passed to the method specified by the
python_call() procedure.
delete_field(name)
This method removes the field name from the event record.
field_names()
This method returns a list with the names of all the fields currently in the event record.
get_field(name)
This method returns the value of the field name in the event.
set_field(name, value)
This method sets the value of field name to value.
module
This attribute is set to the Module object associated with the event.
See the list of installer packages that provide the xm_python module in the Available Modules chapter of the
NXLog User Guide.
120.25.1. Configuration
The xm_python module accepts the following directives in addition to the common module directives.
PythonCode
This mandatory directive specifies a file containing Python code. The python_call() procedure can be used to
call a Python function defined in the file. The function must accept an nxlog.LogData object as its argument.
120.25.2. Procedures
The following procedures are exported by xm_python.
call(string subroutine);
Call the given Python subroutine.
python_call(string function);
Call the specified function, which must accept an nxlog.LogData() object as its only argument.
120.25.3. Examples
Example 582. Using Python for Log Processing
This configuration calls two Python functions to modify each event record. The add_checksum() function uses Python's hashlib module to add a $ChecksumSHA1 field to the event; the add_counter() function adds a $Counter field for non-DEBUG events.
NOTE: The pm_hmac module offers a more complete implementation for checksumming. See Statistical Counters for a native way to add counters.
nxlog.conf (truncated)
1 </Input>
2
3 <Extension _json>
4 Module xm_json
5 DateFormat YYYY-MM-DD hh:mm:ss
6 </Extension>
7
8 <Extension _syslog>
9 Module xm_syslog
10 </Extension>
11
12 <Extension python>
13 Module xm_python
14 PythonCode modules/extension/python/py/processlogs2.py
15 </Extension>
16
17 <Output out>
18 Module om_file
19 File 'tmp/output'
20 <Exec>
21 # The $SeverityValue field is added by this procedure.
22 # Most other parsers also add a normalized severity value.
23 parse_syslog();
24
25 # Add a counter for each event with log level above DEBUG.
26 python_call('add_counter');
27
28 # Calculate a checksum (after the counter field is added).
29 [...]
processlogs2.py (truncated)
import hashlib
import nxlog
def add_checksum(event):
# Convert field list to dictionary
all = {}
for field in event.field_names():
all.update({field: event.get_field(field)})
def add_counter(event):
# Get module object and initialize counter
module = event.module
[...]
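The checksum idea above can be sketched outside NXLog as plain Python; here a dict stands in for the nxlog.LogData accessors, and the field names are illustrative assumptions:

```python
import hashlib

def add_checksum(event):
    # Hash every field in deterministic (sorted) order, since the
    # iteration order of field_names() is not guaranteed.
    digest = hashlib.sha1()
    for name in sorted(event):
        digest.update(name.encode())
        digest.update(str(event[name]).encode())
    event['ChecksumSHA1'] = digest.hexdigest()

event = {'Severity': 'INFO', 'Message': 'test message'}
add_checksum(event)
print(event['ChecksumSHA1'])
```

Inside NXLog, the same loop would use event.field_names() and event.get_field() instead of dict iteration, and set_field() to store the result.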
120.26. DNS Resolver (xm_resolver)
See the list of installer packages that provide the xm_resolver module in the Available Modules chapter of the NXLog User Guide.
120.26.1. Configuration
The xm_resolver module accepts the following directives in addition to the common module directives.
CacheExpiry
Specifies the time in seconds after entries in the cache are considered invalid and are refreshed by issuing a
DNS lookup. The default expiry is 3600 seconds.
CacheLimit
This directive can be used to specify an upper limit on the number of entries in the cache in order to prevent the cache from becoming arbitrarily large and potentially exhausting memory. When the number of entries in the cache reaches this value, no more items will be inserted into the cache. The default is 100,000 entries.
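A minimal instance tuning both cache directives (the values are illustrative, not recommendations):

```
<Extension resolver>
    Module      xm_resolver
    CacheExpiry 900
    CacheLimit  50000
</Extension>
```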
120.26.2. Functions
The following functions are exported by xm_resolver.
string gid_to_name(string gid)
Return the group name assigned to the string gid on Unix. If gid cannot be looked up, undef is returned.
120.26.3. Examples
Example 583. Using Functions Provided by xm_resolver
It is common for devices to send Syslog messages containing the IP address of the device instead of a real
hostname. In this example, Syslog messages are parsed and the hostname field of each Syslog header is
converted to a hostname if it looks like an IP address.
nxlog.conf (truncated)
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _resolver>
6 Module xm_resolver
7 </Extension>
8
9 <Input tcp>
10 Module im_tcp
11 Host 0.0.0.0
12 Port 1514
13 <Exec>
14 parse_syslog();
15 if $Hostname =~ /^\d+\.\d+\.\d+\.\d+/
16 {
17 $HostIP = $Hostname;
18 $Hostname = ipaddr_to_name($HostIP);
19 if not defined $Hostname $Hostname = $HostIP;
20 #WIN
21 if ($Hostname == ipaddr_to_name("127.0.0.1"))
22 {
23 $Hostname = "localhost";
24 }
25 #END
26 }
27 </Exec>
28 </Input>
29 [...]
Input Sample
<38>2014-11-11 11:40:27 127.0.0.1 sshd[3436]: Failed none for invalid user asdf from 127.0.0.1
port 51824 ssh2↵
<38>2014-11-12 12:42:37 127.0.0.1 sshd[3436]: Failed password for invalid user fdsa from
127.0.0.1 port 51824 ssh2↵
Output Sample
<38>Nov 11 11:40:27 localhost sshd[3436]: Failed none for invalid user asdf from 127.0.0.1 port
51824 ssh2↵
<38>Nov 12 12:42:37 localhost sshd[3436]: Failed password for invalid user fdsa from 127.0.0.1
port 51824 ssh2↵
• renaming fields,
• deleting specified fields (blacklist),
• keeping only a list of specified fields (whitelist), and
• evaluating additional statements.
The xm_rewrite module provides Delete, Keep, and Rename directives for modifying event records. With the Exec
directive of this module, it is possible to invoke functions and procedures from other modules. This allows all
data transformation to be configured in a single module instance in order to simplify the configuration. Then the
transformation can be referenced from another module by adding:
Exec rewrite->process();
This same statement can be used by more than one module instance if necessary, rather than duplicating
configuration.
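For example, a single xm_rewrite instance might be shared by two inputs (a sketch; the directives shown are illustrative):

```
<Extension rewrite>
    Module  xm_rewrite
    Delete  Password
    Rename  Hostname, host
</Extension>

<Input tcpin>
    Module  im_tcp
    Port    1514
    Exec    rewrite->process();
</Input>

<Input udpin>
    Module  im_udp
    Port    1514
    # The same transformation is applied to events from both inputs
    Exec    rewrite->process();
</Input>
```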
See the list of installer packages that provide the xm_rewrite module in the Available Modules chapter of the
NXLog User Guide.
120.27.1. Configuration
The xm_rewrite module accepts the following directives in addition to the common module directives.
The order of the action directives is significant as the module executes them in the order of appearance. It is
possible to configure an xm_rewrite instance with no directives (other than the Module directive). In this case, the
corresponding process() procedure will do nothing.
Delete
This directive takes a field name or a list of fields. The fields specified will be removed from the event record.
This can be used to blacklist specific fields that are not wanted in the event record. This is equivalent to using
delete() in Exec.
Exec
This directive works the same way as the Exec directive in other modules: the statement(s) provided in the
argument/block will be evaluated in the context of the module that called process() (i.e., as though the
statement(s) from this Exec directive/block were inserted into the caller’s Exec directive/block, at the location
of the process() call).
Keep
This directive takes a field name or a list of fields. The fields specified will be kept and all other fields not
appearing in the list will be removed from the event record. This can be used to whitelist specific fields.
To retain only the $raw_event field, use Keep raw_event (it is not possible to delete the $raw_event field).
This can be helpful for discarding extra event fields after $raw_event has been set (with to_json(), for
example) and before an output module that operates on all fields in the event record (such as
om_batchcompress).
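A sketch of this pattern, assuming xm_json is available to provide to_json():

```
<Extension json>
    Module  xm_json
</Extension>

<Extension keepraw>
    Module  xm_rewrite
    Keep    raw_event
</Extension>

<Output batch>
    Module  om_batchcompress
    Host    192.168.1.1
    Port    2514
    # Serialize all fields into $raw_event, then drop everything else
    Exec    to_json(); keepraw->process();
</Output>
```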
Rename
This directive takes two fields. The field in the first argument will be renamed to the name in the second. This
is equivalent to using rename_field() in Exec.
120.27.2. Procedures
The following procedures are exported by xm_rewrite.
process();
This procedure invokes the data processing as specified in the configuration of the xm_rewrite module
instance.
120.27.3. Examples
Example 584. Using xm_rewrite to Transform Syslog Data Read from File
The following configuration parses Syslog data from a file, invokes the process() procedure of the xm_rewrite
instance to keep and rename whitelisted fields, then writes JSON-formatted output to a file.
nxlog.conf (truncated)
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension rewrite>
6 Module xm_rewrite
7 Keep EventTime, Severity, Hostname, SourceName, Message
8 Rename EventTime, timestamp
9 Rename Hostname, host
10 Rename SourceName, src
11 Rename Message, msg
12 Rename Severity, sev
13 Exec if $msg =~ /error/ $sev = 'ERROR';
14 </Extension>
15
16 <Extension json>
17 Module xm_json
18 </Extension>
19
20 <Input syslogfile>
21 Module im_file
22 File "modules/extension/rewrite/xm_rewrite.in"
23 Exec parse_syslog();
24 Exec rewrite->process();
25 </Input>
26
27 <Output fileout>
28 Module om_file
29 [...]
Input Sample
<0>2010-10-12 12:49:06 mybox app[12345]: kernel message↵
<30>2010-10-12 12:49:06 mybox app[12345]: daemon - info↵
<27>2010-10-12 12:49:06 mybox app[12345]: daemon - error↵
<30>2010-10-12 13:19:11 mybox app[12345]: There was an error↵
Output Sample
{"sev":"CRITICAL","host":"mybox","timestamp":"2010-10-12 12:49:06","src":"app","msg":"kernel
message"}
{"sev":"INFO","host":"mybox","timestamp":"2010-10-12 12:49:06","src":"app","msg":"daemon -
info"}
{"sev":"ERROR","host":"mybox","timestamp":"2010-10-12 12:49:06","src":"app","msg":"daemon -
error"}
{"sev":"ERROR","host":"mybox","timestamp":"2010-10-12 13:19:11","src":"app","msg":"There was an
error"}
Example 585. Performing Additional Parsing in an xm_rewrite Module Instance
The following configuration performs exactly the same processing. In this case, however, the Syslog parsing is
moved into the xm_rewrite module instance so that the input module only needs to invoke the process()
procedure.
nxlog.conf (truncated)
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension rewrite>
6 Module xm_rewrite
7 Exec parse_syslog();
8 Keep EventTime, Severity, Hostname, SourceName, Message
9 Rename EventTime, timestamp
10 Rename Hostname, host
11 Rename SourceName, src
12 Rename Message, msg
13 Rename Severity, sev
14 Exec if $msg =~ /error/ $sev = 'ERROR';
15 </Extension>
16
17 <Extension json>
18 Module xm_json
19 </Extension>
20
21 <Input syslogfile>
22 Module im_file
23 File "modules/extension/rewrite/xm_rewrite.in"
24 Exec rewrite->process();
25 </Input>
26
27 <Output fileout>
28 Module om_file
29 [...]
Nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This method does the same as the core
log_info() procedure.
Nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This method does the same as the core
log_debug() procedure.
Nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This method does the same as the core
log_warning() procedure.
Nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This method does the same as the core
log_error() procedure.
class Nxlog.LogData
This class represents an event.
field_names()
This method returns an array with the names of all the fields currently in the event record.
get_field(name)
This method returns the value of the field name in the event.
set_field(name, value)
This method sets the value of field name to value.
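As a hedged sketch, the Nxlog.LogData methods above could be combined in a Ruby method invoked via ruby->call() from an Exec block; the method name and the field-name patterns are hypothetical:

```ruby
# Redact credential-like fields in an event. Intended to be called from
# NXLog as ruby->call('mask_secrets'); NXLog passes the event object,
# which responds to field_names, get_field, and set_field.
def mask_secrets(event)
  event.field_names.each do |name|
    value = event.get_field(name)
    # Redact any field whose name suggests it holds a credential
    if name =~ /passw|secret/i && value
      event.set_field(name, '***')
      Nxlog.log_info("Redacted field #{name}")
    end
  end
end
```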
See the list of installer packages that provide the xm_ruby module in the Available Modules chapter of the NXLog
User Guide.
120.28.1. Configuration
The xm_ruby module accepts the following directives in addition to the common module directives.
RubyCode
This mandatory directive expects a file containing valid Ruby code. Methods defined in this file can be called
with the ruby_call() procedure.
120.28.2. Procedures
The following procedures are exported by xm_ruby.
call(string subroutine);
Calls the Ruby method provided in the first argument.
ruby_call(string subroutine);
Calls the Ruby method provided in the first argument.
120.28.3. Examples
Example 586. Processing Logs With Ruby
In this example logs are parsed as Syslog, then the data is passed to a Ruby method which adds an
incrementing $AlertCounter field for any event with a normalized $SeverityValue of at least 4.
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension ruby>
6 Module xm_ruby
7 RubyCode ./modules/extension/ruby/processlogs2.rb
8 </Extension>
9
10 <Input in>
11 Module im_file
12 File 'test2.log'
13 <Exec>
14 parse_syslog();
15 ruby->call('add_alert_counter');
16 </Exec>
17 </Input>
processlogs2.rb
$counter = 0

def add_alert_counter(event)
  if event.get_field('SeverityValue') >= 4
    Nxlog.log_debug('Adding AlertCounter field')
    $counter += 1
    event.set_field('AlertCounter', $counter)
  end
end
Like the xm_syslog module, the xm_snmp module does not provide support for the network transport layer. Since
traps are sent primarily over UDP (typically to port 162), the im_udp module should be used together with this
module. This module registers an input reader function under the name "snmp" which can be used in the
InputType directive to parse UDP message payloads.
The module supports MIB definitions in order to resolve OID numbers to names. In addition to the standard
xm_snmp module fields and the im_udp module fields, each variable in the trap message will be available as an
internal NXLog field. If the OID cannot be resolved to a name, the string OID. will be prepended to the dotted
OID number representation in the field name. For example, if a trap contains a string variable with OID
1.3.6.1.4.1.311.1.13.1.9999.3.0, this field can be accessed as the NXLog field
$OID.1.3.6.1.4.1.311.1.13.1.9999.3.0. If the object identifier can be resolved to a name called FIELDNAME,
the value will be available in the NXLog field $SNMP.FIELDNAME. The SNMP trap variables are also put in the
$raw_event field and are listed as name="value" pairs there in the order they appear in the trap. The following
is an example of the contents of the $raw_event field (line breaks added):
2011-12-15 18:10:35 192.1.1.114 INFO \
OID.1.3.6.1.4.1.311.1.13.1.9999.1.0="test msg" \
OID.1.3.6.1.4.1.311.1.13.1.9999.2.0="Administrator" \
OID.1.3.6.1.4.1.311.1.13.1.9999.3.0="WIN-OUNNPISDHIG" \
OID.1.3.6.1.4.1.311.1.13.1.9999.4.0="1" \
OID.1.3.6.1.4.1.311.1.13.1.9999.5.0="0" \
OID.1.3.6.1.4.1.311.1.13.1.9999.6.0="test msg"
NOTE To convert the output to Syslog format, consider using one of the to_syslog() procedures provided by
the xm_syslog module. However, note that the resulting format will not be in accordance with RFC 5675.
Microsoft Windows can convert and forward EventLog messages as SNMPv1 traps. The evntwin utility can be
used to configure which events are sent as traps. See How to Generate SNMP traps from Windows Events for
more information about setting up this feature.
The Net-SNMP toolkit (available for Unix/Linux and Windows) provides the snmptrap command line utility which
can be used for sending test SNMP traps. Create the following MIB definition file and put it in a directory
specified by the MIBDir directive:
demo-trap TRAP-TYPE
STATUS current
ENTERPRISE demotraps
VARIABLES { sysLocation }
DESCRIPTION "This is just a demo"
::= 17
END
Here is an example for invoking the snmptrap utility (line break added):
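A plausible invocation, following the standard Net-SNMP v1 trap argument order (enterprise OID, agent address, generic trap, specific trap, uptime, then variable bindings); the community string, addresses, and sysLocation value here are hypothetical:

```shell
snmptrap -v 1 -c public 127.0.0.1 demotraps localhost 6 17 '' \
    SNMPv2-MIB::sysLocation.0 s "Just here"
```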
The received trap should look like this in the $raw_event field:
If the MIB definition cannot be loaded or parsed, the unresolved OID number will be seen in the message:
See the list of installer packages that provide the xm_snmp module in the Available Modules chapter of the NXLog
User Guide.
120.29.1. Configuration
The xm_snmp module accepts the following directives in addition to the common module directives.
AllowAuthenticatedOnly
This boolean directive specifies whether only authenticated SNMP v3 traps should be accepted. If set to TRUE,
the User block must also be defined, and unauthenticated SNMP traps are not accepted. The default is FALSE:
all SNMP traps are accepted.
MIBDir
This optional directive can be used to define a directory which contains MIB definition files. Multiple MIBDir
directives can be specified.
User
This directive is specified as a block (see Parsing Authenticated and Encrypted SNMP Traps) and provides the
authentication details for an SNMP v3 user. The block must be named with the corresponding user. This
block can be specified more than once to provide authentication details for multiple users.
AuthPasswd
This required directive specifies the authentication password.
AuthProto
This optional directive specifies the authentication protocol to use. Supported values are md5 and sha1. If
this directive is not specified, the default is md5.
EncryptPasswd
This directive specifies the encryption password to use for encrypted traps.
EncryptProto
This optional directive specifies the encryption protocol to use. Supported values are des and aes. The
default, if encryption is in use and this directive is not specified, is des.
120.29.2. Fields
The following fields are used by xm_snmp.
$SNMP.TrapCodeSpecific (type: integer)
A code value indicating an implementation-specific trap type. Available in SNMP v1 only.
120.29.3. Examples
Example 587. Using MIB Definitions to Parse SNMP Traps
The InputType snmp directive in the im_udp module block is required to parse the SNMP payload in the
UDP message.
nxlog.conf
1 <Extension snmp>
2 Module xm_snmp
3 MIBDir /usr/share/mibs/iana
4 MIBDir /usr/share/mibs/ietf
5 MIBDir /usr/share/mibs/site
6 </Extension>
7
8 <Input udp>
9 Module im_udp
10 Host 0.0.0.0
11 Port 162
12 InputType snmp
13 </Input>
Example 588. Parsing Authenticated and Encrypted SNMP Traps
This configuration parses SNMP v3 traps. Only authenticated traps are parsed; a warning is printed for each
non-authenticated source that sends a trap. The User block provides authentication and encryption
settings for the switch1 user.
nxlog.conf
1 <Extension snmp>
2 Module xm_snmp
3 MIBDir /usr/share/mibs/iana
4 MIBDir /usr/share/mibs/ietf
5 AllowAuthenticatedOnly TRUE
6 <User switch1>
7 AuthPasswd secret
8 AuthProto sha1
9 EncryptPasswd secret
10 EncryptProto aes
11 </User>
12 </Extension>
13
14 <Input udp>
15 Module im_udp
16 Host 0.0.0.0
17 Port 162
18 InputType snmp
19 </Input>
NOTE This module will be completely removed in a future release; please update your configuration files.
The older but still widespread BSD Syslog standard defines both the format and the transport protocol in RFC
3164. The transport protocol is UDP, but to provide reliability and security, this line-based format is also
commonly transferred over TCP and SSL. There is a newer standard defined in RFC 5424, also known as the IETF
Syslog format, which obsoletes the BSD Syslog format. This format overcomes most of the limitations of BSD
Syslog and allows multi-line messages and proper timestamps. The transport method is defined in RFC 5426 for
UDP and RFC 5425 for TLS/SSL.
Because the IETF Syslog format supports multi-line messages, RFC 5425 defines a special format to encapsulate
these by prepending the payload size in ASCII to the IETF Syslog message. Messages transferred in UDP packets
are self-contained and do not need this additional framing. The following input reader and output writer
functions are provided by the xm_syslog module to support this TLS transport defined in RFC 5425. While RFC
5425 explicitly defines that the TLS network transport protocol is to be used, pure TCP may be used if security is
not a requirement. Syslog messages can also be written to file with this framing format using these functions.
InputType Syslog_TLS
This input reader function parses the payload size and then reads the message according to this value. It is
required to support Syslog TLS transport defined in RFC 5425.
OutputType Syslog_TLS
This output writer function prepends the payload size to the message. It is required to support Syslog TLS
transport defined in RFC 5425.
NOTE The Syslog_TLS InputType/OutputType can work with any input/output, such as im_tcp or im_file, and
does not depend on SSL transport at all. The name Syslog_TLS was chosen to refer to the octet-framing
method described in RFC 5425 used for TLS transport.
NOTE The pm_transformer module can also parse and create BSD and IETF Syslog messages, but the
functions and procedures provided by this module make it possible to solve more complex tasks which
pm_transformer is not capable of on its own.
Structured data in IETF Syslog messages is parsed and put into NXLog fields. The SD-ID will be prepended to the
field name with a dot unless it is NXLOG@XXXX. Consider an IETF Syslog message with a structured data part
such as [exampleSDID@32473 eventID="1011" eventSource="Application"] (an illustrative sample). After
this IETF-formatted Syslog message is parsed with parse_syslog_ietf(), there will be two additional fields:
$exampleSDID.eventID and $exampleSDID.eventSource. When the SD-ID is NXLOG (for example,
[NXLOG@32473 eventID="1011" eventSource="Application"]), the field name will be the same as the
SD-PARAM name, and the two additional fields extracted are $eventID and $eventSource.
See the list of installer packages that provide the xm_syslog module in the Available Modules chapter of the
NXLog User Guide.
120.31.1. Configuration
The xm_syslog module accepts the following directives in addition to the common module directives.
IETFTimestampInGMT
This is an alias for the UTCTimestamp directive below.
ReplaceLineBreaks
This optional directive specifies a character with which to replace line breaks in the Syslog message when
generating Syslog events with to_syslog_bsd(), to_syslog_ietf(), and to_syslog_snare(). The default is a space. To
retain line breaks in Syslog messages, set this to \n.
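For example, to keep line breaks when generating Syslog output (a sketch):

```
<Extension syslog>
    Module             xm_syslog
    # Preserve multi-line messages instead of flattening them to one line
    ReplaceLineBreaks  \n
</Extension>
```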
SnareDelimiter
This optional directive takes a single character (see below) as argument. This character is used by the
to_syslog_snare() procedure to separate fields. If this directive is not specified, the default delimiter character
is the tab (\t). In later versions of Snare 4 this has changed to the hash mark (#); this directive can be used to
specify the alternative delimiter. Note that there is no delimiter after the last field.
SnareReplacement
This optional directive takes a single character (see below) as argument. This character is used by the
to_syslog_snare() procedure to replace occurrences of the delimiter character inside the $Message field. If this
directive is not specified, the default replacement character is the space.
UTCTimestamp
This optional boolean directive can be used to format the timestamps produced by to_syslog_ietf() in
UTC/GMT instead of local time. The default is FALSE: local time is used with a timezone indicator.
A single character can be specified in any of the following ways.

A printable character
A printable character can be given as-is:

Delimiter ;

Control characters
The following non-printable characters can be specified with escape sequences:
\a
audible alert (bell)
\b
backspace
\t
horizontal tab
\n
newline
\v
vertical tab
\f
formfeed
\r
carriage return

For example, to use the tab character:

Delimiter \t

A character in single quotes
A character can also be enclosed in single quotes; this is the way to specify the backslash:

Delimiter ';'
Delimiter '\'

A character in double quotes
Double quotes can be used like single quotes:

Delimiter "\"

A character in hex
A character can also be given by its hexadecimal code:

Delimiter 0x20
120.31.2. Functions
The following functions are exported by xm_syslog.
120.31.3. Procedures
The following procedures are exported by xm_syslog.
parse_syslog();
Parse the $raw_event field as either BSD Syslog (RFC 3164) or IETF Syslog (RFC 5424) format.
parse_syslog(string source);
Parse the given string as either BSD Syslog (RFC 3164) or IETF Syslog (RFC 5424) format.
parse_syslog_bsd();
Parse the $raw_event field as BSD Syslog (RFC 3164) format.
parse_syslog_bsd(string source);
Parse the given string as BSD Syslog (RFC 3164) format.
parse_syslog_ietf();
Parse the $raw_event field as IETF Syslog (RFC 5424) format.
parse_syslog_ietf(string source);
Parse the given string as IETF Syslog (RFC 5424) format.
to_syslog_bsd();
Create a BSD Syslog formatted log message in $raw_event from the fields of the event. The following fields
are used to construct the $raw_event field: $EventTime; $Hostname; $SourceName; $ProcessID or
$ExecutionProcessID; $Message or $raw_event; $SyslogSeverity, $SyslogSeverityValue, $Severity, or
$SeverityValue; and $SyslogFacility or $SyslogFacilityValue. If the fields are not present, a sensible default is
used.
to_syslog_ietf();
Create an IETF Syslog (RFC 5424) formatted log message in $raw_event from the fields of the event. The
following fields are used to construct the $raw_event field: $EventTime; $Hostname; $SourceName;
$ProcessID or $ExecutionProcessID; $Message or $raw_event; $SyslogSeverity, $SyslogSeverityValue,
$Severity, or $SeverityValue; and $SyslogFacility or $SyslogFacilityValue. If the fields are not present, a
sensible default is used.
to_syslog_snare();
Create a SNARE Syslog formatted log message in $raw_event. The following fields are used to construct the
$raw_event field: $EventTime, $Hostname, $SeverityValue, $FileName, $Channel, $SourceName,
$AccountName, $AccountType, $EventType, $Category, $RecordNumber, and $Message.
120.31.4. Fields
The following fields are used by xm_syslog.
In addition to the fields listed below, the parse_syslog() and parse_syslog_ietf() procedures will create fields from
the Structured Data part of an IETF Syslog message. If the SD-ID in this case is not "NXLOG", these fields will be
prefixed by the SD-ID (for example, $mySDID.CustomField).
$SeverityValue (type: integer)
The normalized severity number of the event, mapped as follows.
Syslog Severity    Normalized Severity
0/emerg            5/critical
1/alert            5/critical
2/crit             5/critical
3/err              4/error
4/warning          3/warning
5/notice           2/info
6/info             2/info
7/debug            1/debug
120.31.5. Examples
Example 589. Sending a File as BSD Syslog over UDP
In this example, logs are collected from files, converted to BSD Syslog format with the to_syslog_bsd()
procedure, and sent over UDP with the om_udp module.
nxlog.conf (truncated)
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input file>
6 Module im_file
7
8 # We monitor all files matching the wildcard.
9 # Every line is read into the $raw_event field.
10 File "/var/log/app*.log"
11
12 <Exec>
13 # Set the $EventTime field usually found in the logs by
14 # extracting it with a regexp. If this is not set, the current
15 # system time will be used which might be a little off.
16 if $raw_event =~ /(\d\d\d\d\-\d\d-\d\d \d\d:\d\d:\d\d)/
17 {
18 $EventTime = parsedate($1);
19 }
20
21 # Now set the severity to something custom. This defaults to
22 # 'INFO' if unset.
23 if $raw_event =~ /ERROR/ $Severity = 'ERROR';
24 else $Severity = 'INFO';
25
26 # The facility can be also set, otherwise the default value is
27 # 'USER'.
28 $SyslogFacility = 'AUDIT';
29 [...]
Example 590. Collecting BSD Style Syslog Messages over UDP
To collect BSD Syslog messages over UDP, use the parse_syslog_bsd() procedure coupled with the im_udp
module as in the following example.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input udp>
6 Module im_udp
7 Host 0.0.0.0
8 Port 514
9 Exec parse_syslog_bsd();
10 </Input>
11
12 <Output file>
13 Module om_file
14 File "/var/log/logmsg.txt"
15 </Output>
16
17 <Route syslog_to_file>
18 Path udp => file
19 </Route>
Example 591. Collecting IETF Syslog Messages over UDP
To collect IETF Syslog messages over UDP as defined by RFC 5424 and RFC 5426, use the parse_syslog_ietf()
procedure coupled with the im_udp module as in the following example. Note that, as for BSD Syslog, the
default port is 514 (as defined by RFC 5426).
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input ietf>
6 Module im_udp
7 Host 0.0.0.0
8 Port 514
9 Exec parse_syslog_ietf();
10 </Input>
11
12 <Output file>
13 Module om_file
14 File "/var/log/logmsg.txt"
15 </Output>
16
17 <Route ietf_to_file>
18 Path ietf => file
19 </Route>
Example 592. Collecting Both IETF and BSD Syslog Messages over the Same UDP Port
To collect both IETF and BSD Syslog messages over UDP, use the parse_syslog() procedure coupled with the
im_udp module as in the following example. This procedure is capable of detecting and parsing both Syslog
formats. Since 514 is the default UDP port number for both BSD and IETF Syslog, this port can be useful to
collect both formats simultaneously. To accept both formats on different ports, the appropriate parsers can
be used as in the previous two examples.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input udp>
6 Module im_udp
7 Host 0.0.0.0
8 Port 514
9 Exec parse_syslog();
10 </Input>
11
12 <Output file>
13 Module om_file
14 File "/var/log/logmsg.txt"
15 </Output>
16
17 <Route syslog_to_file>
18 Path udp => file
19 </Route>
Example 593. Collecting IETF Syslog Messages over TLS/SSL
To collect IETF Syslog messages over TLS/SSL as defined by RFC 5424 and RFC 5425, use the
parse_syslog_ietf() procedure coupled with the im_ssl module as in this example. Note that the default port
is 6514 in this case (as defined by RFC 5425). The payload format parser is handled by the Syslog_TLS input
reader.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input ssl>
6 Module im_ssl
7 Host localhost
8 Port 6514
9 CAFile %CERTDIR%/ca.pem
10 CertFile %CERTDIR%/client-cert.pem
11 CertKeyFile %CERTDIR%/client-key.pem
12 KeyPass secret
13 InputType Syslog_TLS
14 Exec parse_syslog_ietf();
15 </Input>
16
17 <Output file>
18 Module om_file
19 File "/var/log/logmsg.txt"
20 </Output>
21
22 <Route ssl_to_file>
23 Path ssl => file
24 </Route>
Example 594. Forwarding IETF Syslog over TCP
The following configuration uses the to_syslog_ietf() procedure to convert input to IETF Syslog and forward
it over TCP.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input file>
6 Module im_file
7 File "/var/log/input.txt"
8 Exec $TestField = "test value"; $Message = $raw_event;
9 </Input>
10
11 <Output tcp>
12 Module om_tcp
13 Host 127.0.0.1
14 Port 1514
15 Exec to_syslog_ietf();
16 OutputType Syslog_TLS
17 </Output>
18
19 <Route file_to_syslog>
20 Path file => tcp
21 </Route>
Because of the Syslog_TLS framing, the raw data sent over TCP will look like the following.
Output Sample
130 <13>1 2012-01-01T16:15:52.873750Z - - - [NXLOG@14506 EventReceivedTime="2012-01-01
17:15:52" TestField="test value"] test message↵
This example shows that all fields—except those which are filled by the Syslog parser—are added to the
structured data part.
Example 595. Conditional Rewrite of the Syslog Severity—Version 1
If the message part of the Syslog event matches the regular expression, the $SeverityValue field will be
set to the "error" Syslog severity integer value (which is provided by the syslog_severity_value() function).
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input udp>
6 Module im_udp
7 Port 514
8 Host 0.0.0.0
9 Exec parse_syslog_bsd();
10 </Input>
11
12 <Output file>
13 Module om_file
14 File "/var/log/logmsg.txt"
15 Exec if $Message =~ /error/ $SeverityValue = syslog_severity_value("error");
16 Exec to_syslog_bsd();
17 </Output>
18
19 <Route syslog_to_file>
20 Path udp => file
21 </Route>
Example 596. Conditional Rewrite of the Syslog Severity—Version 2
The following example does almost the same thing as the previous example, except that the Syslog parsing
and rewrite are moved to a processor module, and the message is only regenerated if the severity was
modified. This can make processing faster on multi-core systems because the processor module runs in a
separate thread. This method can also minimize UDP packet loss because the input module does not need to
parse Syslog messages and can therefore process UDP packets faster.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input udp>
6 Module im_udp
7 Host 0.0.0.0
8 Port 514
9 </Input>
10
11 <Processor rewrite>
12 Module pm_null
13 <Exec>
14 parse_syslog_bsd();
15 if $Message =~ /error/
16 {
17 $SeverityValue = syslog_severity_value("error");
18 to_syslog_bsd();
19 }
20 </Exec>
21 </Processor>
22
23 <Output file>
24 Module om_file
25 File "/var/log/logmsg.txt"
26 </Output>
27
28 <Route syslog_to_file>
29 Path udp => rewrite => file
30 </Route>
A common W3C log source is Microsoft IIS, which produces output like the following:
#Software: Microsoft Internet Information Services 7.0↵
#Version: 1.0↵
#Date: 2010-02-13 07:08:22↵
#Fields: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-
username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-
win32-status sc-bytes cs-bytes time-taken↵
2010-02-13 07:08:21 W3SVC76 DNP1WEB1 174.120.30.2 GET / - 80 - 61.135.169.37 HTTP/1.1
Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+zh-CN;+rv:1.9.0.1)+Gecko/2008070208+Firefox/3.0.1 -
http://www.baidu.com/s?wd=QQ www.domain.com 200 0 0 29554 273 1452↵
2010-02-13 07:25:00 W3SVC76 DNP1WEB1 174.120.30.2 GET /index.htm - 80 - 119.63.198.110 HTTP/1.1
Baiduspider+(+http://www.baidu.jp/spider/) - - www.itcsoftware.com 200 0 0 17791 210 551↵
The format generated by BRO is similar, as it too defines the field names in the header. The field types and
separator characters are also specified in the header. This allows the parser to automatically process the data.
Below is a sample from BRO:
#separator \x09↵
#set_separator ,↵
#empty_field (empty)↵
#unset_field -↵
#path dns↵
#open 2013-04-09-21-01-43↵
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto
trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA
TC RD↵
RA Z answers TTLs↵
#types time string addr port addr port enum count string count string
count string count string bool bool bool bool count vector[string]
vector[interval]↵
1210953058.350065 m2EJRWK7sCg 192.168.2.16 1920 192.168.2.1 53 udp
16995 ipv6.google.com 1 C_INTERNET 28 AAAA 0 NOERROR F F T
T 0↵
ipv6.l.google.com,2001:4860:0:2001::68 8655.000000,300.000000↵
1210953058.350065 m2EJRWK7sCg 192.168.2.16 1920 192.168.2.1 53 udp
16995 ipv6.google.com 1 C_INTERNET 28 AAAA 0 NOERROR F F T
T 0↵
ipv6.l.google.com,2001:4860:0:2001::68 8655.000000,300.000000↵
To use the parser in an input module, the InputType directive must reference the instance name of the xm_w3c
module. See the example below.
See the list of installer packages that provide the xm_w3c module in the Available Modules chapter of the NXLog
User Guide.
120.32.1. Configuration
The xm_w3c module accepts the following directives in addition to the common module directives.
Delimiter
This optional directive takes a single character (see below) as argument to specify the delimiter character
used to separate fields. If this directive is not specified, the default delimiter character is either the space or
tab character, as detected. For Microsoft Exchange Message Tracking logs the comma must be set as the
delimiter:
Delimiter ,
Note that there is no delimiter after the last field in W3C, but Microsoft Exchange Message Tracking logs can
contain a trailing comma.
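A sketch of an xm_w3c instance configured for Exchange Message Tracking logs (the log file path is hypothetical):

```
<Extension tracking_parser>
    Module     xm_w3c
    # Exchange Message Tracking logs are comma-delimited
    Delimiter  ,
</Extension>

<Input tracking>
    Module     im_file
    File       'C:\logs\MSGTRK*.LOG'
    InputType  tracking_parser
</Input>
```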
FieldType
This optional directive can be used to specify a field type for a particular field. For example, to parse a
ByteSent field as an integer, use FieldType ByteSent integer. This directive can be used more than once
to provide types for multiple fields.
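For example, to force two numeric W3C fields to be parsed as integers (the field names here are illustrative):

```
<Extension w3c>
    Module     xm_w3c
    # Parse these W3C fields as integers rather than strings
    FieldType  sc-bytes integer
    FieldType  time-taken integer
</Extension>
```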
A single character can be specified in any of the following ways.

A printable character
A printable character can be given as-is:

Delimiter ;

Control characters
The following non-printable characters can be specified with escape sequences:
\a
audible alert (bell)
\b
backspace
\t
horizontal tab
\n
newline
\v
vertical tab
\f
formfeed
\r
carriage return

For example, to use the tab character:

Delimiter \t

A character in single quotes
A character can also be enclosed in single quotes; this is the way to specify the backslash:

Delimiter ';'
Delimiter '\'

A character in double quotes
Double quotes can be used like single quotes:

Delimiter "\"

A character in hex
A character can also be given by its hexadecimal code:

Delimiter 0x20
120.32.2. Fields
The following fields are used by xm_w3c.
120.32.3. Examples
Example 597. Parsing Advanced IIS Logs
The following configuration parses logs from the IIS Advanced Logging Module using the pipe delimiter. The
logs are converted to JSON.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Extension w3cinput>
6 Module xm_w3c
7 Delimiter |
8 </Extension>
9
10 <Input w3c>
11 Module im_file
12 File 'C:\inetpub\logs\LogFiles\W3SVC\ex*.log'
13 InputType w3cinput
14 </Input>
15
16 <Output file>
17 Module om_file
18 File 'C:\test\IIS.json'
19 Exec to_json();
20 </Output>
21
22 <Route w3c_to_json>
23 Path w3c => file
24 </Route>
See the list of installer packages that provide the xm_wtmp module in the Available Modules chapter of the
NXLog User Guide.
120.33.1. Configuration
The xm_wtmp module accepts only the common module directives.
120.33.2. Examples
Example 598. WTMP to JSON Format Conversion
nxlog.conf
1 <Extension wtmp>
2 Module xm_wtmp
3 </Extension>
4
5 <Extension json>
6 Module xm_json
7 </Extension>
8
9 <Input in>
10 Module im_file
11 File '/var/log/wtmp'
12 InputType wtmp
13 Exec to_json();
14 </Input>
15
16 <Output out>
17 Module om_file
18 File '/var/log/wtmp.txt'
19 </Output>
20
21 <Route processwtmp>
22 Path in => out
23 </Route>
Output Sample
{
"EventTime":"2013-10-01 09:39:59",
"AccountName":"root",
"Device":"pts/1",
"LoginType":"login",
"EventReceivedTime":"2013-10-10 15:40:20",
"SourceModuleName":"input",
"SourceModuleType":"im_file"
}
{
"EventTime":"2013-10-01 23:23:38",
"AccountName":"shutdown",
"Device":"no device",
"LoginType":"shutdown",
"EventReceivedTime":"2013-10-11 10:58:00",
"SourceModuleName":"input",
"SourceModuleType":"im_file"
}
See the list of installer packages that provide the xm_xml module in the Available Modules chapter of the NXLog
User Guide.
120.34.1. Configuration
The xm_xml module accepts the following directives in addition to the common module directives.
IgnoreRootTag
This optional boolean directive causes parse_xml() to omit the root tag when setting field names. For
example, when this is set to TRUE and the RootTag is set to Event, a field might be named $timestamp. With
this directive set to FALSE, that field name would be $Event.timestamp. The default value is TRUE.
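As an illustrative sketch (not taken from the original manual), keeping the root-tag prefix on parsed fields could be configured like this:

```
<Extension xml>
    Module        xm_xml
    RootTag       Event
    IgnoreRootTag FALSE    # parsed fields become $Event.<name>
</Extension>
```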
IncludeHiddenFields
This boolean directive specifies whether the to_xml() function and the to_xml() procedure should include
fields having a leading underscore (_) in their names. The default is TRUE. If IncludeHiddenFields is set to
TRUE, the generated XML will contain these otherwise excluded fields.
Note that a leading dot (.) is not allowed in XML attribute names, thus field names having a leading dot (.) will
always be excluded from XML output.
ParseAttributes
When this optional boolean directive is set to TRUE, parse_xml() will also parse XML attributes. The default is
FALSE (attributes are not parsed). For example, if ParseAttributes is set to TRUE, the following would be
parsed into $Msg.time, $Msg.type, and $Msg:
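The XML sample referenced above is missing from this extract; an element of roughly the following shape (a hypothetical reconstruction) would produce those fields:

```xml
<Msg time="2012-03-08 15:05:39" type="INFO">Connection accepted</Msg>
```

With ParseAttributes set to TRUE, the time and type attributes would become $Msg.time and $Msg.type, and the element text would become $Msg.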
RootTag
This optional directive can be used to specify the name of the root tag that will be used by to_xml() to
generate XML. The default RootTag is Event.
PrefixWinEvent
When this optional boolean directive is set to TRUE, parse_windows_eventlog_xml() will create EventData.
prefixed fields from the <EventData> section of the event XML and UserData. prefixed fields from the
<UserData> section. The default is FALSE.
120.34.2. Functions
The following functions are exported by xm_xml.
string to_xml()
Converts the fields to XML and returns the result as a string value. The $raw_event field and any field having a
leading dot (.) or underscore (_) will be automatically excluded.
Note that the IncludeHiddenFields directive affects which fields are included in the output.
120.34.3. Procedures
The following procedures are exported by xm_xml.
parse_windows_eventlog_xml();
Parse the $raw_event field as Windows EventLog XML input.
parse_windows_eventlog_xml(string source);
Parse the given string as Windows EventLog XML.
parse_xml();
Parse the $raw_event field as XML input.
parse_xml(string source);
Parse the given string as XML format.
to_xml();
Converts the fields to XML and puts the result into the $raw_event field. The $raw_event field and any field
having a leading dot (.) or underscore (_) will be automatically excluded.
Note that the IncludeHiddenFields directive affects which fields are included in the output.
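While the examples below demonstrate to_xml(), a minimal input-side sketch for parse_xml() might look like this (the log file path is hypothetical, assuming one XML event per line):

```
<Input in>
    Module  im_file
    File    '/var/log/app.xml'   # hypothetical XML source
    Exec    parse_xml();
</Input>
```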
120.34.4. Examples
Example 599. Syslog to XML Format Conversion
The following configuration accepts Syslog (both BSD and IETF) and converts it to XML.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension xml>
6 Module xm_xml
7 </Extension>
8
9 <Input tcp>
10 Module im_tcp
11 Port 1514
12 Host 0.0.0.0
13 Exec parse_syslog(); to_xml();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "/var/log/log.xml"
19 </Output>
20
21 <Route tcp_to_file>
22 Path tcp => file
23 </Route>
Input Sample
<30>Sep 30 15:45:43 host44.localdomain.hu acpid: 1 client rule loaded↵
Output Sample
<Event>
<MessageSourceAddress>127.0.0.1</MessageSourceAddress>
<EventReceivedTime>2012-03-08 15:05:39</EventReceivedTime>
<SyslogFacilityValue>3</SyslogFacilityValue>
<SyslogFacility>DAEMON</SyslogFacility>
<SyslogSeverityValue>6</SyslogSeverityValue>
<SyslogSeverity>INFO</SyslogSeverity>
<SeverityValue>2</SeverityValue>
<Severity>INFO</Severity>
<Hostname>host44.localdomain.hu</Hostname>
<EventTime>2012-09-30 15:45:43</EventTime>
<SourceName>acpid</SourceName>
<Message>1 client rule loaded</Message>
</Event>
Example 600. Converting Windows EventLog to Syslog-Encapsulated XML
The following configuration reads the Windows EventLog and converts it to the BSD Syslog format where
the message part contains the fields in XML.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension xml>
6 Module xm_xml
7 </Extension>
8
9 <Input eventlog>
10 Module im_msvistalog
11 Exec $Message = to_xml(); to_syslog_bsd();
12 </Input>
13
14 <Output tcp>
15 Module om_tcp
16 Host 192.168.1.1
17 Port 1514
18 </Output>
19
20 <Route eventlog_to_tcp>
21 Path eventlog => tcp
22 </Route>
Output Sample
<14>Mar 8 15:12:12 WIN-OUNNPISDHIG Service_Control_Manager: <Event><EventTime>2012-03-08
15:12:12</EventTime><EventTimeWritten>2012-03-08 15:12:12</EventTimeWritten><Hostname>WIN-
OUNNPISDHIG</Hostname><EventType>INFO</EventType><SeverityValue>2</SeverityValue><Severity>INFO
</Severity><SourceName>Service Control
Manager</SourceName><FileName>System</FileName><EventID>7036</EventID><CategoryNumber>0</Catego
ryNumber><RecordNumber>6791</RecordNumber><Message>The nxlog service entered the running state.
</Message><EventReceivedTime>2012-03-08 15:12:14</EventReceivedTime></Event>↵
120.35.1. Configuration
The xm_zlib module accepts the following directives in addition to the common module directives.
Format
This optional directive defines the algorithm for compressing and decompressing log data. The available
values are gzip and zlib; the default value is gzip.
CompressionLevel
This optional directive defines the level of compression and ranges between 0 and 9: 0 means the lowest level
of compression with the highest performance, and 9 means the highest level of compression with the lowest
performance. If this directive is not specified, the compression level is set to the default of the zlib library,
which is usually 6.
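The trade-off between levels can be illustrated with the zlib library directly, independent of NXLog (a Python sketch):

```python
import zlib

data = b"log line with repeated content " * 1000

fast = zlib.compress(data, 1)  # lowest compression level, fastest
best = zlib.compress(data, 9)  # highest compression level, slowest

# For repetitive input like this, level 9 produces output no larger than
# level 1, and both round-trip back to the original data.
print(len(fast) >= len(best))
print(zlib.decompress(best) == data)
```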
CompBufSize
This optional directive defines the size in bytes of the compression memory buffer. The minimum is 8192
bytes. The default value is 16384.
DecompBufSize
This optional directive defines the size in bytes of the decompression memory buffer. The minimum is
16384 bytes. The default value is 32768.
DataType
This optional directive defines the data type used by the compress stream processor. Specifying the data type
improves compression results. The available values are unknown, text, and binary; the default value is
unknown.
MemoryLevel
This optional directive defines the amount of memory available for compression and accepts values between 1
and 9. The default value is 8.
compress
This stream processor compresses log data and is specified in the OutputType directive after the output
writer function. The result is similar to running the following command:
decompress
This stream processor decompresses log data and is specified in the InputType directive before the input
reader function. The result is similar to running the following command:
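The commands referred to above are not included in this extract. Assuming the default gzip format, the two stream processors behave roughly like the gzip command-line tool (an illustrative sketch; the file paths are hypothetical):

```shell
# Round-trip a sample file the way compress/decompress would process a stream
printf 'line1\nline2\n' > /tmp/sample.log
gzip -c  /tmp/sample.log    > /tmp/sample.log.gz   # analogous to zlib.compress
gzip -cd /tmp/sample.log.gz > /tmp/sample.out      # analogous to zlib.decompress
cmp -s /tmp/sample.log /tmp/sample.out && echo roundtrip-ok
```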
120.35.3. Examples
The examples below describe various ways for processing logs with the xm_zlib module.
Example 601. Compression of Logs
The configuration below utilizes the im_systemd module to read Systemd messages and convert them to
JSON using the to_json() procedure of the xm_json module. The JSON-formatted messages are then
compressed using the compress stream processor. The result is saved to a file.
nxlog.conf
1 <Extension zlib>
2 Module xm_zlib
3 Format gzip
4 CompressionLevel 9
5 CompBufSize 16384
 6   DecompBufSize 16384
7 </Extension>
8
9 <Extension _json>
10 Module xm_json
11 </Extension>
12
13 <Input in>
14 Module im_systemd
15 Exec to_json();
16 </Input>
17
18 <Output out>
19 Module om_file
20 OutputType LineBased, zlib.compress
21 File '/tmp/output'
22 </Output>
The following configuration uses the decompress stream processor to process gzip-compressed messages
at the input. The result is saved to a file.
nxlog.conf
1 <Extension zlib>
2 Module xm_zlib
3 Format gzip
4 CompressionLevel 9
5 CompBufSize 16384
 6   DecompBufSize 16384
7 </Extension>
8
9 <Input in>
10 Module im_file
11 File '/tmp/input'
12 InputType zlib.decompress, LineBased
13 </Input>
14
15 <Output out>
16 Module om_file
17 File '/tmp/output'
18 </Output>
The xm_zlib module can process data via a single or multiple instances.
Multiple instances provide flexibility because each instance can be customized for a specific scenario, while using
a single instance makes the configuration shorter.
The configuration below uses the zlib1 instance of the xm_zlib module to decompress gzip-compressed data
at the input. After that, messages are converted to JSON using the xm_json module. The JSON data is then
compressed to a zlib format using the zlib2 instance of the xm_zlib module. The result is saved to a file.
nxlog.conf
1 <Extension zlib1>
2 Module xm_zlib
3 Format gzip
4 CompressionLevel 9
5 CompBufSize 16384
6 DecompBufSize 16384
7 </Extension>
8
9 <Extension zlib2>
10 Module xm_zlib
11 Format zlib
12 CompressionLevel 3
13 CompBufSize 64000
14 DecompBufSize 64000
15 </Extension>
16
17 <Extension _json>
18 Module xm_json
19 </Extension>
20
21 <Input in>
22 Module im_file
23 File '/tmp/input'
24 InputType zlib1.decompress, LineBased
25 Exec to_json();
26 </Input>
27
28 <Output out>
29 Module om_file
30   File '/tmp/output'
31 OutputType LineBased, zlib2.compress
32 </Output>
Example 604. Processing Data With a Single Module Instance
The configuration below uses a single zlib1 module instance to decompress gzip-compressed messages via
the decompress stream processor and convert them to IETF Syslog format via the to_syslog_ietf() procedure
in the Exec directive. It then compresses logs using the compress processor. The result is saved to a file.
nxlog.conf
1 <Extension zlib1>
2 Module xm_zlib
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input in>
10 Module im_file
11 File '/tmp/input'
12 InputType zlib1.decompress, LineBased
13 Exec to_syslog_ietf();
14 </Input>
15
16 <Output out>
17 Module om_file
18   File '/tmp/output'
19 OutputType LineBased, zlib1.compress
20 </Output>
The InputType and OutputType directives allow multiple stream processors to be used sequentially to create
workflows. For example, the xm_zlib module functionality can be combined with the xm_crypto module to
provide both compression and encryption of logs.
When configuring stream processors, compression should always precede encryption. In the reverse process,
decryption should always occur before decompression.
Example 605. Processing Data With Various Stream Processors
The configuration below uses the aes_decrypt stream processor of the xm_crypto module to decrypt, and
the decompress stream processor of the xm_zlib module to decompress log data. Using the Exec directive,
messages with the stdout string in their body are selected. The selected messages are then compressed and
encrypted with the compress and aes_encrypt stream processors. The result is saved to a file.
nxlog.conf
1 <Extension zlib>
2 Module xm_zlib
3 Format gzip
4 CompressionLevel 9
5 CompBufSize 16384
 6   DecompBufSize 16384
7 </Extension>
8
9 <Extension crypto>
10 Module xm_crypto
11 UseSalt TRUE
12 PasswordFile /tmp/passwordfile
13 </Extension>
14
15 <Input in>
16 Module im_file
17 File '/tmp/input'
18 InputType crypto.aes_decrypt, zlib.decompress, LineBased
19 Exec if not ($raw_event =~ /stdout/) drop();
20 </Input>
21
22 <Output out>
23 Module om_file
24   File '/tmp/output'
25 OutputType LineBased, zlib.compress, crypto.aes_encrypt
26 </Output>
Chapter 121. Input Modules
Input modules are responsible for collecting event log data from various sources.
Each module provides a set of fields for each log message; these are documented in the corresponding sections
below. The NXLog core creates a set of core fields which are available to each module.
See the list of installer packages that provide the im_acct module in the Available Modules chapter of the NXLog
User Guide.
121.1.1. Configuration
The im_acct module accepts the following directives in addition to the common module directives.
AcctOff
This boolean directive specifies that accounting should be disabled when im_acct stops. If AcctOff is set to
FALSE, accounting will not be disabled; events will continue to be written to the log file for NXLog to collect
later. The default is FALSE.
AcctOn
This boolean directive specifies that accounting should be enabled when im_acct starts. If AcctOn is set to
FALSE, accounting will not be enabled automatically. The default is TRUE.
File
This optional directive specifies the path where the kernel writes accounting data.
FileSizeLimit
NXLog will automatically truncate the log file when it reaches this size, specified as an integer in bytes (see
Integer). The default is 1 MB.
121.1.2. Fields
The following fields are used by im_acct.
The process start time.
$XSIGFlag (type: boolean)
Set to TRUE if an XSIG flag is associated with the process event (killed by a signal).
121.1.3. Examples
Example 606. Collecting Process Accounting Logs
With this configuration, the im_acct module will collect process accounting logs. Process accounting will be
automatically enabled and configured to write logs to the file specified. NXLog will allow the file to grow to a
maximum size of 10 MB before truncating it.
nxlog.conf
1 <Input acct>
2 Module im_acct
3 File '/var/log/acct.log'
4 FileSizeLimit 10M
5 </Input>
See the list of installer packages that provide the im_aixaudit module in the Available Modules chapter of the
NXLog User Guide.
121.2.1. Configuration
The im_aixaudit module accepts the following directives in addition to the common module directives.
DeviceFile
This optional directive specifies the device file from which to read audit events. If this is not specified, it
defaults to /dev/audit.
EventsConfigFile
This optional directive contains the path to the file with a list of audit events. This file should contain events in
AuditEvent = FormatCommand format. The AuditEvent is a reference to the audit object which is defined
under the /etc/security/audit/objects path. The FormatCommand defines the auditpr output for the
object. For more information, see the The Audit Subsystem in AIX section on the IBM website.
121.2.2. Fields
See the xm_aixaudit Fields.
121.2.3. Examples
Example 607. Reading AIX Audit Events From the Kernel
This configuration reads AIX audit events directly from the kernel via the (default) /dev/audit device file.
nxlog.conf
1 <Input in>
2 Module im_aixaudit
3 DeviceFile /dev/audit
4 </Input>
See the list of installer packages that provide the im_azure module in the Available Modules chapter of the NXLog
User Guide.
1. After logging in to the Portal, click New on the left panel, select the Storage category, and choose the
Storage account - blob, file, table, queue.
2. Create the new storage account. Provide a storage name, location, and replication type.
3. Click [ Create Storage Account ] and wait for storage setup to complete.
4. Go to Apps, select the application for which to enable logging, and click Configure.
5. Scroll down to the application diagnostic section and configure the table and blob storage options
corresponding with the storage account created above.
6. Confirm the changes by clicking Save, then restart the service. Note that it may take a while for Azure to
create the table and/or blob in the storage.
121.3.2. Configuration
The im_azure module accepts the following directives in addition to the common module directives. The AuthKey
and StorageName directives are required, along with either BlobName or TableName.
AuthKey
This mandatory directive specifies the authentication key to use for connecting to Azure.
BlobName
This directive specifies the storage blob to connect to. One of BlobName and TableName must be defined
(but not both).
SSLCompression
This Boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).
NOTE: Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and
may not support the zlib compression mechanism. The module will emit a warning on startup if the
compression support is missing. The generic deb/rpm packages are bundled with a zlib-enabled
libssl library.
StorageName
This mandatory directive specifies the name of the storage account from which to collect logs.
TableName
This directive specifies the storage table to connect to. One of BlobName and TableName must be defined
(but not both).
Address
This directive specifies the URL for connecting to the storage account and corresponding table or blob. If this
directive is not specified, it defaults to http://<table|blob>.<storagename>.core.windows.net. If
defined, the value must start with http:// or https://.
HTTPSAllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE the remote will be able to connect with an unknown or self-signed certificate. The default value
is FALSE: all HTTPS connections must present a trusted certificate.
HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS client. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.
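The OpenSSL hashed filename format can be produced with the openssl tool. The sketch below generates a throwaway CA certificate and links it under its hash; the paths and subject name are hypothetical:

```shell
mkdir -p /tmp/cadir
# Create a self-signed demonstration certificate
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/ca.key -out /tmp/ca.pem -days 1 2>/dev/null
# Link it under the OpenSSL hashed name (<hash>.0) that a CA directory expects
hash=$(openssl x509 -in /tmp/ca.pem -noout -hash)
ln -sf /tmp/ca.pem "/tmp/cadir/${hash}.0"
ls /tmp/cadir
```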
HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS client. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.
HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.
HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.
HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.
HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are
automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.
HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS client. The certificate filenames in this directory must be in
the OpenSSL hashed format.
HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS client.
HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.
HTTPSRequireCert
This boolean directive specifies that the remote HTTPS client must present a certificate. If set to TRUE and
there is no certificate presented during the connection handshake, the connection will be refused. The
default value is TRUE: each connection must use a certificate.
HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
PollInterval
This directive specifies how frequently the module will check for new events, in seconds. If this directive is not
specified, it defaults to 1 second. Fractional seconds may be specified (PollInterval 0.5 will check twice
every second).
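Putting the required directives together, a minimal input sketch might look like the following (the storage name, key placeholder, and table name are hypothetical):

```
<Input azure>
    Module       im_azure
    StorageName  mystorage
    AuthKey      <base64-encoded-key>
    TableName    WADLogsTable
    PollInterval 5
</Input>
```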
121.3.3. Fields
The following fields are used by im_azure.
Azure Severity    Normalized Severity
Critical          5/CRITICAL
Warning           3/WARNING
Information       2/INFO
Verbose           1/DEBUG
121.4. Batched Compression (im_batchcompress)
The im_batchcompress module provides a compressed network transport with optional SSL encryption. It uses its
own protocol to receive and decompress a batch of messages sent by om_batchcompress.
See the list of installer packages that provide the im_batchcompress module in the Available Modules chapter of
the NXLog User Guide.
121.4.1. Configuration
The im_batchcompress module accepts the following directives in addition to the common module directives.
ListenAddr
The module will accept connections on this IP address or a DNS hostname. The default is localhost. Add the
port number to listen on to the end of a host using a colon as a separator (host:port).
Port
The module instance will listen on this port for incoming connections. The default is port 2514.
IMPORTANT: The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the
port in ListenAddr instead.
AllowUntrusted
This boolean directive specifies whether the remote connection should be allowed without certificate
verification. If set to TRUE the remote will be able to connect with an unknown or self-signed certificate. The
default value is FALSE: by default, all connections must present a trusted certificate.
CADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote socket. The certificate filenames in this directory must be in the OpenSSL
hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including
a copy of the certificate in this directory.
CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote socket. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.
CAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are automatically removed.
This directive is only supported on Windows. This directive and the CADir and CAFile directives are mutually
exclusive.
CertFile
This specifies the path of the certificate file to be used for the SSL handshake.
CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.
CertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are
automatically removed. This directive is only supported on Windows. This directive and the CertFile and
CertKeyFile directives are mutually exclusive.
CRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote socket. The filenames in this directory must be in the OpenSSL
hashed format.
CRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote socket.
KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.
RequireCert
This boolean directive specifies that the remote must present a certificate. If set to TRUE and there is no
certificate presented during the connection handshake, the connection will be refused. The default value is
TRUE: by default, each connection must use a certificate.
SSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
SSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the SSLProtocol directive is
set to TLSv1.3. Use the same format as in the SSLCipher directive.
SSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions
may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
121.4.2. Fields
The following fields are used by im_batchcompress.
121.4.3. Examples
Example 608. Reading Batch Compressed Data
This configuration listens on port 2514 for incoming log batches and writes them to file.
nxlog.conf
1 <Input batchcompress>
2 Module im_batchcompress
 3   ListenAddr 0.0.0.0:2514
 4   # the port is included in ListenAddr
5 </Input>
6
7 # old syntax
8 #<Input batchcompress>
9 # Module im_batchcompress
10 # ListenAddr 0.0.0.0
11 # Port 2514
12 #</Input>
13
14 <Output file>
15 Module om_file
16   File "/tmp/output"
17 </Output>
18
19 <Route batchcompress_to_file>
20 Path batchcompress => file
21 </Route>
The BSM /dev/auditpipe device file is available on FreeBSD and macOS. On Solaris, the device file is not
available and the log files must be read and parsed with im_file and xm_bsm as shown in the example.
See the list of installer packages that provide the im_bsm module in the Available Modules chapter of the NXLog
User Guide.
121.5.1. Setup
For information about setting up BSM Auditing, see the xm_bsm Setup section.
121.5.2. Configuration
The im_bsm module accepts the following directives in addition to the common module directives.
DeviceFile
This optional directive specifies the device file from which to read BSM events. If this is not specified, it
defaults to /dev/auditpipe.
EventFile
This optional directive can be used to specify the path to the audit event database containing a mapping
between event names and numeric identifiers. The default location is /etc/security/audit_event which is
used when the directive is not specified.
121.5.3. Fields
See the xm_bsm Fields.
121.5.4. Examples
Example 609. Reading BSM Audit Events From the Kernel
This configuration reads BSM audit events directly from the kernel via the (default) /dev/auditpipe device
file (which is not available on Solaris, see the xm_bsm example instead).
nxlog.conf
1 <Input in>
2 Module im_bsm
3 DeviceFile /dev/auditpipe
4 </Input>
NOTE: The OPSEC SDK provides libraries only in 32-bit versions and this makes it impossible to compile
a 64-bit application. For this reason the im_checkpoint module uses a helper program called
nx-im-checkpoint. This helper is responsible for collecting the logs and transmitting them over a
pipe to the im_checkpoint module.
CheckPoint uses a certificate export method with an activation password so that certificate keys can be securely
transferred over the network in order to establish trust relationships between the entities involved when using
SSL-based authenticated connections. The following entities (hosts) are involved in the log generation and
collection process:
SmartDashboard
The firewall administrator can use the SmartDashboard management interface to connect to and manage the
firewall.
NXLog
The log collector running NXLog which connects to SPLAT over the OPSEC LEA protocol utilizing the
im_checkpoint module.
The following steps are required to configure the LEA connection between SPLAT and NXLog.
1. Enable the LEA service on SPLAT. Log in to SPLAT, enter expert mode, and run vi
$FWDIR/conf/fwopsec.conf. Make sure the file contains the following lines. Then restart the firewall with
the cprestart command (or cpstop and cpstart).
fwopsec.conf
lea_server auth_port 18184
lea_server auth_type sslca
2. Make sure SPLAT accepts ICA pull requests and the LEA connection (port 18184), and can generate logs. For
testing purposes, it is easiest to create a single rule to accept all connections and log them. For this, the
SmartDashboard host must be added as a GUI Client on SPLAT and a user needs to be configured to be able
to log on to SPLAT remotely from SmartDashboard.
3. Create the certificates for NXLog in SmartDashboard. Select Manage › Servers › OPSEC Applications, then
click [ New ] and select OPSEC Application. A dialog window should appear. Fill in the following properties
and then click [ OK ].
Name
Set to nxlog.
Description
Set to NXLog log collector or something similar.
Host
Click on [ New ] to create a new host and name it accordingly (nxloghost, for example).
Client Entities
Check LEA. All other options should be unchecked.
4. Retrieve the OPSEC application certificate. From the NXLog host, run the following command:
/opt/nxlog/bin/opsec_pull_cert -h SPLAT_IP_ADDR -n nxlog -p ACTIVATION_KEY. Make sure to
substitute the correct values in place of SPLAT_IP_ADDR and ACTIVATION_KEY. If the command is successful,
the certificate file opsec.p12 should appear in the current directory. Copy this file to /opt/nxlog/etc.
5. Get the DN of SPLAT. In SmartDashboard, double-click on Network Objects › Check Point › SPLAT. The
properties window will contain a similar DN under Secure Internal Communication such as
CN=cp_mgmt,o=splat..ebo9pf.
6. Retrieve the sic_policy.conf file from SPLAT. Initiate a secure copy from the firewall in expert mode. Then
move the file to the correct location.
7. Edit /opt/nxlog/etc/sic_policy.conf, and add the necessary policy to the [Outbound rules] section.
sic_policy.conf
1 [Outbound rules]
2 # apply_to peer(s) port(s) service(s) auth-method(s)
3 # --------------------------------------------------------
4
5 # OPSEC configurations - place here (and in [Inbound rules] too)
6 ANY ; ANY ; 18184 ; fwn1_opsec, ssl_opsec, ssl_clear_opsec, lea ; any_method
8. Edit /opt/nxlog/etc/lea.conf. The file should contain the following. Make sure to substitute the correct
value in place of SPLAT_IP_ADDR and use the correct DN values for opsec_sic_name and lea_server
opsec_entity_sic_name.
lea.conf
lea_server ip SPLAT_IP_ADDR
lea_server auth_port 18184
lea_server auth_type sslca
opsec_sic_name "CN=nxlog,O=splat..ebo9pf"
opsec_sslca_file /opt/nxlog/etc/opsec.p12
lea_server opsec_entity_sic_name "CN=cp_mgmt,o=splat..ebo9pf"
opsec_sic_policy_file /opt/nxlog/etc/sic_policy.conf
Refer to the Check Point documentation for more information regarding the LEA log service configuration.
To test whether the log collection works, execute the following command: /opt/nxlog/bin/nx-im-checkpoint
--readfromlast FALSE > output.bin. The process should not exit. Press Ctrl+C to interrupt it. The created
file output.bin should contain logs in NXLog’s Binary format.
NOTE: The two files sslauthkeys.C and sslsess.C are used during the key-based authentication.
These files are stored in the same directory where lea.conf resides. To override this, set the
OPSECDIR environment variable.
If the log collection is successful, you can now try running NXLog with the im_checkpoint module.
See the list of installer packages that provide the im_checkpoint module in the Available Modules chapter of the
NXLog User Guide.
121.6.1. Configuration
The im_checkpoint module accepts the following directives in addition to the common module directives.
Command
This optional directive specifies the path of the nx-im-checkpoint binary. If not specified, the default is
/opt/nxlog/bin/nx-im-checkpoint on Linux.
LEAConfigFile
This optional directive specifies the path of the LEA configuration file. If not specified, the default is
/opt/nxlog/etc/lea.conf. This file must be edited in order for the OPSEC LEA connection to work.
LogFile
This can be used to specify the log file to be read. If not specified, it defaults to fw.log. To collect the audit
log, use LogFile fw.adtlog, which is then passed to the nx-im-checkpoint binary as --logfile
fw.adtlog.
ReadFromLast
This optional boolean directive instructs the module to only read logs which arrived after NXLog was started if
the saved position could not be read (for example on first start). When SavePos is TRUE and a previously
saved record number could be read, the module will resume reading from this saved record number. If
ReadFromLast is FALSE, the module will read all logs from the LEA source. This can result in quite a lot of
messages, and is usually not the expected behavior. If this directive is not specified, it defaults to TRUE.
Restart
Restart the nx-im-checkpoint process if it exits. There is a one second delay before it is restarted to avoid a
denial-of-service if the process is not behaving. This boolean directive defaults to FALSE.
SavePos
This boolean directive specifies that the last record number should be saved when NXLog exits. The record
number will be read from the cache file upon startup. The default is TRUE: the record number is saved if this
directive is not specified. Even if SavePos is enabled, it can be explicitly turned off with the global NoCache
directive.
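As a sketch of how these directives combine (the instance name and audit-log choice are illustrative, not taken from the examples below), an input collecting the Check Point audit log from the first record onward might look like this:

nxlog.conf
<Input checkpoint_audit>
    Module        im_checkpoint
    # Collect the audit log instead of the default fw.log
    LogFile       fw.adtlog
    # Read all records on first start instead of only new ones
    ReadFromLast  FALSE
    SavePos       TRUE
    # Restart nx-im-checkpoint if it exits (one second delay)
    Restart       TRUE
</Input>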
121.6.2. Fields
The following fields are used by im_checkpoint.
The LEA protocol provides Check Point device logs in a structured format. For the list of LEA fields, see LEA Fields
Update on CheckPoint.com. Some of the field names are mapped to normalized names which NXLog uses in
other modules (such as $EventTime). The list of these fields is provided below. The other LEA fields are
reformatted such that non-alphanumeric characters are replaced with an underscore (_) in field names. The
$raw_event field contains the list of all fields and their respective values without any modification to the original
LEA field naming.
$Severity (type: string)
The IPS protection severity level setting. Originally called severity. Set to INFO if it was not provided in the logs.
121.6.3. Examples
Example 610. Converting Check Point LEA Input to JSON
This configuration instructs NXLog to collect logs from Check Point devices over the LEA protocol and store
the logs in a file in JSON format.
nxlog.conf
<Extension json>
    Module    xm_json
</Extension>

<Input checkpoint>
    Module         im_checkpoint
    Command        /opt/nxlog/bin/nx-im-checkpoint
    LEAConfigFile  /opt/nxlog/etc/lea.conf
</Input>

<Output file>
    Module  om_file
    File    'tmp/output'
    Exec    $raw_event = to_json();
</Output>

<Route checkpoint_to_file>
    Path    checkpoint => file
</Route>
NOTE: The im_dbi and om_dbi modules support GNU/Linux only because of the libdbi library. The
im_odbc and om_odbc modules provide native database access on Windows.
NOTE: libdbi needs drivers to access the database engines. These are in the libdbd-* packages on
Debian and Ubuntu. CentOS 5.6 has a libdbi-drivers RPM package, but this package does not
contain any driver binaries under /usr/lib64/dbd. The drivers for both MySQL and PostgreSQL
are in libdbi-dbd-mysql. If these are not installed, NXLog will return a libdbi driver initialization
error.
See the list of installer packages that provide the im_dbi module in the Available Modules chapter of the NXLog
User Guide.
121.7.1. Configuration
The im_dbi module accepts the following directives in addition to the common module directives.
Driver
This mandatory directive specifies the name of the libdbi driver which will be used to connect to the
database. A DRIVER name must be provided here for which a loadable driver module exists under the name
libdbdDRIVER.so (usually under /usr/lib/dbd/). The MySQL driver is in the libdbdmysql.so file.
SQL
This directive should specify the SELECT statement to be executed every PollInterval seconds. The module
automatically appends a WHERE id > ? LIMIT 10 clause to the statement. The result set returned by the
SELECT statement must contain an id column which is then stored and used for the next query.
Option
This directive can be used to specify additional driver options such as connection parameters. The manual of
the libdbi driver should contain the options available for use here.
PollInterval
This directive specifies how frequently the module will check for new records, in seconds. If this directive is
not specified, the default is 1 second. Fractional seconds may be specified (PollInterval 0.5 will check
twice every second).
SavePos
If this boolean directive is set to TRUE, the position will be saved when NXLog exits. The position will be read
from the cache file upon startup. The default is TRUE: the position will be saved if this directive is not
specified. Even if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.
121.7.2. Examples
Example 611. Reading From a MySQL Database
This example uses libdbi and the MySQL driver to connect to the logdb database on the local host and
execute the provided statement.
nxlog.conf
<Input dbi>
    Module  im_dbi
    Driver  mysql
    Option  host 127.0.0.1
    Option  username mysql
    Option  password mysql
    Option  dbname logdb
    SQL     SELECT id, facility, severity, hostname, \
            timestamp, application, message \
            FROM log
</Input>

<Output file>
    Module  om_file
    File    "tmp/output"
</Output>

<Route dbi_to_file>
    Path    dbi => file
</Route>
ETW is a mechanism in Windows designed for efficient logging of both kernel and user-mode applications. Debug
and Analytical channels are based on ETW and cannot be collected as regular Windows Eventlog channels via the
im_msvistalog module. Various Windows services such as the Windows Firewall and DNS Server can be
configured to log events through Windows Event Tracing.
The im_etw module reads event tracing data directly for maximum efficiency. Unlike other solutions, im_etw does
not save ETW data into intermediary trace files that need to be parsed again.
See the list of installer packages that provide the im_etw module in the Available Modules chapter of the NXLog
User Guide.
121.8.1. Configuration
The im_etw module accepts the following directives in addition to the common module directives. Either the
KernelFlags or the Provider directive must be specified.
KernelFlags
This directive specifies that kernel trace logs should be collected, and accepts a comma-separated list of flags
to use for filtering the logs. The Provider and KernelFlags directives are mutually exclusive (but one must be
specified). The following values are allowed: ALPC, CSWITCH, DBGPRINT, DISK_FILE_IO, DISK_IO,
DISK_IO_INIT, DISPATCHER, DPC, DRIVER, FILE_IO, FILE_IO_INIT, IMAGE_LOAD, INTERRUPT,
MEMORY_HARD_FAULTS, MEMORY_PAGE_FAULTS, NETWORK_TCPIP, NO_SYSCONFIG, PROCESS, PROCESS_COUNTERS,
PROFILE, REGISTRY, SPLIT_IO, SYSTEMCALL, THREAD, VAMAP, and VIRTUAL_ALLOC.
Provider
This directive specifies the name (not GUID) of the ETW provider from which to collect trace logs. Providers
available for tracing can be listed with logman query providers. The Provider and KernelFlags directives
are mutually exclusive (but one must be specified). The Windows Kernel Trace provider is not supported;
instead, the KernelFlags directive should be used to open a kernel logger session.
Level
This optional directive specifies the log level for collecting trace events. Because kernel log sessions do not
provide log levels, this directive is only available in combination with the Provider directive. Valid values
include Critical, Error, Warning, Information, and Verbose. If this directive is not specified, the verbose
log level is used.
MatchAllKeyword
This optional directive is used for filtering ETW events based on keywords. Defaults to 0x00000000. For more
information, see System ETW Provider Event Keyword-Level Settings in Microsoft documentation.
MatchAnyKeyword
This optional directive is used for filtering ETW events based on keywords. Defaults to 0x00000000. For more
information, see System ETW Provider Event Keyword-Level Settings in Microsoft documentation.
121.8.2. Fields
The following fields are used by im_etw.
Depending on the ETW provider from which NXLog collects trace logs, the set of fields generated by the im_etw
module may vary slightly. In addition to the fields listed below, the module can generate special provider-specific
fields. If the module is configured to collect trace logs from a custom provider (for example, from a custom user-
mode application), the module will also generate fields derived from the custom provider trace logs.
$EventType (type: string)
One of CRITICAL, ERROR, WARNING, DEBUG, AUDIT_FAILURE, AUDIT_SUCCESS, or INFO.
The ETW trace levels are mapped as follows:

ETW Level      Mapped value
1/Critical     5/CRITICAL
2/Error        4/ERROR
3/Warning      3/WARNING
4/Information  2/INFO
5/Verbose      1/DEBUG
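Because the mapped $EventType field is set on every event, it can be used for filtering. The following sketch (provider and threshold chosen purely for illustration) drops verbose and informational events:

nxlog.conf
<Input etw>
    Module    im_etw
    Provider  Microsoft-Windows-DNSServer
    # Keep only WARNING, ERROR, and CRITICAL events
    Exec      if ($EventType == 'INFO' or $EventType == 'DEBUG') drop();
</Input>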
121.8.3. Examples
Example 612. Collecting Events From the Windows Kernel Trace
With this configuration, NXLog will collect trace events from the Windows kernel. Only events matching the
PROCESS and THREAD flags will be collected.
nxlog.conf
<Input etw>
    Module       im_etw
    KernelFlags  PROCESS, THREAD
</Input>
With this configuration, NXLog will collect events from the Microsoft-Windows-Firewall trace provider.
nxlog.conf
<Input etw>
    Module    im_etw
    Provider  Microsoft-Windows-Firewall
</Input>
With this configuration, NXLog will collect events from the specified provider with the given log level and keyword filters.
nxlog.conf
<Input etw>
    Module           im_etw
    Provider         Microsoft-Windows-DNSServer
    Level            verbose
    MatchAnyKeyword  0xFFFFFFFFFFFFFFFF
    MatchAllKeyword  0x0
</Input>
WARNING: If you are using a Perl script, consider using im_perl instead, or turn on autoflush with
$| = 1; otherwise im_exec might not receive data immediately due to Perl’s internal buffering.
See the Perl language reference for more information about $|.
See the list of installer packages that provide the im_exec module in the Available Modules chapter of the NXLog
User Guide.
121.9.1. Configuration
The im_exec module accepts the following directives in addition to the common module directives. The Command
directive is required.
Command
This mandatory directive specifies the name of the program or script to be executed.
Arg
This is an optional parameter. Arg can be specified multiple times, once for each argument that needs to be
passed to the Command. Note that specifying multiple arguments with one Arg directive, with arguments
separated by spaces, will not work (the Command would receive it as one argument).
InputType
See the InputType description in the global module configuration section.
Restart
Restart the process if it exits. There is a one second delay before it is restarted to avoid a denial-of-service
when a process is not behaving. Looping should be implemented in the script itself; this directive only
provides some safety against malfunctioning scripts and programs. This boolean directive defaults to FALSE:
the Command will not be restarted if it exits.
121.9.2. Examples
Example 615. Emulating im_file
NOTE: The im_file module should be used to read log messages from files. This example only
demonstrates the use of the im_exec module.
nxlog.conf
<Input messages>
    Module   im_exec
    Command  /usr/bin/tail
    Arg      -f
    Arg      /var/log/messages
</Input>

<Output file>
    Module  om_file
    File    "tmp/output"
</Output>

<Route messages_to_file>
    Path    messages => file
</Route>
im_file uses a one second interval to monitor files for new messages. This method was implemented because
polling a regular file is not supported on all platforms. If there is no more data to read, the module will sleep for
1 second.
By using wildcards, the module can read multiple files simultaneously and will open new files as they appear. It
will also enter newly created directories if recursion is enabled.
NOTE: The module needs to scan the directory content for wildcarded file monitoring. This can present
a significant load if there are many files (hundreds or thousands) in the monitored directory. For
this reason it is highly recommended to rotate files out of the monitored directory, either using
the built-in log rotation capabilities of NXLog or with external tools.
See the list of installer packages that provide the im_file module in the Available Modules chapter of the NXLog
User Guide.
121.10.1. Configuration
The im_file module accepts the following directives in addition to the common module directives. The File
directive is required.
File
This mandatory directive specifies the name of the input file to open. It may be given more than once in a
single im_file module instance. The value must be a string type expression. For relative filenames you should
be aware that NXLog changes its working directory to "/" unless the global SpoolDir is set to something else.
On Windows systems the directory separator is the backslash (\). For compatibility reasons the forward slash
(/) character can be also used as the directory separator, but this only works for filenames not containing
wildcards. If the filename is specified using wildcards, the backslash (\) should be used for the directory
separator. Filenames on Windows systems are treated case-insensitively, but case-sensitively on Unix/Linux.
Wildcards are supported in filenames and directories. Wildcards are not regular expressions, but are patterns
commonly used by Unix shells to expand filenames (also known as "globbing").
?
Matches a single character only.
*
Matches zero or more characters.
\*
Matches the asterisk (*) character.
\?
Matches the question mark (?) character.
[…]
Used to specify a single character. The class description is a list containing single characters and ranges of
characters separated by the hyphen (-). If the first character of the class description is ^ or !, the sense of
the description is reversed (any character not in the list is accepted). Any character can have a backslash (
\) preceding it, which is ignored, allowing the characters ] and - to be used in the character class, as well
as ^ and ! at the beginning.
NOTE: By default, the backslash character (\) is used as an escape sequence. This character is also
the directory separator on Windows. Because of this, escaping of wildcard characters is not
supported on Windows; see the EscapeGlobPatterns directive. However, string literals are
evaluated differently depending on the quotation type. Single quoted strings are interpreted
as-is without escaping, e.g. 'C:\t???\*.log' stays C:\t???\*.log. Escape sequences in
double quoted strings are processed, for example "C:\\t???\*.log" becomes
C:\t???\*.log after evaluation. In both cases, the evaluated string is the same and gets
separated into parts with different glob patterns at different levels. In the previous example
the parts are C:, t???, and *.log. NXLog matches these at the proper directory levels to find
all matching files.
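To illustrate the wildcard syntax described above (the paths and patterns here are hypothetical), File directives can combine the single-character, suffix, and character-class patterns:

nxlog.conf
<Input logs>
    Module  im_file
    # ? matches exactly one character: app1.log, app2.log, ...
    File    '/var/log/app?.log'
    # [0-9] matches a single digit; * matches any suffix
    File    '/var/log/batch[0-9]*.log'
</Input>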
ActiveFiles
This directive specifies the maximum number of files NXLog will actively monitor. If there are modifications to
more files in parallel than the value of this directive, then modifications to files above this limit will only get
noticed after the DirCheckInterval (all data should be collected eventually). Typically there are only a few log
sources actively appending data to log files, and the rest of the files are dormant after being rotated, so the
default value of 10 files should be sufficient in most cases. This directive is also only relevant in case of a
wildcarded File path.
CloseWhenIdle
If set to TRUE, this boolean directive specifies that open input files should be closed as soon as possible after
there is no more data to read. Some applications request an exclusive lock on the log file when written or
rotated, and this directive can possibly help if the application tries again to acquire the lock. The default is
FALSE.
DirCheckInterval
This directive specifies how frequently, in seconds, the module will check the monitored directory for
modifications to files and new files in case of a wildcarded File path. The default is twice the value of the
PollInterval directive (if PollInterval is not set, the default is 2 seconds). Fractional seconds may be specified. It
is recommended to increase the default if there are many files which cannot be rotated out and the NXLog
process is causing high CPU load.
Exclude
This directive can specify a file or a set of files (using wildcards) to be excluded. More than one occurrence of
the Exclude directive can be specified.
InputType
See the InputType directive in the list of common module directives. If this directive is not specified the
default is LineBased (the module will use CRLF as the record terminator on Windows, or LF on Unix).
This directive also supports stream processors, see the description in the InputType section.
NoEscape
This boolean directive specifies whether the backslash (\) in file paths should be disabled as an escape
sequence. This is especially useful for file paths on Windows. By default, NoEscape is FALSE (backslash
escaping is enabled and the path separator on Windows must be escaped).
OnEOF
This optional block directive can be used to specify a group of statements to execute when a file has been
fully read (on end-of-file). Only one OnEOF block can be specified per im_file module instance. The following
directives are used inside this block.
Exec
This mandatory directive specifies the actions to execute after EOF has been detected and the grace
period has passed. Like the normal Exec directive, the OnEOF Exec can be specified as a normal directive
or a block directive.
GraceTimeout
This optional directive specifies the time in seconds to wait before executing the actions configured in the
Exec block or directive. The default is 1 second.
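A minimal sketch of the OnEOF block (the path, log message, and grace period are illustrative) might look like this:

nxlog.conf
<Input batch>
    Module  im_file
    File    '/var/spool/logs/*.log'
    <OnEOF>
        # Runs after EOF has been detected and the grace period has passed
        Exec          log_info('finished reading ' + file_name());
        GraceTimeout  5
    </OnEOF>
</Input>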
PollInterval
This directive specifies how frequently the module will check for new files and new log entries, in seconds. If
this directive is not specified, it defaults to 1 second. Fractional seconds may be specified (PollInterval 0.5
will check twice every second).
ReadFromLast
This optional boolean directive instructs the module to only read logs which arrived after NXLog was started if
the saved position could not be read (for example on first start). When SavePos is TRUE and a previously
saved position value could be read, the module will resume reading from this saved position. If
ReadFromLast is FALSE, the module will read all logs from the file. This can result in quite a lot of messages,
and is usually not the expected behavior. If this directive is not specified, it defaults to TRUE.
Recursive
If set to TRUE, this boolean directive specifies that input files set with the File directive should be searched
recursively under sub-directories. For example, /var/log/error.log will match
/var/log/apache2/error.log. Wildcards can be used in combination with Recursive: /var/log/*.log will
match /var/log/apache2/access.log. This directive only causes scanning under the given path and does
not affect the processing of wildcarded directories: /var/*/qemu/debian.log will not match
/var/log/libvirt/qemu/debian.log. The default is FALSE.
RenameCheck
If set to TRUE, this boolean directive specifies that input files should be monitored for possible file rotation via
renaming in order to avoid re-reading the file contents. A file is considered to be rotated when NXLog detects
a new file whose inode and size matches that of another watched file which has just been deleted. Note that
this does not always work correctly and can yield false positives when a log file is deleted and another is
added with the same size. The file system is likely to reuse the inode number of the deleted file and thus the
module will falsely detect this as a rename/rotation. For this reason the default value of RenameCheck is
FALSE: renamed files are considered to be new and the file contents will be re-read.
NOTE: Instead of trying to solve the renaming issue with this directive, it is recommended to use a
naming scheme for rotated files so that names of rotated files do not match the wildcard and are
not monitored anymore after rotation.
SavePos
If this boolean directive is set to TRUE, the file position will be saved when NXLog exits. The file position will
be read from the cache file upon startup. The default is TRUE: the file position will be saved if this directive is
not specified. Even if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.
121.10.2. Functions
The following functions are exported by im_file.
string file_name()
Return the name of the currently open file which the log was read from.
integer record_number()
Returns the number of processed records (including the current record) of the currently open file since it was
opened or truncated.
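These functions can be called from an Exec directive to record the origin of each event. A short sketch (the field names $SourceFile and $RecordNumber are arbitrary choices, not defined by the module):

nxlog.conf
<Input messages>
    Module  im_file
    File    '/var/log/*.log'
    # Record which file and record number each event came from
    Exec    $SourceFile = file_name(); $RecordNumber = record_number();
</Input>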
121.10.3. Examples
Example 616. Forwarding Logs From a File to a Remote Host
This configuration will read from a file and forward messages via TCP. No additional processing is done.
nxlog.conf
<Input messages>
    Module  im_file
    File    "/var/log/messages"
</Input>

<Output tcp>
    Module  om_tcp
    Host    192.168.1.1
    Port    514
</Output>

<Route messages_to_tcp>
    Path    messages => tcp
</Route>
Files are checked periodically, not in real-time. If there are multiple changes between two scans, only the
cumulative effect is logged. For example, if one user modifies a file and another user reverts the changes before
the next scan occurs, only the change in modification time is detected.
For real-time monitoring, auditing must be enabled on the host operating system. See the File Integrity
Monitoring chapter in the User Guide for more information.
See the list of installer packages that provide the im_fim module in the Available Modules chapter of the NXLog
User Guide.
121.11.1. Configuration
The im_fim module accepts the following directives in addition to the common module directives. The File
directive is required.
File
This mandatory directive specifies the name of the input file to scan. It must be a string type expression. See
the im_file File directive for more details on how files can be specified. Wildcards are supported. More than
one occurrence of the File directive can be used.
Digest
This specifies the digest method (hash function) to be used to calculate the checksum. The default is sha1.
The following message digest methods can be used: md2, md5, mdc2, rmd160, sha, sha1, sha224, sha256,
sha384, and sha512.
Exclude
This directive can specify a file or a set of files (using wildcards) to be excluded from the scan. More than one
occurrence of the Exclude directive can be specified.
NoEscape
This boolean directive specifies whether the backslash (\) in file paths should be disabled as an escape
sequence. By default, NoEscape is FALSE (the path separator on Windows needs to be escaped).
Recursive
If set to TRUE, this boolean directive specifies that files set with the File directive should be searched
recursively under sub-directories. For example, /var/log/error.log will match
/var/log/apache2/error.log. Wildcards can be used in combination with Recursive: /var/log/*.log will
match /var/log/apache2/access.log. This directive only causes scanning under the given path and does
not affect the processing of wildcarded directories: /var/*/qemu/debian.log will not match
/var/log/libvirt/qemu/debian.log. The default is FALSE.
ScanInterval
This directive specifies how long the module will wait between scans for modifications, in seconds. The
default is 86400 seconds (1 day). The value of ScanInterval can be set to 0 to disable periodic scanning and
instead invoke scans via the start_scan() procedure.
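As a sketch of the Digest directive (the monitored path is illustrative), the default sha1 can be replaced with a stronger hash function:

nxlog.conf
<Input fim>
    Module  im_fim
    File    '/etc/*'
    # Use sha256 checksums instead of the default sha1
    Digest  sha256
</Input>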
121.11.2. Functions
The following functions are exported by im_fim.
boolean is_scanning()
Returns TRUE if scanning is in progress.
121.11.3. Procedures
The following procedures are exported by im_fim.
start_scan();
Start the file integrity scan. This could be invoked from the Schedule block, for example.
121.11.4. Fields
The following fields are used by im_fim.
$FileName (type: string)
The name of the file that the changes were detected on.
121.11.5. Examples
Example 617. Periodic File Integrity Monitoring
With this configuration, NXLog will monitor the specified directories recursively. Scans will occur hourly.
nxlog.conf
<Input fim>
    Module        im_fim
    File          "/etc/*"
    Exclude       "/etc/mtab"
    File          "/bin/*"
    File          "/sbin/*"
    File          "/usr/bin/*"
    File          "/usr/sbin/*"
    Recursive     TRUE
    ScanInterval  3600
</Input>
Example 618. Scheduled Scan
The im_fim module provides a start_scan() procedure that can be called to invoke the scan. The following
configuration sets ScanInterval to zero to disable periodic scanning and uses a Schedule block instead to
trigger the scan every day at midnight.
nxlog.conf
<Input fim>
    Module        im_fim
    File          "/bin/*"
    File          "/sbin/*"
    File          "/usr/bin/*"
    File          "/usr/sbin/*"
    Recursive     TRUE
    ScanInterval  0
    <Schedule>
        When  @daily
        Exec  start_scan();
    </Schedule>
</Input>
121.12. Go (im_go)
This module provides support for collecting log data with methods written in the Go language. The file specified
by the ImportLib directive should contain one or more methods which can be called from the Exec directive of
any module. See also the xm_go and om_go modules.
NOTE: For the system requirements, installation details and environmental configuration requirements
of Go, see the Getting Started section in the Go documentation. The Go environment is only
needed for compiling the Go file. NXLog does not need the Go environment for its operation.
The Go script imports the NXLog module, and will have access to the following classes and functions.
class nxModule
This class is instantiated by NXLog and can be accessed via the nxLogdata.module attribute. This can be used
to set or access variables associated with the module (see the example below).
nxmodule.NxLogdataNew(*nxLogdata)
This function creates a new log data record.
nxmodule.Post(ld *nxLogdata)
This function submits the log data struct for further processing.
nxmodule.AddEvent()
This function adds a READ event to NXLog. This allows the READ event to be called later.
nxmodule.AddEventDelayed(mSec C.int)
This function adds a delayed READ event to NXLog. This allows the delayed READ event to be called later.
class nxLogdata
This class represents an event. It is instantiated by NXLog and passed to the function specified by the
ImportFunc directive.
nxlogdata.GetString(field string) (string, bool)
This function returns the value/exists pair for the string representation of the logdata field.
nxlogdata.Delete(field string)
This function removes the field from logdata.
nxlogdata.Fields() []string
This function returns an array of field names in the logdata record.
module
This attribute is set to the module object associated with the event.
See the list of installer packages that provide the im_go module in the Available Modules chapter of the NXLog
User Guide.
121.12.3. Configuration
The im_go module accepts the following directives in addition to the common module directives.
ImportLib
This mandatory directive specifies the file containing the Go code compiled into a shared library .so file.
ImportFunc
This mandatory directive calls the specified function, which must accept an unsafe.Pointer object as its only
argument. This function is called when the module tries to read data. It is a mandatory function.
In this Go file template, the read function is called via the ImportFunc directive.
im_go Template
//export read
func read(ctx unsafe.Pointer) {
// get reference to caller module
if module, ok := gonxlog.GetModule(ctx); ok {
// generate new logdata for NXLog
ld := module.NxLogdataNew()
// set 'raw_event' value
ld.Set("raw_event", "some string data")
// send logdata to NXLog input module
module.Post(ld)
}
}
121.12.5. Examples
Example 619. Using im_go to Generate Event Data
This configuration reads the /var/log/syslog log file from a remote server via SSH. The
code defined in the shared object library gets the reference from the context pointer and reads data
from the channel. It then generates new log data by setting the raw_event value and sends it to
the input module via the read function. Finally, the data is saved to a file.
nxlog.conf
<Input in1>
    Module      im_go
    ImportLib   "input/input.so"
    ImportFunc  read
</Input>

<Output out>
    Module  om_file
    File    "output/file"
    Exec    log_info($raw_event);
</Output>
See the list of installer packages that provide the im_http module in the Available Modules chapter of the NXLog
User Guide.
121.13.1. Configuration
The im_http module accepts the following directives in addition to the common module directives.
ListenAddr
The module will accept connections on this IP address or a DNS hostname. The default is localhost. Append
the port number to the host using a colon as a separator (host:port).
Port
The module instance will listen for incoming connections on this port. The default is port 80.
IMPORTANT: The Port directive will be deprecated in this context from NXLog EE 6.0. Provide the port
in ListenAddr instead.
HTTPSAllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE the remote will be able to connect with an unknown or self-signed certificate. The default value
is FALSE: all HTTPS connections must present a trusted certificate.
HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS client. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.
HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS client. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.
HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.
HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.
HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.
HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are
automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.
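On Windows, the two thumbprint directives replace the PEM-file directives entirely. In this sketch the thumbprint values are placeholders only, not real certificates; copy actual values from certmgr.msc:

nxlog.conf
<Input https>
    Module               im_http
    ListenAddr           0.0.0.0:8443
    # Placeholder thumbprints; substitute real values from the certificate store
    HTTPSCAThumbprint    0123456789abcdef0123456789abcdef01234567
    HTTPSCertThumbprint  76543210fedcba9876543210fedcba9876543210
</Input>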
HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS client. The certificate filenames in this directory must be in
the OpenSSL hashed format.
HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS client.
HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.
HTTPSRequireCert
This boolean directive specifies that the remote HTTPS client must present a certificate. If set to TRUE and
there is no certificate presented during the connection handshake, the connection will be refused. The
default value is TRUE: each connection must use a certificate.
HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.
HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).
NOTE: Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not
support the zlib compression mechanism. The module will emit a warning on startup if compression support is
missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.
HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
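For instance, restricting the module to TLS 1.3 with a specific cipher suite might look like the following sketch (the cipher suite shown is only an illustration; consult the ciphers(1ssl) man page for valid values):

```
HTTPSSSLProtocol     TLSv1.3
HTTPSSSLCiphersuites TLS_AES_256_GCM_SHA384
```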
121.13.2. Fields
The following fields are used by im_http.
121.13.3. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.
Example 620. Receiving Logs over HTTPS
This configuration listens for HTTPS connections from localhost. Received log messages are written to file.
nxlog.conf
1 <Input http>
2 Module im_http
3 ListenAddr 127.0.0.1:8888
4 HTTPSCertFile %CERTDIR%/server-cert.pem
5 HTTPSCertKeyFile %CERTDIR%/server-key.pem
6 HTTPSCAFile %CERTDIR%/ca.pem
7 HTTPSRequireCert TRUE
8 HTTPSAllowUntrusted FALSE
9 </Input>
10
11 # old syntax
12 #<Input http>
13 # Module im_http
14 # ListenAddr 127.0.0.1
15 # Port 8888
16 # HTTPSCertFile %CERTDIR%/server-cert.pem
17 # HTTPSCertKeyFile %CERTDIR%/server-key.pem
18 # HTTPSCAFile %CERTDIR%/ca.pem
19 #</Input>
This configuration uses the HTTPSCAThumbprint and HTTPSCertThumbprint directives for the verification
of the Certificate Authority and the SSL handshake respectively.
nxlog.conf
1 <Input in_https>
2 Module im_http
3 ListenAddr 127.0.0.1:443
4 HTTPSCAThumbprint c2c902f736d39d37fd65c458afe0180ea799e443
5 HTTPSCertThumbprint 7c2cc5a5fb59d4f46082a510e74df17da95e2152
6 HTTPSSSLProtocol TLSv1.2
7 </Input>
8
9 # old syntax
10 #<Input in_https>
11 # Module im_http
12 # ListenAddr 127.0.0.1
13 # Port 443
14 # HTTPSCAThumbprint c2c902f736d39d37fd65c458afe0180ea799e443
15 # HTTPSCertThumbprint 7c2cc5a5fb59d4f46082a510e74df17da95e2152
16 # HTTPSSSLProtocol TLSv1.2
17 #</Input>
NOTE: Only messages with log level INFO and above are supported. Debug messages are ignored for technical
reasons. For debugging purposes, use the direct logging facility instead: see the global LogFile and LogLevel
directives.
WARNING: Use the im_internal module with care because it is easy to cause message loops. For example,
consider the situation where internal log messages are sent to a database. If the database experiences errors
which result in internal error messages, these are again routed to the database, triggering further error
messages and resulting in a loop. To avoid resource exhaustion, the im_internal module will drop its messages
when the queue of the next module in the route is full. It is recommended to always put the im_internal
module instance in a separate route.
NOTE: If internal messages are required in Syslog format, they must be explicitly converted with
pm_transformer or the to_syslog_bsd() procedure of the xm_syslog module, because the $raw_event field is
not generated in Syslog format.
See the list of installer packages that provide the im_internal module in the Available Modules chapter of the
NXLog User Guide.
121.14.1. Configuration
The im_internal module accepts the following directive in addition to the common module directives.
LogqueueSize
This optional directive specifies the maximum number of internal log messages that can be queued by this
module. When the queue becomes full (which can happen for example, when FlowControl is in effect), a
warning will be logged, and older queued messages will be dropped in favor of new ones. The default value
for this directive is inherited from the value of the global level LogqueueSize directive.
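As a sketch, an im_internal instance with an increased queue limit might be configured like this (the value shown is arbitrary):

```
<Input internal>
    Module       im_internal
    LogqueueSize 1000
</Input>
```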
121.14.2. Fields
The following fields are used by im_internal.
The type of the module (such as im_file) which generated the internal log event. Not to be confused with
$SourceModuleType, which will be im_internal.
121.14.3. Examples
Example 622. Forwarding Internal Messages over Syslog UDP
This configuration collects NXLog internal messages, adds BSD Syslog headers, and forwards via UDP.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input internal>
6 Module im_internal
7 </Input>
8
9 <Output udp>
10 Module om_udp
11 Host 192.168.1.1
12 Port 514
13 Exec to_syslog_bsd();
14 </Output>
15
16 <Route internal_to_udp>
17 Path internal => udp
18 </Route>
NOTE: For the system requirements, installation details, and environmental configuration requirements of
Java, see the Installing Java section in the Java documentation.
The NXLog Java class provides access to the NXLog functionality in the Java code. This class contains nested
classes Logdata and Module with log processing methods, as well as methods for sending messages to the
internal logger.
To access the log processing methods, the public static method should accept an NXLog.Logdata or
NXLog.Module object as a parameter.
class NXLog.Logdata
This Java class provides the methods to interact with an NXLog event record object:
getField(name)
This method returns the value of the field name in the event.
setField(name, value)
This method sets the value of field name to value.
deleteField(name)
This method removes the field name from the event record.
getFieldnames()
This method returns an array with the names of all the fields currently in the event record.
getFieldtype(name)
This method returns the type of the field name.
post(module)
This method will submit the LogData event to NXLog for processing by the next module in the route.
class NXLog.Module
The methods below allow setting and accessing variables associated with the module instance.
logdataNew()
This method returns a new NXLog.Logdata object.
setReadTimer(delay)
This method sets a trigger for another read after a specified delay in milliseconds.
saveCtx(key,value)
This method saves user data in the module data storage using values from the key and value fields.
loadCtx(key)
This method retrieves data from the module data storage using the value from the key field.
Below is the list of methods for sending messages to the internal logger.
NXLog.logInfo(msg)
This method sends the message msg to the internal logger at INFO log level. It does the same as the core
log_info() procedure.
NXLog.logDebug(msg)
This method sends the message msg to the internal logger at DEBUG log level. It does the same as the core
log_debug() procedure.
NXLog.logWarning(msg)
This method sends the message msg to the internal logger at WARNING log level. It does the same as the
core log_warning() procedure.
NXLog.logError(msg)
This method sends the message msg to the internal logger at ERROR log level. It does the same as the core
log_error() procedure.
121.15.1. Configuration
The NXLog process maintains only one JVM instance for all im_java, om_java, or xm_java running instances. This
means all Java classes loaded by the ClassPath directive will be available for all running instances.
The im_java module accepts the following directives in addition to the common module directives.
ClassPath
This mandatory directive defines the path to the .class files or a .jar file. This directive should be defined at
least once within a module block.
VMOption
This optional directive defines a single Java Virtual Machine (JVM) option.
VMOptions
This optional block directive serves the same purpose as the VMOption directive, but allows specifying
multiple Java Virtual Machine (JVM) options, one per line.
JavaHome
This optional directive defines the path to the Java Runtime Environment (JRE). The path is used to search for
the libjvm shared library. If this directive is not defined, the Java home directory will be set to the build-time
value. Only one JRE can be defined for one or multiple NXLog Java instances. Defining multiple JRE instances
causes an error.
Run
This mandatory directive specifies the static method inside the ClassPath file which should be called.
This example parses the input, keeps only the entries which belong to the PATH type, and generates log
records line-by-line. Using NXLog facilities, these entries are divided into key-value pairs and converted to
JSON format.
The doInput method of the Input Java class is used to run the processing.
nxlog.conf
1 <Input javain>
2 Module im_java
3 # Path to the compiled class
4 Classpath /tmp/Input.jar
5 # Static method which will be called by the im_java module
6 Run Input.doInput
7 # Path to Java Runtime
8 JavaHome /usr/lib/jvm/java-11-openjdk-amd64
9 </Input>
10
11 <Output javaout>
12 Module om_file
13 File "/tmp/output.txt"
14 <Exec>
15 kvp->parse_kvp();
16 delete($EventReceivedTime);
17 delete($SourceModuleName);
18 delete($SourceModuleType);
19 to_json();
20 </Exec>
21 </Output>
Input.java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class Input {
    // Sketch reconstructing the truncated example; the audit log path below is an assumption.
    public static void doInput(NXLog.Module module) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader("/var/log/audit/audit.log"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (!line.startsWith("type=PATH")) continue; // keep only PATH entries
                NXLog.Logdata logdata = module.logdataNew();
                logdata.setField("raw_event", line);
                logdata.post(module);
            }
        }
    }
}
Input Sample
type=CWD msg=audit(1489999368.711:35724): cwd="/root/nxlog"↵
↵
type=PATH msg=audit(1489999368.711:35724): item=0 name="/root/test" inode=528869 dev=08:01
mode=040755 ouid=0 ogid=0 rdev=00:00↵
↵
type=SYSCALL msg=audit(1489999368.711:35725): arch=c000003e syscall=2 success=yes exit=3
a0=12dcc40 a1=90800 a2=0 a3=0 items=1 ppid=15391 pid=12309 auid=0 uid=0 gid=0 euid=0 suid=0
fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts4 ses=583 comm="ls" exe="/bin/ls" key=(null)↵
Output Sample
{
  "type": "PATH",
  "msg": "audit(1489999368.711:35724):",
  "item": 0,
  "name": "/root/test",
  "inode": 528869,
  "dev": "08:01",
  "mode": 040755,
  "ouid": 0,
  "ogid": 0,
  "rdev": "00:00"
}
See the list of installer packages that provide the im_kafka module in the Available Modules chapter of the NXLog
User Guide.
121.16.1. Configuration
The im_kafka module accepts the following directives in addition to the common module directives. The
BrokerList and Topic directives are required.
BrokerList
This mandatory directive specifies the list of Kafka brokers to connect to for collecting logs. The list should
include ports and be comma-delimited (for example, localhost:9092,192.168.88.35:19092).
Topic
This mandatory directive specifies the Kafka topic to collect records from.
CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote brokers. CAFile is required if Protocol is set to ssl. To trust a self-signed certificate presented by
the remote (which is not signed by a CA), provide that certificate instead.
CertFile
This specifies the path of the certificate file to be used for the SSL handshake.
CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.
KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.
Option
This directive can be used to pass a custom configuration property to the Kafka library (librdkafka). For
example, the group ID string can be set with Option group.id mygroup. This directive may be used more
than once to specify multiple options. For a list of configuration properties, see the librdkafka
CONFIGURATION.md file.
Passing librdkafka configuration properties via the Option directive should be done with
WARNING care since these properties are used for the fine-tuning of the librdkafka performance
and may result in various side effects.
Partition
This optional integer directive specifies the topic partition to read from. If this directive is not given, messages
are collected from partition 0.
Protocol
This optional directive specifies the protocol to use for connecting to the Kafka brokers. Accepted values
include plaintext (the default) and ssl. If Protocol is set to ssl, then the CAFile directive must also be
provided.
121.16.2. Examples
Example 624. Using the im_kafka Module
This configuration collects events from a Kafka cluster using the brokers specified. Events are read from the
first partition of the nxlog topic.
nxlog.conf
1 <Input in>
2 Module im_kafka
3 BrokerList localhost:9092,192.168.88.35:19092
4 Topic nxlog
5 Partition 0
6 Protocol ssl
7 CAFile /root/ssl/ca-cert
8 CertFile /root/ssl/client_debian-8.pem
9 CertKeyFile /root/ssl/client_debian-8.key
10 KeyPass thisisasecret
11 </Input>
WARNING: In order for NXLog to read logs from the kernel buffer, it may be necessary to disable the system
logger (systemd, klogd, or logd) or configure it to not read events from the kernel.
Special privileges are required for reading kernel logs. For this, NXLog needs to be started as root. With the User
and Group global directives, NXLog can then drop its root privileges while keeping the CAP_SYS_ADMIN capability
for reading the kernel log buffer.
NOTE: Unfortunately, it is not possible for an unprivileged process to read from the /proc/kmsg pseudo file,
even if the CAP_SYS_ADMIN capability is kept. For this reason the /proc/kmsg interface is not supported by
the im_kernel module. The im_file module should work fine with the /proc/kmsg pseudo file if one wishes to
collect kernel logs this way, though this will require NXLog to be running as root.
Log Sample
<6>Some message from the kernel.↵
Kernel messages are valid BSD Syslog messages, with a priority from 0 (emerg) to 7 (debug), but do not contain
timestamp and hostname fields. These can be parsed with the xm_syslog parse_syslog_bsd() procedure, and the
timestamp and hostname fields will be added by NXLog.
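A minimal sketch of parsing kernel messages this way, assuming the xm_syslog extension module is available:

```
<Extension syslog>
    Module xm_syslog
</Extension>

<Input kernel>
    Module im_kernel
    Exec   parse_syslog_bsd();
</Input>
```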
See the list of installer packages that provide the im_kernel module in the Available Modules chapter of the NXLog
User Guide.
121.17.1. Configuration
The im_kernel module accepts the following directives in addition to the common module directives.
DeviceFile
This directive sets the device file from which to read events, for non-Linux platforms. If this directive is not
specified, the default is /dev/klog.
PollInterval
This directive specifies how frequently the module will check for new events, in seconds, on Linux. If this
directive is not specified, the default is 1 second. Fractional seconds may be specified (PollInterval 0.5 will
check twice every second).
121.17.2. Examples
Example 625. Reading Messages From the Kernel
This configuration collects log messages from the kernel and writes them to file. This should work on Linux,
the BSDs, and macOS (but the system logger may need to be disabled or reconfigured).
nxlog.conf
1 # Drop privileges after being started as root
2 User nxlog
3 Group nxlog
4
5 <Input kernel>
6 Module im_kernel
7 </Input>
8
9 <Output file>
10 Module om_file
11 File "tmp/output"
12 </Output>
Rules must be provided using at least one of the LoadRule and Rules directives. Rules should be specified using
the format documented in the Defining Persistent Audit Rules section of the Red Hat Enterprise Linux Security
Guide.
The -e control rule should be included in the ruleset to enable the Audit system (as -e 1 or -e 2). Rules are not
automatically removed, either before applying a ruleset or when NXLog exits. To clear the current ruleset before
setting rules, begin the ruleset with the -D rule. If the Audit configuration is locked when im_linuxaudit starts,
NXLog will print a warning and collect events generated by the active ruleset.
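For example, a ruleset that clears existing rules, sets a backlog limit, enables the Audit system, and watches a file might be sketched as follows (the rule values are illustrative):

```
<Input audit>
    Module im_linuxaudit
    <Rules>
        -D
        -b 8192
        -e 1
        -w /etc/ssh/sshd_config -p wa -k sshd_config
    </Rules>
</Input>
```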
See the list of installer packages that provide the im_linuxaudit module in the Available Modules chapter of the
NXLog User Guide.
121.18.1. Configuration
The im_linuxaudit module accepts the following directives in addition to the common module directives. At least
one of LoadRule and Rules must be specified.
LoadRule
Use this directive to load a ruleset from an external rules file. This directive can be used more than once.
Wildcards can be used to read rules from multiple files.
Rules
This directive, specified as a block, can be used to provide Audit rules directly from the NXLog configuration
file. The following control rules are supported: -b, -D, -e, -f, -r, --loginuid-immutable,
--backlog_wait_time, and --reset-lost; see auditctl(8) for more information.
Include
This directive can be used inside a Rules block to read rules from a separate file. Like the LoadRule
directive, wildcards are supported.
LockConfig
If this boolean directive is set to TRUE, NXLog will lock the Audit system configuration after the rules have
been set. It will not be possible to modify the Audit configuration until after a reboot. The default is FALSE: the
Audit configuration will not be locked.
121.18.2. Fields
The following fields are used by im_linuxaudit.
$acct (type: string)
A user’s account name.
The minor and major ID of the device that contains the file or directory recorded in an event.
The group ID of the inode’s owner.
$obj_lev_high (type: string)
The high SELinux level of an object.
$pid (type: integer)
The pid field semantics depend on the origin of the value in this field. In fields generated from user-space,
this field holds a process ID. In fields generated by the kernel, this field holds a thread ID. The thread ID is
equal to process ID for single-threaded processes. Note that the value of this thread ID is different from the
values of pthread_t IDs used in user-space. For more information, see the gettid(2) man page.
$success (type: string)
Whether a system call was successful or failed.
121.18.3. Examples
Example 626. Collecting Audit Logs With LoadRule Directive
This configuration uses a set of external rule files to configure the Audit system.
nxlog.conf
1 <Input audit>
2 Module im_linuxaudit
3 FlowControl FALSE
4 LoadRule 'im_linuxaudit_*.rules'
5 </Input>
This configuration lists the rules inside the NXLog configuration file instead of using a separate Audit rules
file.
nxlog.conf
1 <Input audit>
2 Module im_linuxaudit
3 FlowControl FALSE
4 <Rules>
5 # Watch /etc/passwd for modifications and tag with 'passwd'
6 -w /etc/passwd -p wa -k passwd
7 </Rules>
8 </Input>
By default, if no module-specific directives are set, a log message will be generated every 30 minutes containing
-- MARK --.
NOTE: The $raw_event field is not generated in Syslog format. If mark messages are required in Syslog
format, they must be explicitly converted with the to_syslog_bsd() procedure.
NOTE: The functionality of the im_mark module can also be achieved using a Schedule block with a
log_info("--MARK--") Exec statement, which would insert the messages into a route via the im_internal
module. Using a single module for this task can simplify configuration.
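The Schedule-based alternative mentioned above might be sketched as follows (the interval and message text are illustrative):

```
<Input internal>
    Module im_internal
    <Schedule>
        Every 30 min
        Exec  log_info("--MARK--");
    </Schedule>
</Input>
```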
See the list of installer packages that provide the im_mark module in the Available Modules chapter of the NXLog
User Guide.
121.19.1. Configuration
The im_mark module accepts the following directives in addition to the common module directives.
Mark
This optional directive sets the string for the mark message. The default is -- MARK --.
MarkInterval
This optional directive sets the interval for mark messages, in minutes. The default is 30 minutes.
121.19.2. Fields
The following fields are used by im_mark.
121.19.3. Examples
Example 628. Using the im_mark Module
Here, NXLog will write the specified string to file every minute.
nxlog.conf
1 <Input mark>
2 Module im_mark
3 MarkInterval 1
4 Mark -=| MARK |=-
5 </Input>
6
7 <Output file>
8 Module om_file
9 File "tmp/output"
10 </Output>
11
12 <Route mark_to_file>
13 Path mark => file
14 </Route>
NOTE: Windows Vista, Windows 2008, and later use a new EventLog API which is not backward compatible.
Messages in some events produced by sources in this new format cannot be resolved with the old API used by
this module. If such an event is encountered, a $Message similar to the following will be set: The
description for EventID XXXX from source SOURCE cannot be read by im_mseventlog because
this does not support the newer WIN2008/Vista EventLog API. Consider using the
im_msvistalog module instead. Though the majority of event messages can be read with this module
even on Windows 2008/Vista and later, it is recommended to use the im_msvistalog module instead.
NOTE: Strings are stored in DLL and executable files and need to be read by the module when reading
EventLog messages. If a program (DLL/EXE) is already uninstalled and is not available for looking up a string,
the following message will appear instead: The description for EventID XXXX from source SOURCE
cannot be found.
See the list of installer packages that provide the im_mseventlog module in the Available Modules chapter of the
NXLog User Guide.
121.20.1. Configuration
The im_mseventlog module accepts the following directives in addition to the common module directives.
ReadFromLast
This optional boolean directive instructs the module to only read logs which arrived after NXLog was started if
the saved position could not be read (for example on first start). When SavePos is TRUE and a previously
saved position value could be read, the module will resume reading from this saved position. If
ReadFromLast is FALSE, the module will read all logs from the EventLog. This can result in quite a lot of
messages, and is usually not the expected behavior. If this directive is not specified, it defaults to TRUE.
SavePos
This boolean directive specifies that the file position should be saved when NXLog exits. The file position will
be read from the cache file upon startup. The default is TRUE: the file position will be saved if this directive is
not specified. Even if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.
Sources
This optional directive takes a comma-separated list of EventLog filenames, such as Security,
Application, to select specific EventLog sources for reading. If this directive is not specified, then all
available EventLog sources are read (as listed in the registry). This directive should not be confused with the
$SourceName field contained within the EventLog; it is not a list of such names. The value of this directive is
stored in the FileName field.
UTF8
If this optional boolean directive is set to TRUE, all strings will be converted to UTF-8 encoding. Internally this
calls the convert_fields procedure. The xm_charconv module must be loaded for the character set conversion
to work. The default is TRUE, but conversion will only occur if the xm_charconv module is loaded, otherwise
strings will be in the local codepage.
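A sketch combining the Sources and UTF8 directives, assuming the xm_charconv module is loaded (the source list is illustrative):

```
<Extension charconv>
    Module xm_charconv
</Extension>

<Input eventlog>
    Module  im_mseventlog
    Sources Security, Application
    UTF8    TRUE
</Input>
```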
121.20.2. Fields
The following fields are used by im_mseventlog.
AUDIT_SUCCESS, INFO, WARNING, and UNKNOWN.
1/Critical → 5/CRITICAL
2/Error → 4/ERROR
3/Warning → 3/WARNING
4/Information → 2/INFO
5/Verbose → 1/DEBUG
121.20.3. Examples
Example 629. Forwarding EventLogs from a Windows Machine to a Remote Host
This configuration collects Windows EventLog and forwards the messages to a remote host via TCP.
nxlog.conf
1 <Input eventlog>
2 Module im_mseventlog
3 </Input>
4
5 <Output tcp>
6 Module om_tcp
7 Host 192.168.1.1
8 Port 514
9 </Output>
10
11 <Route eventlog_to_tcp>
12 Path eventlog => tcp
13 </Route>
NOTE: For Windows 2003 and earlier, use the im_mseventlog module because the new Windows Event Log
API is only available in Windows Vista, Windows 2008, and later.
NOTE: Use the im_etw module to collect Analytic and Debug logs, as the Windows Event Log subsystem which
im_msvistalog uses does not support subscriptions to Debug or Analytic channels.
In addition to the standard set of fields which are listed under the System section, event providers can define
their own additional schema which enables logging additional data under the EventData section. The Security log
makes use of this feature, and such additional fields can be seen in the following XML snippet:
<EventData>
<Data Name="SubjectUserSid">S-1-5-18</Data>
<Data Name="SubjectUserName">WIN-OUNNPISDHIG$</Data>
<Data Name="SubjectDomainName">WORKGROUP</Data>
<Data Name="SubjectLogonId">0x3e7</Data>
<Data Name="TargetUserSid">S-1-5-18</Data>
<Data Name="TargetUserName">SYSTEM</Data>
<Data Name="TargetDomainName">NT AUTHORITY</Data>
<Data Name="TargetLogonId">0x3e7</Data>
<Data Name="LogonType">5</Data>
<Data Name="LogonProcessName">Advapi</Data>
<Data Name="AuthenticationPackageName">Negotiate</Data>
<Data Name="WorkstationName" />
<Data Name="LogonGuid">{00000000-0000-0000-0000-000000000000}</Data>
<Data Name="TransmittedServices">-</Data>
<Data Name="LmPackageName">-</Data>
<Data Name="KeyLength">0</Data>
<Data Name="ProcessId">0x1dc</Data>
<Data Name="ProcessName">C:\Windows\System32\services.exe</Data>
<Data Name="IpAddress">-</Data>
<Data Name="IpPort">-</Data>
</EventData>
NXLog can extract this data when fields are logged using this schema. The values will be available in the fields of
the internal NXLog log structure. This is especially useful because there is no need to write pattern matching
rules to extract this data from the message. These fields can be used in filtering rules, be written into SQL tables,
or be used to trigger actions. The Exec directive can be used for filtering:
1 <Input in>
2 Module im_msvistalog
3 Exec if ($TargetUserName == 'SYSTEM') OR \
4 ($EventType == 'VERBOSE') drop();
5 </Input>
See the list of installer packages that provide the im_msvistalog module in the Available Modules chapter of the
NXLog User Guide.
121.21.1. Configuration
The im_msvistalog module accepts the following directives in addition to the common module directives.
AddPrefix
If this boolean directive is set to TRUE, names of fields parsed from the <EventData> portion of the event
XML will be prefixed with EventData.. For example, $EventData.SubjectUserName will be added to the
event record instead of $SubjectUserName. The same applies to <UserData>. This directive defaults to
FALSE: field names will not be prefixed.
ReadBatchSize
This optional directive can be used to specify the number of event records the EventLog API will pass to the
module for processing. Larger sizes may increase throughput. Note that there is a known issue in the
Windows EventLog subsystem: when this value is higher than 31 it may fail to retrieve some events on busy
systems, returning the error "EvtNext failed with error 1734: The array bounds are invalid." For this reason,
increasing this value is not recommended. The default is 31.
CaptureEventXML
This boolean directive defines whether the module should store raw XML-formatted event data. If set to
TRUE, the module stores raw XML data in the $EventXML field. By default, the value is set to FALSE, and the
$EventXML field is not added to the record.
Channel
The name of the Channel to query. If not specified, the module will read from all sources defined in the
registry. See the MSDN documentation about Event Selection.
File
This optional directive can be used to specify a full path to a log file. Log file types that can be used have the
following extensions: .evt, .evtx, and .etl. The path of the file must not be quoted (unlike with im_file
and om_file). If the File directive is specified, the SavePos directive will be overridden to TRUE. The File
directive can be specified multiple times to read from multiple files. This module finds files only when the
module instance is started; any files added later will not be read until it is restarted. If the log file specified by
this directive is updated with new event records while NXLog is running (the file size or modification date
attribute changes), the module detects the newly appended records on the fly without requiring the module
instance to be restarted. Reading an EventLog file directly is mostly useful for forensics purposes. The System
log would be read directly with the following:
File C:\Windows\System32\winevt\Logs\System.evtx
You can use wildcards to specify file names and directories. Wildcards are not regular expressions, but are
patterns commonly used by Unix shells to expand filenames (also known as "globbing").
?
Matches any single character.
*
Matches any string, including the empty string.
\*
Matches the asterisk (*) character.
\?
Matches the question mark (?) character.
[…]
Matches one character specified within the brackets. The brackets should contain a single character (for
example, [a]) or a range of characters ([a-z]). If the first character in the brackets is ^ or !, it reverses the
wildcard matching logic (the wildcard matches any character not in the brackets). The backslash (\)
character is used to escape the ] and - characters, as well as ^ and ! at the beginning of the
pattern.
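As a sketch, wildcards could be used to read a set of saved EventLog files (the path is hypothetical):

```
File C:\Logs\Archive-Security-*.evtx
```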
Language
This optional directive specifies a language to use for rendering the events. The language should be given as a
hyphen-separated language/region code (for example, fr-FR for French). Note that the required language
support must be installed on the system. If this directive is not given, the system’s default locale is used.
PollInterval
This directive specifies how frequently the module will check for new events, in seconds. If this directive is not
specified, the default is 1 second. Fractional seconds may be specified (PollInterval 0.5 will check twice
every second).
Query
This directive specifies the query for pulling only specific EventLog sources. See the MSDN documentation
about Event Selection. Note that this directive requires a single-line parameter, so multi-line query XML
should be specified using line continuation:
1 Query <QueryList> \
2 <Query Id='1'> \
3 <Select Path='Security'>*[System/Level=4]</Select> \
4 </Query> \
5 </QueryList>
When the Query contains an XPath style expression, the Channel must also be specified. Otherwise if an XML
Query is specified, the Channel should not be used.
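For example, an XPath-style expression used together with the Channel directive might be sketched as:

```
Channel Security
Query   *[System/Level=4]
```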
QueryXML
This directive is the same as the Query directive above, except it can be used as a block. Multi-line XML
queries can be used without line continuation, and the XML Query can be copied directly from Event Viewer.
1 <QueryXML>
2 <QueryList>
3 <!-- XML-style comments can
4 span multiple lines in
5 QueryXML blocks like this.
6 -->
7 <Query Id='1'>
8 <Select Path='Security'>*[System/Level=4]</Select>
9 </Query>
10 </QueryList>
11 </QueryXML>
CAUTION: Commenting with the # mark does not work within multi-line Query directives or QueryXML
blocks. In this case, use XML-style comments <!-- --> as shown in the example above. Failure to follow this
syntax for comments within queries will render the module instance useless. Since NXLog does not parse the
content of QueryXML blocks, this behavior is expected.
ReadFromLast
This optional boolean directive instructs the module to only read logs which arrived after NXLog was started if
the saved position could not be read (for example on first start). When SavePos is TRUE and a previously
saved position value could be read, the module will resume reading from this saved position. If
ReadFromLast is FALSE, the module will read all logs from the EventLog. This can result in quite a lot of
messages, and is usually not the expected behavior. If this directive is not specified, it defaults to TRUE.
RemoteAuthMethod
This optional directive specifies the authentication method to use. Available values are Default, Negotiate,
Kerberos, and NTLM. When the directive is not specified, Default is used, which is actually Negotiate.
RemoteDomain
Domain of the user used for authentication when logging on the remote server to collect event logs.
RemotePassword
Password of the user used for authentication when logging on the remote server to collect event logs.
RemoteServer
This optional directive specifies the name of the remote server to collect event logs from. If not specified, the
module will collect locally.
RemoteUser
Name of the user used for authentication when logging on the remote server to collect event logs.
ResolveGUID
This optional boolean directive specifies that GUID values should be resolved to their object names in the
$Message field. If ResolveGUID is set to TRUE, two output fields are produced: one that retains the non-resolved
form of the GUID, and another that contains the resolved object name. To differentiate the two output fields,
the resolved field name has the DN suffix added to it. If a field with that name already exists, the resolved field
is not added and the original is preserved. The default is FALSE: the module does not resolve GUID values.
Windows Event Viewer shows the Message with the GUID values resolved, so this directive must be enabled to
get the same output with NXLog.
ResolveSID
This optional boolean directive specifies that SID values should be resolved to user names in the $Message
field. If ResolveSID is set to TRUE, two output fields are produced: one that retains the non-resolved form of
the SID, and another that contains the resolved user name. To differentiate the two output fields, the resolved
field name has the Name suffix added to it. If a field with that name already exists, the resolved field is not
added and the original is preserved. The default is FALSE: the module does not resolve SID values. Windows
Event Viewer shows the Message with the SID values resolved, so this directive must be enabled to get the
same output with NXLog.
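To match Event Viewer output, both directives can simply be enabled together; a minimal sketch (directive placement is illustrative):

```
<Input eventlog>
    Module       im_msvistalog
    ResolveSID   TRUE
    ResolveGUID  TRUE
</Input>
```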
SavePos
This boolean directive specifies that the file position should be saved when NXLog exits. The file position will
be read from the cache file upon startup. The default is TRUE: the file position is saved if this directive is not
specified. Even if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.
TolerateQueryErrors
This boolean directive specifies that im_msvistalog should ignore any invalid sources in the query. The default
is FALSE: im_msvistalog will fail to start if any source is invalid.
121.21.2. Fields
The following fields are used by im_msvistalog.
$EventID (type: integer)
The event ID (specific to the event source) from the EvtSystemEventID field.
Event Log Severity    Normalized Severity
0/Audit Failure       4/ERROR
1/Critical            5/CRITICAL
2/Error               4/ERROR
3/Warning             3/WARNING
4/Information         2/INFO
5/Verbose             1/DEBUG
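The mapping above can be expressed as a small lookup table. This Python sketch (an illustration, not NXLog source code) shows how an Event Log level maps to the normalized severity:

```python
# Windows Event Log severity -> NXLog normalized severity,
# mirroring the table above.
EVENTLOG_TO_NORMALIZED = {
    0: '4/ERROR',     # Audit Failure
    1: '5/CRITICAL',  # Critical
    2: '4/ERROR',     # Error
    3: '3/WARNING',   # Warning
    4: '2/INFO',      # Information
    5: '1/DEBUG',     # Verbose
}

def normalized_severity(level):
    """Return the normalized severity string for an Event Log level."""
    return EVENTLOG_TO_NORMALIZED[level]
```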
121.21.3. Examples
NOTE: Due to a bug or limitation of the Windows Event Log API, 23 or more clauses in a query will result in a failure with the following error message: ERROR failed to subscribe to msvistalog events, the Query is invalid: This operator is unsupported by this implementation of the filter.; [error code: 15001]
Example 630. Forwarding Windows EventLog from Windows to a Remote Host in Syslog Format
This configuration collects Windows EventLog with the specified query. BSD Syslog headers are added and
the messages are forwarded to a remote host via TCP.
nxlog.conf
<Extension syslog>
    Module  xm_syslog
</Extension>

<Input eventlog>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id='0'>
                <Select Path='Application'>*</Select>
                <Select Path='Security'>*[System/Level&lt;4]</Select>
                <Select Path='System'>*</Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>

<Output tcp>
    Module  om_tcp
    Host    192.168.1.1
    Port    514
    Exec    to_syslog_bsd();
</Output>

<Route eventlog_to_tcp>
    Path    eventlog => tcp
</Route>
See the list of installer packages that provide the im_null module in the Available Modules chapter of the NXLog
User Guide.
WARNING This module is deprecated, please use the im_odbc module instead.
121.23.1. Configuration
The im_oci module accepts the following directives in addition to the common module directives. The DBname,
Password, and UserName directives are required.
DBname
Name of the database to read the logs from.
Password
Password for authenticating to the database server.
UserName
Username for authenticating to the database server.
ORACLE_HOME
This optional directive specifies the directory of the Oracle installation.
SavePos
This boolean directive specifies that the last row ID should be saved when NXLog exits. The row ID will be
read from the cache file upon startup. The default is TRUE: the row ID is saved if this directive is not specified.
Even if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.
121.23.2. Examples
Example 631. Reading Logs from an Oracle Database
This configuration will read logs from the specified database and write them to file.
nxlog.conf
<Input oci>
    Module    im_oci
    DBname    //192.168.1.1:1521/orcl
    UserName  user
    Password  secret
    #ORACLE_HOME /home/oracle/instantclient_11_2
</Input>

<Output file>
    Module  om_file
    File    tmp/output
</Output>

<Route oci_to_file>
    Path    oci => file
</Route>
Setting up the ODBC data source is not in the scope of this document. Please consult the relevant ODBC guide:
the unixODBC documentation or the Microsoft ODBC Data Source Administrator guide. The data source must be
accessible by the user NXLog is running under.
In order to continue reading only new log entries after a restart, the table must contain an auto increment, serial,
or timestamp column named id in the returned result set. The value of this column is substituted into the ?
contained in the SELECT (see the SQL directive).
Some data types are not supported by im_odbc. If a column of an unsupported type is included in the result set,
im_odbc will log an unsupported odbc type error to the internal log. To read values from data types that are not
directly supported, use the CAST() function to convert them to a supported type. See the Reading Unsupported
Types example below. Additionally, due to a change in the internal representation of datetime values in SQL
Server, some timestamp values cannot be compared correctly (when used as the id) without an explicit cast in
the WHERE clause. See the SQL Server Reading Logs by datetime ID example in the User Guide.
See the list of installer packages that provide the im_odbc module in the Available Modules chapter of the NXLog
User Guide.
121.24.1. Configuration
The im_odbc module accepts the following directives in addition to the common module directives. The
ConnectionString and SQL directives are required.
ConnectionString
This specifies the connection string containing the ODBC data source name.
SQL
This mandatory parameter sets the SQL statement the module will execute in order to query data from the
data source. The select statement must contain a WHERE clause using the column aliased as id.
Note that WHERE RecordNumber > ? is crucial: without this clause the module will read logs in an endless
loop. The result set returned by the select must contain this id column which is then stored and used for the
next query.
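The role of the ? placeholder can be illustrated with a short simulation (plain Python with hypothetical rows, not NXLog's implementation): each poll substitutes the last seen id into the WHERE clause, so only new rows are returned and nothing is read twice.

```python
def poll(rows, last_id):
    """Simulate 'SELECT ... WHERE RecordNumber > ?' with ? = last_id.

    Returns the new rows and the id to substitute on the next poll.
    """
    new = [r for r in rows if r['id'] > last_id]
    if new:
        last_id = max(r['id'] for r in new)
    return new, last_id

# Hypothetical log table contents.
table = [{'id': 1, 'msg': 'a'}, {'id': 2, 'msg': 'b'}]

batch, pos = poll(table, 0)          # first poll reads both rows
table.append({'id': 3, 'msg': 'c'})  # a new row arrives
batch2, pos = poll(table, pos)       # next poll reads only the new row
```

Without the WHERE clause, every poll would return the full table again, which is why the module would otherwise read logs in an endless loop.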
IdIsTimestamp
When this directive is set to TRUE, it instructs the module to treat the id field as TIMESTAMP type. If this
directive is not specified, it defaults to FALSE: the id field is treated as an INTEGER/NUMERIC type.
WARNING This configuration directive has been obsoleted in favor of IdType timestamp.
IdType
This directive specifies the type of the id field and accepts the following values: integer, timestamp, and
uniqueidentifier. If this directive is not specified, it defaults to integer and the id field is treated as an
INTEGER/NUMERIC type.
NOTE: The timestamp type in Microsoft SQL Server is not a real timestamp; see rowversion (Transact-SQL) on Microsoft Docs. To use an SQL Server timestamp type field as the id, set IdType to integer.

NOTE: The Microsoft SQL Server uniqueidentifier type is only sequential when initialized with the NEWSEQUENTIALID function. Even then, the IDs are not guaranteed to be sequential in all cases. For more information, see uniqueidentifier and NEWSEQUENTIALID on Microsoft Docs.

NOTE: The im_odbc module parses timestamps as local time, converts them to UTC, and then saves them in the event record. This module does not apply any time offset for fields that include time zone information.
MaxIdSQL
This directive can be used to specify an SQL select statement for fetching the last record. MaxIdSQL is
required if ReadFromLast is set to TRUE. The statement must alias the ID column as maxid and return at least
one row containing that column.
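A sketch of how MaxIdSQL pairs with ReadFromLast (the DSN, table, and column names are hypothetical, following the example later in this section):

```
<Input odbc>
    Module            im_odbc
    ConnectionString  DSN=mssql;database=mydb;
    ReadFromLast      TRUE
    MaxIdSQL          SELECT MAX(RecordNumber) AS maxid FROM logtable
    SQL               SELECT RecordNumber AS id, data AS Message \
                      FROM logtable WHERE RecordNumber > ?
</Input>
```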
PollInterval
This directive specifies how frequently, in seconds, the module will check for new records in the database by
executing the SQL SELECT statement. If this directive is not specified, the default is 1 second. Fractional
seconds may be specified (PollInterval 0.5 will check twice every second).
ReadFromLast
This boolean directive instructs the module to only read logs that arrived after NXLog was started if the saved
position could not be read (for example on first start). When SavePos is TRUE and a previously saved position
value could be read, the module will resume reading from this saved position. If ReadFromLast is TRUE, the
MaxIdSQL directive must be set. If this directive is not specified, it defaults to FALSE.
SavePos
This boolean directive specifies that the last row id should be saved when NXLog exits. The row id will be read
from the cache file upon startup. The default is TRUE: the row id is saved if this directive is not specified. Even
if SavePos is enabled, it can be explicitly turned off with the global NoCache directive.
121.24.2. Fields
The following fields are used by im_odbc.
In addition to the field below, each column name returned in the result set is mapped directly to an NXLog field
name.
• the EventTime column or the current time if EventTime was not returned in the result set;
• the Hostname column or the hostname of the local system if Hostname was not returned in the result set;
• the Severity column or INFO if Severity was not returned in the result set; and
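The fallback logic for these fields can be sketched as follows (a plain Python illustration, not NXLog source; current_time and local_hostname stand in for the values NXLog would supply):

```python
def apply_field_defaults(row, current_time, local_hostname):
    """Fill in EventTime, Hostname, and Severity when the result set
    did not return them, per the rules listed above."""
    record = dict(row)
    record.setdefault('EventTime', current_time)
    record.setdefault('Hostname', local_hostname)
    record.setdefault('Severity', 'INFO')
    return record
```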
121.24.3. Examples
Example 632. Reading from an ODBC Data Source
This example uses ODBC to connect to the mydb database and retrieve log messages. The messages are
then forwarded to another agent in the NXLog binary format.
nxlog.conf
<Input odbc>
    Module            im_odbc
    ConnectionString  DSN=mssql;database=mydb;
    SQL               SELECT RecordNumber AS id, \
                      DateOccured AS EventTime, \
                      data AS Message \
                      FROM logtable WHERE RecordNumber > ?
</Input>

<Output tcp>
    Module      om_tcp
    Host        192.168.1.1
    Port        514
    OutputType  Binary
</Output>
This example reads from an SQL Server database. The LogTime field uses the datetimeoffset type, which is
not directly supported by im_odbc. The following configuration uses a SELECT statement that returns two
columns for this field: EventTime for the timestamp and TZOffset for the time-zone offset value.
nxlog.conf
<Input mssql_datetimeoffset>
    Module            im_odbc
    ConnectionString  Driver={ODBC Driver 17 for SQL Server}; Server=MSSQL-HOST; \
                      Trusted_Connection=yes; Database=TESTDB
    IdType            integer
    SQL               SELECT RecordID AS id, \
                      CAST(LogTime AS datetime2) AS EventTime, \
                      DATEPART(tz, LogTime) AS TZOffset, \
                      Message \
                      FROM dbo.test1 WHERE RecordID > ?
    Exec              rename_field($id, $RecordID);
</Input>
121.25.1. Configuration
The im_pcap module accepts the following directives in addition to the common module directives.
Dev
This optional directive can only occur once. It specifies the name of the network device/interface on which
im_pcap will capture packets. This directive is mutually exclusive with the File directive.
File
This optional directive can only occur once. It specifies the path to the file containing captured packet data.
The file path does not need to be enclosed in quotation marks, although both single quoted and double quoted
paths are accepted. This directive is mutually exclusive with the Dev directive.
Protocol
This is an optional group directive. It specifies the protocol, port number, and protocol-specific fields that
should be captured. It may be used multiple times in the module definition to specify multiple protocols. If no
Protocol directive is specified, all protocols will be captured. It has the following sub-directives:
Type
Defines the name of a protocol to capture. Allowed types are: ethernet, ipv4, ipv6, ip, tcp, udp, http,
arp, vlan, icmp, pppoe, dns, mpls, gre, ppp_pptp, ssl, sll, dhcp, null_loopback, igmp, vxlan, sip, sdp,
radius.
Port
A comma-separated list of custom port numbers to capture for the protocol specified in this Protocol
group directive. If omitted, the following standard port number(s) corresponding to this protocol will be
used:

DHCP:   67, 68
VXLAN:  4789
DNS:    53, 5353, 5355
SIP:    5060, 5061
RADIUS: 1812
HTTP:   80, 8081
SSL:    443, 465, 636, 989, 990, 992, 993, 995
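For example, the Port sub-directive can override the defaults above to capture HTTP on non-standard ports (the interface name and port numbers are illustrative):

```
<Input pcap>
    Module  im_pcap
    Dev     eth0
    <Protocol>
        Type  http
        Port  8080, 8888
    </Protocol>
</Input>
```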
Filter
An optional directive that defines a filter, which can be used to further limit the packets that should be
captured and handled by the module. Filters do not need to be enclosed in quotation marks, although both
single quoted and double quoted filters are accepted. If this directive is not used, no filtering is done.

NOTE: Filtering is done by the libpcap library. See the pcap-filter manual page in the libpcap documentation for the syntax.
121.25.2. Fields
The following fields are used by im_pcap.
dhcp.transaction_id

$http.request.method (type: string)
http.request.method

$modbus.function_code (type: string)
Modbus function code

$modbus.query.read_file_record.byte_count (type: string)
modbus.query.read_file_record.byte_count

$modbus.query.read_file_record.sub_req.*.file_number (type: string)
modbus.query.read_file_record.sub_req.*.file_number

$modbus.query.read_file_record.sub_req.*.record_length (type: string)
modbus.query.read_file_record.sub_req.*.record_length

$modbus.query.read_file_record.sub_req.*.record_number (type: string)
modbus.query.read_file_record.sub_req.*.record_number

$modbus.query.read_file_record.sub_req.*.reference_type (type: string)
modbus.query.read_file_record.sub_req.*.reference_type

modbus.query.rw_multiple_regs.reg.*

$modbus.query.write_file_record.sub_rec.*.file_number (type: string)
modbus.query.write_file_record.sub_rec.*.file_number

$modbus.query.write_file_record.sub_rec.*.record_length (type: string)
modbus.query.write_file_record.sub_rec.*.record_length

$modbus.query.write_file_record.sub_rec.*.record_number (type: string)
modbus.query.write_file_record.sub_rec.*.record_number

$modbus.query.write_file_record.sub_rec.*.reference_type (type: string)
modbus.query.write_file_record.sub_rec.*.reference_type

$modbus.query.write_multiple_coils.bit.* (type: integer)
modbus.query.write_multiple_coils.bit.*

$modbus.query.write_multiple_registers.qty_of_regs (type: string)
modbus.query.write_multiple_registers.qty_of_regs

$modbus.query.write_multiple_registers.reg.* (type: integer)
modbus.query.write_multiple_registers.reg.*

$modbus.response.get_comm_event_log.event.* (type: integer)
modbus.response.get_comm_event_log.event.*

modbus.response.read_coils.bit.*

$modbus.response.read_device_id.number_of_objects (type: string)
modbus.response.read_device_id.number_of_objects

$modbus.response.read_device_id.object.*.object_id (type: string)
modbus.response.read_device_id.object.*.object_id

$modbus.response.read_device_id.object.*.object_length (type: string)
modbus.response.read_device_id.object.*.object_length

$modbus.response.read_device_id.object.*.object_value (type: string)
modbus.response.read_device_id.object.*.object_value

$modbus.response.read_discrete_inputs.bit.* (type: integer)
modbus.response.read_discrete_inputs.bit.*

$modbus.response.read_fifo_queue.fifo_value_register.* (type: string)
modbus.response.read_fifo_queue.fifo_value_register.*

$modbus.response.read_file_record.sub_rec.*.file_resp_len (type: string)
modbus.response.read_file_record.sub_rec.*.file_resp_len

$modbus.response.read_file_record.sub_rec.*.reference_type (type: string)
modbus.response.read_file_record.sub_rec.*.reference_type

$modbus.response.read_holding_regs.reg.* (type: string)
modbus.response.read_holding_regs.reg.*

modbus.response.read_input_regs.reg.*

$modbus.response.rw_multiple_regs.byte_count (type: string)
modbus.response.rw_multiple_regs.byte_count

$modbus.response.rw_multiple_regs.reg.* (type: string)
modbus.response.rw_multiple_regs.reg.*

$modbus.response.write_file_record.sub_rec.*.file_number (type: string)
modbus.response.write_file_record.sub_rec.*.file_number

$modbus.response.write_file_record.sub_rec.*.record_length (type: string)
modbus.response.write_file_record.sub_rec.*.record_length

$modbus.response.write_file_record.sub_rec.*.record_number (type: string)
modbus.response.write_file_record.sub_rec.*.record_number

$modbus.response.write_file_record.sub_rec.*.reference_type (type: string)
modbus.response.write_file_record.sub_rec.*.reference_type

$modbus.response.write_multiple_registers.exc_code (type: string)
modbus.response.write_multiple_registers.exc_code

$payload.length (type: string)
payload.length

$spd.field (type: string)
spd.field
121.25.3. Examples
Example 634. Reading from a PCAP File While Applying a Packet Filter
In this example, the File directive defines the path and filename of a .pcap file containing packets saved by
Wireshark. The Filter directive defines a filter that selects only TCP packets targeted for port 443. The output
is formatted as JSON while written to file.
nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Input pcap>
    Module  im_pcap
    File    "tmp/example.pcap"
    Filter  tcp dst port 443
</Input>

<Output file>
    Module  om_file
    File    "tmp/output"
    Exec    to_json();
</Output>
Example 635. Capturing TCP, Ethernet, and HTTP Traffic to a Single File
This configuration illustrates how the Protocol group directive can be defined multiple times within the same
module instance. Three types of network packets are captured: HTTP requests; TCP, for the source and
destination ports of all visible TCP traffic; and Ethernet, to log the source and destination MAC addresses of
packets. The events are formatted as JSON and written to a file.
This approach has two distinct advantages. It produces events that include the fields of all three protocols,
which enables correlating protocols that yield source and destination information with protocols that do not
provide such fields. In addition, it achieves this with a single module instance instead of multiple instances,
which reduces system resource consumption.
nxlog.conf
<Extension _json>
    Module  xm_json
</Extension>

<Input pcap>
    Module  im_pcap
    Dev     enp0s3
    <Protocol>
        Type   http
        Field  http.request.uri
        Field  http.request.method
        Field  http.response.code
        Field  http.response.phrase
    </Protocol>
    <Protocol>
        Type   tcp
        Field  tcp.src_port
        Field  tcp.dst_port
        Field  tcp.flag
    </Protocol>
    <Protocol>
        Type   ethernet
        Field  eth.src_mac
        Field  eth.dest_mac
    </Protocol>
</Input>

<Output file>
    Module  om_file
    File    "tmp/output"
    Exec    to_json();
</Output>
Example 636. Capturing TCP, Ethernet, and HTTP Traffic to Separate Files
In this example, each of the three protocols is managed by a separate module instance. The events are
formatted as JSON and written to their respective files. This approach can be used when there is a need to
analyze each protocol in isolation from the others. Because three input instances are used, more system
resources will be consumed compared to the multi-protocol, single-instance approach.
nxlog.conf (truncated)
<Extension _json>
    Module  xm_json
</Extension>

<Input pcap_tcp>
    Module  im_pcap
    Dev     enp0s3
    <Protocol>
        Type   tcp
        Field  tcp.src_port
        Field  tcp.dst_port
        Field  tcp.flag
    </Protocol>
</Input>

<Input pcap_http>
    Module  im_pcap
    Dev     enp0s3
    <Protocol>
        Type   http
        Field  http.request.uri
        Field  http.request.method
        Field  http.response.code
        Field  http.response.phrase
    </Protocol>
</Input>

<Input pcap_eth>
[...]
This module makes it possible to execute Perl code in an input module to capture and inject event data directly
into NXLog. See also the om_perl and xm_perl modules.
The module will parse the file specified in the PerlCode directive when NXLog starts the module. The Perl code
must implement the read_data subroutine which will be called by the module. To generate event data, the
Log::Nxlog Perl module must be included, which provides the following methods.
NOTE: To use the im_perl module on Windows, a separate Perl environment must be installed, such as Strawberry Perl. Currently, the im_perl module on Windows requires Strawberry Perl 5.28.0.1.
log_debug(msg)
Send the message msg to the internal logger on DEBUG log level. This method does the same as the
log_debug() procedure in NXLog.
log_info(msg)
Send the message msg to the internal logger on INFO log level. This method does the same as the log_info()
procedure in NXLog.
log_warning(msg)
Send the message msg to the internal logger on WARNING log level. This method does the same as the
log_warning() procedure in NXLog.
log_error(msg)
Send the message msg to the internal logger on ERROR log level. This method does the same as the
log_error() procedure in NXLog.
add_input_data(event)
Pass the event record to the next module instance in the route. Failure to call this method will result in a
memory leak.
logdata_new()
Create a new event record. The return value can be used with the set_field_*() methods to insert data.
set_read_timer(delay)
Set the timer in seconds to invoke the read_data method again.
NOTE: The set_read_timer() method should be called in order to invoke read_data again. This is typically used for polling data. The read_data method must not block.
For the full NXLog Perl API, see the POD documentation in Nxlog.pm. The documentation can be read with
perldoc Log::Nxlog.
See the list of installer packages that provide the im_perl module in the Available Modules chapter of the NXLog
User Guide.
121.26.1. Configuration
The im_perl module accepts the following directives in addition to the common module directives.
PerlCode
This mandatory directive expects a file containing valid Perl code that implements the read_data subroutine.
This file is read and parsed by the Perl interpreter.
NOTE: On Windows, the Perl script invoked by the PerlCode directive must define the Perl library paths at the beginning of the script to provide access to the Perl modules.

nxlog-windows.pl
use lib 'c:\Strawberry\perl\lib';
use lib 'c:\Strawberry\perl\vendor\lib';
use lib 'c:\Strawberry\perl\site\lib';
use lib 'c:\Program Files\nxlog\data';
Config
This optional directive allows you to pass configuration strings to the script file defined by the PerlCode
directive. This is a block directive and any text enclosed within <Config></Config> is submitted as a single
string literal to the Perl code.
NOTE: If you pass several values using this directive (for example, separated by the \n delimiter), be sure to parse the string correspondingly inside the Perl code.
Call
This optional directive specifies the Perl subroutine to invoke. With this directive, you can call only specific
subroutines from your Perl code. If the directive is not specified, the default subroutine read_data is invoked.
121.26.2. Examples
Example 637. Using im_perl to Generate Event Data
In this example, logs are generated by a Perl function that increments a counter and inserts it into the
generated line.
nxlog.conf
<Output file>
    Module  om_file
    File    'tmp/output'
</Output>

<Output file2>
    Module  om_file
    File    'tmp/output2'
</Output>

<Input perl>
    Module    im_perl
    PerlCode  modules/input/perl/perl-input.pl
    Call      read_data1
</Input>

<Input perl2>
    Module    im_perl
    PerlCode  modules/input/perl/perl-input2.pl
</Input>

<Route r1>
    Path  perl => file
</Route>

<Route r2>
    Path  perl2 => file2
</Route>
perl-input.pl
use strict;
use warnings;

use Log::Nxlog;

my $counter;

sub read_data1
{
    my $event = Log::Nxlog::logdata_new();
    $counter //= 1;
    my $line = "Input1: this is a test line ($counter) that should appear in the output";
    $counter++;
    Log::Nxlog::set_field_string($event, 'raw_event', $line);
    Log::Nxlog::add_input_data($event);
    if ( $counter <= 100 )
    {
        Log::Nxlog::set_read_timer(0);
    }
}
121.27.1. Configuration
The im_pipe module accepts the following directives in addition to the common module directives.
Pipe
This mandatory directive specifies the name of the input pipe file. The module checks if the specified pipe file
exists and creates it in case it does not. If the specified pipe file is not a named pipe, the module does not
start.
InputType
This directive specifies the input data format. The default value is LineBased. See the InputType directive in
the list of common module directives.
121.27.2. Examples
This example provides the NXLog configuration for processing messages from a named pipe on a UNIX-like
operating system.
With this configuration, NXLog reads messages from a named pipe and forwards them via TCP. No
additional processing is done.
nxlog.conf
<Input in>↵
Module im_pipe↵
Pipe "tmp/pipe"↵
</Input>↵
↵
<Output out>↵
Module om_tcp↵
Host 192.168.1.2↵
Port 514↵
</Output>↵
The Python script should import the nxlog module, and will have access to the following classes and functions.
nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This function does the same as the core
log_debug() procedure.
nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This function does the same as the core
log_info() procedure.
nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This function does the same as the core
log_warning() procedure.
nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This function does the same as the core
log_error() procedure.
class nxlog.Module
This class will be instantiated by NXLog and passed to the read_data() method in the script.
logdata_new()
This method returns a new LogData event object.
set_read_timer(delay)
This method sets a trigger for another read after a specified delay in seconds (float).
class nxlog.LogData
This class represents a Logdata event object.
delete_field(name)
This method removes the field name from the event record.
field_names()
This method returns a list with the names of all the fields currently in the event record.
get_field(name)
This method returns the value of the field name in the event.
post()
This method will submit the LogData event to NXLog for processing by the next module in the route.
set_field(name, value)
This method sets the value of field name to value.
module
This attribute is set to the Module object associated with the event.
See the list of installer packages that provide the im_python module in the Available Modules chapter of the
NXLog User Guide.
121.28.1. Configuration
The im_python module accepts the following directives in addition to the common module directives.
PythonCode
This mandatory directive specifies a file containing Python code. The im_python instance will call a
read_data() function which must accept an nxlog.Module object as its only argument.
Call
This optional directive specifies the Python method to invoke. With this directive, you can call only specific
methods from your Python code. If the directive is not specified, the default method read_data is invoked.
121.28.2. Examples
Example 639. Using im_python to Generate Event Data
In this example, a Python script is used to read Syslog events from multiple log files bundled in tar archives,
which may be compressed. The parse_syslog() procedure is also used to parse the events.
NOTE: To avoid re-reading archives, each one should be removed after reading (see the comments in the script) or other similar functionality implemented.
nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module      im_python
    PythonCode  modules/input/python/2_python.py
    Exec        parse_syslog();
</Input>
2_python.py (truncated)
import os
import tarfile

import nxlog

LOG_DIR = 'modules/input/python/2_logdir'
POLL_INTERVAL = 30

def read_data(module):
    nxlog.log_debug('Checking for new archives')
    for file in os.listdir(LOG_DIR):
        path = os.path.join(LOG_DIR, file)
        nxlog.log_debug("Attempting to read from '{}'".format(path))
        try:
            for line in read_tar(path):
                event = module.logdata_new()
                event.set_field('ImportFile', path)
                event.set_field('raw_event', line)
[...]
The output counterpart, om_redis, can be used to populate the Redis server with data.
See the list of installer packages that provide the im_redis module in the Available Modules chapter of the NXLog
User Guide.
121.29.1. Configuration
The im_redis module accepts the following directives in addition to the common module directives. The Host
directive is required.
Host
This mandatory directive specifies the IP address or DNS hostname of the Redis server to connect to.
Channel
This optional directive defines the Redis channel this module will subscribe to. This directive can be specified
multiple times within the module definition. When the Command directive is set to PSUBSCRIBE, each
Channel directive specifies a glob that will be matched by the Redis server against its available channels. For
the SUBSCRIBE command, Channel specifies the channel names which will be matched as is (no globbing).
The usage of this directive is mutually exclusive with the usage of the LPOP and RPOP commands in the
Command directive.
Command
This optional directive can be used to choose between the LPOP, RPOP, SUBSCRIBE and PSUBSCRIBE
commands. The default Command is set to LPOP, if this directive is not specified.
InputType
See the InputType directive in the list of common module directives. The default is the Dgram reader function, which expects a plain string. To preserve structured data, Binary can be used, but it must also be set on the other end.
Key
This specifies the Key used by the LPOP command. The default is nxlog. The usage of this directive is
mutually exclusive with the usage of the SUBSCRIBE and PSUBSCRIBE commands in the Command directive.
PollInterval
This directive specifies how frequently the module will check for new data, in seconds. If this directive is not
specified, the default is 1 second. Fractional seconds may be specified (PollInterval 0.5 will check twice
every second). The usage of this directive is mutually exclusive with the usage of the SUBSCRIBE and
PSUBSCRIBE commands in the Command directive.
Port
This specifies the port number of the Redis server. The default is port 6379.
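Putting these directives together, a minimal subscription-based configuration might look like the following sketch; the host name and channel names are illustrative assumptions, not values taken from this guide.

```
<Input redis>
    Module   im_redis
    Host     redis.example.com
    Command  SUBSCRIBE
    Channel  nxlog_events
    Channel  nxlog_alerts
</Input>
```

Because the SUBSCRIBE command is used here, the Key and PollInterval directives are omitted; as noted above, they are mutually exclusive with this command.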
121.29.2. Fields
The following fields are used by im_redis.
See the list of installer packages that provide the im_regmon module in the Available Modules chapter of the
NXLog User Guide.
121.30.1. Configuration
The im_regmon module accepts the following directives in addition to the common module directives. The
RegValue directive is required.
RegValue
This mandatory directive specifies the name of the registry entry. It must be a string type expression.
Wildcards are also supported. See the File directive of im_file for more details on how wildcarded entries can
be specified. More than one occurrence of the RegValue directive can be specified. The path of the registry
entry specified with this directive must start with one of the following: HKCC, HKU, HKCU, HKCR, or HKLM.
64BitView
If set to TRUE, this boolean directive indicates that the 64 bit registry view should be monitored. The default is
TRUE.
Digest
This specifies the digest method (hash function) to be used to calculate the checksum. The default is sha1.
The following message digest methods can be used: md2, md5, mdc2, rmd160, sha, sha1, sha224, sha256,
sha384, and sha512.
Exclude
This directive specifies a single registry path or a set of registry values (using wildcards) to be excluded from
the scan. More than one occurrence of the Exclude directive can be used.
Recursive
If set to TRUE, this boolean directive specifies that registry entries set with the RegValue directive should be
scanned recursively under subkeys. For example, HKCU\test\value will match HKCU\test\subkey\value.
Wildcards can be used in combination with Recursive: HKCU\test\value* will match
HKCU\test\subkey\value2. This directive only causes scanning under the given path: HKCU\*\value will not
match HKCU\test\subkey\value. The default is FALSE.
ScanInterval
This directive specifies how frequently, in seconds, the module will check the registry entry or entries for
modifications. The default is 86400 (1 day). The value of ScanInterval can be set to 0 to disable periodic
scanning and instead invoke scans via the start_scan() procedure.
121.30.2. Procedures
The following procedures are exported by im_regmon.
start_scan();
Trigger the Windows registry integrity scan. This procedure returns before the scan is finished.
121.30.3. Fields
The following fields are used by im_regmon.
$DigestName (type: string)
The name of the digest used to calculate the checksum value (for example, SHA1).
121.30.4. Examples
Example 640. Periodic Registry Monitoring
This example monitors the registry entry recursively, and scans every 10 seconds. Messages generated by
any detected changes will be written to file in JSON format.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Input regmon>
6 Module im_regmon
7 RegValue 'HKLM\Software\Policies\*'
8 ScanInterval 10
9 </Input>
10
11 <Output file>
12 Module om_file
13 File 'C:\test\regmon.log'
14 Exec to_json();
15 </Output>
16
17 <Route regmon_to_file>
18 Path regmon => file
19 </Route>
The im_regmon module provides a start_scan() procedure that can be called to invoke the scan. The
following configuration will trigger the scan every day at midnight.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Input regmon>
6 Module im_regmon
7 RegValue 'HKLM\Software\*'
8 Exclude 'HKLM\Software\Program Groups\*'
9 ScanInterval 0
10 <Schedule>
11 When @daily
12 Exec start_scan();
13 </Schedule>
14 </Input>
15
16 <Output file>
17 Module om_file
18 File 'C:\test\regmon.log'
19 Exec to_json();
20 </Output>
21
22 <Route dailycheck>
23 Path regmon => file
24 </Route>
121.31. Ruby (im_ruby)
This module provides support for collecting log data with methods written in the Ruby language. See also the
xm_ruby and om_ruby modules.
Nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This method does the same as the core
log_info() procedure.
Nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This method does the same as the core
log_debug() procedure.
Nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This method does the same as the core
log_warning() procedure.
Nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This method does the same as the core
log_error() procedure.
class Nxlog.Module
This class will be instantiated by NXLog and passed to the method specified by the Call directive.
logdata_new()
This method returns a new LogData object.
set_read_timer(delay)
This method sets a trigger for another read after a specified delay in seconds (float).
class Nxlog.LogData
This class represents an event.
field_names()
This method returns an array with the names of all the fields currently in the event record.
get_field(name)
This method returns the value of the field name in the event.
post()
This method will submit the event to NXLog for processing by the next module in the route.
set_field(name, value)
This method sets the value of field name to value.
See the list of installer packages that provide the im_ruby module in the Available Modules chapter of the NXLog
User Guide.
121.31.1. Configuration
The im_ruby module accepts the following directives in addition to the common module directives. The RubyCode
directive is required.
RubyCode
This mandatory directive specifies a file containing Ruby code. The im_ruby instance will call the method
specified by the Call directive. The method must accept an Nxlog.Module object as its only argument.
Call
This optional directive specifies the Ruby method to call. The default is read_data.
121.31.2. Examples
Example 642. Using im_ruby to Generate Events
In this example, events are generated by a simple Ruby method that increments a counter. Because this
Ruby method does not set the $raw_event field, it would be reasonable to use to_json() or some other way
to preserve the fields for further processing.
nxlog.conf
1 <Input in>
2 Module im_ruby
3 RubyCode ./modules/input/ruby/input2.rb
4 Call read_data
5 </Input>
input2.rb
$index = 0

def read_data(mod)
  Nxlog.log_debug('Creating new event via input.rb')
  $index += 1
  event = mod.logdata_new
  event.set_field('Counter', $index)
  event.set_field('Message', "This is message #{$index}")
  event.post
  mod.set_read_timer 0.3
end
See the list of installer packages that provide the im_ssl module in the Available Modules chapter of the NXLog
User Guide.
121.32.1. Configuration
The im_ssl module accepts the following directives in addition to the common module directives.
ListenAddr
The module will accept connections on this IP address or DNS hostname. The default is localhost. Add the
port number to listen on to the end of a host using a colon as a separator (host:port).
IMPORTANT Formerly called Host, this directive is now ListenAddr. Host in this context will be deprecated from NXLog EE 6.0.
Port
The module will listen for incoming connections on this port number. The default is port 514.
IMPORTANT The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port in ListenAddr.
AllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE the remote will be able to connect with an unknown or self-signed certificate. The default value
is FALSE: all connections must present a trusted certificate.
CADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote socket. The certificate filenames in this directory must be in the OpenSSL
hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including
a copy of the certificate in this directory.
CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote socket. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.
CAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are automatically removed.
This directive is only supported on Windows. This directive and the CADir and CAFile directives are mutually
exclusive.
CertFile
This specifies the path of the certificate file to be used for the SSL handshake.
CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.
CertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are
automatically removed. This directive is only supported on Windows. This directive and the CertFile and
CertKeyFile directives are mutually exclusive.
KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.
CRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote socket. The certificate filenames in this directory must be in the
OpenSSL hashed format.
CRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote socket.
RequireCert
This boolean value specifies that the remote must present a certificate. If set to TRUE and there is no
certificate presented during the connection handshake, the connection will be refused. The default value is
TRUE: each connection must use a certificate.
SSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
SSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the SSLProtocol directive is
set to TLSv1.3. Use the same format as in the SSLCipher directive.
SSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).
NOTE Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not support the zlib compression mechanism. The module will emit a warning on startup if the compression support is missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.
SSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions
may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
121.32.2. Fields
The following fields are used by im_ssl.
121.32.3. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.
Example 643. Accepting Binary Logs From Another NXLog Agent
This configuration accepts secured log messages in the NXLog binary format and writes them to file.
nxlog.conf
1 <Input ssl>
2 Module im_ssl
3 ListenAddr localhost:23456
4 CAFile %CERTDIR%/ca.pem
5 CertFile %CERTDIR%/client-cert.pem
6 CertKeyFile %CERTDIR%/client-key.pem
7 KeyPass secret
8 InputType Binary
9 </Input>
10
11 # old syntax
12 #<Input ssl>
13 # Module im_ssl
14 # ListenAddr localhost
15 # Port 23456
16 # CAFile %CERTDIR%/ca.pem
17 # CertFile %CERTDIR%/client-cert.pem
18 # CertKeyFile %CERTDIR%/client-key.pem
19 # KeyPass secret
20 # InputType Binary
21 #</Input>
NOTE To enable running the im_systemd module under the nxlog user, the latter must be added to the systemd-journal group, for example with the following command:
$ sudo gpasswd -a nxlog systemd-journal
121.33.1. Configuration
The im_systemd module accepts the following directive in addition to the common module directives.
ReadFromLast
If set to TRUE, this optional boolean directive will read only new entries from the journal.
121.33.2. Fields
The following fields are used by im_systemd.
$AuditUID (type: string)
Login UID of the process the journal entry originates from, as maintained by the kernel audit subsystem.
$KernelDevice (type: string)
Device name of the kernel. If the entry is associated to a block device, the field contains the major and minor
of the device node, separated by ":" and prefixed by "b". Similar for character devices but prefixed by "c". For
network devices, this is the interface index prefixed by "n". For all other devices, this is the subsystem name
prefixed by "+", followed by ":", followed by the kernel device name.
$ObjSystemdOwnerUID (type: integer)
This field contains the same value as the 'SystemdOwnerUID', except that the process identified by PID is
described, instead of the process which logged the message.
$SystemdOwnerUID (type: string)
Owner UID of the systemd session (if any) of the process the journal entry originates from.
121.33.3. Examples
Example 644. Using the im_systemd Module to Read the Systemd Journal
nxlog.conf
1 <Input systemd>
2 Module im_systemd
3 ReadFromLast TRUE
4 </Input>
Below is a sample of a systemd journal message after it has been accepted by the im_systemd module
and converted into JSON format using the xm_json module.
Event Sample
{"Severity":"info","SeverityValue":6,"Facility":"auth","FacilityValue":3,↵
"Message":"Reached target User and Group Name Lookups.","SourceName":"systemd",↵
"ProcessID":"1","BootID":"179e1f0a40c64b6cb126ed97278aef89",↵
"MachineID":"0823d4a95f464afeb0021a7e75a1b693","Hostname":"user",↵
"Transport":"kernel","EventReceivedTime":"2020-02-05T14:46:09.809554+00:00",↵
"SourceModuleName":"systemd","SourceModuleType":"im_systemd"}↵
NOTE This module provides no access control. Firewall rules can be used to deny connections from certain hosts.
See the list of installer packages that provide the im_tcp module in the Available Modules chapter of the NXLog User Guide.
121.34.1. Configuration
The im_tcp module accepts the following directives in addition to the common module directives.
ListenAddr
The module will accept connections on this IP address or DNS hostname. For security, the default listen
address is localhost (the localhost loopback address is not accessible from the outside). To receive logs from
remote hosts, the address specified here must be accessible. The any address 0.0.0.0 is commonly used
here. Add the port number to the end of a host using a colon as a separator (host:port).
IMPORTANT Formerly called Host, this directive is now ListenAddr. Host in this context will be deprecated from NXLog EE 6.0.
Port
The module will listen for incoming connections on this port number. The default port is 514 if this directive is
not specified.
IMPORTANT The Port directive will become deprecated from NXLog EE 6.0. Provide the port using ListenAddr.
ReusePort
This optional boolean directive enables synchronous listening on the same port by multiple module
instances. Each module instance runs in its own thread, allowing NXLog to process incoming data
simultaneously to take better advantage of multiprocessor systems. The default value is FALSE.
To enable synchronous listening, the configuration file should contain multiple im_tcp module instances
listening on the same port and the ReusePort directive set to TRUE, see the Examples section.
121.34.2. Fields
The following fields are used by im_tcp.
121.34.3. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.
Example 645. Using the im_tcp Module
With this configuration, NXLog listens for TCP connections on port 1514 and writes the received log
messages to a file.
nxlog.conf
1 <Input tcp>
2 Module im_tcp
3 Host 0.0.0.0
4 Port 1514
5 </Input>
6
7 <Output file>
8 Module om_file
9 File "tmp/output"
10 </Output>
11
12 <Route tcp_to_file>
13 Path tcp => file
14 </Route>
The configuration below provides two im_tcp module instances to reuse port 1514 via the ReusePort
directive. Received messages are written to the /tmp/output file.
nxlog.conf
1 <Input tcp_one>
2 Module im_tcp
3 Host 192.168.31.11
4 Port 1514
5 ReusePort TRUE
6 </Input>
7
8 <Input tcp_two>
9 Module im_tcp
10 Host 192.168.31.11
11 Port 1514
12 ReusePort TRUE
13 </Input>
14
15 <Output file>
16 Module om_file
17 File "tmp/output"
18 </Output>
19
20 <Route tcp_to_file>
21 Path tcp_one, tcp_two => file
22 </Route>
See the list of installer packages that provide the im_testgen module in the Available Modules chapter of the NXLog User Guide.
121.35.1. Configuration
The im_testgen module accepts the following directives in addition to the common module directives.
MaxCount
The module will generate this many events, and then stop generating events. If this directive is not specified,
im_testgen will continue generating events until the module is stopped or NXLog exits.
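As a minimal sketch, the following configuration generates 1000 test events and writes them to a file before stopping; the output path is an illustrative assumption.

```
<Input testgen>
    Module    im_testgen
    MaxCount  1000
</Input>

<Output file>
    Module    om_file
    File      "tmp/output"
</Output>

<Route testgen_to_file>
    Path testgen => file
</Route>
```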
WARNING UDP is an unreliable transport protocol and does not guarantee delivery. Messages may not be received or may be truncated. It is recommended to use the TCP or SSL transport modules instead, if possible.
NOTE This module provides no access control. Firewall rules can be used to drop log events from certain hosts.
For parsing Syslog messages, see the pm_transformer module or the parse_syslog_bsd() procedure of xm_syslog.
See the list of installer packages that provide the im_udp module in the Available Modules chapter of the NXLog
User Guide.
121.36.1. Configuration
The im_udp module accepts the following directives in addition to the common module directives.
ListenAddr
The module will accept connections on this IP address or a DNS hostname. The default is localhost. Add the
port number to listen on to the end of a host using a colon as a separator (host:port).
IMPORTANT Formerly called Host, this directive is now ListenAddr. Host in this context will be deprecated from NXLog EE 6.0.
Port
The module will listen for incoming connections on this port number. The default is port 514.
IMPORTANT The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port in ListenAddr.
ReusePort
This optional boolean directive enables synchronous listening on the same port by multiple module
instances. Each module instance runs in its own thread, allowing NXLog to process incoming data
simultaneously to take better advantage of multiprocessor systems. The default value is FALSE.
To enable synchronous listening, the configuration file should contain multiple im_udp module instances
listening on the same port and the ReusePort directive set to TRUE, see the Examples section.
SockBufSize
This optional directive sets the socket buffer size (SO_RCVBUF) to the value specified. If not set, the operating
system defaults are used. If UDP packet loss is occurring at the kernel level, setting this to a high value (such
as 150000000) may help. On Windows systems the default socket buffer size is extremely low, and using this
option is highly recommended.
UseRecvmmsg
This boolean directive specifies that the recvmmsg() system call should be used, if available, to receive
multiple messages per call to improve performance. The default is TRUE.
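On high-volume collectors, these tuning directives can be combined in one instance; the following sketch is illustrative only (the listen address and buffer size are assumptions, not recommendations for every deployment).

```
<Input udp>
    Module       im_udp
    ListenAddr   0.0.0.0:514
    SockBufSize  150000000
    UseRecvmmsg  TRUE
</Input>
```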
121.36.2. Fields
The following fields are used by im_udp.
121.36.3. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.
This configuration accepts log messages via UDP and writes them to a file.
nxlog.conf
1 <Input udp>
2 Module im_udp
3 ListenAddr 192.168.1.1:514
4 </Input>
5
6 # old syntax
7 #<Input udp>
8 # Module im_udp
9 # Host 192.168.1.1
10 # Port 514
11 #</Input>
12
13 <Output file>
14 Module om_file
15 File "tmp/output"
16 </Output>
17
18 <Route udp_to_file>
19 Path udp => file
20 </Route>
Example 648. Reusing the Single Port by Multiple Module Instances
The configuration below provides two im_udp module instances to reuse port 514 via the ReusePort
directive. Received messages are written to the /tmp/output file.
nxlog.conf
1 <Input udp_one>
2 Module im_udp
3 Host 192.168.1.1
4 Port 514
5 ReusePort TRUE
6 </Input>
7
8 <Input udp_two>
9 Module im_udp
10 Host 192.168.1.1
11 Port 514
12 ReusePort TRUE
13 </Input>
14
15 <Output file>
16 Module om_file
17 File "tmp/output"
18 </Output>
19
20 <Route udp_to_file>
21 Path udp_one, udp_two => file
22 </Route>
NOTE It is recommended to disable FlowControl when this module is used to collect local Syslog messages from the /dev/log Unix domain socket. Otherwise, if the corresponding Output queue becomes full, the syslog() system call will block in any programs trying to write to the system log, and an unresponsive system may result.
For parsing Syslog messages, see the pm_transformer module or the parse_syslog_bsd() procedure of xm_syslog.
See the list of installer packages that provide the im_uds module in the Available Modules chapter of the NXLog
User Guide.
121.37.1. Configuration
The im_uds module accepts the following directives in addition to the common module directives.
UDS
This specifies the path of the Unix domain socket. The default is /dev/log.
CreateDir
If set to TRUE, this optional boolean directive instructs the module to create the directory where the UDS
socket file is located, if it does not already exist. The default is FALSE.
UDSType
This directive specifies the domain socket type. Supported values are dgram and stream. The default is dgram.
InputType
See the InputType directive in the list of common module directives. This defaults to dgram if UDSType is set
to dgram or to linebased if UDSType is set to stream.
UDSGroup
Use this directive to set the group ownership for the created socket. By default, this is the group NXLog is
running as (which may be specified by the global Group directive).
UDSOwner
Use this directive to set the user ownership for the created socket. By default, this is the user NXLog is
running as (which may be specified by the global User directive).
UDSPerms
This directive specifies the permissions to use for the created socket. This must be a four-digit octal value
beginning with a zero. By default, universal read/write permissions will be set (octal value 0666).
121.37.2. Examples
Example 649. Using the im_uds Module
This configuration will accept logs via the specified socket and write them to file.
nxlog.conf
1 <Input uds>
2 Module im_uds
3 UDS /dev/log
4 FlowControl False
5 </Input>
This configuration accepts logs via the specified socket, and also specifies ownership and permissions to
use for the socket.
nxlog.conf
1 <Input uds>
2 Module im_uds
3 UDS /opt/nxlog/var/spool/nxlog/socket
4 UDSOwner root
5 UDSGroup adm
6 UDSPerms 0660
7 </Input>
NOTE This module is only available on Microsoft Windows.
TIP If performance counters are not working or some counters are missing, it may be necessary to rebuild the performance counter registry settings by running C:\windows\system32\lodctr.exe /R. See How to rebuild performance counters on Windows Vista/Server2008/7/Server2008R2 on TechNet for more details, including how to save a backup before rebuilding.
See the list of installer packages that provide the im_winperfcount module in the Available Modules chapter of the
NXLog User Guide.
121.38.1. Configuration
The im_winperfcount module accepts the following directives in addition to the common module directives. The
Counter directive is required.
Counter
This mandatory directive specifies the name of the performance counter that should be polled, such as
\Memory\Available Bytes. More than one Counter directive can be specified to poll multiple counters at
once. Available counter names can be listed with typeperf -q (see the typeperf command reference on
Microsoft Docs).
PollInterval
This directive specifies how frequently, in seconds, the module will poll the performance counters. If this
directive is not specified, the default is 1 second. Fractional seconds may be specified (PollInterval 0.5 will
check twice every second).
UseEnglishCounters
This optional boolean directive specifies whether to use English counter names. This makes it possible to use
the same NXLog configuration across all deployments even if the localization differs. If this directive is not
specified it defaults to FALSE (native names will be used).
AllowInvalidCounters
If set to TRUE, invalid counter names will be ignored and a warning will be logged instead of stopping with an
error. If this directive is not specified it defaults to FALSE.
121.38.2. Fields
The following fields are used by im_winperfcount.
$Severity (type: string)
The severity name: INFO.
121.38.3. Examples
Example 651. Polling Windows Performance Counters
With this configuration, NXLog will retrieve the specified counters every 60 seconds. The resulting messages
will be written to file in JSON format.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Input counters>
6 Module im_winperfcount
7 Counter \Memory\Available Bytes
8 Counter \Process(_Total)\Working Set
9 PollInterval 60
10 </Input>
11
12 <Output file>
13 Module om_file
14 File 'C:\test\counter.log'
15 Exec to_json();
16 </Output>
17
18 <Route perfcount>
19 Path counters => file
20 </Route>
The im_mseventlog module requires NXLog to be installed as an agent on the source host. The im_msvistalog
module can be configured to pull Windows EventLog remotely from Windows hosts, with the NXLog agent running
on Windows. The im_wseventing module can be used on all supported platforms, including GNU/Linux systems, to
remotely collect Windows EventLog without requiring any software to be installed on the source host. Windows
clients can be configured through Group Policy to forward EventLog to the system running the im_wseventing
module, without the need to list each client machine individually in the configuration.
The WS-Eventing protocol and im_wseventing support HTTPS using X509 certificates and Kerberos to authenticate
and securely transfer EventLog.
NOTE While there are other products implementing the WS-Eventing protocol (such as IBM WebSphere DataPower), this module was implemented with the primary purpose of collecting and parsing forwarded events from Microsoft Windows. Compatibility with other products has not been assessed.
See the list of installer packages that provide the im_wseventing module in the Available Modules chapter of the
NXLog User Guide.
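Before walking through the Kerberos and certificate setup below, it may help to see the general shape of an im_wseventing input. The following is a hedged sketch only: the Address URL, certificate paths, and SubscriptionName value are illustrative assumptions, and the directive names should be verified against the im_wseventing reference before use.

```
<Input wseventing>
    Module            im_wseventing
    ListenAddr        0.0.0.0
    Port              5985
    Address           https://nxlogserver.domain.corp:5985/wsman
    SubscriptionName  nxlog
    HTTPSCertFile     %CERTDIR%/server-cert.pem
    HTTPSCertKeyFile  %CERTDIR%/server-key.pem
    HTTPSCAFile       %CERTDIR%/ca-cert.pem
</Input>
```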
b. Set the nameserver and static IP address (substitute the correct interface name).
# nano /etc/sysconfig/network-scripts/ifcfg-enp0s3
Set to:
BOOTPROTO=static
IPADDR=192.168.0.3
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.2
# ntpdate ad.domain.com
3. Go to the domain controller ad.domain.com and create a new user linux (the name of the user should
match the hostname of the Linux node).
a. Go to Administrative Tools → Active Directory Users and Computers → ad.domain.com → Users.
b. Right click and choose New → User.
i. First name: linux
4. In the DNS settings on the domain controller, create an A record for linux.domain.com.
5. Open a Command Prompt on ad.domain.com and execute these commands. Use the same <password> as
in step 3b.
a. Confirm that the Kerberos krb5 client and utility software are installed on the system. The required package can be installed with (for example) yum install krb5-workstation or apt install krb5-user.
.domain.com = DOMAIN.COM
domain.com = DOMAIN.COM
DOMAIN.COM = {
kdc = domain.com
admin_server = domain.com
}
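For context, the realm fragments above belong in /etc/krb5.conf. A minimal sketch, assuming the DOMAIN.COM realm used throughout this chapter, might look like the following; the section placement shown here is the conventional krb5.conf layout, not copied from this guide.

```
[libdefaults]
    default_realm = DOMAIN.COM

[realms]
    DOMAIN.COM = {
        kdc = domain.com
        admin_server = domain.com
    }

[domain_realm]
    .domain.com = DOMAIN.COM
    domain.com = DOMAIN.COM
```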
# ktutil
ktutil: rkt /root/hosts-nxlog.keytab
ktutil: rkt /root/nxlog.keytab
ktutil: wkt /root/nxlog-result.keytab
ktutil: q
# klist -e -k -t /root/nxlog-result.keytab
Keytab name: FILE:/root/nxlog-result.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
5 17.02.2016 04:16:37 hosts/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)
4 17.02.2016 04:16:37 http/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)
e. Either copy the keytab into place, or merge if there are already keys in /etc/krb5.keytab.
cp /root/nxlog-result.keytab /etc/krb5.keytab
# ktutil
ktutil: rkt /etc/krb5.keytab
ktutil: rkt /root/nxlog-result.keytab
ktutil: wkt /etc/krb5.keytab
ktutil: q
# klist -e -k -t /etc/krb5.keytab
Keytab name: FILE:/etc/krb5.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
5 31.12.1969 15:00:00 HTTP/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)
5 17.02.2016 04:20:08 HTTP/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)
5 17.02.2016 04:20:08 hosts/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)
4 17.02.2016 04:20:08 http/linux.domain.com@DOMAIN.COM (aes256-cts-hmac-sha1-96)
9. Make sure the port defined in the im_wseventing configuration is accessible from the Windows clients. The
local firewall rules on the Linux node may need to be updated.
10. Configure and run NXLog. See the configuration example below.
• X509 certificate generation using either OpenSSL or the Windows certificate manager,
• configuration of Windows Remote Management (WinRM) on each Windows source host,
• configuration of the NXLog im_wseventing module.
NOTE We will refer to the host running NXLog with the im_wseventing module as server. Under Windows, the Subscription Manager refers to the same entity, since im_wseventing is what manages the subscription. We will use the name client when referring to the Windows hosts sending the logs using WEF.
The client certificate must have the X509 v3 Extended Key Usage: TLS Web Client Authentication
extension and the server certificate needs the X509 v3 Extended Key Usage: TLS Web Server
Authentication extension. You will likely encounter an error when trying to configure WEF and the connection
to the server will fail without these extended key usage attributes. Also make sure that the intended purposes of
the certificates are set to Server Authentication and Client Authentication, respectively.
When generating the certificates, please ensure that the CN in the server certificate subject matches the reverse DNS name; otherwise you may get errors in the Microsoft Windows/Event-ForwardingPlugin/Operational eventlog saying The SSL certificate contains a common name (CN) that does not match the hostname.
For OpenSSL based certificate generation see the scripts in our public git repository.
SUBJ="/CN=NXLog-WEF-CA/O=nxlog.org/C=HU/ST=state/L=location"
openssl req -x509 -nodes -newkey rsa:2048 -keyout ca-key.pem -out ca-cert.pem -batch -subj "$SUBJ" -config gencert.cnf
openssl x509 -outform der -in ca-cert.pem -out ca-cert.crt
Generate the client certificate and export it together with the CA in PFX format to be imported into the Windows
certificate store:
CLIENTSUBJ="/CN=winclient.domain.corp/O=nxlog.org/C=HU/ST=state/L=location"
openssl req -new -newkey rsa:2048 -nodes -keyout client-key.pem -out req.pem -batch -subj "$CLIENTSUBJ" -config gencert.cnf
openssl x509 -req -days 1024 -in req.pem -CA ca-cert.pem -CAkey ca-key.pem -out client-cert.pem -set_serial 01 -extensions client_cert -extfile gencert.cnf
openssl pkcs12 -export -out client.pfx -inkey client-key.pem -in client-cert.pem -certfile ca-cert.pem
Generate the server certificate, which im_wseventing will use for its HTTPS listener:
SERVERSUBJ="/CN=nxlogserver.domain.corp/O=nxlog.org/C=HU/ST=state/L=location"
openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out req.pem -batch -subj "$SERVERSUBJ" -config gencert.cnf
openssl x509 -req -days 1024 -in req.pem -CA ca-cert.pem -CAkey ca-key.pem -out server-cert.pem -set_serial 01 -extensions server_cert -extfile gencert.cnf
openssl x509 -outform der -in server-cert.pem -out server-cert.crt
In order to generate the certificates with the correct extensions the following is needed in gencert.cnf:
[ server_cert ]
basicConstraints=CA:FALSE
nsCertType = server
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer:always
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
#crlDistributionPoints=URI:http://127.0.0.1/crl.pem
[ client_cert ]
basicConstraints=CA:FALSE
nsCertType = client
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer:always
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
NOTE: If you are using an intermediary CA, make sure that the ca-cert.pem file contains, in the
correct order, the public part of every issuer's certificate. The easiest way to achieve this is to
concatenate the PEM certificates with cat.
If you have more complex requirements, follow this guide on how to set up a CA and generate certificates with
OpenSSL.
For more information on creating certificates under Windows, see this document: Request Certificates by Using
the Certificate Request Wizard.
Make sure to create the certificates with the required extensions as noted above. Once you have issued the
certificates, you will need to export the server certificate in PFX format. The PFX must also contain the private key;
the password may be omitted. The PFX file can then be converted to the PEM format required by im_wseventing
using openssl.
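The exact commands are not shown above; a typical sketch, assuming the exported file is named server.pfx, is the following (openssl will prompt for the import password, which may be empty):

```shell
# Extract the server certificate from the exported PFX (filename server.pfx is an assumption)
openssl pkcs12 -in server.pfx -clcerts -nokeys -out server-cert.pem
# Extract the private key unencrypted (-nodes), as needed for HTTPSCertKeyFile
openssl pkcs12 -in server.pfx -nocerts -nodes -out server-key.pem
```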
You will also need to export the CA certificate (without the private key) the same way and convert it into ca-cert.pem.
Use server-key.pem, server-cert.pem, and ca-cert.pem for the HTTPSCertKeyFile, HTTPSCertFile, and
HTTPSCAFile directives, respectively.
Optionally you can use the QueryXML option to filter on specific channels or events.
See the configuration example below for how your nxlog.conf should look.
Once the configuration is complete you may start the nxlog service.
1. Install, configure, and enable Windows Remote Management (WinRM) on each source host.
a. Make sure the Windows Remote Management (WS-Management) service is installed, running, and set to
Automatic startup type.
b. If WinRM is not already installed, see these instructions on MSDN: Installation and Configuration for
Windows Remote Management.
c. Check that the proper client authentication method (Certificate) is enabled for WinRM by issuing the
following command:
winrm get winrm/config/client/auth
The output should include Certificate = true:
Auth
Basic = false
Digest = true
Kerberos = true
Negotiate = true
Certificate = true
CredSSP = true [Source="GPO"]
NOTE: Windows Remoting does not support event forwarding over unsecured transport (such
as HTTP). Therefore it is recommended to disable Basic authentication:
winrm set winrm/config/client/auth @{Basic="false"}
d. If you used OpenSSL to generate the certificates, import the client authentication certificate. In the
Certificates MMC snap-in for the Local Computer, click More actions › All Tasks › Import…. Import the
client.pfx file. Enter the private key password (if set) and make sure the Include all extended
properties check-box is selected.
NOTE: After importing is completed, open the Certificates MMC snap-in, select Computer
account, and double-click the client certificate to check whether the full certificate chain is
available and trusted. You may need to move the CA certificate under Trusted Root
Certification Authorities in order to make the client certificate trusted.
e. Grant the NetworkService account the proper permissions to access the client certificate using the
Windows HTTP Services Certificate Configuration Tool (WinHttpCertCfg.exe). Check that the
NetworkService account has access to the private key of the client authentication certificate by
running the following command (substitute the subject of your client certificate):
winhttpcertcfg -l -c LOCAL_MACHINE\My -s winclient.domain.corp
If NetworkService is not listed in the output, grant it permissions by running the following command:
winhttpcertcfg -g -c LOCAL_MACHINE\My -s winclient.domain.corp -a NetworkService
f. In order to access the Security EventLog, the NetworkService account needs to be added to the Event Log
Readers group.
g. Configure the source host security policy to enable event forwarding:
i. Run the Group Policy MMC snap-in (gpedit.msc) and go to Computer Configuration ›
Administrative Templates › Windows Components › Event Forwarding.
ii. Right-click the SubscriptionManager setting and select Properties. Enable the
SubscriptionManager setting and click Show to add a server address.
iii. Add at least one setting that specifies the NXLog collector system. The SubscriptionManager
Properties window contains an Explain tab that describes the syntax for the setting. If you have
used the gencert-server.sh script it should print the subscription manager string that has the
following format:
Server=HTTPS://nxlogserver.domain.corp:5985/wsman/,Refresh=14400,IssuerCA=57F5048548A6A983C3A14DA80E0626E4A462FC04
iv. To find the IssuerCA fingerprint, open MMC, add the Certificates snap-in, select the Local Computer
account, and find the issuing CA certificate. Copy the Thumbprint from the Details tab. Make sure
to remove the spaces, as well as the invisible non-breaking space that precedes the first character of
the thumbprint on Windows 2008.
v. After the SubscriptionManager setting has been added, ensure the policy is applied by running:
gpupdate /force
vi. At this point the WinRM service on the Windows client should connect to NXLog, a connection
attempt should be logged in nxlog.log, and you should soon start seeing events arriving.
For reference, the channel access security descriptor of the Security log with read access (0x1) granted to
the Event Log Readers group (S-1-5-32-573) and the NetworkService account (NS) looks like this:
O:BAG:SYD:(A;;0xf0005;;;SY)(A;;0x5;;;BA)(A;;0x1;;;S-1-5-32-573)(A;;0x1;;;NS)
121.39.4. Troubleshooting
WEF is not easy to configure, and many things can go wrong. To troubleshoot WEF, check the Windows Eventlog
channels such as Microsoft Windows/Event-ForwardingPlugin/Operational and Applications and Services
Logs/Microsoft Windows/CAPI2.
The CN in the server certificate subject must match the reverse DNS name, otherwise you may get errors in the
Microsoft Windows/Event-ForwardingPlugin/Operational eventlog saying The SSL certificate
contains a common name (CN) that does not match the hostname. In that case the WinRM service
may also want to use a CRL URL to download the revocation list; if it cannot check the CRL, there will be error
messages under Applications and Services Logs/Microsoft Windows/CAPI2.
In our experience, if the FQDN and the reverse DNS of the server are properly set up, the CRL check should not
fail.
Unfortunately, the diagnostic messages in the Windows Eventlog are in some cases rather unhelpful. You may see
messages such as The forwarder is having a problem communicating with the subscription manager
at address https://nxlog:5985/wsman/. Error code is 42424242 and the Error Message is . Note
the empty error message. Other than guessing, you may try looking up the error code on the internet.
If the IssuerCA thumbprint is incorrect or the client cannot locate the certificate in the certificate store, the above
error will be logged in the Windows EventLog with Error code 2150858882.
The Refresh interval in the GPO Subscription Manager settings should be set to a higher value (for example,
Refresh=1200), otherwise the Windows client will reconnect too frequently, resulting in a lot of
connection/disconnection messages in nxlog.log.
By default the module does not log connection attempts, which would otherwise be useful for troubleshooting
purposes. This can be turned on with the LogConnections configuration directive. When LogConnections is
enabled, the Windows Event Forwarding service may disconnect during the TLS handshake with the following
message logged in nxlog.log. This is normal as long as there is another connection attempt right after the
disconnection.
2017-09-28 12:16:01 INFO connection accepted from 10.2.0.161:49381
2017-09-28 12:16:01 ERROR im_wseventing got disconnected during SSL handshake
2017-09-28 12:16:01 INFO connection accepted from 10.2.0.161:49381
See the article on Technet titled Windows Event Forwarding to a workgroup Collector Server for further
instructions and troubleshooting tips.
121.39.5. Configuration
The im_wseventing module accepts the following directives in addition to the common module directives. The
Address and ListenAddr directives are required.
Address
This mandatory directive accepts a URL address. This address is sent to the client to notify it where the events
should be sent. For example, Address https://nxlogserver.domain.corp:5985/wsman.
ListenAddr
This mandatory directive specifies the address of the interface where the module should listen for client
connections. Normally, the wildcard address 0.0.0.0 is used.
AddPrefix
If this boolean directive is set to TRUE, names of fields parsed from the <EventData> portion of the event
XML will be prefixed with EventData.. For example, $EventData.SubjectUserName will be added to the
event record instead of $SubjectUserName. The same applies to <UserData>. This directive defaults to
FALSE: field names will not be prefixed.
CaptureEventXML
This boolean directive defines whether the module should store raw XML-formatted event data. If set to
TRUE, the module stores raw XML data in the $EventXML field. By default, the value is set to FALSE, and the
$EventXML field is not added to the event record.
ConnectionRetry
This optional directive specifies the reconnection interval. The default value is PT60.0S (60 seconds).
ConnectionRetryTotal
This optional directive specifies the maximum number of reconnection attempts. The default is 5 attempts. If
the client exceeds the retry count it will consider the subscription to be stale and will not attempt to
reconnect.
Expires
This optional directive can be used to specify a duration after which the subscription will expire, or an
absolute time when the subscription will expire. By default, the subscription will never expire.
HeartBeats
Heartbeats are dummy events that do not appear in the output. These are used by the client to signal that
logging is still functional if no events are generated during the specified time period. The default heartbeat
value of PT3600.000S may be overridden with this optional directive.
HTTPSAllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE the remote will be able to connect with an unknown or self-signed certificate. The default value
is FALSE: all HTTPS connections must present a trusted certificate.
HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS client. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.
HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS client. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.
HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.
HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.
HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.
HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is
automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.
HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS client. The certificate filenames in this directory must be in
the OpenSSL hashed format.
HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS client.
HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.
HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.
HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).
NOTE: Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and
may not support the zlib compression mechanism. The module will emit a warning on
startup if the compression support is missing. The generic deb/rpm packages are bundled
with a zlib-enabled libssl library.
HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
LogConnections
This boolean directive can be used to turn on logging of connections. Since WEF connections can be quite
frequent, this could generate a lot of noise; on the other hand, it can be useful for troubleshooting
clients. This is disabled by default.
MaxElements
This optional directive specifies the maximum number of event records to be batched by the client. If this is
not specified the default value is decided by the client.
MaxEnvelopeSize
This optional directive can be used to set a limit on the size of the allowed responses, in bytes. The default
size is 153600 bytes. Event records exceeding this size will be dropped by the client and replaced by a drop
notification.
MaxTime
This optional directive specifies the maximum amount of time allowed to elapse for the client to batch events.
The default value is PT30.000S (30 seconds).
Port
This specifies the port on which the module will listen for incoming connections. The default is port 5985.
Query
This directive specifies the query for pulling only specific EventLog sources. See the MSDN documentation
about Event Selection. Note that this directive requires a single-line parameter, so multi-line query XML
should be specified using line continuation:
1 Query <QueryList> \
2 <Query Id='1'> \
3 <Select Path='Security'>*[System/Level=4]</Select> \
4 </Query> \
5 </QueryList>
QueryXML
This directive is the same as the Query directive above, except it can be used as a block. Multi-line XML
queries can be used without line continuation, and the XML Query can be directly copied from Event Viewer.
1 <QueryXML>
2 <QueryList>
3 <!-- XML-style comments can
4 span multiple lines in
5 QueryXML blocks like this.
6 -->
7 <Query Id='1'>
8 <Select Path='Security'>*[System/Level=4]</Select>
9 </Query>
10 </QueryList>
11 </QueryXML>
CAUTION: Commenting with the # mark does not work within multi-line Query directives or QueryXML
blocks. In this case, use XML-style comments (<!-- -->) as shown in the example above. Since
NXLog does not parse the content of QueryXML blocks, failing to follow this syntax for
comments within queries will render the module instance useless.
SubscriptionName
The default value of NXLog Subscription may be overridden with this optional directive. This name will
appear in the client logs.
121.39.6. Fields
The following fields are used by im_wseventing.
The actual fields generated will vary depending on the particular event’s source data.
$ExecutionProcessID (type: integer)
The ID identifying the process that generated the event.
Event Log Severity    Normalized Severity
1/Critical            5/CRITICAL
2/Error               4/ERROR
3/Warning             3/WARNING
4/Information         2/INFO
5/Verbose             1/DEBUG
121.39.7. Examples
Example 652. Collecting Forwarded Events Using Kerberos
nxlog.conf
1 SuppressRepeatingLogs FALSE
2
3 <Extension json>
4 Module xm_json
5 </Extension>
6
7 <Input wseventin>
8 Module im_wseventing
9 Address http://LINUX.DOMAIN.COM:80/wsman
10 ListenAddr 0.0.0.0
11 Port 80
12 SubscriptionName test
13 Exec log_info(to_json());
14 <QueryXML>
15 <QueryList>
16 <Query Id="0" Path="Application">
17 <Select Path="Application">*</Select>
18 <Select Path="Security">*</Select>
19 <Select Path="Setup">*</Select>
20 <Select Path="System">*</Select>
21 <Select Path="ForwardedEvents">*</Select>
22 <Select Path="Windows PowerShell">*</Select>
23 </Query>
24 </QueryList>
25 </QueryXML>
26 </Input>
Example 653. Collecting Forwarded Events Using HTTPS
This example Input module instance collects Windows EventLog remotely. Two EventLog queries are
specified, the first for hostnames matching foo* and the second for other hostnames.
nxlog.conf
1 <Input wseventing>
2 Module im_wseventing
3 ListenAddr 0.0.0.0
4 Port 5985
5 Address https://linux.corp.domain.com:5985/wsman
6 HTTPSCertFile %CERTDIR%/server-cert.pem
7 HTTPSCertKeyFile %CERTDIR%/server-key.pem
8 HTTPSCAFile %CERTDIR%/ca.pem
9 <QueryXML>
10 <QueryList>
11 <Computer>foo*</Computer>
12 <Query Id="0" Path="Application">
13 <Select Path="Application">*</Select>
14 </Query>
15 </QueryList>
16 </QueryXML>
17 <QueryXML>
18 <QueryList>
19 <Query Id="0" Path="Application">
20 <Select Path="Application">*</Select>
21 <Select Path="Microsoft-Windows-Winsock-AFD/Operational">*</Select>
22 <Select Path="Microsoft-Windows-Wired-AutoConfig/Operational">*</Select>
23 <Select Path="Microsoft-Windows-Wordpad/Admin">*</Select>
24 <Select Path="Windows PowerShell">*</Select>
25 </Query>
26 </QueryList>
27 </QueryXML>
28 </Input>
See the list of installer packages that provide the im_zmq module in the Available Modules chapter of the NXLog
User Guide.
121.40.1. Configuration
The im_zmq module accepts the following directives in addition to the common module directives. The Address,
ConnectionType, Port, and SocketType directives are required.
Address
This directive specifies the ZeroMQ socket address.
ConnectionType
This mandatory directive specifies the underlying transport protocol. It may be one of the following: TCP, PGM,
or EPGM.
Port
This directive specifies the ZeroMQ socket port.
SocketType
This mandatory directive defines the type of the socket to be used. It may be one of the following: REQ,
DEALER, SUB, XSUB, or PULL. This must be set to SUB if ConnectionType is set to PGM or EPGM.
Connect
If this boolean directive is set to TRUE, im_zmq will connect to the Address specified. If FALSE, im_zmq will bind
to the Address and listen for connections. The default is FALSE.
InputType
See the InputType directive in the list of common module directives. The default is Dgram.
Interface
This directive specifies the ZeroMQ socket interface.
SockOpt
This directive can be used to set ZeroMQ socket options. For example, SockOpt ZMQ_SUBSCRIBE
ANIMALS.CATS. This directive may be used more than once to set multiple options.
121.40.2. Examples
Example 654. Using the im_zmq Module
This example configuration accepts ZeroMQ messages over TCP and writes them to file.
nxlog.conf
1 <Input zmq>
2 Module im_zmq
3 SocketType PULL
4 ConnectionType TCP
5 Address 10.0.0.1
6 Port 1415
7 </Input>
8
9 <Output file>
10 Module om_file
11 File "/var/log/zmq-messages.log"
12 </Output>
13
14 <Route zmq_to_file>
15 Path zmq => file
16 </Route>
Chapter 122. Processor Modules
Processor modules can be used to process log messages in the log message path between configured Input and
Output modules.
See the list of installer packages that provide the pm_blocker module in the Available Modules chapter of the
NXLog User Guide.
122.1.1. Configuration
The pm_blocker module accepts only the common module directives.
122.1.2. Functions
The following functions are exported by pm_blocker.
boolean is_blocking()
Return TRUE if the module is currently blocking the data flow, FALSE otherwise.
122.1.3. Procedures
The following procedures are exported by pm_blocker.
block(boolean mode);
When mode is TRUE, the module will block; when mode is FALSE, it will let data flow again. block(FALSE)
should be called from a Schedule block or from another module, because it might not get invoked if the
queue is already full.
122.1.4. Examples
Example 655. Using the pm_blocker Module
In this example messages are received over UDP and forwarded to another host via TCP. The log data is
forwarded during non-working hours (between 7pm and 8am). During working hours, the data is buffered
on the disk.
nxlog.conf (truncated)
1 <Input udp>
2 Module im_udp
3 Host 0.0.0.0
4 Port 1514
5 </Input>
6
7 <Processor buffer>
8 Module pm_buffer
9 # 100 MB disk buffer
10 MaxSize 102400
11 Type disk
12 </Processor>
13
14 <Processor blocker>
15 Module pm_blocker
16 <Schedule>
17 When 0 8 * * *
18 Exec blocker->block(TRUE);
19 </Schedule>
20 <Schedule>
21 When 0 19 * * *
22 Exec blocker->block(FALSE);
23 </Schedule>
24 </Processor>
25
26 <Output tcp>
27 Module om_tcp
28 Host 192.168.1.1
29 [...]
The pm_buffer module supports disk- and memory-based log message buffering. If both are required, multiple
pm_buffer instances can be used with different settings. A memory buffer is faster, but its size is limited;
combining memory- and disk-based buffering can be a good idea if buffering is used frequently.
The disk-based buffering mode stores the log message data in chunks. When all the data is successfully
forwarded from a chunk, it is then deleted in order to save disk space.
NOTE: Using pm_buffer is only recommended when there is a chance of message loss. The built-in flow
control in NXLog ensures that messages will not be read by the input module until the output
side can send, store, or forward. When reading from files (with im_file) or the Windows EventLog
(with im_mseventlog or im_msvistalog), it is rarely necessary to use the pm_buffer module unless
log rotation is used. During a rotation, there is a possibility of dropping some data while the
output module (om_tcp, for example) is being blocked.
See the list of installer packages that provide the pm_buffer module in the Available Modules chapter of the
NXLog User Guide.
122.2.1. Configuration
The pm_buffer module accepts the following directives in addition to the common module directives. The
MaxSize and Type directives are required.
CreateDir
If set to TRUE, this optional boolean directive instructs the module to create the output directory before
opening the file for writing if it does not exist. The default is FALSE.
MaxSize
This mandatory directive specifies the size of the buffer in kilobytes.
Type
This directive can be set to either Mem or Disk to select memory- or disk-based buffering.
Directory
This directory will be used to store the disk buffer file chunks. This is only valid if Type is set to Disk.
WarnLimit
This directive specifies an optional limit, smaller than MaxSize, which will trigger a warning message when
reached. The log message will not be generated again until the buffer size drops to half of WarnLimit and
reaches it again in order to protect against a warning message flood.
122.2.2. Functions
The following functions are exported by pm_buffer.
integer buffer_count()
Return the number of log messages held in the memory buffer.
integer buffer_size()
Return the size of the memory buffer in bytes.
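As an illustrative sketch (not from the official examples), these functions can be invoked from a Schedule block by referencing the pm_buffer instance, similar to how block() is called in the pm_blocker example; the instance name, sizes, and interval here are arbitrary:

```
<Processor buffer>
    Module  pm_buffer
    MaxSize 1024
    Type    Mem
    <Schedule>
        Every 1 min
        # Periodically report how much data the buffer currently holds
        Exec  log_info("pm_buffer holds " + buffer->buffer_count() + " events, " + buffer->buffer_size() + " bytes");
    </Schedule>
</Processor>
```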
122.2.3. Examples
Example 656. Using a Memory Buffer to Protect Against UDP Message Loss
This configuration accepts log messages via UDP and forwards them via TCP. An intermediate memory-
based buffer allows the im_udp module instance to continue accepting messages even if the om_tcp output
stops working (caused by downtime of the remote host or network issues, for example).
nxlog.conf
1 <Input udp>
2 Module im_udp
3 Host 0.0.0.0
4 Port 514
5 </Input>
6
7 <Processor buffer>
8 Module pm_buffer
9 # 1 MB buffer
10 MaxSize 1024
11 Type Mem
12 # warn at 512k
13 WarnLimit 512
14 </Processor>
15
16 <Output tcp>
17 Module om_tcp
18 Host 192.168.1.1
19 Port 1514
20 </Output>
21
22 <Route udp_to_tcp>
23 Path udp => buffer => tcp
24 </Route>
This module was greatly inspired by the Perl based correlation tool SEC. Some of the rules of the pm_evcorr
module were designed to mimic those available in SEC. This module aims to be a better alternative to SEC with
the following advantages:
• The correlation rules in SEC work with the current time. With pm_evcorr it is possible to specify a time field
which is used for elapsed time calculation making offline event correlation possible.
• SEC uses regular expressions extensively, which can become quite slow if there are many correlation rules. In
contrast, this module can correlate pre-processed messages using fields from, for example, the pattern
matcher and Syslog parsers without requiring the use of regular expressions (though these are also available
for use by correlation rules). Thus testing conditions can be significantly faster when simple comparison is
used instead of regular expression based pattern matching.
• This module was designed to operate on fields, making it possible to correlate structured logs in addition to
simple free-form log messages.
• Most importantly, this module is written in C, providing performance benefits (where SEC is written in pure
Perl).
The rulesets of this module can use a context. A context is an expression which is evaluated during runtime to a
value and the correlation rule is checked in the context of this value. For example, to count the number of failed
logins per user and alert if the failed logins exceed 3 for the user, the $AccountName would be used as the
context. There is a separate context storage for each correlation rule instance. For global contexts accessible
from all rule instances, see module variables and statistical counters.
See the list of installer packages that provide the pm_evcorr module in the Available Modules chapter of the
NXLog User Guide.
122.3.1. Configuration
The pm_evcorr module accepts the following directives in addition to the common module directives.
The pm_evcorr configuration contains correlation rules which are evaluated for each log message processed by
the module. Currently there are seven rule types supported by pm_evcorr: Absence, Group, Pair, Simple, Stop,
Suppressed, and Thresholded. These rules are defined in configuration blocks. The rules are evaluated in the
order they are defined. For example, a correlation rule can change a state, variable, or field which can be then
used by a later rule. File inclusion can be useful to store correlation rules in a separate file.
Absence
This rule type does the opposite of Pair. When TriggerCondition evaluates to TRUE, this rule type will wait
Interval seconds for RequiredCondition to become TRUE. If it does not become TRUE, it executes the
statement(s) in the Exec directive(s).
Context
This optional directive specifies an expression to be used as the context. It must evaluate to a value.
Usually a field is specified here.
Exec
One or more Exec directives must be specified, each taking a statement as argument.
NOTE: The evaluation of this Exec is not triggered by a log event; thus it does not make sense to
use log data related operations such as accessing fields.
Interval
This mandatory directive takes an integer argument specifying the number of seconds to wait for
RequiredCondition to become TRUE. Its value must be greater than 0. The TimeField directive is used to
calculate time.
RequiredCondition
This mandatory directive takes an expression as argument which must evaluate to a boolean value. When
this evaluates to TRUE after TriggerCondition evaluated to TRUE within Interval seconds, the statement(s)
in the Exec directive(s) are NOT executed.
TriggerCondition
This mandatory directive takes an expression as argument which must evaluate to a boolean value.
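A minimal sketch of an Absence rule, assuming input where the $EventTime and $Message fields exist; the message patterns and interval are illustrative only:

```
<Processor evcorr>
    Module    pm_evcorr
    TimeField EventTime
    <Absence>
        # When a backup start is seen, expect a completion within an hour
        TriggerCondition  $Message =~ /backup started/
        RequiredCondition $Message =~ /backup finished/
        Interval          3600
        Exec              log_warning("no backup completion seen within an hour");
    </Absence>
</Processor>
```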
Group
This rule type groups messages together based on the specified correlation context. The Exec block is
executed at each event. The last log data of each context group is available through get_prev_event_data().
This way, fields and information can be propagated from the previous group event to the following one.
Context
This mandatory directive specifies an expression to be used as the context. It must evaluate to a value.
Usually a field is specified here.
Exec
One or more Exec directives must be specified, each taking a statement as an argument.
Pair
When TriggerCondition evaluates to TRUE, this rule type will wait Interval seconds for RequiredCondition to
become TRUE. It then executes the statement(s) in the Exec directive(s).
Context
This optional directive specifies an expression to be used as the context. It must evaluate to a value.
Usually a field is specified here.
Exec
One or more Exec directives must be specified, each taking a statement as argument.
Interval
This directive takes an integer argument specifying the number of seconds to wait for RequiredCondition
to become TRUE. If this directive is 0 or not specified, the rule will wait indefinitely for RequiredCondition
to become TRUE. The TimeField directive is used to calculate time.
RequiredCondition
This mandatory directive takes an expression as argument which must evaluate to a boolean value. When
this evaluates to TRUE after TriggerCondition evaluated to TRUE within Interval seconds, the statement(s)
in the Exec directive(s) are executed.
TriggerCondition
This mandatory directive takes an expression as argument which must evaluate to a boolean value.
Simple
This rule type is essentially the same as the Exec directive supported by all modules. Because Exec directives
are evaluated before the correlation rules, the Simple rule is needed to evaluate a statement in rule order, as
the other rules do. The Simple block has one directive, also with the same name.
Exec
One or more Exec directives must be specified, with a statement as argument.
Stop
This rule type will stop evaluating successive rules if the Condition evaluates to TRUE. The optional Exec
directive will be evaluated in this case.
Condition
This mandatory directive takes an expression as argument which must evaluate to a boolean value. When
it evaluates to TRUE, the correlation rule engine will stop checking any further rules.
Exec
One or more Exec directives may be specified, each taking a statement as argument. This will be
evaluated when the specified Condition is satisfied. This directive is optional.
Suppressed
This rule type matches the given condition. If the condition evaluates to TRUE, the statement specified with the Exec directive is evaluated. The rule will then ignore any log messages for the time specified with the Interval directive. This rule is useful for avoiding multiple alerts in a short period when a condition is satisfied.
Condition
This mandatory directive takes an expression as argument which must evaluate to a boolean value.
Context
This optional directive specifies an expression to be used as the context. It must evaluate to a value.
Usually a field is specified here.
Exec
One or more Exec directives must be specified, each taking a statement as argument.
Interval
This mandatory directive takes an integer argument specifying the number of seconds to ignore the
condition. The TimeField directive is used to calculate time.
Thresholded
This rule type will execute the statement(s) in the Exec directive(s) if the Condition evaluates to TRUE
Threshold or more times during the Interval specified. The advantage of this rule over the use of statistical
counters is that the time window is dynamic and shifts as log messages are processed.
Condition
This mandatory directive takes an expression as argument which must evaluate to a boolean value.
Context
This optional directive specifies an expression to be used as the context. It must evaluate to a value.
Usually a field is specified here.
Exec
One or more Exec directives must be specified, each taking a statement as argument.
Interval
This mandatory directive takes an integer argument specifying a time window for Condition to become
TRUE. Its value must be greater than 0. The TimeField directive is used to calculate time. This time window
is dynamic, meaning that it will shift.
Threshold
This mandatory directive takes an integer argument specifying the number of times Condition must
evaluate to TRUE within the given time Interval. When the threshold is reached, the module executes the
statement(s) in the Exec directive(s).
ContextCleanTime
When a Context is used in the correlation rules, expired contexts must be purged from memory; otherwise, accumulating too many context values could result in high memory usage. This optional directive specifies the interval between context cleanups, in seconds. If any rules use a Context and this directive is not specified, a 60-second cleanup interval is used by default.
TimeField
This specifies the name of the field to use for calculating elapsed time, such as EventTime. The name of the
field must be specified without the leading dollar sign ($). If this parameter is not specified, the current time is
assumed. This directive makes it possible to accurately correlate events based on the event time recorded in
the logs and to do non-real-time event correlation.
122.3.2. Functions
The following functions are exported by pm_evcorr.
122.3.3. Examples
Example 657. The Absence Directive
The following configuration shows the Absence directive. In this case, if TriggerCondition evaluates to TRUE, the rule waits the number of seconds defined in Interval for RequiredCondition to become TRUE. If RequiredCondition does not become TRUE within the specified interval, the statement defined in Exec is executed.
nxlog.conf
<Input internal>
    Module      im_internal
    <Exec>
        $raw_event = $Message;
        $EventTime = 2010-01-01 00:01:00;
    </Exec>
</Input>

<Processor evcorr>
    Module      pm_evcorr
    TimeField   EventTime
    <Absence>
        TriggerCondition   $Message =~ /^absence-trigger/
        RequiredCondition  $Message =~ /^absence-required/
        Interval           10
        <Exec>
            log_info("'absence-required' not received within 10 secs");
        </Exec>
    </Absence>
</Processor>
Input Sample
2010-01-01 00:00:26 absence-trigger↵
2010-01-01 00:00:29 absence-required - will not log 'got absence'↵
2010-01-01 00:00:46 absence-trigger↵
2010-01-01 00:00:57 absence-required - will log an additional 'absence-required not received
within 10 secs'↵
Output Sample
absence-trigger↵
absence-required - will not log 'got absence'↵
absence-trigger↵
absence-required - will log an additional 'absence-required not received within 10 secs'↵
'absence-required' not received within 10 secs↵
Example 658. The Group Directive
The following configuration shows rules for the Group directive. The events are rewritten to exclude the date and time: for the first event in a context, $raw_event is rebuilt from the context and message; for every subsequently matched event in the same context, the $Message field of the newly matched event is appended to the stored $raw_event.
nxlog.conf
<Processor evcorr>
    Module            pm_evcorr
    TimeField         EventTime
    ContextCleanTime  10
    <Group>
        Context  $Context
        <Exec>
            if defined get_prev_event_data("raw_event")
            {
                $raw_event = get_prev_event_data("raw_event") + ", " + $Message;
            }
            else
            {
                $raw_event = "Context: " + $Context + " Messages: " + $Message;
            }
        </Exec>
    </Group>
</Processor>
Input Sample
2010-01-01 00:00:01 [a] suppressed1↵
2010-01-01 00:00:02 [b] suppressed2↵
2010-01-01 00:00:03 [a] suppressed3↵
2010-01-01 00:00:04 [b] suppressed4↵
2010-01-01 00:00:04 [b] suppressed5↵
2010-01-01 00:00:05 [c] suppressed6↵
2010-01-01 00:00:06 [c] suppressed7↵
2010-01-01 00:00:34 [b] suppressed8↵
2010-01-01 00:01:00 [a] pair-first1↵
Output Sample
Context: a Messages: suppressed1↵
Context: b Messages: suppressed2↵
Context: a Messages: suppressed1, suppressed3↵
Context: b Messages: suppressed2, suppressed4↵
Context: b Messages: suppressed2, suppressed4, suppressed5↵
Context: c Messages: suppressed6↵
Context: c Messages: suppressed6, suppressed7↵
Context: b Messages: suppressed2, suppressed4, suppressed5, suppressed8↵
Context: a Messages: suppressed1, suppressed3, pair-first1↵
Example 659. The Pair Directive
The following configuration shows rules for the Pair directive. In this case, if TriggerCondition evaluates to TRUE, the rule waits the number of seconds defined in Interval for RequiredCondition to become TRUE, then executes what is defined in Exec. If Interval is 0, the rule waits indefinitely for RequiredCondition to become TRUE.
nxlog.conf
<Processor evcorr>
    Module      pm_evcorr
    TimeField   EventTime
    <Pair>
        TriggerCondition   $Message =~ /^pair-first/
        RequiredCondition  $Message =~ /^pair-second/
        Interval           30
        Exec               $raw_event = "got pair";
    </Pair>
</Processor>
Input Sample
2010-01-01 00:00:12 pair-first - now look for pair-second↵
2010-01-01 00:00:22 pair-second - will log 'got pair'↵
2010-01-01 00:00:25 pair-first↵
2010-01-01 00:00:56 pair-second - will not log 'got pair' because it is over the interval↵
Output Sample
pair-first - now look for pair-second↵
got pair↵
pair-first↵
Example 660. The Simple Directive
The following configuration shows rules for the Simple directive. In this case, if the $Message field starts with simple, it is rewritten to got simple.
nxlog.conf
<Processor evcorr>
    Module      pm_evcorr
    TimeField   EventTime
    <Simple>
        Exec    if $Message =~ /^simple/ $raw_event = "got simple";
    </Simple>
</Processor>
Input Sample
2010-01-01 00:00:00 Not simple↵
2010-01-01 00:00:05 Not simple again↵
2010-01-01 00:00:10 simple1↵
2010-01-01 00:00:15 simple2↵
Output Sample
Not simple↵
Not simple again↵
got simple↵
got simple↵
Example 661. The Stop Directive
The following configuration shows a rule for the Stop directive in conjunction with the Simple directive. In this case, the Stop condition evaluates to FALSE, so rule evaluation continues and the Simple rule rewrites the output.
nxlog.conf
<Processor evcorr>
    Module      pm_evcorr
    TimeField   EventTime
    <Stop>
        Condition   $EventTime < 2010-01-01 00:00:00
        Exec        log_debug("got stop");
    </Stop>
    <Simple>
        Exec        $raw_event = "rewritten";
    </Simple>
</Processor>
Input Sample
2010-01-02 00:00:00 this will be rewritten↵
2010-01-02 00:00:10 this too↵
2010-01-02 00:00:15 as well as this↵
Output Sample
rewritten↵
rewritten↵
rewritten↵
Example 662. The Suppressed Directive
The following configuration shows a rule for the Suppressed directive. In this case, the first event matching the Condition triggers the Exec statement; further matching events are then passed on unmodified for the number of seconds defined in Interval.
nxlog.conf
<Processor evcorr>
    Module      pm_evcorr
    TimeField   EventTime
    <Suppressed>
        Condition   $Message =~ /^to be suppressed/
        Interval    30
        Exec        $raw_event = "suppressed..";
    </Suppressed>
</Processor>
Input Sample
2010-01-01 00:00:01 to be suppressed1 - Suppress kicks in, will log 'suppressed..'↵
2010-01-01 00:00:21 to be suppressed2 - suppressed and logged as is↵
2010-01-01 00:00:23 to be suppressed3 - suppressed and logged as is↵
Output Sample
suppressed..↵
to be suppressed2 - suppressed and logged as is↵
to be suppressed3 - suppressed and logged as is↵
Example 663. The Thresholded Directive
The following configuration shows rules for the Thresholded directive. In this case, if the number of matching events reaches the given threshold within the interval, the action defined in Exec is carried out.
nxlog.conf
<Processor evcorr>
    Module      pm_evcorr
    TimeField   EventTime
    <Thresholded>
        Condition   $Message =~ /^thresholded/
        Threshold   3
        Interval    60
        Exec        $raw_event = "got thresholded";
    </Thresholded>
</Processor>
Input Sample
2010-01-01 00:00:13 thresholded1 - not tresholded will log as is↵
2010-01-01 00:00:15 thresholded2 - not tresholded will log as is↵
2010-01-01 00:00:20 thresholded3 - will log 'got thresholded'↵
2010-01-01 00:00:25 thresholded4 - will log 'got thresholded' again↵
Output Sample
thresholded1 - not tresholded will log as is↵
thresholded2 - not tresholded will log as is↵
got thresholded↵
got thresholded↵
122.4. Filter (pm_filter)
NOTE: This module has been deprecated and will be removed in a future release. Filtering is now possible in any module with a conditional drop() procedure in an Exec block or directive.
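For example (a minimal sketch; the /junk/ pattern and the surrounding im_uds module are illustrative assumptions), such a conditional drop() statement could be placed in any module:

<Input uds>
    Module  im_uds
    UDS     /dev/log
    # Drop events whose raw text matches the pattern
    Exec    if $raw_event =~ /junk/ drop();
</Input>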
This statement drops the current event if the $raw_event field matches the specified regular expression.
See the list of installer packages that provide the pm_filter module in the Available Modules chapter of the NXLog
User Guide.
122.4.1. Configuration
The pm_filter module accepts the following directives in addition to the common module directives.
Condition
This mandatory directive takes an expression as argument which must evaluate to a boolean value. If the
expression does not evaluate to TRUE, the log message is discarded.
122.4.2. Examples
Example 665. Filtering Messages
This configuration retains only log messages that match one of the regular expressions; all others are discarded.
nxlog.conf
<Input uds>
    Module      im_uds
    UDS         /dev/log
</Input>

<Processor filter>
    Module      pm_filter
    Condition   $raw_event =~ /failed/ or $raw_event =~ /error/
</Processor>

<Output file>
    Module      om_file
    File        "/var/log/error"
</Output>

<Route uds_to_file>
    Path        uds => filter => file
</Route>
122.5. HMAC Message Integrity (pm_hmac)
NOTE: This module has been deprecated and will be removed in a future release.
When the module starts, it creates an initial random hash value, which is signed with the private key and stored in the $nxlog.hmac_initial field. As messages pass through the module, it calculates a hash value using the previous hash value, the initial hash value, and the fields of the log message. This calculated value is added to the log message as a new field called $nxlog.hmac and can be used later to verify the integrity of the message.
WARNING: If an attacker can insert messages at the source, this module will add an HMAC value to those messages and the activity will go unnoticed. This method only secures messages that are already protected with an HMAC value.
NOTE: For this method to work more securely, the private key should be protected by a password, and the password should not be stored with the key (the configuration file should not contain the password). This will force the agent to prompt for the password when it is started.
See the list of installer packages that provide the pm_hmac module in the Available Modules chapter of the
NXLog User Guide.
122.5.1. Configuration
The pm_hmac module accepts the following directives in addition to the common module directives. The
CertKeyFile directive is required.
CertKeyFile
This mandatory directive specifies the path of the private key file to be used to sign the initial hash value.
Fields
This directive accepts a comma-separated list of fields. These fields will be used for calculating the HMAC
value. This directive is optional, and the $raw_event field will be used if it is not specified.
HashMethod
This directive sets the hash function. The following message digest methods can be used: md2, md5, mdc2,
rmd160, sha, sha1, sha224, sha256, sha384, and sha512. The default is md5.
KeyPass
This specifies the password of the CertKeyFile.
122.5.2. Fields
The following fields are used by pm_hmac.
122.5.3. Examples
Example 666. Protecting Messages with a HMAC Value
This configuration uses the im_uds module to read log messages from a socket. It then adds a hash value
to each message. Finally it forwards them via TCP to another NXLog agent in the binary format.
nxlog.conf
<Input uds>
    Module      im_uds
    UDS         /dev/log
</Input>

<Processor hmac>
    Module      pm_hmac
    CertKeyFile %CERTDIR%/client-key.pem
    KeyPass     secret
    HashMethod  SHA1
</Processor>

<Output tcp>
    Module      om_tcp
    Host        192.168.1.1
    Port        1514
    OutputType  Binary
</Output>

<Route uds_to_tcp>
    Path        uds => hmac => tcp
</Route>
122.6. HMAC Message Integrity Checker (pm_hmac_check)
NOTE: This module has been deprecated and will be removed in a future release.
See the list of installer packages that provide the pm_hmac_check module in the Available Modules chapter of the
NXLog User Guide.
122.6.1. Configuration
The pm_hmac_check module accepts the following directives in addition to the common module directives. The
CertFile directive is required.
CertFile
This mandatory directive specifies the path of the certificate file to be used to verify the signature of the initial
hash value.
HashMethod
This directive sets the hash function. The following message digest methods can be used: md2, md5, mdc2,
rmd160, sha, sha1, sha224, sha256, sha384, and sha512. The default is md5. This must be the same as the
hash method used for creating the HMAC values.
CADir
This optional directive specifies the path to a directory containing certificate authority (CA) certificates, which
will be used to verify the certificate. The certificate filenames in this directory must be in the OpenSSL hashed
format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including a copy
of the certificate in this directory.
CAFile
This optional directive specifies the path of the certificate authority (CA) certificate, which will be used to
verify the certificate. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.
CRLDir
This optional directive specifies the path to a directory containing certificate revocation lists (CRLs), which will
be consulted when checking the certificate. The certificate filenames in this directory must be in the OpenSSL
hashed format.
CRLFile
This optional directive specifies the path of the certificate revocation list (CRL), which will be consulted when
checking the certificate.
Fields
This directive accepts a comma-separated list of fields. These fields will be used for calculating the HMAC
value. This directive is optional, and the $raw_event field will be used if it is not specified.
122.6.2. Fields
The following fields are used by pm_hmac_check.
122.6.3. Examples
Example 667. Verifying Message Integrity
This configuration accepts log messages in the NXLog binary format. The HMAC values are checked, then
the messages are written to file.
nxlog.conf
<Input tcp>
    Module      im_tcp
    Host        192.168.1.1
    Port        1514
    InputType   Binary
</Input>

<Processor hmac_check>
    Module      pm_hmac_check
    CertFile    %CERTDIR%/client-cert.pem
    CAFile      %CERTDIR%/ca.pem
    # CRLFile   %CERTDIR%/crl.pem
    HashMethod  SHA1
</Processor>

<Output file>
    Module      om_file
    File        "/var/log/msg"
</Output>

<Route tcp_to_file>
    Path        tcp => hmac_check => file
</Route>
122.7. Message De-Duplicator (pm_norepeat)
NOTE: This module has been deprecated and will be removed in a future release. The functionality of this module can be implemented with Variables.
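For instance, a duplicate filter can be sketched with module variables (a hedged sketch, assuming the core set_var()/get_var() procedures; comparing only the $Message field is an illustrative choice):

<Input uds>
    Module  im_uds
    UDS     /dev/log
    <Exec>
        # Drop the event if its $Message equals the previously seen one
        if defined get_var('last_msg') and get_var('last_msg') == $Message
        {
            drop();
        }
        else
        {
            set_var('last_msg', $Message);
        }
    </Exec>
</Input>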
See the list of installer packages that provide the pm_norepeat module in the Available Modules chapter of the
NXLog User Guide.
122.7.1. Configuration
The pm_norepeat module accepts the following directives in addition to the common module directives.
CheckFields
This optional directive takes a comma-separated list of field names which are used to compare log messages.
Only the fields listed here are compared, the others are ignored. For example, the $EventTime field will be
different in repeating messages, so this field should not be used in the comparison. If this directive is not
specified, the default field to be checked is $Message.
122.7.2. Fields
The following fields are used by pm_norepeat.
122.7.3. Examples
Example 668. Filtering Out Duplicated Messages
This configuration reads log messages from the socket. The $Hostname, $SourceName, and $Message fields
are used to detect duplicates. Then the messages are written to file.
nxlog.conf
<Input uds>
    Module      im_uds
    UDS         /dev/log
</Input>

<Processor norepeat>
    Module      pm_norepeat
    CheckFields Hostname, SourceName, Message
</Processor>

<Output file>
    Module      om_file
    File        "/var/log/messages"
</Output>

<Route uds_to_file>
    Path        uds => norepeat => file
</Route>
122.8. Null (pm_null)
This module does not perform any special processing; essentially, it does nothing. However, like any other module, it can be used with the Exec and Schedule directives.
See the list of installer packages that provide the pm_null module in the Available Modules chapter of the NXLog
User Guide.
122.9. Pattern Matcher (pm_pattern)
See the list of installer packages that provide the pm_pattern module in the Available Modules chapter of the NXLog User Guide.
122.9.1. Configuration
The pm_pattern module accepts the following directives in addition to the common module directives.
PatternFile
This mandatory directive specifies the name of the pattern database file.
122.9.2. Fields
The following fields are used by pm_pattern.
122.9.3. Examples
Example 669. Using the pm_pattern Module
This configuration reads BSD Syslog messages from the socket, processes the messages with a pattern file,
and then writes them to file in JSON format.
nxlog.conf
<Extension json>
    Module      xm_json
</Extension>

<Extension syslog>
    Module      xm_syslog
</Extension>

<Input uds>
    Module      im_uds
    UDS         /dev/log
    Exec        parse_syslog_bsd();
</Input>

<Processor pattern>
    Module      pm_pattern
    PatternFile /var/lib/nxlog/patterndb.xml
</Processor>

<Output file>
    Module      om_file
    File        "/var/log/out"
    Exec        to_json();
</Output>

<Route uds_to_file>
    Path        uds => pattern => file
</Route>
The following pattern database contains two patterns to match SSH authentication messages. The patterns
are under a group named ssh which checks whether the $SourceName field is sshd and only tries to match
the patterns if the logs are indeed from sshd. The patterns both extract AuthMethod, AccountName, and
SourceIP4Address from the log message when the pattern matches the log. Additionally TaxonomyStatus and
TaxonomyAction are set. The second pattern utilizes the Exec block, which is evaluated when the pattern
matches.
NOTE: For this pattern to work, the logs must be parsed with parse_syslog() prior to processing by the pm_pattern module (as in the above example), because it uses the $SourceName and $Message fields.
patterndb.xml
<?xml version='1.0' encoding='UTF-8'?>
<patterndb>
  <created>2010-01-01 01:02:03</created>
  <version>42</version>
  <group>
    <name>ssh</name>
    <id>42</id>
    <matchfield>
      <name>SourceName</name>
      <type>exact</type>
      <value>sshd</value>
    </matchfield>
    <pattern>
      <id>1</id>
      <name>ssh auth success</name>
      <matchfield>
        <name>Message</name>
        <type>regexp</type>
        <!-- Accepted publickey for nxlogfan from 192.168.1.1 port 4242 ssh2 -->
        <value>^Accepted (\S+) for (\S+) from (\S+) port \d+ ssh2</value>
        <capturedfield>
          <name>AuthMethod</name>
          <type>string</type>
        </capturedfield>
        <capturedfield>
          <name>AccountName</name>
          <type>string</type>
        </capturedfield>
        <capturedfield>
          <name>SourceIP4Address</name>
          <type>string</type>
        </capturedfield>
      </matchfield>
      <set>
        <field>
          <name>TaxonomyStatus</name>
          <value>success</value>
          <type>string</type>
        </field>
        <field>
          <name>TaxonomyAction</name>
          <value>authenticate</value>
          <type>string</type>
        </field>
      </set>
    </pattern>
    <pattern>
      <id>2</id>
      <name>ssh auth failure</name>
      <matchfield>
        <name>Message</name>
        <type>regexp</type>
        <value>^Failed (\S+) for invalid user (\S+) from (\S+) port \d+ ssh2</value>
        <capturedfield>
          <name>AuthMethod</name>
          <type>string</type>
        </capturedfield>
        <capturedfield>
          <name>AccountName</name>
          <type>string</type>
        </capturedfield>
        <capturedfield>
          <name>SourceIP4Address</name>
          <type>string</type>
        </capturedfield>
      </matchfield>
      <set>
        <field>
          <name>TaxonomyStatus</name>
          <value>failure</value>
          <type>string</type>
        </field>
        <field>
          <name>TaxonomyAction</name>
          <value>authenticate</value>
          <type>string</type>
        </field>
      </set>
      <exec>
        $TestField = 'test';
      </exec>
      <exec>
        $TestField = $TestField + 'value';
      </exec>
    </pattern>
  </group>
</patterndb>
122.10. Format Converter (pm_transformer)
NOTE: This module has been deprecated and will be removed in a future release. Format conversion is now possible in any module by using functions and procedures provided by the following modules: xm_syslog, xm_csv, xm_json, and xm_xml.
See the list of installer packages that provide the pm_transformer module in the Available Modules chapter of the
NXLog User Guide.
122.10.1. Configuration
The pm_transformer module accepts the following directives in addition to the common module directives. For
conversion to occur, the InputFormat and OutputFormat directives must be specified.
InputFormat
This directive specifies the input format of the $raw_event field so that it is further parsed into fields. If this
directive is not specified, no parsing will be performed.
CSV
Input is parsed as a comma-separated list of values. See xm_csv for similar functionality. The input fields
must be defined by CSVInputFields.
JSON
Input is parsed as JSON. This does the same as the parse_json() procedure.
syslog_bsd
Same as syslog_rfc3164.
syslog_ietf
Same as syslog_rfc5424.
syslog_rfc3164
Input is parsed in the BSD Syslog format as defined by RFC 3164. This does the same as the
parse_syslog_bsd() procedure.
syslog_rfc5424
Input is parsed in the IETF Syslog format as defined by RFC 5424. This does the same as the
parse_syslog_ietf() procedure.
XML
Input is parsed as XML. This does the same as the parse_xml() procedure.
OutputFormat
This directive specifies the output transformation. If this directive is not specified, fields are not converted
and $raw_event is left unmodified.
CSV
Output in $raw_event is formatted as a comma-separated list of values. See xm_csv for similar
functionality.
JSON
Output in $raw_event is formatted as JSON. This does the same as the to_json() procedure.
syslog_bsd
Same as syslog_rfc3164.
syslog_ietf
Same as syslog_rfc5424.
syslog_rfc3164
Output in $raw_event is formatted in the BSD Syslog format as defined by RFC 3164. This does the same
as the to_syslog_bsd() procedure.
syslog_rfc5424
Output in $raw_event is formatted in the IETF Syslog format as defined by RFC 5424. This does the same
as the to_syslog_ietf() procedure.
syslog_snare
Output in $raw_event is formatted in the SNARE Syslog format. This does the same as the
to_syslog_snare() procedure. This should be used in conjunction with the im_mseventlog or im_msvistalog
module to produce an output compatible with Snare Agent for Windows.
XML
Output in $raw_event is formatted in XML. This does the same as the to_xml() procedure.
CSVInputFields
This is a comma-separated list of fields which will be set from the input parsed. The field names must have
the dollar sign ($) prepended.
CSVInputFieldTypes
This optional directive specifies the list of types corresponding to the field names defined in CSVInputFields. If
specified, the number of types must match the number of field names specified with CSVInputFields. If this
directive is omitted, all fields will be stored as strings. This directive has no effect on the fields-to-CSV
conversion.
CSVOutputFields
This is a comma-separated list of message fields which are placed in the CSV lines. The field names must have
the dollar sign ($) prepended.
122.10.2. Examples
Example 670. Using the pm_transformer Module
This configuration reads BSD Syslog messages from file and writes them to another file in CSV format.
nxlog.conf
<Extension syslog>
    Module      xm_syslog
</Extension>

<Input filein>
    Module      im_file
    File        "tmp/input"
</Input>

<Processor transformer>
    Module          pm_transformer
    InputFormat     syslog_rfc3164
    OutputFormat    csv
    CSVOutputFields $facility, $severity, $timestamp, $hostname, \
                    $application, $pid, $message
</Processor>

<Output fileout>
    Module      om_file
    File        "tmp/output"
</Output>

<Route filein_to_fileout>
    Path        filein => transformer => fileout
</Route>
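The reverse conversion can be sketched as well; here CSV input is parsed into typed fields and emitted as JSON (the field list, types, and file paths are illustrative assumptions, and xm_json is loaded on the assumption that JSON output requires it):

<Extension json>
    Module  xm_json
</Extension>

<Input filein>
    Module  im_file
    File    "tmp/input.csv"
</Input>

<Processor transformer>
    Module              pm_transformer
    InputFormat         csv
    CSVInputFields      $hostname, $count, $message
    CSVInputFieldTypes  string, integer, string
    OutputFormat        json
</Processor>

<Output fileout>
    Module  om_file
    File    "tmp/output.json"
</Output>

<Route filein_to_fileout>
    Path    filein => transformer => fileout
</Route>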
122.11. Timestamping (pm_ts)
NOTE: This module has been deprecated and will be removed in a future release.
A timestamp request is created for each log message received by this module, and the response is appended to the $tsa_response field. The module does not request the certificate to be included in the response, as this would greatly increase the size of the responses. The certificate used by the server for creating timestamps should be saved manually for later verification. The module establishes one HTTP connection to the server for time-stamping, using HTTP Keep-Alive requests, and will reconnect if the remote closes the connection.
NOTE: Since each log message generates an HTTP(S) request to the Time-Stamp server, message throughput can be greatly affected. It is recommended that only messages of relevant importance be time-stamped, by applying proper filtering rules to messages before they reach the pm_ts module instance. Creating timestamps in batch mode (requesting one timestamp for multiple messages) is not supported at this time.
See the list of installer packages that provide the pm_ts module in the Available Modules chapter of the NXLog
User Guide.
122.11.1. Configuration
The pm_ts module accepts the following directives in addition to the common module directives. The URL
directive is required.
URL
This mandatory directive specifies the URL of the Time-Stamp Authority server. The URL must begin with
either http:// for plain HTTP over TCP or https:// for HTTP over SSL.
Digest
This specifies the digest method (hash function) to be used. The SHA1 hash function is used by default. The
following message digest methods can be used: md2, md5, mdc2, rmd160, sha, sha1, sha224, sha256, sha384,
and sha512. Note that the Time-Stamp server must support the digest method specified.
Fields
This directive accepts a comma-separated list of fields. These fields will be used for calculating the hash value
sent to the TSA server. This directive is optional, and the $raw_event field is used if it is not specified.
HTTPSAllowUntrusted
This boolean directive specifies that the connection to the Time-Stamp Authority server should be allowed
without certificate verification. If set to TRUE, the connection will be allowed even if the server provides an
unknown or self-signed certificate. The default value is FALSE: all Time-Stamp Authority servers must present
a trusted certificate.
HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote Time-Stamp Authority server. The certificate filenames in this directory
must be in the OpenSSL hashed format. This directive can only be specified if the URL begins with https. A
remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including a copy of the
certificate in this directory.
HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check
the certificate of the remote Time-Stamp Authority server. This directive can only be specified if the URL
begins with https. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.
HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.
HTTPSCertFile
This specifies the path of the certificate file to be used for the SSL handshake. This directive can only be
specified if the URL begins with https. If this directive is not specified but the URL begins with https, then an
anonymous SSL connection is attempted without presenting a client-side certificate.
HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake. This directive can only be
specified if the URL begins with https. If this directive is not specified but the URL begins with https, then an
anonymous SSL connection is attempted without presenting a client-side certificate.
HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is
automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.
HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote Time-Stamp Authority server. The certificate filenames in this
directory must be in the OpenSSL hashed format. This directive can only be specified if the URL begins with
https.
HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote Time-Stamp Authority server. This directive can only be specified if the URL begins
with https.
HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.
HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.
HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).
NOTE: Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not support the zlib compression mechanism. The module will emit a warning on startup if the compression support is missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.
HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
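Taken together, the HTTPS directives above can be combined as in the following sketch; the TSA URL and certificate path are placeholder values:

```
<Processor ts>
    Module            pm_ts
    URL               https://tsa.example.com:8080/tsa
    # Verify the TSA server certificate against a local CA bundle
    HTTPSCAFile       /etc/nxlog/tsa-ca.pem
    # Restrict the connection to modern TLS versions
    HTTPSSSLProtocol  TLSv1.2, TLSv1.3
</Processor>
```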
122.11.2. Fields
The following fields are used by pm_ts.
122.11.3. Examples
Example 671. Storing Requested Timestamps in a CSV File
With this configuration, NXLog will read BSD Syslog messages from the socket, add timestamps, and then
save the messages to file in CSV format.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input uds>
6 Module im_uds
7 UDS /dev/log
8 Exec parse_syslog_bsd();
9 </Input>
10
11 <Processor ts>
12 Module pm_ts
13 URL https://tsa-server.com:8080/tsa
14 Digest md5
15 </Processor>
16
17 <Processor csv>
18 Module pm_transformer
19 OutputFormat csv
20 CSVOutputFields $facility, $severity, $timestamp, $hostname, \
21 $application, $pid, $message, $tsa_response
22 </Processor>
23
24 <Output file>
25 Module om_file
26 File "/dev/stdout"
27 </Output>
28
29 <Route uds_to_file>
30 Path uds => ts => csv => file
31 </Route>
Chapter 123. Output Modules
Output modules are responsible for writing event log data to various destinations.
123.1. Batched Compression (om_batchcompress)
See the list of installer packages that provide the om_batchcompress module in the Available Modules chapter of
the NXLog User Guide.
123.1.1. Configuration
The om_batchcompress module accepts the following directives in addition to the common module directives. The
Host directive is required.
Host
The module will connect to the IP address or hostname defined in this directive. If additional hosts are
specified on new lines, the module works in a failover configuration. If a destination becomes unavailable, the
module automatically fails over to the next one. If the last destination becomes unavailable, the module will
fail over to the first destination. Add the destination port number to the end of a host using a colon as a
separator (host:port). For each destination with no port number defined here, the port number specified in
the Port directive will be used. Port numbers defined here take precedence over any port number defined in
the Port directive.
Port
The module will connect to the port number on the destination host defined in this directive. This
configuration is only used for any destination that does not have a port number specified in the Host
directive. If no port is configured for a destination in either directive, the default port is used, which is port
2514.
IMPORTANT: The Port directive will be deprecated in this context from NXLog EE 6.0. Provide the port in Host.
AllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE, the remote will be able to connect with unknown and self-signed certificates. The default value
is FALSE: all connections must present a trusted certificate.
CADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote socket. The certificate filenames in this directory must be in the OpenSSL
hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including
a copy of the certificate in this directory.
CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote socket. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.
CAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed.
This directive is only supported on Windows. This directive and the CADir and CAFile directives are mutually
exclusive.
CertFile
This specifies the path of the certificate file to be used for the SSL handshake.
CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.
CertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is
automatically removed. This directive is only supported on Windows. This directive and the CertFile and
CertKeyFile directives are mutually exclusive.
CRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote socket. The certificate filenames in this directory must be in the
OpenSSL hashed format.
CRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote socket.
FlushInterval
The module will send a batch of data to the remote destination after this amount of time in seconds, unless
FlushLimit is reached first. This defaults to 5 seconds.
FlushLimit
When the number of events in the output buffer reaches the value specified by this directive, the module will
compress and send the batch to the remote. This defaults to 500 events. The FlushInterval directive may
trigger sending the batch before this limit is reached if the log volume is low to ensure that data is sent
promptly.
KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.
LocalPort
This optional directive specifies the local port number of the connection. If this is not specified a random high
port number will be used, which is not always ideal in firewalled network environments.
SSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
SSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the SSLProtocol directive is
set to TLSv1.3. Use the same format as in the SSLCipher directive.
SSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions
may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
UseSSL
This boolean directive specifies that SSL transfer mode should be enabled. The default is FALSE.
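For reference, the SSL-related directives above can be combined as follows. This is a minimal sketch; the receiver hostname, certificate paths, and password are placeholder values:

```
<Output batchcompress_ssl>
    Module       om_batchcompress
    Host         rcvr.example.com:2514
    UseSSL       TRUE
    CAFile       /etc/nxlog/ca.pem
    CertFile     /etc/nxlog/agent-cert.pem
    CertKeyFile  /etc/nxlog/agent-key.pem
    KeyPass      secret
</Output>
```

With AllowUntrusted left at its default of FALSE, the receiver must present a certificate that validates against the configured CA certificate.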
123.1.2. Examples
Example 672. Sending Logs With om_batchcompress
This configuration forwards logs in compressed batches to a remote NXLog agent over the default port.
Batches are sent at least once every two seconds, or more frequently if the buffer reaches 100 events.
nxlog.conf
1 <Output batchcompress>
2 Module om_batchcompress
3 Host rcvr:2514
4 FlushLimit 100
5 FlushInterval 2
6 </Output>
7
8 # old syntax
9 #<Output batchcompress>
10 # Module om_batchcompress
11 # Host 10.0.0.1
12 # Port 2514
13 # FlushLimit 100
14 # FlushInterval 2
15 #</Output>
This configuration sends logs in compressed batches to a remote NXLog agent in a failover configuration (multiple Host directives defined). The destinations used in this case are example1:2514 and example2:2514, and the connection originates from local port 15000.
nxlog.conf
1 <Output batchcompress>
2 Module om_batchcompress
3 # destination host / IP and destination port
4 Host example1:2514
5 # first fail-over
6 Host example2:2514
7 # originating port
8 LocalPort 15000
9 </Output>
The sleep() procedure can also be used for testing by simulating log message delays.
123.2. Blocker (om_blocker)
See the list of installer packages that provide the om_blocker module in the Available Modules chapter of the
NXLog User Guide.
123.2.1. Configuration
The om_blocker module accepts only the common module directives.
123.2.2. Examples
Example 674. Testing Buffering With the om_blocker Module
Because the route in this configuration is blocked, this will test the behavior of the configured memory-
based buffer.
nxlog.conf
1 <Input uds>
2 Module im_uds
3 UDS /dev/log
4 </Input>
5
6 <Processor buffer>
7 Module pm_buffer
8 WarnLimit 512
9 MaxSize 1024
10 Type Mem
11 </Processor>
12
13 <Output blocker>
14 Module om_blocker
15 </Output>
16
17 <Route uds_to_blocker>
18 Path uds => buffer => blocker
19 </Route>
123.3. DBI (om_dbi)
NOTE: The im_dbi and om_dbi modules support GNU/Linux only because of the libdbi library. The im_odbc and om_odbc modules provide native database access on Windows.
NOTE: libdbi needs drivers to access the database engines. These are in the libdbd-* packages on Debian and Ubuntu. CentOS 5.6 has a libdbi-drivers RPM package, but this package does not contain any driver binaries under /usr/lib64/dbd. The drivers for both MySQL and PostgreSQL are in libdbi-dbd-mysql. If these are not installed, NXLog will return a libdbi driver initialization error.
See the list of installer packages that provide the om_dbi module in the Available Modules chapter of the NXLog
User Guide.
123.3.1. Configuration
The om_dbi module accepts the following directives in addition to the common module directives.
Driver
This mandatory directive specifies the name of the libdbi driver which will be used to connect to the
database. A DRIVER name must be provided here for which a loadable driver module exists under the name
libdbdDRIVER.so (usually under /usr/lib/dbd/). The MySQL driver is in the libdbdmysql.so file.
SQL
This directive should specify the INSERT statement to be executed for each log message. The field names
(names beginning with $) will be replaced with the value they contain. String types will be quoted.
Option
This directive can be used to specify additional driver options such as connection parameters. The manual of
the libdbi driver should contain the options available for use here.
123.3.2. Examples
These two examples are for the plain Syslog fields. Other fields generated by parsers, regular expression rules,
the pm_pattern pattern matcher module, or input modules, can also be used. Notably, the im_msvistalog and
im_mseventlog modules generate different fields than those shown in these examples.
Example 675. Storing Syslog in a PostgreSQL Database
The following configuration accepts log messages via TCP and uses libdbi to insert log messages into the
database.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input tcp>
6 Module im_tcp
7 Port 1234
8 Host 0.0.0.0
9 Exec parse_syslog_bsd();
10 </Input>
11
12 <Output dbi>
13 Module om_dbi
14 SQL INSERT INTO log (facility, severity, hostname, timestamp, \
15 application, message) \
16 VALUES ($SyslogFacility, $SyslogSeverity, $Hostname, '$EventTime', \
17 $SourceName, $Message)
18 Driver pgsql
19 Option host 127.0.0.1
20 Option username dbuser
21 Option password secret
22 Option dbname logdb
23 </Output>
24
25 <Route tcp_to_dbi>
26 Path tcp => dbi
27 </Route>
Example 676. Storing Logs in a MySQL Database
This configuration reads log messages from the socket and inserts them into a MySQL database.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input uds>
6 Module im_uds
7 UDS /dev/log
8 Exec parse_syslog_bsd();
9 </Input>
10
11 <Output dbi>
12 Module om_dbi
13 SQL INSERT INTO log (facility, severity, hostname, timestamp, \
14 application, message) \
15 VALUES ($SyslogFacility, $SyslogSeverity, $Hostname, '$EventTime', \
16 $SourceName, $Message)
17 Driver mysql
18 Option host 127.0.0.1
19 Option username mysql
20 Option password mysql
21 Option dbname logdb
22 </Output>
23
24 <Route uds_to_dbi>
25 Path uds => dbi
26 </Route>
123.4. Elasticsearch (om_elasticsearch)
NOTE: This module requires the xm_json extension module to be loaded in order to convert the payload to JSON. If the $raw_event field does not start with a left curly bracket ({), the module will automatically convert the data to JSON.
See the list of installer packages that provide the om_elasticsearch module in the Available Modules chapter of the
NXLog User Guide.
• By default, Elasticsearch will not automatically detect the date format used by NXLog Enterprise Edition 3.x.
As a result, NXLog datetime values, such as $EventTime, will be mapped as strings rather than dates. To fix
this, add an Elasticsearch template for indices matching the specified pattern (nxlog*). Extend the
dynamic_date_formats setting to include additional date formats. For compatibility with indices created
with Elasticsearch 5.x or older, use _default_ instead of _doc (but _default_ will not be supported by
Elasticsearch 7.0.0).
• The IndexType directive should be set to _doc (the default in NXLog Enterprise Edition 3.x is logs). However,
for compatibility with indices created with Elasticsearch 5.x or older, set IndexType as required for the
configured mapping types. See the IndexType directive below for more information.
123.4.2. Configuration
The om_elasticsearch module accepts the following directives in addition to the common module directives. The
URL directive is required.
URL
This mandatory directive specifies the URL for the module to POST the event data. If multiple URL directives
are specified, the module works in a failover configuration. If a destination becomes unavailable, the module
automatically fails over to the next one. If the last destination becomes unavailable, the module will fail over
to the first destination. The module operates in plain HTTP or HTTPS mode depending on the URL provided. If
the port number is not explicitly indicated in the URL, it defaults to port 80 for HTTP and port 443 for HTTPS.
The URL should point to the _bulk endpoint, or Elasticsearch will return 400 Bad Request.
FlushInterval
The module will send a bulk index command to the defined endpoint after this amount of time in seconds,
unless FlushLimit is reached first. This defaults to 5 seconds.
FlushLimit
When the number of events in the output buffer reaches the value specified by this directive, the module will
send a bulk index command to the endpoint defined in URL. This defaults to 500 events. The FlushInterval
directive may trigger sending the bulk index request before this limit is reached if the log volume is low to
ensure that data is promptly sent to the indexer.
Index
This directive specifies the index to insert the event data into. It must be a string type expression. If the
expression in the Index directive is not a constant string (it contains functions, field names, or operators), it
will be evaluated for each event to be inserted. The default is nxlog. Typically, an expression with strftime() is
used to generate an index name based on the event’s time or the current time (for example,
strftime(now(), "nxlog-%Y%m%d")).
IndexType
This directive specifies the index type to use in the bulk index command. It must be a string type expression.
If the expression in the IndexType directive is not a constant string (it contains functions, field names, or
operators), it will be evaluated for each event to be inserted. The default is _doc. Note that index mapping
types have been deprecated and will be removed in Elasticsearch 7.0.0 (see Removal of mapping types in the
Elasticsearch Reference). IndexType should only be used if required for indices created with Elasticsearch 5.x
or older.
ID
This directive can be used to specify a custom _id field for Elasticsearch documents. If the directive is not defined, Elasticsearch uses a GUID for the _id field. Setting a custom _id field can be useful for correlating Elasticsearch documents later, and helps prevent storing duplicate events. The directive’s argument must be a string type expression. If the expression in the ID directive is not
a constant string (it contains functions, field names, or operators), it will be evaluated for each event to be
submitted. You can use a concatenation of event fields and the event timestamp to uniquely and
informatively identify events in the Elasticsearch storage.
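As a sketch of the above, the following builds a stable document ID from a concatenation of event fields; it assumes $Hostname and $EventTime are populated by earlier processing:

```
<Output elasticsearch>
    Module  om_elasticsearch
    URL     http://localhost:9200/_bulk
    # Deterministic _id: re-sending the same event overwrites it
    # in the index instead of creating a duplicate document
    ID      $Hostname + "-" + strftime($EventTime, "%Y%m%d%H%M%S")
</Output>
```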
HTTPSAllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE, the connection will be allowed even if the remote HTTPS server presents an unknown or self-
signed certificate. The default value is FALSE: the remote HTTPS server must present a trusted certificate.
HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS server. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.
HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS server. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.
HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.
HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.
HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.
HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is
automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.
HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS server. The certificate filenames in this directory must be
in the OpenSSL hashed format.
HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS server.
HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.
HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.
HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).
NOTE: Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not support the zlib compression mechanism. The module will emit a warning on startup if the compression support is missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.
HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
ProxyAddress
This optional directive is used to specify the IP address of the proxy server in case the module should connect
to the Elasticsearch server through a proxy.
NOTE: The om_elasticsearch module supports HTTP proxying only. SOCKS4/SOCKS5 proxying is not supported.
ProxyPort
This optional directive is used to specify the port number required to connect to the proxy server.
SNI
This optional directive specifies the host name used for Server Name Indication (SNI) in HTTPS mode.
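A sketch combining the proxy and SNI directives above; the server name, proxy address, and port are placeholder values:

```
<Output elasticsearch>
    Module        om_elasticsearch
    URL           https://search.example.com:9200/_bulk
    HTTPSCAFile   /etc/nxlog/ca.pem
    # Host name sent in the TLS Server Name Indication extension
    SNI           search.example.com
    # Reach the Elasticsearch server through an HTTP proxy
    ProxyAddress  10.0.0.5
    ProxyPort     3128
</Output>
```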
123.4.3. Examples
Example 677. Sending Logs to an Elasticsearch Server
This configuration reads log messages from file and forwards them to the Elasticsearch server on localhost.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Input file>
6 Module im_file
7 File '/var/log/myapp*.log'
8 # Parse log here if needed
9 # $EventTime should be set here
10 </Input>
11
12 <Output elasticsearch>
13 Module om_elasticsearch
14 URL http://localhost:9200/_bulk
15 FlushInterval 2
16 FlushLimit 100
17 # Create an index daily
18 Index strftime($EventTime, "nxlog-%Y%m%d")
19 # Or use the following if $EventTime is not set
20 # Index strftime(now(), "nxlog-%Y%m%d")
21 </Output>
This configuration sends log messages to an Elasticsearch server in a failover configuration (multiple URLs
defined). The actual destinations used in this case are http://localhost:9200/_bulk,
http://192.168.1.1:9200/_bulk, and http://example.com:9200/_bulk.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Output elasticsearch>
6 Module om_elasticsearch
7 URL http://localhost:9200/_bulk
8 URL http://192.168.1.1:9200/_bulk
9 URL http://example.com:9200/_bulk
10 </Output>
123.5. EventDB (om_eventdb)
See the list of installer packages that provide the om_eventdb module in the Available Modules chapter of the
NXLog User Guide.
123.5.1. Configuration
The om_eventdb module accepts the following directives in addition to the common module directives. The
DBname, Password, and UserName directives are required, along with either Host or UDS.
DBname
Name of the database to read the logs from.
Host
This specifies the IP address or a DNS hostname the module should connect to (the hostname of the MySQL
server). This directive cannot be used with UDS.
Password
Password for authenticating to the database server.
UDS
For Unix domain socket connections, this directive can be used to specify the path of the socket such as
/var/run/mysqld.sock. This directive cannot be used with the Host and Port directives.
UserName
Username for authenticating to the database server.
BulkLoad
If set to TRUE, this optional boolean directive instructs the module to use a bulk-loading technique to load
data into the database; otherwise traditional INSERT statements are issued to the server. The default is TRUE.
LoadInterval
This directive specifies how frequently bulk loading should occur, in seconds. It can only be used when
BulkLoad is set to TRUE. The default bulk load interval is 20 seconds.
Port
This specifies the port the module should connect to, on which the database is accepting connections. This
directive cannot be used with UDS. The default is port 3306.
TempDir
This directive sets a directory where temporary files are written. It can only be used when BulkLoad is set to
TRUE. If this directive is not specified, the default directory is /tmp. If the chosen directory does not exist, the
module will try to create it.
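As a variation on the directives above, the following sketch connects over a Unix domain socket and tunes the bulk-loading behavior; the socket path and credentials are placeholder values:

```
<Output eventdb>
    Module        om_eventdb
    # Connect via Unix domain socket instead of Host/Port
    UDS           /var/run/mysqld/mysqld.sock
    UserName      joe
    Password      secret
    DBname        eventdb
    BulkLoad      TRUE
    # Bulk load every 30 seconds instead of the default 20
    LoadInterval  30
    TempDir       /var/tmp/nxlog
</Output>
```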
123.5.2. Examples
Example 679. Storing Logs in an EventDB Database
This configuration accepts log messages via TCP in the NXLog binary format and inserts them into a
database using libdrizzle.
nxlog.conf
1 <Input tcp>
2 Module im_tcp
3 Host localhost
4 Port 2345
5 InputType Binary
6 </Input>
7
8 <Output eventdb>
9 Module om_eventdb
10 Host localhost
11 Port 3306
12 Username joe
13 Password secret
14 Dbname eventdb_test2
15 </Output>
16
17 <Route tcp_to_eventdb>
18 Path tcp => eventdb
19 </Route>
123.6. Program (om_exec)
NOTE: The program or script is started when NXLog starts and must not exit until the module is stopped. To invoke a program or script for each log message, use xm_exec instead.
See the list of installer packages that provide the om_exec module in the Available Modules chapter of the NXLog
User Guide.
123.6.1. Configuration
The om_exec module accepts the following directives in addition to the common module directives. The
Command directive is required.
Command
This mandatory directive specifies the name of the program or script to be executed.
Arg
This is an optional parameter. Arg can be specified multiple times, once for each argument that needs to be
passed to the Command. Note that specifying multiple arguments with one Arg directive, with arguments
separated by spaces, will not work (the Command will receive it as one argument).
Restart
Restart the process if it exits. There is a one-second delay before it is restarted, to avoid a denial of service when a process is misbehaving. Looping should be implemented in the script itself; this directive only provides some safety against malfunctioning scripts and programs. This boolean directive defaults to FALSE: the Command will not be restarted if it exits.
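To illustrate the Arg and Restart directives above: each command-line argument must be given with its own Arg directive. A sketch using the standard logger(1) utility, with an arbitrary tag value:

```
<Output logger>
    Module   om_exec
    Command  /usr/bin/logger
    # One Arg directive per argument; a single "Arg -t nxlog"
    # would be passed to the command as one argument
    Arg      -t
    Arg      nxlog
    # Restart logger if it exits unexpectedly
    Restart  TRUE
</Output>
```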
123.6.2. Examples
Example 680. Piping Logs to an External Program
With this configuration, NXLog will start the specified command, read logs from the socket, and write those logs
to the standard input of the command.
nxlog.conf
1 <Input uds>
2 Module im_uds
3 UDS /dev/log
4 </Input>
5
6 <Output someprog>
7 Module om_exec
8 Command /usr/bin/someprog
9 Arg -
10 </Output>
11
12 <Route uds_to_someprog>
13 Path uds => someprog
14 </Route>
123.7. Files (om_file)
See the list of installer packages that provide the om_file module in the Available Modules chapter of the NXLog
User Guide.
123.7.1. Configuration
The om_file module accepts the following directives in addition to the common module directives. The File
directive is required.
File
This mandatory directive specifies the name of the output file to open. It must be a string type expression. If
the expression in the File directive is not a constant string (it contains functions, field names, or operators), it
will be evaluated before each event is written to the file (and after the Exec is evaluated). Note that the
filename must be quoted to be a valid string literal, unlike in other directives which take a filename argument.
For relative filenames, note that NXLog changes its working directory to "/" unless the global SpoolDir is set to
something else.
Below are three variations for specifying the same output file on a Windows system:
File 'C:\logs\logmsg.txt'
File "C:\\logs\\logmsg.txt"
File 'C:/logs/logmsg.txt'
CacheSize
In case of dynamic filenames, a cache can be utilized to keep files open. This increases performance by
reducing the overhead caused by many open/close operations. It is recommended to set this to the number
of expected files to be written. Note that this should not be set to more than the number of open files
allowed by the system. This caching provides performance benefits on Windows only. Caching is disabled by
default.
CreateDir
If set to TRUE, this optional boolean directive instructs the module to create the output directory before
opening the file for writing if it does not exist. The default is FALSE.
OutputType
See the OutputType directive in the list of common module directives. If this directive is not specified the
default is LineBased (the module will use CRLF as the record terminator on Windows, or LF on Unix).
This directive also supports stream processors, see the description in the OutputType section.
Sync
This optional boolean directive instructs the module to sync the file after each log message is written,
ensuring that it is really written to disk from the buffers. Because this can hurt performance, the default is
FALSE.
Truncate
This optional boolean directive instructs the module to truncate the file before each write, causing only the
most recent log message to be saved. The default is FALSE: messages are appended to the output file.
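The directives above can be combined for per-host output files; a sketch, assuming $Hostname is set by earlier parsing:

```
<Output file>
    Module     om_file
    # Non-constant expression: evaluated for each event written
    File       "/var/log/nxlog/" + $Hostname + ".log"
    # Create /var/log/nxlog if it does not exist
    CreateDir  TRUE
    # Keep up to 10 dynamic output files open
    CacheSize  10
</Output>
```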
123.7.2. Functions
The following functions are exported by om_file.
string file_name()
Return the name of the currently open file which was specified using the File directive. Note that this will be
the old name if the filename changes dynamically; for the new name, use the expression specified for the File
directive instead of using this function.
integer file_size()
Return the size of the currently open output file in bytes. Returns undef if the file is not open. This can
happen if File is not a string literal expression and there was no log message.
123.7.3. Procedures
The following procedures are exported by om_file.
reopen();
Reopen the current file. This procedure should be called if the file has been removed or renamed, for
example with the file_cycle(), file_remove(), or file_rename() procedures of the xm_fileop module. This does
not need to be called after rotate_to() because that procedure reopens the file automatically.
rotate_to(string filename);
Rotate the current file to the filename specified. The module will then open the original file specified with the
File directive. Note that the rename(2) system call is used internally which does not support moving files
across different devices on some platforms. If this is a problem, first rotate the file on the same device. Then
use the xm_exec exec_async() procedure to copy it to another device or file system, or use the xm_fileop
file_copy() procedure.
123.7.4. Examples
Example 681. Storing Raw Syslog Messages into a File
This configuration reads log messages from the socket and writes the messages to a file. No additional
processing is done.
nxlog.conf
1 <Input uds>
2 Module im_uds
3 UDS /dev/log
4 </Input>
5
6 <Output file>
7 Module om_file
8 File "/var/log/messages"
9 </Output>
10
11 <Route uds_to_file>
12 Path uds => file
13 </Route>
Example 682. File Rotation Based on Size
With this configuration, NXLog accepts log messages via TCP and parses them as BSD Syslog. A separate
output file is used for log messages from each host. When the output file size exceeds 15 MB, it will be
automatically rotated and compressed.
nxlog.conf
1 <Extension exec>
2 Module xm_exec
3 </Extension>
4
5 <Extension syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input tcp>
10 Module im_tcp
11 Port 1514
12 Host 0.0.0.0
13 Exec parse_syslog_bsd();
14 </Input>
15
16 <Output file>
17 Module om_file
18 File "tmp/output_" + $Hostname + "_" + month(now())
19 <Exec>
20 if file->file_size() > 15M
21 {
22 $newfile = "tmp/output_" + $Hostname + "_" +
23 strftime(now(), "%Y%m%d%H%M%S");
24 file->rotate_to($newfile);
25 exec_async("/bin/bzip2", $newfile);
26 }
27 </Exec>
28 </Output>
29
30 <Route tcp_to_file>
31 Path tcp => file
32 </Route>
123.8. Go (om_go)
This module provides support for forwarding log data with methods written in the Go language. The file specified
by the ImportLib directive should contain one or more methods which can be called from the Exec directive of
any module. See also the xm_go and im_go modules.
NOTE: For the system requirements, installation details, and environmental configuration requirements of Go, see the Getting Started section in the Go documentation. The Go environment is only needed for compiling the Go file. NXLog does not need the Go environment for its operation.
The Go script imports the NXLog module, and will have access to the following classes and functions.
class nxModule
This class is instantiated by NXLog and can be accessed via the nxLogdata.module attribute. This can be used
to set or access variables associated with the module (see the example below).
nxmodule.NxLogdataNew(*nxLogdata)
This function creates a new log data record.
nxmodule.Post(ld *nxLogdata)
This function submits the log data structure for further processing.
nxmodule.AddEvent()
This function adds a READ event to NXLog. This allows to call the READ event later.
nxmodule.AddEventDelayed(mSec C.int)
This function adds a delayed READ event to NXLog. This allows to call the delayed READ event later.
class nxLogdata
This class represents an event. It is instantiated by NXLog and passed to the function specified by the
ImportFunc directive.
nxlogdata.Delete(field string)
This function removes the field from logdata.
nxlogdata.Fields() []string
This function returns an array of field names in the logdata record.
module
This attribute is set to the module object associated with the event.
See the list of installer packages that provide the om_go module in the Available Modules chapter of the NXLog
User Guide.
For the Go environment to work with NXLog, the gonxlog.go file has to be installed.
go build -o /path/to/yoursofile.so -buildmode=c-shared /path/to/yourgofile.go
123.8.3. Configuration
The om_go module accepts the following directives in addition to the common module directives.
ImportLib
This mandatory directive specifies the file containing the Go code compiled into a shared library .so file.
ImportFunc
This mandatory directive calls the specified function, which must accept an unsafe.Pointer object as its only argument. The function is called when the module tries to read data.
In this Go file template, the write function is called via the ImportFunc directive.
om_go Template
package main

import (
	"C"
	"unsafe"
	"gonxlog"
)

//export write
func write(ctx unsafe.Pointer) {
	// get logdata from the context
	if ld, ok := gonxlog.GetLogdata(ctx); ok {
		_ = ld // place your code here
	}
}

func main() {}
123.8.5. Examples
Example 683. Using om_go for Forwarding Events
nxlog.conf
<Input in>
    Module    im_testgen
    MaxCount  10
</Input>

<Output out>
    Module      om_go
    ImportLib   "file/output.so"
    ImportFunc  write
</Output>
123.9. HTTP(s) (om_http)
See the list of installer packages that provide the om_http module in the Available Modules chapter of the NXLog User Guide.
123.9.1. Configuration
The om_http module accepts the following directives in addition to the common module directives. The URL
directive is required.
URL
This mandatory directive specifies the URL for the module to POST the event data. If multiple URL directives
are specified, the module works in a failover configuration. If a destination becomes unavailable, the module
automatically fails over to the next one. If the last destination becomes unavailable, the module will fail over
to the first destination. The module operates in plain HTTP or HTTPS mode depending on the URL provided. If
the port number is not explicitly indicated in the URL, it defaults to port 80 for HTTP and port 443 for HTTPS.
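As an illustration of the failover behavior described above, several URL directives can simply be listed in the same module instance; the hostnames below are hypothetical.

```
<Output http>
    Module  om_http
    # If collector1 becomes unavailable, the module fails over to
    # collector2, then wraps around to collector1 again.
    URL     https://collector1.example.com:8080/
    URL     https://collector2.example.com:8080/
</Output>
```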
AddHeader
This optional directive specifies an additional header to be added to each HTTP request.
BatchMode
This optional directive sets whether the data should be sent as a single event per POST request or as a batch of events per POST request. The default setting is none, meaning that data will be sent as a single event per
POST request. The other available values are multipart and multiline. For multipart, the generated POST
request will use the multipart/mixed content type, where each batched event will be added as a separate
body part to the request body. For multiline, batched events will be added to the POST request one per line
(separated by CRLF (\r\n) characters).
NOTE: The add_http_header() and set_http_request_path() procedures may cause the current batch to be flushed immediately. For the multiline batching mode, this happens whenever the value of the URL path or the value of an HTTP header changes, because this requires a new HTTP request to be built. In multipart batching mode, only set_http_request_path() will cause a batch flush when the path value changes, because add_http_header() only modifies the HTTP header for the HTTP body part corresponding to the event record that is currently being processed.
ContentType
This directive sets the Content-Type HTTP header to the string specified. The Content-Type is set to text/plain
by default. Note: If the BatchMode directive is set to multipart, then the value specified here will be used as
the Content-Type header for each part of the multipart/mixed HTTP request.
FlushInterval
This directive specifies the time period after which the accumulated data should be sent out in batched mode
in a POST request. This defaults to 50 milliseconds. This directive only takes effect if BatchMode is set to multipart or multiline.
FlushLimit
This directive specifies the number of events that are merged into a single POST request. This defaults to 500 events. This directive only takes effect if BatchMode is set to multipart or multiline.
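The batching directives above can be combined as in the following sketch (the values are illustrative): pending events are flushed either when 200 have accumulated or after 500 milliseconds, whichever comes first.

```
<Output http>
    Module         om_http
    URL            http://server:8080/
    BatchMode      multiline
    FlushLimit     200
    FlushInterval  500
</Output>
```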
HTTPSAllowUntrusted
This boolean directive specifies that the connection should be allowed without certificate verification. If set to
TRUE, the connection will be allowed even if the remote HTTPS server presents an unknown or self-signed
certificate. The default value is FALSE: the remote HTTPS server must present a trusted certificate.
HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS server. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.
HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS server. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.
HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.
HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.
HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.
HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.
HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS server. The certificate filenames in this directory must be
in the OpenSSL hashed format.
HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS server.
HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.
HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.
HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).
NOTE: Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not support the zlib compression mechanism. The module will emit a warning on startup if the compression support is missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.
HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
ProxyAddress
This optional directive is used to specify the IP address of the proxy server in case the module should send
event data through a proxy.
NOTE: The om_http module supports HTTP proxying only. SOCKS4/SOCKS5 proxying is not supported.
ProxyPort
This optional directive is used to specify the port number required to connect to the proxy server.
SNI
This optional directive specifies the host name used for Server Name Indication (SNI) in HTTPS mode.
123.9.2. Procedures
The following procedures are exported by om_http.
add_http_header(string name, string value);
Dynamically add a custom HTTP header to the HTTP requests generated by the module.
NOTE: This function impacts the way batching works. See the BatchMode directive for more information.
set_http_request_path(string path);
Set the path in the HTTP request to the string specified. This is useful if the URL is dynamic and parameters
such as event ID need to be included in the URL. Note that the string must be URL encoded if it contains
reserved characters.
NOTE: This function impacts the way batching works. See the BatchMode directive for more information.
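For example, a dynamic request path could be set per event as in the following sketch; the $EventID field is hypothetical and the string would need URL encoding if it contained reserved characters.

```
<Output http>
    Module  om_http
    URL     http://server:8080/
    Exec    set_http_request_path("/events/" + $EventID);
</Output>
```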
123.9.3. Examples
Example 684. Sending Logs over HTTPS
This configuration reads log messages from file and forwards them via HTTPS.
nxlog.conf
<Output http>
    Module               om_http
    URL                  https://server:8080/
    AddHeader            Auth-Token: 4ddf1d3c9
    HTTPSCertFile        %CERTDIR%/client-cert.pem
    HTTPSCertKeyFile     %CERTDIR%/client-key.pem
    HTTPSCAFile          %CERTDIR%/ca.pem
    HTTPSAllowUntrusted  FALSE
    BatchMode            multipart
    FlushLimit           100
    FlushInterval        2
</Output>
123.10. Java (om_java)
This module provides support for processing NXLog log data with methods written in the Java language. The Java
classes specified via the ClassPath directives may define one or more class methods which can be called from the
Run or Exec directives of this module. Such methods must be declared with the public and static modifiers in
the Java code to be accessible from NXLog, and the first parameter must be of NXLog.Logdata type. See also the
im_java and xm_java modules.
NOTE: For the system requirements, installation details and environmental configuration requirements of Java, see the Installing Java section in the Java documentation.
The NXLog Java class provides access to the NXLog functionality in the Java code. This class contains nested
classes Logdata and Module with log processing methods, as well as methods for sending messages to the
internal logger.
class NXLog.Logdata
This Java class provides the methods to interact with an NXLog event record object:
getField(name)
This method returns the value of the field name in the event.
setField(name, value)
This method sets the value of field name to value.
deleteField(name)
This method removes the field name from the event record.
getFieldnames()
This method returns an array with the names of all the fields currently in the event record.
getFieldtype(name)
This method returns the data type of the field name.
class NXLog.Module
The methods below allow setting and accessing variables associated with the module instance.
saveCtx(key,value)
This method saves user data in the module data storage using values from the key and value fields.
loadCtx(key)
This method retrieves data from the module data storage using the value from the key field.
Below is the list of methods for sending messages to the internal logger.
NXLog.logInfo(msg)
This method sends the message msg to the internal logger at INFO log level. It does the same as the core log_info() procedure.
NXLog.logDebug(msg)
This method sends the message msg to the internal logger at DEBUG log level. It does the same as the core log_debug() procedure.
NXLog.logWarning(msg)
This method sends the message msg to the internal logger at WARNING log level. It does the same as the core log_warning() procedure.
NXLog.logError(msg)
This method sends the message msg to the internal logger at ERROR log level. It does the same as the core log_error() procedure.
123.10.1. Configuration
The NXLog process maintains only one JVM instance for all running om_java, im_java, or xm_java instances. This means all classes loaded by the ClassPath directive will be available to all running Java instances.
The om_java module accepts the following directives in addition to the common module directives.
ClassPath
This mandatory directive defines the path to the .class files or a .jar file. This directive should be defined at
least once within a module block.
VMOption
This optional directive defines a single Java Virtual Machine (JVM) option.
VMOptions
This optional block directive serves the same purpose as the VMOption directive, but allows specifying multiple Java Virtual Machine (JVM) options, one per line.
JavaHome
This optional directive defines the path to the Java Runtime Environment (JRE). The path is used to search for
the libjvm shared library. If this directive is not defined, the Java home directory will be set to the build-time
value. Only one JRE can be defined for one or multiple NXLog Java instances. Defining multiple JRE instances
causes an error.
Run
This mandatory directive specifies the static method inside the ClassPath file which should be called.
This is an example of a configuration for adding a timestamp field and writing log processing results to a file. The run method of the Writer Java class is used to handle the processing.
nxlog.conf
<Output javaout>
    Module     om_java
    # The Run directive includes the full method name with
    # the nested and outer classes
    # The mandatory parameter will be passed automatically
    Run        Output$Writer.run
    ClassPath  /tmp/Output.jar
</Output>
Output.java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.text.SimpleDateFormat;
import java.util.Date;

public class Output {
    public static class Writer {
        public static void run(NXLog.Logdata ld) {
            String fileName = "/tmp/output"; // output path (assumed; not shown in the original listing)
            Date currentDate = new Date();
            // timestamp format inferred from the output sample below
            SimpleDateFormat df = new SimpleDateFormat("MM.dd.yyyy.HH:mm:ss");
            try {
                // 1. Retrieves the $raw_event field from the NXLog data record
                // 2. Adds the timestamp field with the current time
                // 3. Writes the results into the file
                if (((String) ld.getField("raw_event")).contains("type=")) {
                    Files.write(Paths.get(fileName),
                            ("timestamp=" + df.format(currentDate) + " "
                                    + (String) ld.getField("raw_event") + "\n").getBytes(),
                            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
Input sample
type=CWD msg=audit(1489999368.711:35724): cwd="/root/nxlog"

type=PATH msg=audit(1489999368.711:35724): item=0 name="/root/test" inode=528869 dev=08:01 mode=040755 ouid=0 ogid=0 rdev=00:00

type=SYSCALL msg=audit(1489999368.711:35725): arch=c000003e syscall=2 success=yes exit=3 a0=12dcc40 a1=90800 a2=0 a3=0 items=1 ppid=15391 pid=12309 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts4 ses=583 comm="ls" exe="/bin/ls" key=(null)
Output Sample
timestamp=02.20.2020.09:19:58 type=CWD msg=audit(1489999368.711:35724): cwd="/root/nxlog"
timestamp=02.20.2020.09:19:58 type=PATH msg=audit(1489999368.711:35724): item=0
name="/root/test" inode=528869 dev=08:01 mode=040755 ouid=0 ogid=0 rdev=00:00
timestamp=02.20.2020.09:19:58 type=SYSCALL msg=audit(1489999368.711:35725): arch=c000003e
syscall=2 success=yes exit=3 a0=12dcc40 a1=90800 a2=0 a3=0 items=1 ppid=15391 pid=12309 auid=0
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts4 ses=583 comm="ls"
exe="/bin/ls" key=(null)
123.11. Apache Kafka (om_kafka)
WARNING: The om_kafka module is not supported as the underlying librdkafka library is unstable on AIX. Use it on IBM AIX at your own risk.
The module uses an internal persistent queue to back up event records that should be pushed to a Kafka broker.
Once the module receives an acknowledgement from the Kafka server that the message has been delivered
successfully, the module removes the corresponding message from the internal queue. If the module is unable
to deliver a message to a Kafka broker (for example, due to connectivity issues or the Kafka server being down),
this message is retained in the internal queue (including across NXLog restarts) and the module will attempt to redeliver it.
The number of re-delivery attempts can be specified by passing the message.send.max.retries property via
the Option directive (for example, Option message.send.max.retries 5). By default, the number of retries is
set to 2 and the time interval between two subsequent retries is 5 minutes. Thus, by altering the number of
retries, it is possible to control the total time for a message to remain in the internal queue. If a message cannot
be delivered within the allowed retry attempts, the message is dropped. The maximum size of the internal queue
is controlled by the LogqueueSize directive, which defaults to 100 messages. To increase the size of the internal
queue, follow these steps:
1. Specify the required queue size value using the LogqueueSize directive.
2. Set the directive Option queue.buffering.max.messages N. When this option is not set, the default value
used by librdkafka is 10000000.
For optimum performance, the LogqueueSize directive should be set to a value that is slightly larger than the
value used for the queue.buffering.max.messages option.
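The two queue-sizing steps above can be sketched as follows; the values are illustrative, chosen so that LogqueueSize is slightly larger than the librdkafka buffer.

```
<Output kafka>
    Module        om_kafka
    BrokerList    localhost:9092
    Topic         nxlog
    # Step 1: enlarge the module's internal queue
    LogqueueSize  21000
    # Step 2: set the librdkafka buffer slightly below the queue size
    Option        queue.buffering.max.messages 20000
</Output>
```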
See the list of installer packages that provide the om_kafka module in the Available Modules chapter of the
NXLog User Guide.
123.11.1. Configuration
The om_kafka module accepts the following directives in addition to the common module directives. The
BrokerList and Topic directives are required.
BrokerList
This mandatory directive specifies the list of Kafka brokers to connect to for publishing logs. The list should
include ports and be comma-delimited (for example, localhost:9092,192.168.88.35:19092).
Topic
This mandatory directive specifies the Kafka topic to publish records to.
CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote brokers. CAFile is required if Protocol is set to ssl or sasl_ssl. To trust a self-signed certificate
presented by the remote (which is not signed by a CA), provide that certificate instead.
CertFile
This specifies the path of the certificate file to be used for the SSL handshake.
CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.
Compression
This directive specifies the compression type to use during transfer. Available types depend on the Kafka library and typically include none (the default), gzip, snappy, and lz4.
KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.
Option
This directive can be used to pass a custom configuration property to the Kafka library (librdkafka). For
example, the group ID string can be set with Option group.id mygroup. This directive may be used more
than once to specify multiple options. For a list of configuration properties, see the librdkafka
CONFIGURATION.md file.
WARNING: Passing librdkafka configuration properties via the Option directive should be done with care since these properties are used for the fine-tuning of the librdkafka performance and may result in various side effects.
Partition
This optional integer directive specifies the topic partition to write to. If this directive is not given, messages
are sent without a partition specified.
Protocol
This optional directive specifies the protocol to use for connecting to the Kafka brokers. Accepted values
include plaintext (the default), ssl, sasl_plaintext and sasl_ssl. If Protocol is set to ssl or sasl_ssl,
then the CAFile directive must also be provided.
SASLKerberosServiceName
This directive specifies the Kerberos service name to be used for SASL authentication. The service name is
required for the sasl_plaintext and sasl_ssl protocols.
SASLKerberosPrincipal
This specifies the client’s Kerberos principal name for the sasl_plaintext and sasl_ssl protocols. This
directive is only available and mandatory on Linux/UNIX. See note below.
SASLKerberosKeytab
Specifies the path to the Kerberos keytab file which contains the client’s allocated principal name. This directive is only available and mandatory on Linux/UNIX.
NOTE: The SASLKerberosServiceName and SASLKerberosPrincipal directives are only available on Linux/UNIX. On Windows, the login user’s principal name and credentials are used for SASL/Kerberos authentication.
For details about configuring Apache Kafka brokers to accept SASL/Kerberos authentication from clients, please follow the instructions provided by the librdkafka project:
• For Kafka brokers running on Linux and UNIX-likes: Using SASL with librdkafka
• For Kafka brokers running on Windows: Using SASL with librdkafka on Windows
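On Linux/UNIX, a SASL/Kerberos configuration might be sketched as below; the broker address, principal, and keytab path are hypothetical and must match the local Kerberos setup.

```
<Output kafka>
    Module                   om_kafka
    BrokerList               kafka1.example.com:9093
    Topic                    nxlog
    Protocol                 sasl_ssl
    CAFile                   /etc/nxlog/ca-cert.pem
    SASLKerberosServiceName  kafka
    SASLKerberosPrincipal    nxlog@EXAMPLE.COM
    SASLKerberosKeytab       /etc/nxlog/nxlog.keytab
</Output>
```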
123.11.2. Examples
Example 686. Using the om_kafka Module
This configuration sends events to a Kafka cluster using the brokers specified. Events are published to the
first partition of the nxlog topic.
nxlog.conf
<Output out>
    Module       om_kafka
    BrokerList   localhost:9092,192.168.88.35:19092
    Topic        nxlog
    Partition    0
    Protocol     ssl
    CAFile       /root/ssl/ca-cert
    CertFile     /root/ssl/client_debian-8.pem
    CertKeyFile  /root/ssl/client_debian-8.key
    KeyPass      thisisasecret
</Output>
See the list of installer packages that provide the om_null module in the Available Modules chapter of the NXLog
User Guide.
123.13. Oracle (om_oci)
WARNING: This module is deprecated, please use the om_odbc module instead.
123.13.1. Configuration
The om_oci module accepts the following directives in addition to the common module directives. The DBname,
Password, and UserName directives are required.
DBname
Name of the database to write the logs to.
Password
Password for authenticating to the database server.
UserName
Username for authenticating to the database server.
ORACLE_HOME
This optional directive specifies the directory of the Oracle installation.
SQL
An optional SQL statement to override the default.
123.13.2. Examples
Example 687. Storing Logs in an Oracle Database
This configuration reads BSD Syslog messages from a socket, parses the messages, and inserts them into the database.
nxlog.conf
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input uds>
    Module  im_uds
    UDS     /dev/log
</Input>

<Output oci>
    Module    om_oci
    DBname    //192.168.1.1:1521/orcl
    UserName  joe
    Password  secret
    SQL       INSERT INTO log ("id", "facility", "severity", "hostname", \
              "timestamp", "application", "message") \
              VALUES (log_seq.nextval, $SyslogFacility, $SyslogSeverity, \
              $Hostname, to_date($rcvd_timestamp, \
              'YYYY-MM-DD HH24:MI:SS'), \
              $SourceName, $Message)
    Exec      parse_syslog();
</Output>

<Route uds_to_oci>
    Path uds => oci
</Route>
123.14. ODBC (om_odbc)
Setting up the ODBC data source is not in the scope of this document. Please consult the relevant ODBC guide: the unixODBC documentation or the Microsoft ODBC Data Source Administrator guide. The data source must be accessible by the user NXLog is running under.
NOTE: The "SQL Server" ODBC driver is unsupported and does not work. Instead, use the "SQL Server Native Client" or the "ODBC Driver for SQL Server" to insert records into a Microsoft SQL Server database.
In addition to the SQL directive, this module provides two functions, sql_exec() and sql_fetch(), which can be
executed using the Exec directive. This allows more complex processing rules to be used and also makes it
possible to insert records into more than one table.
NOTE: Both sql_exec() and sql_fetch() can take bind parameters as function arguments. It is recommended to use bind parameters instead of concatenating values into the SQL statement, which is dangerous due to the lack of escaping.
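As an illustration of this contrast (the log table and message column are hypothetical):

```
# Dangerous: the value is concatenated into the statement without escaping
Exec sql_exec("INSERT INTO log (message) VALUES ('" + $Message + "')");

# Recommended: the value is passed as a bind parameter
Exec sql_exec("INSERT INTO log (message) VALUES (?)", $Message);
```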
See the list of installer packages that provide the om_odbc module in the Available Modules chapter of the NXLog
User Guide.
123.14.1. Configuration
The om_odbc module accepts the following directives in addition to the common module directives.
ConnectionString
This mandatory directive specifies the ODBC data source connection string.
SQL
This optional directive can be used to specify the INSERT statement to be executed for each log message. If
the statement fails for an event, it will be attempted again. If the SQL directive is not given, then an Exec
directive should be used to execute the sql_exec() function.
123.14.2. Functions
The following functions are exported by om_odbc.
string sql_error()
Return the error message from the last failed ODBC operation.
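As a sketch of how sql_error() might be used (assuming a log table with a message column), the error message can be logged when an insert fails:

```
<Output odbc>
    Module            om_odbc
    ConnectionString  DSN=mysql_ds;database=logdb;
    <Exec>
        if sql_exec("INSERT INTO log (message) VALUES (?)", $raw_event) == FALSE
            log_error("insert failed: " + sql_error());
    </Exec>
</Output>
```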
123.14.3. Examples
Example 688. Write Events to SQL Server
This configuration uses a DSN-less connection and SQL Authentication to connect to an SQL Server
database. Records are inserted into the dbo.test1 table’s timestamp and message columns, using the
$EventTime and $Message fields from the current event.
nxlog.conf
<Output mssql>
    Module            om_odbc
    ConnectionString  Driver={ODBC Driver 13 for SQL Server}; Server=MSSQL-HOST; \
                      UID=test; PWD=testpass; Database=TESTDB
    SQL               "INSERT INTO dbo.test1 (timestamp, message) VALUES (?,?)", \
                      $EventTime, $Message
</Output>
In this example, the events read from the TCP input are inserted into the message column. The table has an
auto_increment id column, which is used to fetch and print the newly inserted line.
nxlog.conf
<Input tcp>
    Module  im_tcp
    Port    1234
    Host    0.0.0.0
</Input>

<Output odbc>
    Module            om_odbc
    ConnectionString  DSN=mysql_ds;username=mysql;password=mysql;database=logdb;
    <Exec>
        if ( sql_exec("INSERT INTO log (facility, severity, hostname, timestamp, " +
                      "application, message) VALUES (?, ?, ?, ?, ?, ?)",
                      1, 2, "host", now(), "app", $raw_event) == TRUE )
        {
            if ( sql_fetch("SELECT max(id) as id from log") == TRUE )
            {
                log_info("ID: " + $id);
                if ( sql_fetch("SELECT message from log WHERE id=?", $id) == TRUE )
                {
                    log_info($message);
                }
            }
        }
    </Exec>
</Output>

<Route tcp_to_odbc>
    Path tcp => odbc
</Route>
123.15. Perl (om_perl)
This module makes it possible to execute Perl code in an output module that can handle the data directly in Perl.
See also the im_perl and xm_perl modules.
The module will parse the file specified in the PerlCode directive when NXLog starts the module. The Perl code
must implement the write_data subroutine which will be called by the module when there is data to process. This
subroutine is called for each event record and the event record is passed as an argument. To access event data,
the Log::Nxlog Perl module must be included, which provides the following methods.
NOTE: To use the om_perl module on Windows, a separate Perl environment must be installed, such as Strawberry Perl. Currently, the om_perl module on Windows requires Strawberry Perl 5.28.0.1.
log_debug(msg)
Send the message msg to the internal logger on DEBUG log level. This method does the same as the
log_debug() procedure in NXLog.
log_info(msg)
Send the message msg to the internal logger on INFO log level. This method does the same as the log_info()
procedure in NXLog.
log_warning(msg)
Send the message msg to the internal logger on WARNING log level. This method does the same as the
log_warning() procedure in NXLog.
log_error(msg)
Send the message msg to the internal logger on ERROR log level. This method does the same as the
log_error() procedure in NXLog.
get_field(event, key)
Retrieve the value associated with the field named key. The method returns a scalar value if the key exists and
the value is defined, otherwise it returns undef.
For the full NXLog Perl API, see the POD documentation in Nxlog.pm. The documentation can be read with
perldoc Log::Nxlog.
See the list of installer packages that provide the om_perl module in the Available Modules chapter of the NXLog
User Guide.
123.15.1. Configuration
The om_perl module accepts the following directives in addition to the common module directives.
PerlCode
This mandatory directive expects a file containing valid Perl code. This file is read and parsed by the Perl
interpreter.
NOTE: On Windows, the Perl script invoked by the PerlCode directive must define the Perl library paths at the beginning of the script to provide access to the Perl modules.
nxlog-windows.pl
use lib 'c:\Strawberry\perl\lib';
use lib 'c:\Strawberry\perl\vendor\lib';
use lib 'c:\Strawberry\perl\site\lib';
use lib 'c:\Program Files\nxlog\data';
Config
This optional directive allows you to pass configuration strings to the script file defined by the PerlCode
directive. This is a block directive and any text enclosed within <Config></Config> is submitted as a single
string literal to the Perl code.
NOTE: If you pass several values using this directive (for example, separated by the \n delimiter), be sure to parse the string correspondingly inside the Perl code.
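A sketch of passing settings through a Config block; the keys shown are hypothetical, and the Perl code itself must split and parse the resulting string.

```
<Output out>
    Module    om_perl
    PerlCode  modules/output/perl/perl-output.pl
    <Config>
        host=alerts.example.com
        port=25
    </Config>
</Output>
```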
Call
This optional directive specifies the Perl subroutine to invoke. With this directive, you can call only specific
subroutines from your Perl code. If the directive is not specified, the default subroutine write_data is
invoked.
123.15.2. Examples
Example 690. Handling Event Data in om_perl
This output module sends events to the Perl script, which simply writes the data from the $raw_event field
into a file.
nxlog.conf
<Output out>
    Module    om_perl
    PerlCode  modules/output/perl/perl-output.pl
    Call      write_data1
</Output>
perl-output.pl
use strict;
use warnings;
use Log::Nxlog;
sub write_data1
{
my ($event) = @_;
my $rawevt = Log::Nxlog::get_field($event, 'raw_event');
open(OUT, '>', 'tmp/output') || die("cannot open tmp/output: $!");
print OUT $rawevt, "(from perl)", "\n";
close(OUT);
}
123.16. Named Pipes (om_pipe)
123.16.1. Configuration
The om_pipe module accepts the following directives in addition to the common module directives.
Pipe
This mandatory directive specifies the name of the output pipe file. The module checks if the specified pipe
file exists and creates it in case it does not. If the specified pipe file is not a named pipe, the module does not
start.
OutputType
This directive specifies the output data format. The default value is LineBased. See the OutputType directive in the list of common module directives.
123.16.2. Examples
This example provides the NXLog configuration for forwarding messages to a named pipe on a UNIX-like
operating system.
With this configuration, NXLog reads messages from a file and forwards them to a pipe. No additional
processing is done.
nxlog.conf
<Input in>
    Module  im_file
    File    "/tmp/input"
</Input>

<Output out>
    Module  om_pipe
    Pipe    "/tmp/output"
</Output>
123.17. Python (om_python)
The Python script should import the nxlog module, and will have access to the following classes and functions.
nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This function does the same as the core
log_debug() procedure.
nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This function does the same as the core
log_info() procedure.
nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This function does the same as the core
log_warning() procedure.
nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This function does the same as the core
log_error() procedure.
class nxlog.Module
This class is instantiated by NXLog and can be accessed via the LogData.module attribute. This can be used to
set or access variables associated with the module (see the example below).
class nxlog.LogData
This class represents an event. It is instantiated by NXLog and passed to the write_data() method.
delete_field(name)
This method removes the field name from the event record.
field_names()
This method returns a list with the names of all the fields currently in the event record.
get_field(name)
This method returns the value of the field name in the event.
set_field(name, value)
This method sets the value of field name to value.
module
This attribute is set to the Module object associated with the LogData event.
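To make this interface concrete, the sketch below exercises the same method names against a minimal stand-in class. The StubLogData class is hypothetical and exists only to mirror the documented API; inside NXLog, the LogData object is supplied by the nxlog module.

```python
class StubLogData:
    """Hypothetical stand-in mirroring the documented nxlog.LogData methods."""
    def __init__(self, fields):
        self._fields = dict(fields)

    def field_names(self):
        return list(self._fields)

    def get_field(self, name):
        return self._fields[name]

    def set_field(self, name, value):
        self._fields[name] = value

    def delete_field(self, name):
        del self._fields[name]

def write_data(event):
    # These calls work identically on a real nxlog.LogData object.
    event.set_field('Tag', 'processed')
    if 'Password' in event.field_names():
        event.delete_field('Password')

event = StubLogData({'raw_event': 'test message', 'Password': 'x'})
write_data(event)
```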
See the list of installer packages that provide the om_python module in the Available Modules chapter of the
NXLog User Guide.
123.17.1. Configuration
The om_python module accepts the following directives in addition to the common module directives.
PythonCode
This mandatory directive specifies a file containing Python code. The om_python instance will call a
write_data() function which must accept an nxlog.LogData object as its only argument.
Call
This optional directive specifies the Python function to invoke, allowing specific functions to be called from
your Python code. If the directive is not specified, the default function write_data is invoked.
123.17.2. Examples
Example 692. Forwarding Events With om_python
This example shows an alerter implemented as an output module instance in Python. First, any event with
a normalized severity less than 4/ERROR is dropped; see the Exec directive (xm_syslog and most other
modules set a normalized $SeverityValue field). Then the Python function generates a custom email and
sends it via SMTP.
nxlog.conf
1 <Output out>
2 Module om_python
3 PythonCode /opt/nxlog/etc/output.py
4 Exec if $SeverityValue < 4 drop();
5 </Output>
output.py (truncated)
from email.mime.text import MIMEText
import pprint
import smtplib
import socket

import nxlog

HOSTNAME = socket.gethostname()
FROM_ADDR = 'nxlog@{}'.format(HOSTNAME)
TO_ADDR = 'you@example.com'

def write_data(event):
    nxlog.log_debug('Python alerter received event')
NOTE
This module requires the xm_json extension module to be loaded in order to convert the payload to JSON. If
the $raw_event field is empty, the fields will be automatically converted to JSON. If $raw_event contains a
valid JSON string, it will be sent as-is; otherwise, a JSON record will be generated in the following structure:
{ "raw_event": "escaped raw_event content" }
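The payload rules above can be sketched in a few lines of Python. This is an illustration of the documented behavior, not NXLog code; the function name is an assumption.

```python
import json

def raijin_payload(raw_event, fields):
    """Return the JSON string sent for one event, per the documented rules."""
    if not raw_event:
        # Empty $raw_event: the event fields are converted to JSON.
        return json.dumps(fields)
    try:
        json.loads(raw_event)
        return raw_event          # valid JSON is sent as-is
    except ValueError:
        # Otherwise wrap the escaped content in a "raw_event" object.
        return json.dumps({"raw_event": raw_event})
```

For example, a plain syslog line would be wrapped, while a JSON-formatted $raw_event passes through untouched.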
See the list of installer packages that provide the om_raijin module in the Available Modules chapter of the NXLog
User Guide.
123.18.1. Configuration
The om_raijin module accepts the following directives in addition to the common module directives. The URL
directive is required.
DBName
This mandatory directive specifies the database name to insert data into.
DBTable
This mandatory directive specifies the database table to insert data into.
URL
This mandatory directive specifies the URL for the module to POST the event data. If multiple URL directives
are specified, the module works in a failover configuration. If a destination becomes unavailable, the module
automatically fails over to the next one. If the last destination becomes unavailable, the module will fail over
to the first destination. The module operates in plain HTTP or HTTPS mode depending on the URL provided. If
the port number is not explicitly indicated in the URL, it defaults to port 80 for HTTP and port 443 for HTTPS.
The URL should point to the _bulk endpoint, otherwise Raijin will return 400 Bad Request.
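The documented failover order (try each URL in turn, wrapping back to the first after the last fails) can be sketched as a simple rotation; this is an illustration of the described scheme, not om_raijin internals.

```python
def failover_order(urls, start=0):
    """Yield destinations in the order a failover pass would try them."""
    n = len(urls)
    for i in range(n):
        yield urls[(start + i) % n]

urls = [
    "http://localhost:9200/_bulk",      # primary
    "http://192.168.1.1:9200/_bulk",    # first failover
    "http://example.com:9200/_bulk",    # second failover
]

# If the last destination is current and fails, the order wraps to the first.
order = list(failover_order(urls, start=2))
```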
FlushInterval
The module will send an INSERT command to the defined endpoint after this amount of time in seconds,
unless FlushLimit is reached first. This defaults to 5 seconds.
FlushLimit
When the number of events in the output buffer reaches the value specified by this directive, the module will
send an INSERT command to the endpoint defined in URL. This defaults to 500 events. The FlushInterval
directive may trigger sending the INSERT request before this limit is reached if the log volume is low to ensure
that data is promptly sent.
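The interaction of FlushLimit and FlushInterval can be sketched as a small batching loop. This is illustrative only; the class and its timings are assumptions, not NXLog's implementation.

```python
import time

class Batcher:
    """Flush when the buffer reaches flush_limit events or flush_interval seconds pass."""
    def __init__(self, flush_limit=500, flush_interval=5.0, send=print):
        self.flush_limit = flush_limit
        self.flush_interval = flush_interval
        self.send = send            # stand-in for the INSERT request
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.flush_limit:
            self.flush()

    def tick(self):
        # Called periodically: flush on timeout even below the event limit.
        if self.buffer and time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        self.send(list(self.buffer))
        self.buffer.clear()
        self.last_flush = time.monotonic()

sent = []
b = Batcher(flush_limit=3, flush_interval=60.0, send=sent.append)
for e in ["e1", "e2", "e3", "e4"]:
    b.add(e)
# sent now holds one batch of three; "e4" waits for the next flush
```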
HTTPSAllowUntrusted
This boolean directive specifies that the remote connection should be allowed without certificate verification.
If set to TRUE, the connection will be allowed even if the remote HTTPS server presents an unknown or self-
signed certificate. The default value is FALSE: the remote HTTPS server must present a trusted certificate.
HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS server. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.
HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS server. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.
HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.
HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.
HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS server. The certificate filenames in this directory must be
in the OpenSSL hashed format.
HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be consulted when checking the
certificate of the remote HTTPS server.
HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.
HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.
HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).
NOTE
Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not support
the zlib compression mechanism. The module will emit a warning on startup if the compression support is
missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.
HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
ProxyAddress
This optional directive is used to specify the IP address of the proxy server in case the module should connect
to the Raijin server through a proxy.
NOTE
The om_raijin module supports HTTP proxying only. SOCKS4/SOCKS5 proxying is not supported.
ProxyPort
This optional directive is used to specify the port number required to connect to the proxy server.
SNI
This optional directive specifies the host name used for Server Name Indication (SNI) in HTTPS mode.
123.18.2. Examples
Example 693. Sending Logs to a Raijin Server
This configuration reads log messages from file and forwards them to the Raijin server on localhost.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Output raijin>
6 Module om_raijin
7 URL http://localhost:9200/_bulk
8 FlushInterval 2
9 FlushLimit 100
10 </Output>
This configuration sends logs to a Raijin server in a failover configuration (multiple URLs defined). The
actual destinations used in this case are http://localhost:9200/_bulk,
http://192.168.1.1:9200/_bulk, and http://example.com:9200/_bulk.
nxlog.conf
1 <Extension json>
2 Module xm_json
3 </Extension>
4
5 <Output raijin>
6 Module om_raijin
7 URL http://localhost:9200/_bulk
8 URL http://192.168.1.1:9200/_bulk
9 URL http://example.com:9200/_bulk
10 </Output>
The input counterpart, im_redis, can be used to retrieve data from a Redis server.
See the list of installer packages that provide the om_redis module in the Available Modules chapter of the NXLog
User Guide.
123.19.1. Configuration
The om_redis module accepts the following directives in addition to the common module directives. The Host
directive is required.
Host
This mandatory directive specifies the IP address or DNS hostname of the Redis server to connect to.
Channel
This directive is interpreted the same way as the Key directive (can be an expression which evaluates to a
string), except that its evaluated value will be used as the name of the Redis channel to which this module will
publish records. The usage of this directive is mutually exclusive with the usage of the LPUSH, RPUSH,
LPUSHX, and RPUSHX commands in the Command directive.
Command
This optional directive specifies the command to be used. The possible commands are LPUSH, RPUSH (the
default), LPUSHX, RPUSHX and PUBLISH.
Key
This specifies the Key used by the RPUSH command. It must be a string type expression. If the expression in
the Key directive is not a constant string (it contains functions, field names, or operators), it will be evaluated
for each event to be inserted. The default is nxlog. The usage of this directive is mutually exclusive with the
usage of the PUBLISH command in the Command directive.
OutputType
See the OutputType directive in the list of common module directives. If this directive is unset, the default
Dgram formatter function is used, which writes the value of $raw_event without a line terminator. To
preserve structured data Binary can be used, but it must also be set on the other end.
Port
This specifies the port number of the Redis server. The default is port 6379.
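For reference, pushing one record to the default nxlog key is a plain RESP array of bulk strings on the wire. The sketch below builds that encoding; it illustrates the Redis serialization protocol, not om_redis internals, and the sample event is made up.

```python
def resp_command(*parts):
    """Encode a Redis command as a RESP array of bulk strings."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p if isinstance(p, bytes) else p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

# One event pushed to the default key, as with Command RPUSH and Key nxlog.
wire = resp_command("RPUSH", "nxlog", "<14>May  1 12:00:00 host app: test")
```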
Nxlog.log_debug(msg)
Send the message msg to the internal logger at DEBUG log level. This method does the same as the core
log_debug() procedure.
Nxlog.log_info(msg)
Send the message msg to the internal logger at INFO log level. This method does the same as the core
log_info() procedure.
Nxlog.log_warning(msg)
Send the message msg to the internal logger at WARNING log level. This method does the same as the core
log_warning() procedure.
Nxlog.log_error(msg)
Send the message msg to the internal logger at ERROR log level. This method does the same as the core
log_error() procedure.
class Nxlog.LogData
This class represents an event. It is instantiated by NXLog and passed to the method specified by the Call
directive.
field_names()
This method returns an array with the names of all the fields currently in the event record.
get_field(name)
This method returns the value of the field name in the event.
set_field(name, value)
This method sets the value of field name to value.
See the list of installer packages that provide the om_ruby module in the Available Modules chapter of the NXLog
User Guide.
123.20.1. Configuration
The om_ruby module accepts the following directives in addition to the common module directives. The
RubyCode directive is required.
RubyCode
This mandatory directive specifies a file containing Ruby code. The om_ruby instance will call the method
specified by the Call directive. The method must accept an Nxlog.LogData object as its only argument.
Call
This optional directive specifies the Ruby method to call. The default is write_data.
123.20.2. Examples
Example 695. Forwarding Events With om_ruby
This example uses a Ruby script to choose an output file according to the severity of the event. Normalized
severity fields are added by most modules; see, for example, the xm_syslog $SeverityValue field.
TIP See Using Dynamic Filenames for a way to implement this functionality natively.
nxlog.conf
1 <Output out>
2 Module om_ruby
3 RubyCode ./modules/output/ruby/proc2.rb
4 Call write_data
5 </Output>
proc2.rb
def write_data event
  if event.get_field('SeverityValue') >= 4
    Nxlog.log_debug('Writing out high severity event')
    File.open('tmp/high_severity', 'a') do |file|
      file.write("#{event.get_field('raw_event')}\n")
      file.flush
    end
  else
    Nxlog.log_debug('Writing out low severity event')
    File.open('tmp/low_severity', 'a') do |file|
      file.write("#{event.get_field('raw_event')}\n")
      file.flush
    end
  end
end
See the list of installer packages that provide the om_ssl module in the Available Modules chapter of the NXLog
User Guide.
123.21.1. Configuration
The om_ssl module accepts the following directives in addition to the common module directives. The Host
directive is required.
Host
The module will connect to the IP address or hostname defined in this directive. If additional hosts are
specified on new lines, the module works in a failover configuration. If a destination becomes unavailable, the
module automatically fails over to the next one. If the last destination becomes unavailable, the module will
fail over to the first destination. Add the destination port number to the end of a host using a colon as a
separator (host:port). For each destination with no port number defined here, the port number specified in
the Port directive will be used. Port numbers defined here take precedence over any port number defined in
the Port directive.
Port
The module will connect to the port number on the destination host defined in this directive. This
configuration is only used for any destination that does not have a port number specified in the Host
directive. If no port is configured for a destination in either directive, the default port is used, which is port
514.
IMPORTANT
The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port in Host.
AllowUntrusted
This boolean directive specifies that the connection should be allowed without certificate verification. If set to
TRUE the connection will be allowed even if the remote server presents an unknown or self-signed certificate.
The default value is FALSE: the remote socket must present a trusted certificate.
CADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote socket. The certificate filenames in this directory must be in the OpenSSL
hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by including
a copy of the certificate in this directory.
CAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote socket. To trust a self-signed certificate presented by the remote (which is not signed by a CA),
provide that certificate instead.
CAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are automatically removed.
This directive is only supported on Windows. This directive and the CADir and CAFile directives are mutually
exclusive.
CertFile
This specifies the path of the certificate file to be used for the SSL handshake.
CertKeyFile
This specifies the path of the certificate key file to be used for the SSL handshake.
CertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc), whitespaces are
automatically removed. This directive is only supported on Windows. This directive and the CertFile and
CertKeyFile directives are mutually exclusive.
CRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote socket. The certificate filenames in this directory must be in the
OpenSSL hashed format.
CRLFile
This specifies the path of the certificate revocation list (CRL) which will be used to check the certificate of the
remote socket against.
KeyPass
With this directive, a password can be supplied for the certificate key file defined in CertKeyFile. This directive
is not needed for passwordless private keys.
LocalPort
This optional directive specifies the local port number of the connection. If this is not specified a random high
port number will be used, which is not always ideal in firewalled network environments.
OutputType
See the OutputType directive in the list of common module directives. The default is LineBased_LF.
Reconnect
This optional directive sets the reconnect interval in seconds. If it is set, the module attempts to reconnect in
every defined second. If it is not set, the reconnect interval will start at 1 second and doubles on every
attempt. If the duration of the successful connection is greater than the current reconnect interval, then the
reconnect interval will be reset to 1 sec.
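The default schedule described above (start at 1 second, double on each failed attempt, reset after a connection that outlives the current interval) can be sketched as follows; the function name is an assumption made for illustration.

```python
def next_interval(current, connected_duration=None):
    """Return the next reconnect interval in seconds, per the documented rules."""
    if connected_duration is not None and connected_duration > current:
        return 1            # long-lived connection: reset to 1 second
    return current * 2      # failed attempt: double the interval

# A run of consecutive failures doubles the wait: 1, 2, 4, 8, ...
interval, waits = 1, []
for _ in range(4):
    waits.append(interval)
    interval = next_interval(interval)
```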
SNI
This optional directive specifies the host name used for Server Name Indication (SNI).
SSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
SSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the SSLProtocol directive is
set to TLSv1.3. Use the same format as in the SSLCipher directive.
SSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).
NOTE
Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not support
the zlib compression mechanism. The module will emit a warning on startup if the compression support is
missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.
SSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions
may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
TCPNoDelay
This boolean directive is used to turn off the network optimization performed by Nagle’s algorithm. Nagle’s
algorithm is a network optimization tweak that tries to reduce the number of small packets sent out to the
network, by merging them into bigger frames, and by not sending them to the other side of the session
before receiving the ACK. If this directive is unset, the TCP_NODELAY socket option will not be set.
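Setting the equivalent option on a plain socket looks like this. It is a generic sockets illustration of what TCPNoDelay TRUE requests; NXLog sets the option internally.

```python
import socket

# Create a TCP socket and disable Nagle's algorithm, as TCPNoDelay TRUE does.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect before connecting.
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```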
123.21.2. Procedures
The following procedures are exported by om_ssl.
reconnect();
Force a reconnection. This can be used from a Schedule block to periodically reconnect to the server.
123.21.3. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.
This configuration reads log messages from a socket and sends them in the NXLog binary format to another
NXLog agent.
nxlog.conf
1 <Input uds>
2 Module im_uds
3 UDS tmp/socket
4 </Input>
5
6 <Output ssl>
7 Module om_ssl
8 Host localhost:23456
9 LocalPort 15014
10 CAFile %CERTDIR%/ca.pem
11 CertFile %CERTDIR%/client-cert.pem
12 CertKeyFile %CERTDIR%/client-key.pem
13 KeyPass secret
14 AllowUntrusted TRUE
15 OutputType Binary
16 </Output>
17
18 # old syntax
19 #<Output ssl>
20 # Module om_ssl
21 # Host localhost
22 # Port 23456
23 # CAFile %CERTDIR%/ca.pem
24 # CertFile %CERTDIR%/client-cert.pem
25 # CertKeyFile %CERTDIR%/client-key.pem
26 # KeyPass secret
27 # AllowUntrusted TRUE
28 # OutputType Binary
29 #</Output>
Example 697. Sending Logs to Another NXLog Agent with Failover
This configuration sends logs to another NXLog agent in a failover configuration (multiple Hosts defined).
nxlog.conf
1 <Output ssl>
2 Module om_ssl
3 Host localhost:23456
4 Host 192.168.1.1:23456
5 Host example.com:1514
6 LocalPort 15014
7 </Output>
See the list of installer packages that provide the om_tcp module in the Available Modules chapter of the NXLog
User Guide.
123.22.1. Configuration
The om_tcp module accepts the following directives in addition to the common module directives. The Host or
ListenAddr directive is required.
IMPORTANT
Use either Host for connect mode or ListenAddr for listen mode.
Host
The module will connect to the IP address or hostname defined in this directive. If additional hosts are
specified on new lines, the module works in a failover configuration. If a destination becomes unavailable, the
module automatically fails over to the next one. If the last destination becomes unavailable, the module will
fail over to the first destination. Add the destination port number to the end of a host using a colon as a
separator (host:port). For each destination with no port number defined here, the port number specified in
the Port directive will be used. Port numbers defined here take precedence over any port number defined in
the Port directive.
ListenAddr
The module will listen for connections on this IP address or DNS hostname. The default is localhost. Add
the port number to listen on to the end of a host using a colon as a separator (host:port).
Port
The module will connect to the port number on the destination host defined in this directive. This
configuration is only used for any destination that does not have a port number specified in the Host
directive. If no port is configured for a destination in either directive, the default port is used, which is port
514. Alternatively, if Listen is set to TRUE, the module will listen for connections on this port.
IMPORTANT
The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port in Host.
Listen
If TRUE, this boolean directive specifies that om_tcp should listen for connections at the local address
specified by the Host and Port directives rather than opening a connection to the address. The default is
FALSE: om_tcp will connect to the specified address.
IMPORTANT
The Listen directive will become deprecated in this context from NXLog EE 6.0. Use either Host for connect
mode (FALSE) or ListenAddr for listen mode (TRUE).
LocalPort
This optional directive specifies the local port number of the connection. This directive only applies if Listen is
set to FALSE. If this is not specified a random high port number will be used, which is not always ideal in
firewalled network environments.
OutputType
See the OutputType directive in the list of common module directives. The default is LineBased_LF.
QueueInListenMode
If set to TRUE, this boolean directive specifies that events should be queued if no client is connected. If this
module’s buffer becomes full, the preceding module in the route will be paused or events will be dropped,
depending on whether FlowControl is enabled. This directive only applies if Listen is set to TRUE. The default
is FALSE: om_tcp will discard events if no client is connected.
Reconnect
This optional directive sets the reconnect interval in seconds. If it is set, the module attempts to reconnect in
every defined second. If it is not set, the reconnect interval will start at 1 second and doubles on every
attempt. If the duration of the successful connection is greater than the current reconnect interval, then the
reconnect interval will be reset to 1 sec.
TCPNoDelay
This boolean directive is used to turn off the network optimization performed by Nagle’s algorithm. Nagle’s
algorithm is a network optimization tweak that tries to reduce the number of small packets sent out to the
network, by merging them into bigger frames, and by not sending them to the other side of the session
before receiving the ACK. If this directive is unset, the TCP_NODELAY socket option will not be set.
123.22.2. Procedures
The following procedures are exported by om_tcp.
reconnect();
Force a reconnection. This can be used from a Schedule block to periodically reconnect to the server.
123.22.3. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.
Example 698. Transferring Raw Logs over TCP
With this configuration, NXLog will read log messages from a socket and forward them via TCP.
nxlog.conf
1 <Input uds>
2 Module im_uds
3 UDS /dev/log
4 </Input>
5
6 <Output tcp>
7 Module om_tcp
8 Host 192.168.1.1:1514
9 </Output>
10
11 # old syntax
12 #<Output tcp>
13 # Module om_tcp
14 # Host 192.168.1.1
15 # Port 1514
16 #</Output>
17
18 <Route uds_to_tcp>
19 Path uds => tcp
20 </Route>
This configuration sends logs via TCP in a failover configuration (multiple Hosts defined). The actual
destinations used in this case are localhost:1514, 192.168.1.1:1514, and example.com:1234.
nxlog.conf
1 <Output tcp>
2 Module om_tcp
3 Host localhost:1514
4 Host 192.168.1.1:1514
5 Host example.com:1234
6 </Output>
See the list of installer packages that provide the om_udp module in the Available Modules chapter of the NXLog
User Guide.
123.23.1. Configuration
The om_udp module accepts the following directives in addition to the common module directives. The Host
directive is required.
Host
The module will connect to the IP address or hostname defined in this directive. If additional hosts are
specified on new lines, the module works in a failover configuration. If a destination becomes unavailable, the
module automatically fails over to the next one. If the last destination becomes unavailable, the module will
fail over to the first destination. Add the destination port number to the end of a host using a colon as a
separator (host:port). For each destination with no port number defined here, the port number specified in
the Port directive will be used. Port numbers defined here take precedence over any port number defined in
the Port directive.
WARNING
Because of the nature of the UDP protocol and how ICMP messages are handled by various network devices,
the failover functionality in this module is considered "best effort". Detecting hosts going offline is not
supported. Detecting the receiving service being stopped while the host stays up is supported.
Port
The module will connect to the port number on the destination host defined in this directive. This
configuration is only used for any destination in the Host directive which does not have a port specified. If no
port is configured for a destination in either directive, the default port is used, which is port 514.
IMPORTANT
The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port in Host.
LocalPort
This optional directive specifies the local port number of the connection. If this is not specified a random high
port number will be used, which is not always ideal in firewalled network environments.
OutputType
See the OutputType directive in the list of common module directives. If this directive is not specified, the
default is Dgram.
Reconnect
This optional directive sets the reconnect interval in seconds. If it is set, the module attempts to reconnect in
every defined second. If it is not set, the reconnect interval will start at 1 second and doubles on every
attempt. If the duration of the successful connection is greater than the current reconnect interval, then the
reconnect interval will be reset to 1 sec.
SockBufSize
This optional directive sets the socket buffer size (SO_SNDBUF) to the value specified. If this is not set, the
operating system default is used.
123.23.2. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.
Example 700. Sending Raw Syslog over UDP
This configuration reads log messages from a socket and forwards them via UDP.
nxlog.conf
1 <Input uds>
2 Module im_uds
3 UDS /dev/log
4 </Input>
5
6 <Output udp>
7 Module om_udp
8 Host 192.168.1.1:1514
9 LocalPort 1555
10 </Output>
11 # old syntax
12 #<Output udp>
13 # Module om_udp
14 # Host 192.168.1.1
15 # Port 1514
16 #</Output>
17
18 <Route uds_to_udp>
19 Path uds => udp
20 </Route>
This configuration sends logs via UDP in a failover configuration (multiple Hosts defined). The actual
destinations used in this case are localhost:1514, 192.168.1.1:1514, and example.com:1234.
nxlog.conf
1 <Output udp>
2 Module om_udp
3 Host localhost:1514
4 Host 192.168.1.1:1514
5 Host example.com:1234
6 </Output>
This module is very similar to the om_udp module and can be used as a drop-in replacement. The SpoofAddress
configuration directive can be used to set the address if necessary. The UDP datagram will be sent with the local
IP address if the IP address to be spoofed is invalid. The source port in the UDP datagram will be set to the port
number of the local connection (the port number is not spoofed).
The network input modules (im_udp, im_tcp, and im_ssl) all set the $MessageSourceAddress field, and this value
will be used when sending the UDP datagrams (unless SpoofAddress is explicitly set to something else). This
allows logs to be collected over reliable and secure transports (like SSL), while the om_udpspoof module is only
used for forwarding to the destination server that requires spoofed UDP input.
See the list of installer packages that provide the om_udpspoof module in the Available Modules chapter of the
NXLog User Guide.
123.24.1. Configuration
The om_udpspoof module accepts the following directives in addition to the common module directives. The Host
directive is required.
Host
The module will send UDP datagrams to this IP address or DNS hostname. Add the destination port number
to the end of a host using a colon as a separator (host:port).
Port
The module will send UDP packets to this port. The default port is 514 if this directive is not specified.
IMPORTANT
The Port directive will become deprecated in this context from NXLog EE 6.0. Provide the port in Host.
LocalPort
This optional directive specifies the local port number of the connection. If this is not specified a random high
port number will be used which is not always ideal in firewalled network environments.
MTU
This directive can be used to specify the maximum transfer size of the IP data fragments. If this value exceeds
the MTU size of the sending interface, an error may occur and the packet be dropped. The default MTU value
is 1500.
OutputType
See the OutputType directive in the list of common module directives. If this directive is not specified, the
default is Dgram.
SockBufSize
This optional directive sets the socket buffer size (SO_SNDBUF) to the value specified. If this is not set, the
operating system default is used.
SpoofAddress
This directive is optional. The IP address rewrite takes place depending on how this directive is specified.
Expression
The expression specified here will be evaluated for each message to be sent. Normally this is a field name,
but any expression that evaluates to a string or an ipaddr type is accepted. For example, SpoofAddress
$MessageSourceAddress has the same effect as when SpoofAddress is not set.
123.24.2. Examples
Old syntax examples are included; they will become invalid with NXLog EE 6.0.
The im_tcp module will accept log messages via TCP and will set the $MessageSourceAddress field for each
event. This value will be used by om_udpspoof to set the UDP source address when sending the data to
logserver via UDP.
nxlog.conf
1 <Input tcp>
2 Module im_tcp
3 ListenAddr 0.0.0.0:1514
4 </Input>
5
6 <Output udpspoof>
7 Module om_udpspoof
8 Host logserver.example.com:1514
9 </Output>
10
11 # old syntax
12 #<Output udpspoof>
13 # Module om_udpspoof
14 # Host logserver.example.com
15 # Port 1514
16 #</Output>
17
18 <Route tcp_to_udpspoof>
19 Path tcp => udpspoof
20 </Route>
Example 703. Forwarding Log Messages with Spoofed IP Address from Multiple Sources
This configuration accepts log messages via TCP and UDP, and also reads messages from a file. Both im_tcp
and im_udp set the $MessageSourceAddress field for incoming messages, and in both cases this is used to
set $sourceaddr. The im_file module instance is configured to set the $sourceaddr field to 10.1.2.3 for
all log messages. Finally, the om_udpspoof output module instance is configured to read the value of the
$sourceaddr field for spoofing the UDP source address.
nxlog.conf (truncated)
1 <Input tcp>
2 Module im_tcp
3 Host 0.0.0.0:1514
4 Exec $sourceaddr = $MessageSourceAddress;
5 </Input>
6
7 <Input udp>
8 Module im_udp
9 Host 0.0.0.0:1514
10 Exec $sourceaddr = $MessageSourceAddress;
11 </Input>
12
13 <Input file>
14 Module im_file
15 File '/var/log/myapp.log'
16 Exec $sourceaddr = 10.1.2.3;
17 </Input>
18
19 <Output udpspoof>
20 Module om_udpspoof
21 # destination port: 1514
22 Host 10.0.0.1:1514
23 # originating port: 15000
24 LocalPort 15000
25 SpoofAddress $sourceaddr
26 </Output>
27
28 # old syntax
29 [...]
NOTE: This module supports SOCK_DGRAM type sockets only. SOCK_STREAM type sockets may be supported in the future.
See the list of installer packages that provide the om_uds module in the Available Modules chapter of the NXLog
User Guide.
123.25.1. Configuration
The om_uds module accepts the following directives in addition to the common module directives.
UDS
This specifies the path of the Unix domain socket. The default is /dev/log.
UDSType
This directive specifies the domain socket type. Supported values are dgram, stream, and auto. The default is
auto.
OutputType
See the OutputType directive in the list of common module directives. If UDSType is set to dgram, or is set to
auto and a SOCK_DGRAM type socket is detected, this defaults to Dgram. If UDSType is set to stream, or is set
to auto and a SOCK_STREAM type socket is detected, this defaults to LineBased.
123.25.2. Examples
Example 704. Using the om_uds Module
This configuration reads log messages from a file, adds BSD Syslog headers with default fields, and writes
the messages to the socket.
nxlog.conf
1 <Extension syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Input file>
6 Module im_file
7 File "/var/log/custom_app.log"
8 </Input>
9
10 <Output uds>
11 Module om_uds
12 # Defaulting Syslog fields and creating Syslog output
13 Exec parse_syslog_bsd(); to_syslog_bsd();
14 UDS /dev/log
15 </Output>
16
17 <Route file_to_uds>
18 Path file => uds
19 </Route>
See the list of installer packages that provide the om_webhdfs module in the Available Modules chapter of the
NXLog User Guide.
123.26.1. Configuration
The om_webhdfs module accepts the following directives in addition to the common module directives. The File
and URL directives are required.
File
This mandatory directive specifies the name of the destination file. It must be a string type expression. If the
expression in the File directive is not a constant string (it contains functions, field names, or operators), it will
be evaluated before each request is dispatched to the WebHDFS REST endpoint (and after the Exec is
evaluated). Note that the filename must be quoted to be a valid string literal, unlike in other directives which
take a filename argument.
URL
This mandatory directive specifies the URL of the WebHDFS REST endpoint where the module should POST
the event data. The module operates in plain HTTP or HTTPS mode depending on the URL provided, and
connects to the hostname specified in the URL. If the port number is not explicitly indicated in the URL, it
defaults to port 80 for HTTP and port 443 for HTTPS.
FlushInterval
The module will send the data to the endpoint defined in URL after this amount of time in seconds, unless
FlushLimit is reached first. This defaults to 5 seconds.
FlushLimit
When the number of events in the output buffer reaches the value specified by this directive, the module will
send the data to the endpoint defined in URL. This defaults to 500 events. If the log volume is low,
FlushInterval may trigger sending the write request before this limit is reached, ensuring that data is sent
promptly.
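As a sketch of how these two directives interact, consider the following output instance (the URL and file name are placeholder values, not from the original examples):

```
<Output hdfs_buffered>
    Module         om_webhdfs
    URL            http://hdfsserver.example.com/
    File           "myfile"
    # flush once 1000 events are buffered...
    FlushLimit     1000
    # ...or after 10 seconds, whichever happens first
    FlushInterval  10
</Output>
```

With this configuration, a batch is posted to the WebHDFS endpoint as soon as either threshold is reached.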
HTTPSAllowUntrusted
This boolean directive specifies that the connection should be allowed without certificate verification. If set to
TRUE, the connection will be allowed even if the remote HTTPS server presents an unknown or self-signed
certificate. The default value is FALSE: the remote must present a trusted certificate.
HTTPSCADir
This specifies the path to a directory containing certificate authority (CA) certificates, which will be used to
check the certificate of the remote HTTPS server. The certificate filenames in this directory must be in the
OpenSSL hashed format. A remote’s self-signed certificate (which is not signed by a CA) can also be trusted by
including a copy of the certificate in this directory.
HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate, which will be used to check the certificate of
the remote HTTPS server. To trust a self-signed certificate presented by the remote (which is not signed by a
CA), provide that certificate instead.
HTTPSCAThumbprint
This optional directive specifies the certificate thumbprint of the certificate authority (CA), which is used to
look up the CA certificate from the Windows certificate store. The hexadecimal fingerprint string can be
copied straight from Windows Certificate Manager (certmgr.msc); whitespace is automatically removed.
This directive is only supported on Windows. This directive and the HTTPSCADir and HTTPSCAFile directives
are mutually exclusive.
HTTPSCertFile
This specifies the path of the certificate file to be used for the HTTPS handshake.
HTTPSCertKeyFile
This specifies the path of the certificate key file to be used for the HTTPS handshake.
HTTPSCertThumbprint
This optional directive specifies the certificate thumbprint to be used for the SSL handshake. The hexadecimal
fingerprint string can be copied straight from Windows Certificate Manager (certmgr.msc); whitespace is
automatically removed. This directive is only supported on Windows. This directive and the HTTPSCertFile and
HTTPSCertKeyFile directives are mutually exclusive.
HTTPSCRLDir
This specifies the path to a directory containing certificate revocation lists (CRLs), which will be consulted
when checking the certificate of the remote HTTPS server. The certificate filenames in this directory must be
in the OpenSSL hashed format.
HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL), which will be consulted when checking the
certificate of the remote HTTPS server.
HTTPSKeyPass
With this directive, a password can be supplied for the certificate key file defined in HTTPSCertKeyFile. This
directive is not needed for passwordless private keys.
HTTPSSSLCipher
This optional directive can be used to set the permitted SSL cipher list, overriding the default. Use the format
described in the ciphers(1ssl) man page.
HTTPSSSLCiphersuites
This optional directive can be used to define the permitted SSL cipher list in case the HTTPSSSLProtocol
directive is set to TLSv1.3. Use the same format as in the HTTPSSSLCipher directive.
HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The
compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults
to FALSE (the compression is disabled).
NOTE: Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not
support the zlib compression mechanism. The module will emit a warning on startup if compression support is
missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.
HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values
which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. By default, the
TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may
not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
QueryParam
This configuration option can be used to specify additional HTTP Query Parameters such as BlockSize. This
option may be used to define more than one parameter:
QueryParam blocksize 42
QueryParam destination /foo
123.26.2. Examples
Example 705. Sending Logs to a WebHDFS Server
This example output module instance forwards messages to the specified URL and file using the WebHDFS
protocol.
nxlog.conf
1 <Output hdfs>
2 Module om_webhdfs
3 URL http://hdfsserver.domain.com/
4 File "myfile"
5 QueryParam blocksize 42
6 QueryParam destination /foo
7 </Output>
See the list of installer packages that provide the om_zmq module in the Available Modules chapter of the NXLog
User Guide.
123.27.1. Configuration
The om_zmq module accepts the following directives in addition to the common module directives. The Address,
ConnectionType, Port, and SocketType directives are required.
Address
This directive specifies the ZeroMQ socket address.
ConnectionType
This mandatory directive specifies the underlying transport protocol. It may be one of the following: TCP, PGM,
or EPGM.
Port
This directive specifies the ZeroMQ socket port.
SocketType
This mandatory directive defines the type of the socket to be used. It may be one of the following: REP,
ROUTER, PUB, XPUB, or PUSH. This must be set to PUB if ConnectionType is set to PGM or EPGM.
Interface
This directive specifies the ZeroMQ socket interface.
Listen
If this boolean directive is set to TRUE, om_zmq will bind to the Address specified and listen for connections. If
FALSE, om_zmq will connect to the Address. The default is FALSE.
OutputType
See the OutputType directive in the list of common module directives. The default value is Dgram.
SockOpt
This directive can be used to set ZeroMQ socket options. For example, SockOpt ZMQ_BACKLOG 2000. This
directive may be used more than once to set multiple options.
123.27.2. Examples
Example 706. Using the om_zmq Module
This example configuration reads log messages from a file and forwards them via a ZeroMQ PUSH socket over
TCP.
nxlog.conf
1 <Input file>
2 Module im_file
3 File "/var/log/messages"
4 </Input>
5
6 <Output zmq>
7 Module om_zmq
8 SocketType PUSH
9 ConnectionType TCP
10 Address 10.0.0.1
11 Port 1514
12 </Output>
13
14 <Route file_to_zmq>
15 Path file => zmq
16 </Route>
NXLog Manager
Chapter 124. Introduction
Managing a log collection system where agents are scattered around the whole network can be a daunting task,
especially if there are multiple teams in charge of each system.
NXLog Manager is a log management solution which provides a web-based administration interface to configure
all parameters of log collection, enabling the log management administrator to efficiently and securely
monitor and manage the NXLog agents from a central console. NXLog Manager is able to operate in clustered
mode if the network topology requires multiple manager nodes.
124.1. Requirements
To use and administer NXLog Manager, the user is expected to be familiar with the following:
There are known problems with Microsoft Internet Explorer, and it is not supported.
124.2. Architecture
NXLog Manager web application
NXLog Manager is a Java based web application which can communicate with the NXLog agents.
NXLog
NXLog is the log collector with no frontend. NXLog can be used in both server and client mode. When running
as a client (agent), NXLog will collect local log sources and will forward the data over the network. NXLog can
also operate as a server to store messages locally or as a relay to forward messages to another instance.
The architecture of NXLog Manager allows log collection to function even if NXLog Manager is not running or the
control channel is not functional, thus an NXLog Manager upgrade will not cause any interruption to the log
collection process.
Chapter 125. System Requirements
In order to function efficiently, NXLog Manager requires a certain amount of available system resources on the
host system. The table below provides general guidelines to use when planning an NXLog Manager installation.
Actual system requirements will vary based on the number of agents to be managed; therefore, both minimum
and recommended requirements are listed. Always thoroughly test a deployment to verify that the desired
performance can be achieved with the system resources available.
                 Minimum  Recommended
Processor cores  2        2
NOTE: The NXLog Manager memory/RAM requirement increases by 2 MB for each managed agent. For example, if an
NXLog Manager instance monitors 100 agents, the recommended memory/RAM requirement is 4296 MB. These
requirements are in addition to the operating system’s requirements, and should be combined cumulatively with
the NXLog Enterprise Edition’s System Requirements for systems running both NXLog Enterprise Edition and
NXLog Manager.
Chapter 126. Supported Platforms
NXLog Manager requires either the OpenJDK 7 JRE or the OpenJDK 8 JRE to run on the following GNU/Linux
operating systems.
CentOS 6 java-1.8.0-openjdk-headless
CentOS 7 java-1.8.0-openjdk-headless
Debian 8 openjdk-7-jre
Debian 9 openjdk-8-jre
NOTE: NXLog Manager is only supported on the 64-bit version of Java.
Chapter 127. Installation
127.1. Installing on Debian Wheezy
Install the DEB package with the following commands:
# dpkg -i nxlog-manager_X.X.XXX_amd64.deb
# apt-get -f install
127.1.1. Requirements
• nxlog-manager-4.x requires openjdk-7-jre and
• nxlog-manager-5.x requires either openjdk-7-jre or openjdk-8-jre.
If Java is not installed or the correct version of Java is not selected, NXLog Manager will refuse to start. To select
the default version of Java on your system, use the command:
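On Debian-based systems this is usually done with update-alternatives. The following is an assumed example; the selection offered depends on the JRE packages actually installed:

```
# update-alternatives --config java
```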
WARNING: Make sure that your hostname and DNS settings are set up correctly to avoid problems with NXLog
Manager. Refer to host setup common issues for more information.
To access the web interface from another host, the firewall rules should allow access to port 9090 from the
external network:
# iptables -F
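Note that iptables -F flushes all rules rather than opening a single port. A narrower rule permitting only the web interface port would look like the following; this is an assumed example and should be adapted to the local firewall policy:

```
# iptables -I INPUT -p tcp --dport 9090 -j ACCEPT
```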
procedure is identical on all platforms supported by Docker (Linux, Windows and MacOS). Extract the files from
the compressed Docker archive.
To build, (re)create and start the container execute the following command.
$ docker-compose up -d
By default the Dockerized NXLog Manager listens on port 9090. The port configuration is defined as
HOST:CONTAINER. To change this setting, edit the docker-compose.yml file by modifying the HOST port number
preceding the colon (9080 in the example below). The port number for the CONTAINER, following the colon,
should be left at 9090.
docker-compose.yml
ports:
- "4041:4041"
- "9080:9090"
restart: always
For the configuration change to take effect, the Docker container needs to be stopped and started with the
following commands.
$ sudo docker-compose up
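Assuming the docker-compose.yml shown above, a typical stop-and-start cycle is:

```
$ sudo docker-compose stop
$ sudo docker-compose up -d
```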
NOTE: The NXLog Manager Docker container includes MySQL, so there is no need to install and configure MySQL
separately. After installing, you may proceed with the NXLog Manager configuration.
127.4.1. Setting up NXLog Manager on AWS
1. To start with, the database needs to be prepared. The Amazon Relational Database Service (RDS) works well
with NXLog Manager. For data redundancy, create a database (MySQL or MariaDB) in Multi-AZ deployment
mode. This option will create a standby replica.
2. Install NXLog Manager from the DEB or RPM package, depending on the operating system. At least the EC2
"t2.small" instance type is recommended.
3. Edit /opt/nxlog-manager/db_init/db.conf. Add the RDS hostname and the database master
username/password to the MYSQLOPTS variable.
4. Execute the database initialization script. This should only be done once for the cluster!
# cd /opt/nxlog-manager/db_init
# ./dbinit.sh
5. Configure NXLog Manager to run in a distributed manner by editing the INSTANCE_MODE in /opt/nxlog-
manager/conf/nxlog-manager.conf.
INSTANCE_MODE=distributed-manager
<Set name="jmsBrokerAddress">172.31.9.100</Set>
<Set name="jdbcUrl">jdbc:mysql://RDS_INSTANCE.rds.amazonaws.com:3306/nxlog-manager5?useUnicode=true&amp;characterEncoding=UTF-8&amp;characterSetResults=UTF-8&amp;autoReconnect=true</Set>
c. Update Log4ensicsDatabaseAccess.
<New class="co.nxlog.manager.data.bean.common.DatabaseAccessBean">
<Set name="databaseName">nxlog-manager5</Set>
<Set name="username">nxlog-manager5</Set>
<Set name="password">nxlog-manager5</Set>
<Set name="location">RDS_INSTANCE.rds.amazonaws.com:3306</Set>
</New>
7. From the EC2 service dashboard, go to Security Groups. Allow TCP traffic on ports 20000 and 34200-34300
to allow JMS communications inside the security group created for NXLog Manager EC2 instances. Please
note that the security group ID should be used in the Source field.
8. The nxlog-manager service can now be started.
1. From the EC2 service dashboard, go to Load Balancing. Click Create Load Balancer. Select Application
Load Balancer and click Create.
2. Configure the load balancer. Set the listener to use port 9090 (the same as the backend application).
3. Choose availability zones and configure a security group in order to limit access to the load balancer.
Configure routing to forward requests to port 9090.
4. Configure the health check path to /nxlog-manager. In Advanced health check settings, set the Success
codes to 302, as it is the default reply from the nxlog-manager service.
5. Select instances for the target group and finish creation of the load balancer. From the EC2 dashboard, go to
Target Groups (in the LOAD BALANCING section). Select the target group and click Edit attributes. Enable
Stickiness to prevent breaking user sessions. This will create a cookie named AWSALB with encrypted
contents.
6. Edit security groups to allow traffic between the load balancer and its target group. After this step, the
solution is ready.
127.5. Configuring NXLog Manager for Standalone Mode
To operate in standalone mode, NXLog Manager requires MySQL or MariaDB v5.5.
NOTE: MariaDB has replaced MySQL in more recent distributions, such as Debian 9 (Stretch).
NOTE: systemctl has replaced service in more recent distributions, such as Debian 9 (Stretch).
127.6. Configuring NXLog Manager for Cluster Mode
It is possible to run multiple instances of NXLog Manager so that a group of agents connect to a specific Manager
instance and all agents can be managed at the same time from either NXLog Manager instance no matter which
one they are connected to. This mode is referred to as distributed mode or cluster mode.
nxlog-manager.conf
INSTANCE_MODE=distributed-manager
The NXLog Manager instances communicate over the JMS (Java Message Service) API. Set the public IP address of
the interface in /opt/nxlog-manager/conf/jetty-env.xml, making sure to replace the value 127.0.0.1 set for
jmsBrokerAddress with the public IP:
jetty-env.xml
<Set name="jmsBrokerAddress">10.0.0.42</Set>
To operate in clustered mode, NXLog Manager requires MariaDB Galera Cluster v5.5.
# apt-get update
galera.cnf
[mysqld]
#mysql settings
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
#galera settings
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="my_wsrep_cluster"
wsrep_cluster_address="gcomm://<IP1>,<IP2>,...,<IPN>"
wsrep_sst_method=rsync
Here IP1,…,IPN are the addresses of all nodes in the Galera cluster. Distribute this file to all nodes.
On node1:
On node2:
On nodeN:
On node1:
On node2:
On nodeN:
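Assuming MariaDB Galera Cluster 5.5 with SysV init scripts, the usual pattern is to bootstrap the cluster on the first node and then start the remaining nodes normally. The following commands are an assumed example, not taken from the original guide:

```
node1# /etc/init.d/mysql bootstrap    # bootstrap the new cluster
node2# /etc/init.d/mysql start        # join the running cluster
nodeN# /etc/init.d/mysql start
```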
127.6.2. Installing MariaDB Galera Cluster on RHEL
There is an installation guide here. The MariaDB Galera Cluster installation and configuration steps are
summarized below.
To add the MariaDB repository create the file /etc/yum.repos.d/mariadb.repo with the following content:
mariadb.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
You can download and install 'socat' here in case of the following error:
To create an initial MariaDB configuration, execute these commands and follow the instructions:
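On CentOS/RHEL with the MariaDB 5.5 packages, this typically means starting the service and running the interactive hardening script. This is an assumed example of the usual commands:

```
# service mysql start
# mysql_secure_installation
```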
On each cluster node, edit /etc/my.cnf.d/server.cnf and make sure to add the following content:
server.cnf
[mysqld]
pid-file = /var/lib/mysql/mysqld.pid
port = 3306
[mariadb]
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://IP2,...,IPN
wsrep_cluster_name='cluster1'
wsrep_node_address='IP1'
wsrep_node_name='db1'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password
# /etc/init.d/mysql bootstrap
Bootstrapping the cluster.. Starting MySQL.... SUCCESS!
SELinux may block MariaDB from binding on the cluster port, in which case an error will be printed in the
MariaDB error log. This can be solved by running setenforce 0 and setting SELINUX=permissive in
/etc/selinux/config. Then repeat the above installation steps on each node.
NOTE: If you are installing NXLog Manager in clustered mode, this only needs to be executed once for the DB
cluster, i.e. only on the first node.
If a root password is set for the MySQL/MariaDB database, edit /opt/nxlog-manager/db_init/my.cnf and
provide the password:
my.cnf
[client]
password=
Execute the database initialization script (only once for the Galera cluster):
# cd /opt/nxlog-manager/db_init
# ./dbinit.sh
To ensure that the MySQL/MariaDB database is started on boot on CentOS/RHEL distributions, execute the
following command:
# chkconfig mysqld on
or
# chkconfig mariadb on
The size of the maximum packet allowed by MySQL/MariaDB can be raised by adding the following to the global
configuration options, typically /etc/my.cnf or /etc/mysql/my.cnf. Raising the size of the maximum allowed
packet will eliminate any max_allowed_packet exceeded error messages from the log files.
my.cnf
[mysqld]
max_allowed_packet = 256M
127.8. Starting NXLog Manager
1. Start NXLog Manager with the following command:
◦ Starting NXLog Manager on Debian Wheezy or RHEL6/CentOS6
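On SysV-based systems such as Debian Wheezy or RHEL 6/CentOS 6, this is typically done with the service command. The following is an assumed example:

```
# service nxlog-manager start
```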
2. Connect to the web interface. Launch a web browser and navigate to http://x.x.x.x:9090/nxlog-manager in
order to make sure the start was successful.
NOTE: Check the logs under /opt/nxlog-manager/logs if you are having trouble accessing the web interface.
NOTE: Running NXLog Manager directly can provide additional information if the NXLog Manager service fails to
start. Run cd /opt/nxlog-manager/bin/, then ./jetty.sh.
# yum install dialog apr perl perl-DBI perl-JSON openssl pcre zlib expat libcap libdbi
Simply click Next, accept the license agreement, then finish the installation.
During the first login of the admin user, NXLog Manager generates the encryption key. The key is shared among
all administrative accounts with ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE roles within the same
application session.
The application session is an instance of the started service, created anew at each start of NXLog Manager.
The encryption key encrypts the private keys of user accounts. The application session cannot function
without the key, which is why it should always be available in the system database and/or in the application
session.
After each administrator login, this key is decrypted and stored in the application session.
For each new administrative user, as well as the admin user, the key is copied during the first login,
encrypted with the user’s password, and stored in the NXAuthSettings table. After a user changes their
password, the key is re-encrypted with the new password. Additional details about the encryption of
certificates are provided in the Certificates Encryption section.
WARNING: If required, the encryption of private keys can be disabled; see the content about the Don’t
encrypt agent manager’s private key checkbox in the Agent Manager Configuration section. If this checkbox is
unchecked, encryption is applied, and the suggestions from the Best Practices for Managing Encryption Keys
section below should be followed.
• The NXLog Manager instance should always have at least one account with the assigned
ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE roles. Otherwise it does not function.
• It is recommended to keep the default admin account in the system. Instead of deleting, it can be kept in a
disabled state and used only in specific situations. This account can provide the application session with the
encryption key after NXLog Manager is restarted.
If keeping the admin account is not possible, another account with the ROLE_ADMINISTRATOR and/or
ROLE_CERTIFICATE roles should be created and logged into the same application session with the admin.
This action will share the encryption key from the admin to the new account and makes it available for all
future accounts. After this action is taken, the admin account can be deleted because the new administrative
account can now log in and share the encryption key within the application session.
• It is strongly recommended to have a backup for the NXLog Manager database while taking any actions with
administrative accounts.
• After each restart of NXLog Manager, at least one administrative account with the ROLE_ADMINISTRATOR
and/or ROLE_CERTIFICATE roles should always be available for login, decryption of the encryption key, and
sharing it within the application session. If the only administrative account is deleted or unassigned from its
roles, the decryption key is also deleted from the application session and the database.
WARNING This situation immediately leads to an unrecoverable loss of system and data control.
• Accounts with the ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE roles should be assigned only to trusted
system administrators. Other users should employ other roles which are available in NXLog Manager.
• Each new account with the ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE roles should share the same
application session with the other administrative accounts. There is no point in logging into another session
if it has no encryption key from a logged user.
• The same encryption key should be available for all administrative accounts on the running instance of
NXLog Manager.
WARNING: In case all administrative accounts are deleted, the decryption key is also destroyed and NXLog
Manager terminates. This immediately leads to a complete data and system loss.
To set how long the NXLog Manager UI waits before requiring user re-authentication, find the
screenLockIdleTimeB block in conf/jetty-env.xml and then set screenLockIdleTime to the desired value.
To control the active session length, find the applicationSettingsB block and then set sessionTimeout to the
desired value. Note that sessionTimeout must be larger than screenLockIdleTime for screen lock to work.
Values are in minutes.
The following example shows the directives in context. Note that other directives have been omitted from the
example to aid readability.
jetty-env.xml
<New id="applicationSettingsB" class="org.eclipse.jetty.plus.jndi.Resource">
<Arg/>
<Arg>bean/applicationSettingsBean</Arg>
<Arg>
<New class="com.nxsec.log4ensics.data.bean.common.ApplicationSettingsBean">
<!--Session Timeout set to 15 minutes-->
<Set name="sessionTimeout">15</Set>
</New>
</Arg>
</New>
<New id="screenLockIdleTimeB" class="org.eclipse.jetty.plus.jndi.Resource">
<Arg/>
<Arg>bean/screenLockIdleTimeBean</Arg>
<Arg>
<New class="com.nxsec.log4ensics.data.bean.common.ScreenLockIdleTimeBean">
<!--Screen Lock Idle Time set to 10 minutes-->
<Set name="screenLockIdleTime">10</Set>
</New>
</Arg>
</New>
Fill in the form and then click Create.
The next dialog window will ask to create a certificate for the Agent manager.
The default values should be sufficient for most users; click Finish.
The initial settings can be changed any time later under the menu items Admin > Settings > Agent Manager
and Admin > Settings > Certificates.
If you have already configured the Agent manager with the Wizard as described in the previous
NOTE
section then you will not need to modify anything here. Just make sure your settings are correct.
Select whether you would like the agents or the agent manager to initiate the connection. This can be useful
when special firewall and zone rules apply. Make sure that the agent manager certificate is properly set. Click
Save & Restart to apply settings.
The Don’t encrypt agent manager’s private key checkbox disables use of the encryption key when selected.
For more information, see the NXLog Manager Configuration section.
127.10.5. Connecting Agents
To ensure that NXLog agents can only be controlled by NXLog Manager, the agents are managed over a secure,
trusted SSL connection, and each NXLog agent needs its own private key and certificate.
The installation steps for automated agent deployment consist of the following:
To export the CA certificate, navigate to Admin > Certificates and select the CA with the checkbox as shown in
the screenshot below.
Click Export. The CA certificate should be exported using the 'Certificate in PEM format' option. Save the file as
agent-ca.pem.
The default installation of NXLog will create a file that is to be updated by NXLog Manager. On Windows systems
this is C:\Program Files (x86)\nxlog\conf\log4ensics.conf, on GNU/Linux systems this configuration file
is /opt/nxlog/var/lib/nxlog/log4ensics.conf. When doing an automated deployment, this file should be
replaced with the following default configuration.
log4ensics.conf
1 # Please set the following values to suit your environment and make
2 # sure agent-ca.pem is copied to %CERTDIR% with the proper ownership
3
4 define NXLOG_MANAGER_ADDRESS X.X.X.X
5 define NXLOG_MANAGER_PORT 4041
6
7 LogLevel INFO
8 LogFile %MYLOGFILE%
9
10 <Extension agent_management>
11 Module xm_soapadmin
12 Connect %NXLOG_MANAGER_ADDRESS%
13 Port %NXLOG_MANAGER_PORT%
14 SocketType SSL
15 CAFile %CERTDIR%/agent-ca.pem
16 # CertFile %CERTDIR%/agent-cert.pem
17 # CertKeyFile %CERTDIR%/agent-key.pem
18 AllowUntrusted TRUE
19 RequireCert FALSE
20 <ACL conf>
21 Directory %CONFDIR%
22 AllowRead TRUE
23 AllowWrite TRUE
24 </ACL>
25 <ACL cert>
26 Directory %CERTDIR%
27 AllowRead TRUE
28 AllowWrite TRUE
29 </ACL>
30 </Extension>
Please make sure to replace X.X.X.X with the proper IP address of the NXLog Manager instance that the NXLog
agent needs to be connected to.
The CA certificate file agent-ca.pem must be also copied to the proper location as referenced in the above
configuration which is normally under C:\Program Files (x86)\nxlog\cert\ on Windows systems and under
/opt/nxlog/var/lib/nxlog/cert/ on GNU/Linux systems.
NOTE: When the configuration and certificate files are updated remotely, NXLog must have permission to
overwrite these files when it is running as a regular (i.e. nxlog) user. Please make sure that the ownership
is correct:
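On GNU/Linux systems with the default installation paths, ownership can typically be restored with a command along these lines (an assumed example):

```
# chown -R nxlog:nxlog /opt/nxlog/var/lib/nxlog
```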
Now start the NXLog service. The nxlog.log file should contain the following if the NXLog agent has successfully
connected.
nxlog.log
2014-10-24 17:24:46 WARNING no functional input modules!↵
2014-10-24 17:24:46 WARNING no routes defined!↵
2014-10-24 17:24:46 INFO nxlog-2.8.1281 started↵
2014-10-24 17:24:46 INFO connecting to agent manager at X.X.X.X:4041↵
2014-10-24 17:24:46 INFO successfully connected to agent manager at X.X.X.X:4041 in SSL mode↵
Click the AGENTS menu to see the list of agents. You should see the newly connected agent with an UNTRUSTED
(yellow) status. If you don’t see the agent there, check the logs for error diagnostics.
The name of the untrusted agent should be the reverse DNS of its IP address.
In order to establish a mutually trusted connection between the NXLog agent and NXLog Manager, a certificate
and private key pair needs to be issued and transferred to the agent. Select the untrusted agent in the list and
click Issue certificate. When Update connected agents is enabled, the newly issued certificate and the
configuration will be pushed to the agent. The agent will need to reload the configuration in order to reconnect
with the certificate, select the agent and click Reload.
After the agent has successfully reconnected and the agent list is refreshed, the agent status should be 'online', indicated by a green sphere.
At this stage the NXLog agent should be operational and can now be managed and configured from the NXLog
Manager interface.
On GNU/Linux systems, extract the agents-config.zip and put the files under /opt/nxlog/var/lib/nxlog.
Make sure the files have the proper ownership:
On Windows systems, place the certificates in C:\Program Files (x86)\nxlog\cert. After restarting the
NXLog service you should now see your agent as Online under AGENTS.
1. To configure the log collection, click on your agent in the agent list and then select the Configure tab.
2. Click 'Routes' and add a route. Add a TCP input module for testing purposes:
Name: tcptest
Module: TCP Input (im_tcp)
Listen On: 0.0.0.0
Port: 1514
Input Format: line based
3. Add an output module. For test purposes we will use a null output that discards the data.
Name: out
Module: Null Output (om_null)
4. Now click Update config on the Info tab, then click Reload.
After the agent is restarted the newly configured modules are visible on the Modules tab.
6. On the Modules tab check all modules and click Refresh status. The count under the Received column is 1
(or more).
log4j.xml
<appender name="internalAppender" class="org.apache.log4j.DailyRollingFileAppender">
<param name="File" value="${logs.root}.log"/>
<param name="Threshold" value="INFO"/>
<param name="DatePattern" value="'.'yyyy-MM"/>
<layout class="co.nxlog.manager.common.logging.ContextPatternLayout">
<param name="ConversionPattern" value="%d %p $host $user $component [%c] - %m %n"/>
</layout>
</appender>
'.'yyyy-ww Rollover at the first day of each week. The first day of
the week depends on the locale.
• Change WARN level to DEBUG in the loggers you require,
log4j.xml
<root>
<priority value="DEBUG"/>
<appender-ref ref="internalAppender"/>
<appender-ref ref="errorAppender"/>
<appender-ref ref="debugAppender"/>
</root>
These files are suitable for testing purposes; however, they are identical across all NXLog Manager installations and should be replaced with valid versions.
The examples below explain how to obtain certificates and private keys.
Example 707. Obtaining a CA Certificate for Versions 5.x
For NXLog Manager versions 5.x, a private key and certificate signing request (CSR) can be generated on a
server with the following command:
$ openssl req -out request.csr -new -newkey rsa:2048 -nodes -keyout privatekey.key
The command will prompt for the subject information to be included in the CSR.
The request.csr file can be submitted to a corporate CA and a proper certificate can then be obtained.
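The same CSR can also be produced non-interactively and sanity-checked before submission; the -subj values below are examples:

```shell
# Generate the key and CSR without prompts (subject values are placeholders),
# then verify the request's signature before sending it to the CA.
openssl req -out request.csr -new -newkey rsa:2048 -nodes -keyout privatekey.key \
    -subj "/C=US/O=Example Corp/CN=manager.example.com"
openssl req -in request.csr -noout -verify
```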
After the certificate is obtained, the existing jetty9-cert.pem and jetty9-key.pem files in the NXLog
Manager directory need to be replaced with the new certificate. For more information, see the NXLog
Manager SSL Keys for Versions 5.x section.
For NXLog Manager versions 6.x, a package with a private key and self-signed certificate can be generated
using the following command:
keytool -genkeypair -keyalg RSA -keystore keystore.p12 -validity 365 -keysize 3072
This command will create a keystore.p12 package with the 3072-bit RSA private key and self-signed
certificate with the 365-day validity. The password from the package will be used later in NXLog Manager
settings. See the NXLog Manager SSL keys for Versions 6.x section.
Using the created package, the certificate signing request can be generated with the following command:
This command will create a separate request.csr file which can be submitted to a corporate CA and a
proper certificate can then be obtained.
After the certificate.cer file is obtained, it can be imported into the existing keystore.p12 file with the
following command:
The existing keystore.p12 package in the NXLog Manager directory can now be replaced with the new
one. For more information about using the password from the package, see the NXLog Manager SSL Keys
for Versions 6.x section.
Using a self-signed certificate is insecure. Nevertheless, such a certificate can be generated and utilized for
HTTPS connections as well.
Example 709. Generating a Self-Signed Certificate for Versions 5.x and 6.x
For NXLog Manager versions 5.x, the command below will generate the key.pem private key file and the
cert.pem certificate with 365-day validity.
openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out cert.pem
The existing jetty9-cert.pem and jetty9-key.pem files in the NXLog Manager directory can now be
replaced with the created certificate and private key files. For more information, see the NXLog Manager
SSL Keys for Versions 5.x section.
For NXLog Manager versions 6.x, the command below will generate the keystore.p12 package file with a
3072-bit RSA private key and self-signed certificate with a 365-day validity:
keytool -genkeypair -keyalg RSA -keystore keystore.p12 -validity 365 -keysize 3072
The password from the package will be used later in NXLog Manager settings.
The existing keystore.p12 package in the NXLog Manager directory can now be replaced with the new
one. For more information about using the password from the package, see the NXLog Manager SSL Keys
for Versions 6.x section.
The sections below explain how to enable HTTPS after a proper certificate and private key have been obtained.
jetty-config.xml
<New id="sslContextFactory"
class="com.nxsec.log4ensics.web.common.server.util.ssl.SslContextFactory">
<Set name="ServerCertificate"><Property name="jetty.home" default=".." />/conf/jetty9-
cert.pem</Set>
<Set name="ServerKey"><Property name="jetty.home" default=".." />/conf/jetty9-key.pem</Set>
<Set name="ServerKeyPassword"></Set>
<Set name="EndpointIdentificationAlgorithm"></Set>
<Set name="NeedClientAuth"><Property name="jetty.ssl.needClientAuth" default="false"/></Set>
<Set name="WantClientAuth"><Property name="jetty.ssl.wantClientAuth" default="false"/></Set>
<Set name="ExcludeCipherSuites">
<Array type="String">
<Item>SSL_RSA_WITH_DES_CBC_SHA</Item>
<Item>SSL_DHE_RSA_WITH_DES_CBC_SHA</Item>
<Item>SSL_DHE_DSS_WITH_DES_CBC_SHA</Item>
<Item>SSL_RSA_EXPORT_WITH_RC4_40_MD5</Item>
<Item>SSL_RSA_EXPORT_WITH_DES40_CBC_SHA</Item>
<Item>SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA</Item>
<Item>SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA</Item>
<Item>SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA</Item>
<Item>SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA</Item>
<Item>TLS_DHE_RSA_WITH_AES_256_CBC_SHA256</Item>
<Item>TLS_DHE_DSS_WITH_AES_256_CBC_SHA256</Item>
<Item>TLS_DHE_RSA_WITH_AES_256_CBC_SHA</Item>
<Item>TLS_DHE_DSS_WITH_AES_256_CBC_SHA</Item>
<Item>TLS_DHE_RSA_WITH_AES_128_CBC_SHA256</Item>
<Item>TLS_DHE_DSS_WITH_AES_128_CBC_SHA256</Item>
<Item>TLS_DHE_RSA_WITH_AES_128_CBC_SHA</Item>
<Item>TLS_DHE_DSS_WITH_AES_128_CBC_SHA</Item>
</Array>
</Set>
</New>
jetty-config.xml
<New id="sslHttpConfig" class="org.eclipse.jetty.server.HttpConfiguration">
<Arg><Ref refid="httpConfig"/></Arg>
<Call name="addCustomizer">
<Arg><New class="org.eclipse.jetty.server.SecureRequestCustomizer"/></Arg>
</Call>
</New>
jetty-config.xml
<Call name="addConnector">
<Arg>
<New id="sslConnector" class="org.eclipse.jetty.server.ServerConnector">
<Arg name="server"><Ref refid="Server" /></Arg>
<Arg name="factories">
<Array type="org.eclipse.jetty.server.ConnectionFactory">
<Item>
<New class="org.eclipse.jetty.server.SslConnectionFactory">
<Arg name="next">http/1.1</Arg>
<Arg name="sslContextFactory"><Ref refid="sslContextFactory"/></Arg>
</New>
</Item>
<Item>
<New class="org.eclipse.jetty.server.HttpConnectionFactory">
<Arg name="config"><Ref refid="sslHttpConfig" /></Arg>
</New>
</Item>
</Array>
</Arg>
start.ini
#--module=ssl
start.ini
#--module=https
127.11.4. NXLog Manager SSL Keys for Versions 5.x
This version of NXLog Manager is bundled with a default key pair in PEM format to be used for the secure
connection in <NXLogManager_HOME>/conf/, namely jetty8-cert.pem and jetty8-key.pem. These can be
customized in jetty-config.xml by editing the ServerCertificate and ServerKey properties of
sslContextFactory. Provide the ServerKeyPassword if the private key is password protected.
Now NXLog Manager can be restarted with HTTPS enabled on the default port 9443. The port number can also
be customized in jetty-config.xml.
where -xxx signifies the version of Jetty installed in NXLog Manager. The following output will be generated:
blah
OBF:20771x1b206z
MD5:639bae9ac6b3e1a84cebb7b403297b79
CRYPT:me/ks90E221EY
The first line is the plain text password. Copy and paste one of the secured versions of your choice, including the prefix, into the jetty.sslContext.keyStorePassword property in start.ini. Creating and managing a keystore is out of scope for this document.
Now NXLog Manager can be restarted with HTTPS enabled on the default port 9443. The port number can also
be customized in start.ini by editing jetty.ssl.port.
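Taken together, the relevant start.ini entries might look like this (the obfuscated password is the sample value from above; substitute your own):

```
--module=ssl
--module=https
jetty.ssl.port=9443
jetty.sslContext.keyStorePassword=OBF:20771x1b206z
```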
Procedure
1. Create the directory that will hold the changes for the nxlog-manager service.
[Service]
LimitNOFILE=10000
In this configuration, 10000 represents the number of files that NXLog Manager is allowed to open. Change this number to suit the requirements of your environment.
4. To tell NXLog Manager about the changes, the nxlog-manager service needs to be restarted.
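The drop-in described above can be sketched as follows; the directory follows the standard systemd <unit>.service.d convention, with a scratch directory substituted here so the steps can be tried without root:

```shell
# Real path: /etc/systemd/system/nxlog-manager.service.d
DROPIN_DIR=${DROPIN_DIR:-$(mktemp -d)/nxlog-manager.service.d}
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/limits.conf" <<'EOF'
[Service]
LimitNOFILE=10000
EOF
# On the real host, apply the change:
#   systemctl daemon-reload && systemctl restart nxlog-manager
```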
NOTE: It is good practice to create a backup before upgrading. This enables the process to be rolled back if something goes wrong. Use mysqldump or phpMyAdmin to back up MySQL/MariaDB.
After stopping the NXLog Manager service, upgrade NXLog Manager but do not start the service. Navigate to
/opt/nxlog-manager/db_init/upgrade/ and execute the command:
After stopping the NXLog Manager service, upgrade NXLog Manager but do not start the service. The upgraded NXLog Manager first requires a database initialization; do not start the NXLog Manager service as part of the initialization. After initializing the database, navigate to /opt/nxlog-manager/db_init/upgrade/ and execute the command:
The command will copy all the relevant information from the earlier version of NXLog Manager database to the
new database without altering the old database. The upgraded version of NXLog Manager service can now be
started.
After stopping the NXLog Manager service, upgrade the NXLog Manager packages through dpkg/apt or rpm/yum
and then start the service.
Upgrading NXLog Manager migrates existing settings to the new version. Nonetheless, it is highly recommended
to create a database backup before upgrading.
The following steps should be performed to upgrade NXLog Manager as a Docker application.
1. Docker containers should be stopped with the following command in the NXLog Manager directory:
$ docker-compose down
2. The archive with the new version of NXLog Manager should be unpacked with the following command:
3. The .deb package from the unpacked archive should be put to the NXLog Manager directory and the existing
package file should be deleted.
4. Docker images and containers should be built and started with the following command in the NXLog
Manager directory:
$ docker-compose up --build -d
nxlog-manager.err
2016-11-21 16:14:31,015 ERROR manager-host unknown nxlog-manager [net.sf.ehcache.Cache] - Unable to
set manager-host. This prevents creation of a GUID. Cause was:↵
java.net.UnknownHostException: nxlogmgr.domain.local↵
To set the hostname to myname, add a line to the /etc/hosts file containing the host IP address along with the FQDN and any name aliases.
/etc/hosts
172.16.183.1 myname.example.com myname
Any of the locally bound IP addresses may be used as the Manager hostname.
Configuring the /etc/hosts file works for both Debian and RHEL versions of Linux.
By default, a Docker container inherits DNS settings from the Docker daemon, including the contents of the
/etc/resolv.conf and /etc/hosts files.
These settings can be overridden on a per-container basis with the following flags to docker commands:
--dns-opt
A key-value pair representing a DNS option and its value. See the operating system's documentation for the resolv.conf file for valid options.
The most common way is to edit the /etc/resolv.conf file. Usually up to three nameservers can be set up
using the nameserver keyword, followed by the IP address of the nameserver.
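For example, a minimal resolv.conf with two nameservers (addresses are illustrative):

```
nameserver 192.0.2.53
nameserver 198.51.100.53
```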
To test whether your configuration functions correctly, use the host or dig programs to perform both a DNS
lookup and a reverse DNS lookup (by querying with an IP address). Make sure that participating hosts on your
NXLog collection system are resolved correctly.
For more information, see Manually configuring the /etc/resolv.conf file in the Red Hat documentation for RHEL-related distributions, or the Defining the (DNS) Nameservers section in the Debian Wiki for Debian-based distributions.
There is also a comprehensive article on the Anatomy of a Linux DNS Lookup on the zwischenzugs website that
discusses the different methods and tools that can be used.
Chapter 128. Dashboard and Menu
128.1. Logging in
After installing and starting NXLog Manager, open a browser and go to the URL of the application
(http://localhost:9090, if the default values were used during installation). The login screen is displayed:
NXLog Manager ships with one built-in administrator account. The User ID is admin and the password is
nxlog123. This default password should be changed as soon as possible.
HOME
Displays the dashboard.
PATTERNS
CREATE PATTERN
Create a new pattern.
LIST PATTERNS
Display a list of all available patterns.
SEARCH PATTERN
Display a search page for patterns.
CREATE GROUP
Create a new pattern group.
LIST GROUPS
Display a list of all available pattern groups.
IMPORT PATTERN
Open the dialog to import a pattern database file.
CREATE FIELD
Create a new field.
LIST FIELDS
Display a list of all available fields.
CORRELATION
Open the correlation rules and rulesets management page.
LIST RULESETS
List available rulesets.
IMPORT RULESET
Import a ruleset file. See Exporting and Importing Correlation Rules.
AGENTS
Display the nxlog agents management page.
ADMIN
USERS
Load the user management interface.
ROLES
Load the roles management interface.
CERTIFICATES
Display a list of certificates available in the built-in PKI.
SETTINGS
Display system-wide settings and personal preferences page.
LOGOUT
Log out of the NXLog Manager web application and terminate your session.
NOTE: Menu items are shown or hidden based on the current user's configured roles. See Roles for more information about access control in NXLog Manager.
The following chapters cover each of these components, which can be accessed from the menu.
128.3. Dashboard
On the first login, the following screen appears.
The dashboard can be customized and displays content accessible to the logged in user. After clicking the Add
button, an empty dashboard item will appear:
The following item types can be selected with the combo box:
Agent list
The number of agents is displayed for each category:
• Online
• Offline
• Error
• Unmanaged
See the Agents chapter for more information about agent statuses.
Jobs summary
Displays a summary of scheduled jobs.
Certificate summary
Displays a summary of certificates, grouped by the following categories:
• Expired
• Expiring in the next 10 days
• Revoked
• Valid
Agent chart
Displays one of the available agent charts for a selected agent:
After the required parameters are filled in, click Save to add the item to the dashboard. Click Cancel if you wish
to discard the dashboard item.
The header bar of the dashboard item can be clicked to drag and move the dashboard item around.
The following items are on the header bar, from left to right:
Up arrow (▲)
Click to maximize the dashboard item.
Edit
Click to edit the dashboard item.
✕
Click to remove the dashboard item.
Title
The title provided in the last edit of the dashboard item.
Chapter 129. Fields
Log messages commonly contain important data such as user names, IP addresses, application names, and
more. An event is represented as a list of key-value pairs, or "fields". The name of the field is the key, and the field
data is the value. This metadata is sometimes referred to as event properties or message tags.
NXLog Manager comes with a set of predefined fields which are suitable for typical cases. These fields can also
be extended, and new fields created, to suit custom requirements. Fields in NXLog Manager are typed (the kind of
data permitted in a key value is pre-defined), which allows complex operations and efficient storage of event log
data.
The field list is kept in the configuration database. All of the major components used throughout NXLog Manager
depend on fields, including Patterns, Correlation and Agent configuration.
To list the available fields, click on the LIST FIELDS menu item under the PATTERN menu. A list similar to the
following should appear:
The field properties will be explained shortly as we look at creating and modifying fields. To do this, click on
Create or Edit under the field list.
The field properties are as follows:
Name
The name of the field will be used to refer to the field from various places in NXLog Manager and NXLog.
Type
The following types can be chosen for a field:
• STRING
• INTEGER
• BINARY
• DATETIME
• IPV4ADDR
• IPV6ADDR
• IPADDR
• BOOLEAN
NOTE: Starting with version 6.0, NXLog Manager provides the new IPADDR type, which covers both IPv4 and IPv6 addresses. The IPV4ADDR and IPV6ADDR types are still supported for backward compatibility.
Persist
If this option is not enabled, the field value is available to the NXLog agent only for correlation and pattern
matching. Fields should be persisted if the information is needed in additional functions.
Lookup
This special property only takes effect when the field is persistent and is a string type. The lookup property should be enabled for fields whose values are highly repetitive, such as user names, enumerations, and host names. This enables the storage engine to map each value to an integer, which yields significant compression and a performance boost.
Description
The user can store additional information about the field in the description. It is not used by NXLog Manager.
Chapter 130. Patterns
Patterns provide a way to extract important information (e.g. user names, IP addresses, URLs, etc.) from free-
form log messages.
Many sources, Syslog for example, generate log messages in an unstructured but human readable format,
typically a short sentence or sentence fragment. Consider the following message generated by the SSH server
when an authentication failure occurs:
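A typical SSH server message of this kind looks like the following (the username john matches the text below; host and port are illustrative):

```
sshd[1234]: Failed password for john from 192.0.2.15 port 51283 ssh2
```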
To create a report about authentication failures, the username (john in the above example) needs to be extracted. Patterns support simple string matching and also allow the use of regular expressions for this purpose. Moreover, patterns can leverage regular expressions in ways that go beyond simple string extraction.
• The matching executed against the field(s) can be an exact match or a regular expression.
• Patterns contain match criteria to be executed against one or more fields, matching the pattern only if all
fields match. This technique allows patterns to be used with structured logs as well.
• Patterns can extract data from strings using captured substrings and store these in separate fields.
• Patterns can modify the log by setting additional fields. This is useful for message classification.
• Patterns can contain test cases for validation.
• Patterns can be collected into Pattern Groups, greatly simplifying their application to specific sources.
Patterns are used by the NXLog agent. This makes it possible to distribute pattern matching tasks to the agents,
and receive pre-processed, ready-to-store logs instead of parsing all logs at the central log server—which can
yield a significant reduction in CPU load on the server.
For more information about the patterns used by the NXLog agent, please refer to the pm_pattern module
documentation in the NXLog Reference Manual.
Pattern groups also serve an optimization purpose. They can have optional match criteria: one or more fields specified using either an EXACT or a REGEXP match. The log message is first checked against these criteria; only if it matches are the patterns belonging to the group matched against the log message.
After form submission, the pattern group can be viewed:
In the above example the ssh patterns will only be checked against the log if the field SourceName matches the
string sshd. The SourceName field must be extracted from the Syslog message with a syslog parser prior to
running the logs through the pattern matcher.
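A sketch of that preprocessing in nxlog.conf, where parse_syslog() populates SourceName and the other header fields before the logs reach the pattern matcher (instance names and port are illustrative):

```
<Extension syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    Exec    parse_syslog();
</Input>
```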
Here, enter the basic pattern information. Make sure the Pattern Group is set.
Next, define at least one field and value to match. For example, a message field:
This can be made more generic as needed so that, for example, the pattern can extract the user name and the
destination IP address from the message:
The non-static parts of the pattern are replaced with regular expression constructs, (\S+) in the above example. Captured substrings are stored in the selected fields; in the example above, AccountName and DestinationIPv4Address store the values extracted with (\S+).
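Such a generalized match value might be sketched as follows, with the first capture stored in AccountName and the second in DestinationIPv4Address (the message wording is illustrative):

```
Failed password for (\S+) from (\S+) port \d+ ssh2
```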
If necessary, add more than one field to execute the matching operation against. The match type can be either an EXACT or a REGEXP match. If this is toggled to REGEXP, NXLog Manager will offer to escape special characters:
If the regular expression does not start with the caret (^), the regular expression engine will try to find an
occurrence anywhere in the subject string. This is a costly operation. Typically, the regular expression is intended
to match the start of the string, and for this reason the interface shows a hint:
NOTE: The regular expressions are compiled and executed by the NXLog engine using the PCRE library, so they must be PCRE-compatible in order to work.
This built-in testing interface is extremely useful for verifying the functionality of pattern definitions, without the
costly overhead of loading the pattern into the agent and running it against a set of logs.
After clicking the Calculate Fields button, the captured field values appear. Field values are populated with the
content of the log message used when the pattern was created.
NOTE: If field values do not appear, or the values are unexpected, closely review the regular expression(s) in use. The syntax of regular expressions is very compact, and oversights are not uncommon.
Event taxonomy fields allow events to be handled in a uniform manner, regardless of their source.
NXLog Manager comes with five special fields for this purpose. Their names all begin with Taxonomy. A dictionary
of permissible values for these fields is provided.
These fields are optional; however, using them is strongly recommended. Custom fields, with their own permissible values, can also be created.
If there is no need to classify the event with a Taxonomy field, click Delete to remove it.
130.4. Searching Patterns
The pattern list has a simple search input box in the upper right corner. This can search for entries in the list and
will show rows which contain the specified keyword.
There is a more powerful search interface which allows searching in any of the patterns' properties (fields, test
cases, etc). Click on the SEARCH PATTERN menu item under the PATTERN menu.
130.5. Exporting and Importing Patterns
NXLog Manager can export and import patterns in an XML format. This is the same format used by the NXLog
agent. To export a pattern or a pattern group, check its checkbox in the list and click Export. Import a pattern
database file by clicking on the IMPORT PATTERN menu item or the Import button under the pattern list.
To use the patterns in an NXLog agent, add a pm_pattern processor module and select the appropriate pattern
groups:
The patterns will be pushed to the NXLog agent after clicking Update config and they will take effect after a
restart. See the Agents chapter for more information about agent configuration details.
NOTE: Some patterns work with a set of fields, which in some cases requires preprocessing (e.g. syslog parsing). Instead of writing a regular expression to match a full Syslog line including the header (priority, timestamp, hostname, etc.), it is much more efficient to write the regular expression to match the Message field (instead of the raw_event field) and have a syslog parser store the header information in separate fields before the pattern matching. Such patterns remain usable when the same message is collected over a different protocol.
Chapter 131. Correlation
Event correlation is an important concept in log analysis. Each log message contains information about an event
which occurred at some point. There are cases when the occurrence (or absence) of one or more events must be
treated specially.
A trivial example of this is detecting three failed login attempts on a system, after which the user will likely be locked out. If the log analysis system can detect such a situation, much can be automated: there is no need to wait for the user to come asking for a new password.
Event correlation in NXLog Manager is architected similarly to the Pattern system. It is performed in real-time by
the NXLog agents. This way it is possible to do local event correlation on the client side (at the agent). This will
not only reduce load on the central server but the system can send alerts over another channel (e.g. SMS) even if
the network is down and the log messages would not reach the central log server.
For more information about the correlation capabilities of NXLog Manager, please consult the NXLog Reference
Manual and see the documentation about the pm_evcorr module.
Clicking the name of a correlation ruleset will show a list of correlation rules within the ruleset:
NOTE: The order of the rules within the ruleset matters, because NXLog's pm_evcorr module evaluates them in the order they appear. To change the order of the rules, use the Up and Down buttons.
Each correlation rule has mandatory Name, Type, and Action parameters, plus one or more type-specific parameters where the conditions can be specified. The following correlation rule types are available:
• Simple
• Suppressed
• Pair
• Absence
• Thresholded
Please consult the NXLog Reference Manual and see the documentation about these rule types provided by the
pm_evcorr module. There are two modes available to specify a condition.
Matched pattern is
This will generate a simple test to check whether the specified pattern matched. The generated NXLog config
will contain a similar snippet:
if $PatternID == 42 {\
ACTION \
}
NOTE: For this to work, the pm_pattern module must be configured and placed before the pm_evcorr module in the route. The pm_pattern module is responsible for setting the PatternID field.
Expert
This field expects a statement (a boolean condition) which evaluates to TRUE or FALSE. The above condition, expressed in Expert form, would look like the following:
$PatternID == 42
Using the language constructs provided by NXLog, it is possible to specify more complex conditions here, for
example:
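For instance, a compound condition combining the pattern match with a field test might look like this (the field and value are illustrative):

```
$PatternID == 42 and $AccountName != "root"
```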
In this example, a correlation rule is created which will detect SSH brute force attempts. The rule defines
this attempt as 5 login failures within a 20 second interval. In this example, only an internal warning
message is generated, but it is possible to trigger any other action such as executing an external script to
block the IP or send an email alert.
This correlation rule depends on a pattern which matches the SSH authentication failure events. See the
Creating patterns section on how to do this. Once the pattern is available in the database, the correlation
rule should be configured as shown on the following screenshot:
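In the generated NXLog configuration, such a rule corresponds roughly to a Thresholded block of the pm_evcorr module; the PatternID value and warning text below are illustrative:

```
<Thresholded>
    Condition  $PatternID == 42
    Threshold  5
    Interval   20
    Exec       log_warning("SSH brute force attempt: " + $AccountName);
</Thresholded>
```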
NOTE: Unlike patterns, correlation rules used by NXLog are not in XML. Correlation rules exported from NXLog Manager cannot be used by NXLog directly, because the NXLog agent uses Apache-style configuration for the rules, which is part of (or included in) the pm_evcorr module configuration block in nxlog.conf.
To use the correlation rules in an NXLog agent, add a pm_evcorr processor module and select the appropriate
correlation ruleset:
It is recommended to use only one ruleset per agent. The correlation rules are pushed to the NXLog agent by
clicking Update config and they take effect after a restart. See the Agents chapter for more information about
agent configuration details.
NOTE: In many cases correlation rules depend on patterns (and the PatternID field). For this reason, a pm_pattern module should be placed in the processor chain before the pm_evcorr module.
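A sketch of the resulting module ordering in nxlog.conf (instance names are illustrative):

```
<Route r>
    Path  in => pattern => evcorr => out
</Route>
```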
Chapter 132. Agents
NXLog agents are used to collect and store event log data. This chapter discusses the GUI configuration and
management frontend provided by NXLog Manager. For more information about the NXLog agent, please refer
to the Agent-Based Collection chapter in the User Guide.
NXLog agent instances can be managed, monitored, and configured remotely over a secure channel. The
management component in NXLog Manager is called the agent manager. There are two operation modes:
• The NXLog agent initiates the connection and connects to the agent manager.
• The agent manager initiates the connection and connects to the NXLog agent.
Mutual X.509 certificate-based authentication is used over a trusted, secure TLS/SSL channel in order to
guarantee that only an authorized NXLog agent can connect to the agent manager. The agent manager queries
the status information of each NXLog agent every 60 seconds.
Each agent instance is provided with a special configuration file coming from the agent manager. The file name is
log4ensics.conf and it is located under the path /opt/nxlog/var/lib/nxlog/ on Linux and C:\Program
Files\nxlog\conf on Windows platforms.
This file contains a BASE64-encoded blob in the header that stores the configuration of the agent. NXLog
Manager can restore the configuration of the agent in case the agent configuration gets lost from the manager’s
database.
This file must not be modified manually or deleted. If the manager cannot read the blob, the following error message is generated:
Status
A coloured sphere shows the agent’s status:
Green - Online
The agent is connected and its latest status response is successful. The agent is functioning normally.
Grey - Offline
The agent is not connected to the agent manager. Check the NXLog agent’s LogFile for error diagnostics.
Red - Error
One or more modules which the agent is configured to use are not running or not configured correctly.
For network output modules, there is likely a connection issue and the module is unable to send. Check
the NXLog agent’s LogFile for error diagnostics.
Yellow - Unmanaged
The agent is configured to be Unmanaged, and it is not possible to administer it remotely.
Yellow - Untrusted
The agent is connected to the manager without its own certificate. An agent must be issued a valid,
unique certificate if it was installed without one.
Yellow - Forged
The agent certificate has a CN (common name) that does not match the reverse DNS of this agent’s IP
address. Certificates must be issued to each agent—they must not be copied or configured from another
agent.
NOTE: Once the agent configuration is updated centrally (by the application) or locally (on the agent side),
changes must be deployed via the Update config command in order to apply the central configuration. If the
configuration has been changed locally, a confirmation will be requested.
Agent name
The agent name is taken from the certificate subject name, so the same name must be used for the agent
name as in the certificate subject. Click the agent name to load the Agent information page.
Template
The identifier of the template assigned to this agent. Agents inherit all template configuration settings. For
more information about templates, see the chapter on Templates.
NOTE: This column is not displayed by default. To enable or disable column visibility, click the round, grey
Configuration button on the top left of the table, then check (or uncheck) the box by the name of the column.
Tags
The tags assigned to this agent. Tags may also be assigned to a template. For more information about
configuring tags for templates, see the Tags section.
NOTE: This column is not displayed by default. To enable or disable column visibility, click the round, grey
Configuration button on the top left of the table, then check (or uncheck) the box by the name of the column.
Version
The NXLog agent version number.
Host
The IP address of the remote host from which the NXLog agent is connected. Not available when the agent is
Offline or Unmanaged.
Started
Shows the time the NXLog agent last started. Not available when the agent is Offline or Unmanaged. This value
is set when the NXLog service is started or restarted, but it is not set when using the Reload button.
Load
The system load as reported by the NXLog agent’s host operating system. If this is not implemented on a
platform (e.g. Microsoft Windows), Unknown will be displayed. A small graph displays the last 10 average
values. This information is not available when the agent is Offline or Unmanaged.
NOTE: This value represents the system load of the host operating system, not of the NXLog agent. Due to other
resource-intensive processes, it can be high even if the NXLog agent is idle.
Mem. usage
The amount of memory used by the NXLog agent. On some platforms, Unknown is shown if the information is
not available. A small graph displays the last 10 average values. This information is not available when the
agent is Offline or Unmanaged.
Received
The sum of log messages received by all input modules since the agent has been started. A small graph
displays the last 10 average values. This information is not available when the agent is Offline or Unmanaged.
Received today
The sum of log messages received by all input modules in the last 24 hours. A small graph displays the last 10
average values. This information is not available when the agent is Offline or Unmanaged.
Processing
Each NXLog agent module has a separate queue. This number shows the sum of messages in all modules'
queues. A small graph displays the last 10 average values. This information is not available when the agent is
Offline or Unmanaged.
Sent
The sum of log messages written or sent by all output modules since the agent has been started. A small
graph displays the last 10 average values. This information is not available when the agent is Offline or
Unmanaged.
NOTE: If there are two output modules writing or sending logs from a single input, the number under Sent will
be double the value under Received.
Sent today
The sum of log messages written or sent by all output modules in the last 24 hours. A small graph displays
the last 10 average values. This information is not available when the agent is Offline or Unmanaged.
NOTE: If there are two output modules writing or sending logs from a single input, the number under Sent will
be double the value under Received.
The information shown in the agent list is refreshed every 60 seconds or when Refresh status is clicked.
On the top left, the Filter agents button and the Show n entries drop-down menu are used to reduce the
number of items displayed.
The agent list can be filtered by three criteria:
• Agent Status
• Agent Name
• Template(s) assigned
Click Apply filter to refresh the agent list with only agents which match the filtering criteria. For example,
selecting ONLINE status will show the following:
When a filter is applied, click Clear filter to discard the applied filter and show all agents.
On the Filter Agents dialog, there is an option to save the current filter in the configuration database as an
Agents View. Click Create View to enter the view name:
The view name must be unique, and not contain any special characters or spaces. Saved views appear as tabs
next to the Agent templates tab. A newly created view is applied to the current list immediately:
At the bottom of the agent list is a row of actions used to manage agents.
NOTE: The NXLog process cannot be stopped or started from the NXLog Manager management interface.
Refresh status
Send a query to the agent to retrieve the latest status information. At least one Online agent must be selected
to use this action.
Start
Start all stopped modules. At least one Online agent must be selected to use this action. The NXLog process
cannot be stopped or started from the NXLog Manager management interface.
NOTE: The xm_soapadmin module responsible for the agent manager connection is always running and is not
affected by this action.
Stop
Stop all modules. At least one Online agent must be selected to use this action. The NXLog process cannot be
stopped or started from the NXLog Manager management interface.
NOTE: The xm_soapadmin module responsible for the agent manager connection is always running and is not
affected by this action.
Export
Export the agent configuration in XML format. When activated on the selected agent, the export dialog
appears, in which the manager allows separate parts of the configuration to be exported. When the export is
finished, the browser downloads the file.
Import
Import an agent configuration, typically one previously exported.
NOTE: There is an option to override the global configuration and define new settings, such as a new manager
address.
When triggered, the browser redirects to the import options (if the global configuration has been overridden,
this section is skipped):
Similar to the Clone agent function, choose the XML file from which to import the new agent configuration.
When this is done, the manager also allows separate parts of the configuration to be imported:
Update config
After agent settings are changed, use this action to push the new configuration to the agent. All configuration
related files, including pattern database files and certificates, will be pushed to the agent. At least one Online
agent must be selected to enable this action.
Reload
Click to stop the agent, shut down all modules, reload the configuration, then reinitialize and start them all
again. This should be used after a new configuration is pushed to the agent in order for the new settings to
take effect. At least one Online agent must be selected to enable this action.
NOTE: This is not a process/service level restart, but rather a reload. The xm_soapadmin module responsible for
the agent manager connection must always be running, so this module is not affected by this action. The NXLog
process cannot be stopped or started from the NXLog Manager management interface.
Configure
Load the Agent configuration page. Exactly one Online agent must be selected to enable this action.
Add
Add a new agent.
NOTE: An agent that successfully connects to the agent manager will appear in the list without a configuration,
even if it did not previously exist in the agent list. It is thus possible to add a new agent by creating a certificate,
deploying the installer, and starting the NXLog service. The new agent entry should appear automatically.
Delete
Delete the agent. There is no confirmation dialog for agent deletion, so use this action with care.
NOTE: The agent will reappear if it has a valid certificate and can successfully authenticate to the agent
manager. Make sure to revoke the certificate and stop the NXLog service before you delete the entry with this
button. If the NXLog service is not stopped and removed, it will continue to execute based on its configuration
settings, including reconnecting to the agent manager.
Clone
Clone the agent. The cloned agent(s) will have all the modules and routes of the original. Exactly one Online
agent must be selected to enable this action.
Download config
Download the agent configuration in a ZIP file to ease local deployment of the agent. The archive contains a
folder for each agent, named after the agent, with all the necessary configuration files and certificates.
View log
View the log of an agent. By default, it is limited to the last 100 KB of the log. Exactly one Online agent must
be selected to enable this action.
Assign template
Assign a template to the selected agent(s). The selected agents' configuration will be replaced with the
configuration of the assigned template.
Issue certificate
Issue a certificate for the selected agents. If the checkbox Update connected agents remains checked, the
manager will issue the Update config command. At least one Online agent (which doesn’t have a certificate
assigned) must be selected to enable this action.
Renew certificate
Renew the certificate for the selected agents. If the checkbox Update connected agents remains checked, the
manager will also issue the Update config command. At least one Online agent must be selected to enable
this action.
The page will show less information if the agent is not connected to the agent manager. The action buttons on
this page function similarly to those on the agent list page, discussed above.
If the agent is Online and some of its modules have variables or statistical counters, they will appear on this page
in a table.
132.1.1.1. Modules
Click the Modules tab to show detailed information about each module as shown in the following image.
This information is only available when the agent is Online. The table contains the following information.
Name
The name of the module instance.
Module
The type of loadable module which was used to create the module instance.
Type
The type of module:
• INPUT
• PROCESSOR
• OUTPUT
Status
The status of the module:
• STOPPED
• RUNNING
• PAUSED
• UNINITIALIZED
NOTE: The module may become PAUSED if it cannot send or forward its output. This is caused by the built-in
flow control and is perfectly normal, unless the module remains in this status for a longer period and the
number of sent messages does not increase. You do not need to start the module when it is PAUSED; it will
resume operations automatically.
Received
The number of log messages received or read.
Processing
The number of log messages in the module’s queue waiting to be processed.
Sent
The number of log messages written or sent by the module.
Dropped
The number of log messages dropped by the module. This is calculated from the values reported under
Received and Sent.
132.1.1.2. Statistics
Click the Statistics tab to display several fully interactive graphs. There is a graph for each of the following
parameters:
Optionally, additional graphs can be added for module variables and statistical counters by clicking the Add
chart button.
Select a module and fill in the name of the variable. Regular expressions are also supported for the name.
Select the graph’s interval from the following values displayed in the drop-down menu:
• Six hours
• One day
• One week
• One year
132.1.2. Agent Configuration
To load the agent configuration form, click the Configure button on the agent list page or the Configure tab at
the top of the agent page. The global configuration tab appears.
Agent name
Set this to the certificate subject name. It is filled out automatically when the agent connects and is added
automatically.
Connection type
Unmanaged
Set the connection type to Unmanaged if you do not want to administer the agent remotely over a secure
connection.
Address
Either the address to which the agent should connect or the address to which the agent is listening,
depending on the Connection type setting.
Port
Either the port number to which the agent should connect or the port to which the agent is listening,
depending on the Connection type setting.
Certificate
The certificate to be presented by the NXLog agent during the mutual authentication phase when the
connection is established with the agent manager. The agent manager will check whether the agent
certificate has been signed with the CA configured on the Agent Manager settings tab.
Log level
The level of detail to use when sending internal messages to the logfile and the im_internal input module.
Log to file
Enable this to use a local nxlog.log file where NXLog agent internal events are written. This method is more
efficient and error-resistant than using the im_internal module, and it also works with the DEBUG log level.
Verbatim config
Verbatim configuration text for this agent. This configuration will be placed in the log4ensics.conf file as is.
The list of modules can be managed independently regardless of the route they belong to. The following
screenshot shows an example list of modules.
Add
Click Add to add a new module. The module configuration dialog will pop up.
Remove
To remove a module, click the checkbox after the module’s name. Modules which are already part of a route
cannot be removed.
Routes
Go to the Routes tab to add modules to or remove them from a route. Modules that are not part of a route
can only be removed from this list. No configuration will be generated for modules which are not part of a
route.
Copy
Click Copy to copy this module configuration to other agents. A popup will appear to select them. Click the
module’s name to modify its configuration.
To configure the flow of log data in the NXLog agent, click the Routes tab. A freshly created agent does not have
any routes. Click Add route to add a route.
Enter the name and select the priority. Data is processed in priority order among routes; lower values are
processed first. This is only useful if you have multiple routes which contain different input sources. Select
default if you do not wish to assign a priority value.
After the route is added, you can now add modules to it. A route requires at least one input and one output
module. The following screenshot shows an example of a route with one module for each type.
Click the Add button inside the input/processor/output block to add a module instance. The module
configuration dialog will pop up. If there is already an existing module instance, you will be able to select that
also. It is possible to add more module instances to each block. To remove a module, uncheck the checkbox after
its name. The module instance is only removed from the route. To fully delete it, click the Modules tab and
remove the module.
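The route described above maps to a configuration fragment along these lines (the module instances, addresses, and file paths are examples, not the ones from the screenshot):

```
<Input in_file>
    Module  im_file
    File    "/var/log/messages"
</Input>

<Output out_tcp>
    Module  om_tcp
    Host    192.168.1.1
    Port    514
</Output>

<Route r1>
    # Lower priority values are processed first
    Priority 1
    Path     in_file => out_tcp
</Route>
```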
As with modules, an entire route can be copied to other agents. Click the Copy link on the top right of a route to
select one or more agents to copy to.
The last tab contains the generated NXLog configuration which will be pushed to the NXLog agent when Update
config is clicked, as shown in the following screenshot.
132.1.3. Module Configuration
When a new module instance is created, the following dialog window is shown.
132.1.3.1. Parameters
The module configuration dialog Parameters tab consists of two blocks: Common parameters and Module
specific parameters. The Common parameters are as follows:
Name
The name of the module instance.
Module
The loadable module which is used to create the module instance.
132.1.3.2. Expert
Click the Expert tab for advanced configuration.
The module configuration dialog Expert tab consists of:
Actions
The Actions text area can be used to input statements in NXLog’s configuration language. It is possible to add
multiple Action input widgets. Add each action with the Add action button. Click Verify to verify the
statement(s). The contents of the Action block are copied into the module’s Exec directive. Newline characters
will be replaced with the backslash escape character.
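As an illustration (this is not the statement from the screenshot), an Action assigning a field would be rendered into the module’s Exec directive like this, with the newline turned into a backslash line continuation:

```
<Input in_udp>
    Module im_udp
    Port   514
    # The Actions block contents are copied into Exec;
    # newlines become trailing-backslash continuations
    Exec   $Collector = 'manager-01'; \
           if $raw_event =~ /denied/ $Importance = 'high';
</Input>
```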
The statement entered in the example screenshot above is highlighted below in the generated NXLog
configuration.
Verbatim config
The following is generated into nxlog.conf from the above:
Module specific parameters are not discussed in this user manual. Please consult the NXLog Enterprise Edition
Reference Manual for more information about each module and its capabilities.
Chapter 133. Templates
NXLog templates automate the creation, configuration, tagging, and deployment of agents. This chapter
discusses the configuration and management front-end for templates provided by NXLog Manager.
133.1. Templates
Go to the NXLog Manager main menu and click Agents, then on the Agents page, click the Agent templates tab.
A list of templates is displayed including the following data fields:
Description
A detailed description of the template.
Connection address
The IPv4 address of the device running the NXLog agents associated with the template, configured for
connection with NXLog Manager. Not available when the template is Unmanaged.
Connection port
The connection port of the device running the NXLog agents associated with the template, configured for
connection with NXLog Manager. Not available when the template is Unmanaged.
Created
The date and time the template was added.
Last modified
The date and time the template was last edited.
Add
Create a new template and open the editing dialog.
Export
Save the template configuration to an external file. Similar to Export agent configuration, this action exports
template configurations.
Import
Read a template configuration from an external file. Similar to Import agent configuration, this action imports
template configurations.
Delete
Delete the selected template(s).
NOTE: A template must be unassigned from the agents it belongs to before deletion. If there are any, a
confirmation dialog will appear:
Clone
Create an exact copy of the selected template.
Create agents
Create agents and automatically assign the selected template to them.
133.2.1. Tags
NXLog templates and agents can be managed using tags. Tags have role and user access permissions. To list
the tags for a template, click the Tags tab on the Template configuration page:
Description
A detailed description of the tag.
Permissions by Role
Shows the access permissions of each role allowed to manage NXLog agents.
Permissions by User
Shows the access permissions of each user allowed to manage NXLog agents.
Add
Add a new tag.
Edit
Edit a tag.
Assign
Assign (or unassign) tags to this template.
To add a new tag to the system (and assign it to this template at the same time), click the Add button. An Add
tag dialog will appear:
Fill in the Name and an optional Description for the tag. Each new tag is created with default access permissions,
is assigned to this template, and will appear in the list:
A tag can then be edited by selecting it and clicking the Edit button. The Edit tag dialog has two tabs—Tag and
Permissions:
If permissions need to be changed, click the Permissions tab, then edit the permissions by User:
If tags need to be assigned or unassigned for the current template, click the Assign button on the tag list page.
The following dialog will appear:
Select the tags needed from the multi-select box and Assign them.
Chapter 134. Agent Groups
NXLog agent groups are used to manage agents by grouping them using agent tags. This chapter discusses
the GUI configuration and management frontend provided by NXLog Manager.
Group name
The group name is the unique name of an NXLog agent group. Clicking the name will load the agents which
are tagged by it.
Description
The detailed description of the NXLog agent group.
At the bottom of the list there is a row of actions which can be used to manage the groups.
In the agent list there are additional actions which can be used to manage this group.
Delete group
Delete this group/tag.
Add agents
Add agents to this group.
To add agents to this group, click on the Add agents button. An Add agents dialog will appear:
Select the desired agents and click Add. The selected agents will be added to the list in this group.
Chapter 135. Certificates
NXLog Manager uses X.509 certificates for various security purposes and has a built-in PKI system to manage
them.
Name
The certificate subject name.
Type
The type is either CA or CERT.
Activation
The time and date after which the certificate is valid.
Expiration
The time and date before which the certificate is valid.
Status
This status is either VALID, REVOKED, or EXPIRED.
Private Key
This field indicates whether the private key pair of the certificate is available or not.
The certificate list shows entries in a hierarchical (tree) structure. Certificates (and sub-CAs) are rooted under
the CA which was used to sign them.
If the PKI system does not have any certificates, you will need to create a CA first.
135.2. Creating a CA
The certificate authority is used to issue and sign certificates and, subsequently, to verify the associated trust
relationships. To be able to create certificates, a CA is required. To create a CA cert, click Add new CA on the
certificate list page. The certificate creator dialog is displayed.
Some field values are pre-filled if certificate settings are already configured. After clicking Create, the new CA
appears.
135.3. Creating a Certificate
Some field values are pre-filled if the certificate settings are already configured. Fill in the name (certificate
subject) and expiry and select the certificate purpose. It is possible to customize the certificate purpose flags, but
this is not required if the certificate is only used within NXLog Manager with NXLog. After clicking Create, the
new certificate appears, displaying information similar to the following screenshot.
135.4. Exporting
To export a certificate, click Export on a certificate’s general information page or below the certificate list after
selecting one certificate. The following options appear.
In order to support external certificate tools and PKI systems, certificates can be exported in different formats.
The NXLog agents use PEM formatted X.509 certificates.
To export selected certificates from the list in PKCS#12 key store format, click the Export PKCS#12 button on
the certificates page. It will ask for an optional password to protect the PKCS#12 key store:
135.5. Importing
In order to support external certificate tools and PKI systems, certificates can be imported in different formats.
If the PKI system contains the certificate of an NXLog agent and the certificate is found to be revoked, the
connection will be refused.
WARNING: Generally, it is not a good idea to have multiple valid certificates with the same subject. If a
certificate has been superseded by a new one (e.g. one already pushed to the agent), make sure to revoke the
former.
• The first time the 'admin' user logs in, NXLog Manager generates a random encryption key of predefined
length. This key is kept only in application memory space, and certificate keys are encrypted with it.
• NXLog Manager then encrypts this key with the authorized user’s password and saves it in the user settings in
the database. When NXLog Manager is restarted, an authorized user with a key must log in so that this key can
be decrypted with the password and made available for encryption/decryption of certificate keys.
NOTE: An authorized user is eligible for this key when they have the ROLE_ADMINISTRATOR or
ROLE_CERTIFICATE role. By default, the 'admin' user has ROLE_ADMINISTRATOR.
NOTE: When a new authorized user is added, the encryption key must also be encrypted with the new user’s
password and saved in the database. Currently, this can only happen if the user logs in to the same application
session for which this key is already available (an authorized user with a stored key has logged in to unlock the
key). In the future, NXLog Manager will be enhanced to skip the log-in step for new authorized users.
There is a Do not encrypt agent manager’s private key option in Agent Manager. With this option active,
when NXLog Manager has to be restarted for some reason, it is not necessary for the administrator to log in
to decrypt their keys before the manager can (re)start.
Resetting the certificates and the encryption key is a last resort; if there is no other option, this action can be
performed from this dialog. It updates the certificates for the connected agents with renewed certificates. For
the best outcome, as many agents as possible should be connected, and the manager should already be running
with non-encrypted keys.
Any agents that are offline during this operation must be updated locally with the new certificates; they will not
be able to connect to the manager once it is running with the new certificates for authentication.
There will be notifications for each change/failure in the UI and also in the logs.
Chapter 136. Settings
To configure the system components, click on the SETTINGS menu item under the ADMIN menu. Each tab is
discussed below.
The above screenshot shows the Agent manager tab where its parameters can be configured.
The agent manager can accept and initiate connections to the agents. Enable the Accept agent connections
checkbox to let the agent manager accept incoming connections from agents. Enable the Initiate connection to
the agents checkbox to let the agent manager initiate the connection.
NOTE: For these settings to work, the agents must be configured accordingly. See the agent Connection type
configuration parameter.
Listen address
When Accept agent connections is enabled, the IP address of the interface must be specified. Use 0.0.0.0
to listen on all interfaces.
Port
When Accept agent connections is enabled, the port number must be specified. This is the port on which the
agent manager will listen for incoming connections.
CA
The CA configured here is used to verify the certificate presented by the NXLog agent during the TLS/SSL
handshake.
Certificate
The certificate configured here will be used to authenticate to the NXLog agent during the TLS/SSL
handshake.
For security reasons, certificate private keys in the database are stored in encrypted form. These are encrypted
with a master key which is accessible to users with ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE access
rights. The agent manager’s private key is required to be able to establish the trusted control connection with the
agents. Enable the "Don’t encrypt agent manager’s private key" option for the system to be able to operate in
unattended mode. Otherwise, after a reboot or restart, the agent manager connection will only work following a
successful admin login.
Another security option is Subject Name Authorization. Subject Name Authorization refers to the check that
happens when the TLS connection is established with the agent and the manager looks at the CN in the
certificate subject and checks whether this matches the reverse DNS. This is to prevent malicious agents
connecting with a stolen certificate. The Agent name setting that follows selects what data to use as the agent
name. These two options are useful on networks with DHCP-assigned addresses, where the agent may have
different IP addresses.
Warn if untrusted.
When this option is selected, the agent manager will accept agents which try to authorize with a Subject Name
other than their reverse DNS, and will mark them as Forged.
Reject agent.
When this option is selected, the agent manager will reject agents which try to authorize with a Subject Name
other than their reverse DNS.
Disable.
When this option is selected, the agent manager will ignore a mismatch between the Subject Name and reverse
DNS for connected agents.
Due to Subject Name Authorization and the specifics of some networks, such as NAT, the agent manager must
have a policy for the names of connected agents which will appear on the Agent list. The agent manager
supports three options for Agent name:
Use reverse DNS name, else IP address.
When this option is selected, the agent manager will try to resolve the Fully Qualified Domain Name of
connected agents. If resolving fails, it will use the agent’s IP address.
NOTE: If one of the last two options is selected and the NXLog agent does not authorize with a valid client
certificate (but the manager demands a Subject Name), the agent will be rejected.
Per-agent rules for Subject Name Authorization and Agent Name can be defined by clicking the Add override
button. The following dialog will appear:
Three types of hosts can be defined: an exact name or IP address, a name or IP address regular expression,
and an IP address range. An option exists to verify the host definition against the real host. The override rules
will appear as a list under the global manager rules.
Click Save & Restart to apply the changes. The Status field will display the status of the agent manager.
136.2. Certificates
This form is divided into two sections—Certificate defaults and Certificates provider.
Certificate defaults
This form can be used to set common parameters which are used during certificate creation. Most of these
attributes are common, though there are some that deserve a direct mention:
IMPORTANT: By default, on a new system with a blank database, this setting is disabled. If this setting is
enabled, you must always have an available administrator who can unlock the keys after logging in. Losing the
encryption key means losing access to the private keys, making the certificates unusable. This feature must be
taken very seriously; practice special care when enabling it.
Keystore type
This is the default keystore type that NXLog Manager will use when dealing with certificates. BKS is
considered more secure than the default Java keystore type, JKS. If BKS does not have enough support for
certain certificates, such as Elliptic Curve certificates, there is an option to change the keystore type to JCEKS
instead.
NOTE: NXLog Manager uses RSA encryption by default, until another type of certificate is used. For example, if
an EC certificate is imported into the system, EC encryption is used.
Key size
2048 is the default key size. It is recommended to use a longer key. Currently, 3072 is considered safe until
the year 2030 on existing hardware architectures. A length of 4096 is practically unbreakable.
Certificates provider
The Certificates provider option makes it possible to use a PKCS11 compliant backend to store certificates
and private keys instead of using the default configuration database. The PKCS11 API is implemented by most
smart cards and HSM devices, which can be used to securely store private keys.
136.3. Mail
To be able to send notification emails, an SMTP server is required. The Mail tab provides a form where the SMTP
server settings can be specified.
136.5. License
The License tab provides a form to upload and show the license file and license details.
If the license is invalid or expired, a warning will be displayed as shown in the following image.
136.6. User Settings
This form is divided into two sections—Settings and Change password:
Settings
The logged in user can change their name, email address, user interface language, and theme. The email
address will be used for system notifications.
Change password
This section allows the logged-in user to change their password.
Chapter 137. Users, Roles, and Access Control
NXLog Manager’s user and role management system allows administrators to grant access to functions and
resources in a flexible and customizable framework.
137.1. Users
To access user management, from the main menu go to ADMIN, then click USERS. The Manage Users page is
displayed.
To add a user, click Add User at the bottom left of the page, as shown below.
The Add User dialog appears. Enter the new user’s details and credentials. You can also enable the user, assign
roles to the user, or toggle the user’s LDAP status (see also LDAP and LDAPS below).
Click Add. The new user appears in the Users list on the Manage Users page.
To change the user’s information and role assignments, select the user from the Users list and click Edit. The
User Edit dialog appears.
Edit the user information and assigned roles, then click Save.
By default, all roles have read-write permissions. To restrict certain roles to read-only, click the
NOTE selected role name. Notice that the marker after the role name toggles between RW (read/write
access) and RO (read-only access).
137.2. Roles
To access role management, from the main menu go to ADMIN, then click ROLES. The Manage Roles page is
displayed.
The default installation creates a set of built-in (read-only) roles. They are listed in the Roles pane on the left of
the Manage Roles page.
137.2.1. Built-in User Roles
Built-in roles provide a solid basis for most user management scenarios. Each role grants the user access to the
functionality described. Built-in roles may not be modified or deleted.
ROLE_ADMINISTRATOR
The user can access and execute all administrative functions.
ROLE_FIELD
The user can access and execute all field administration functions.
ROLE_PATTERN
The user can access and execute all PATTERNS functions.
ROLE_CORRELATION
The user can access and execute all CORRELATION rule functions.
ROLE_AGENT
The user can access and execute all AGENTS functions.
ROLE_CERTIFICATE
The user can access and execute all CERTIFICATES functions.
ROLE_READONLY
This is a special role which denies any modification to the system by the user.
Additional roles for more sophisticated user management scenarios are easily created. Click Add Role at the
bottom left of the Manage Roles page. The Add Role dialog appears.
Enter the new role’s name, then click Submit. The new role’s name appears in the list on the Manage Roles
page.
jetty-env.xml (fragment 1)
<Set name="ldapEnabled">true</Set>
<Set name="ldapServerURL">ldap://192.168.1.10/dc=nxlog,dc=org</Set>
<Set name="ldapUserSearchBase">cn=users</Set>
<Set name="ldapUserSearchFilter">(sAMAccountName={0})</Set>
<Set name="ldapUserDn">nxlog</Set>
<Set name="ldapPassword">PASSWORD</Set>
Below is another example showing how to configure an additional filter for the search function, in this case using
nested groups. This example is also from a working Active Directory setup, as indicated by the use of
sAMAccountName in the user search settings of this and the previous example.
jetty-env.xml (fragment 2)
<Set name="ldapUserSearchFilter">
(&(sAMAccountName={0})(memberOf:1.2.840.113556.1.4.1941:=CN=NXLog_Admins,OU=Admin
Groups,OU=Level1,dc=domain,dc=local))
</Set>
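At authentication time, the {0} placeholder in ldapUserSearchFilter is replaced with the name the user logs in with. This substitution is performed by NXLog Manager itself; the following Python fragment is only an illustration of the positional-placeholder semantics, not part of the product:

```python
# Illustrative only: NXLog Manager performs this substitution internally.
# The template below is the ldapUserSearchFilter value from the example.
search_filter = "(sAMAccountName={0})"

def expand_filter(template, login_name):
    """Substitute the {0} positional placeholder with the login name."""
    return template.format(login_name)

print(expand_filter(search_filter, "jsmith"))  # (sAMAccountName=jsmith)
```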
137.2.2.2. LDAPS Configuration
To use LDAP over SSL, your LDAPS certificate trust store must be imported into the JRE’s key store, using the
keytool command:
NOTE
After updating the key store, ensure the protocol in jetty-env.xml is changed from ldap:// to
ldaps://.
To access the audit trail, from the main menu go to ADMIN, then click AUDIT TRAIL. The Audit Trail page is
displayed.
The page presents a table of events in chronological order. Each row is an event and each column is a field
corresponding to the event. The fields include the event date and type, username, manager address, user
address, and details about the event.
Click any of the column headers to sort the events by that field’s values, in ascending order. Click the column
header again to toggle the sort order between descending and ascending.
To toggle display of event details, click the plus (+) in the details field in the corresponding event row.
Typically, there are a large number of event entries. To filter the events list, go to the top of the Audit Trail page
and click Filter audit trail. The Filter Audit Trail dialog appears:
Audit events can be filtered by the following criteria: the event type, the time range in which the event occurred,
and the username associated with the event.
Click Apply filter to apply the filtering criteria to the event list. For example, selecting DELETE event type and
applying the filter criteria displays a list similar to the following:
To discard the applied filter criteria, click Clear filter (at the top of the page). The unfiltered list of all audit events
is displayed.
Chapter 138. RESTful Web Services
NXLog Manager provides a Representational State Transfer (REST) interface, or RESTful web service API,
that makes it possible to access agent information or configure the system without using the UI.
NXLog Manager is distributed with embedded documentation of its REST API, including a detailed specification of all
supported RESTful services. Once the NXLog Manager instance is up and running, the documentation is available
at http://hostname:port/nxlog-manager/swagger-ui.html (for viewing in a web browser). To retrieve
this documentation programmatically, use the http://hostname:port/nxlog-manager/v2/api-docs endpoint,
which returns information about NXLog Manager RESTful services as raw JSON.
NOTE Throughout this chapter the base URL is substituted with [B_URL].
138.1. agentmanager
This service is useful for verifying that NXLog Manager is up and running. This is a GET request with URL
[B_URL]/agentmanager and no additional parameters. This service can also be used if the Don’t encrypt agent
manager’s private key setting is not enabled on the Settings tab and the NXLog Manager service has been
restarted (or after a reboot). A REST call by a user who has ROLE_ADMINISTRATOR and/or ROLE_CERTIFICATE
access rights can decrypt the master key, enabling the agent manager to establish the trusted control connection
with the agents.
138.2. appinfo
This service provides information about a running NXLog Manager. This is a GET request with URL
[B_URL]/appinfo and no additional parameters. This service provides the uptime, license state and expiration
date, version, and revision.
Sample Output
<?xml version='1.0' encoding='UTF-8'?>
<result servicename="appinfo">
<values>
<applicationinfo>
<nxmUptime>17143265</nxmUptime>
<licenseState>LICENSED/Expired</licenseState>
<licenseExpireDate>2011-12-30 22:00:00.0 UTC</licenseExpireDate>
<appVersion>5.0</appVersion>
<appRevisionNumber>4895</appRevisionNumber>
</applicationinfo>
</values>
</result>
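A client can parse this response with standard XML tooling. The following sketch uses Python's xml.etree module with the sample response above inlined (any production client would read the HTTP response body instead):

```python
import xml.etree.ElementTree as ET

# The sample appinfo response from above, as returned by [B_URL]/appinfo.
response = b"""<?xml version='1.0' encoding='UTF-8'?>
<result servicename="appinfo">
  <values>
    <applicationinfo>
      <nxmUptime>17143265</nxmUptime>
      <licenseState>LICENSED/Expired</licenseState>
      <licenseExpireDate>2011-12-30 22:00:00.0 UTC</licenseExpireDate>
      <appVersion>5.0</appVersion>
      <appRevisionNumber>4895</appRevisionNumber>
    </applicationinfo>
  </values>
</result>"""

root = ET.fromstring(response)
info = root.find("./values/applicationinfo")
print(info.findtext("appVersion"))    # 5.0
print(info.findtext("licenseState"))  # LICENSED/Expired
```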
138.3. agentinfo
This service provides information about NXLog Agents registered with the NXLog Manager. This is a GET request
with URL [B_URL]/agentinfo that can take additional parameters. The response can be filtered by the name or
the state of the agent with the agentname and agentstate options. These two parameters cannot be combined.
A third parameter, agentwithmodules, can be combined with either of them and includes module information with the agent
information. For example, to get both agent and module information for all agents with state
ONLINE, the following REST call can be used:
[B_URL]/agentinfo?agentstate=ONLINE&agentwithmodules=true. Refer to the Agents chapter for more
information.
Sample Output
<?xml version='1.0' encoding='UTF-8'?>
<result servicename="agentinfo">
<values>
<agent>
<name>192.168.122.1</name>
<version>3.99.2866</version>
<status>ONLINE</status>
<load>0.16</load>
<address>192.168.122.1</address>
<started>2017-12-15 16:36:23.974 UTC</started>
<memUsage>7442432.0</memUsage>
<received>2</received>
<processing>0</processing>
<sent>2</sent>
<sysinfo>OS: Linux, Hostname: voyager, Release: 4.4.0-103-generic, Version: #126-Ubuntu SMP
Mon Dec 4 16:23:28 UTC 2017, Arch: x86_64, 4 CPU(s), 15.7Gb memory</sysinfo>
<modules>
<module>
<name>in_int</name>
<module>im_internal</module>
<type>INPUT</type>
<isRunning>true</isRunning>
<received>2</received>
<processing>0</processing>
<sent>2</sent>
<dropped>0</dropped>
<status>RUNNING</status>
</module>
<module>
<name>null_out</name>
<module>om_null</module>
<type>OUTPUT</type>
<isRunning>true</isRunning>
<received>2</received>
<processing>0</processing>
<sent>2</sent>
<dropped>0</dropped>
<status>RUNNING</status>
</module>
</modules>
</agent>
</values>
</result>
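When composing such calls programmatically, the query string can be assembled with Python's standard urllib; a minimal sketch, keeping the [B_URL] placeholder as-is:

```python
from urllib.parse import urlencode

# Build the agentinfo query used in the example above.
params = {"agentstate": "ONLINE", "agentwithmodules": "true"}
url = "[B_URL]/agentinfo?" + urlencode(params)
print(url)  # [B_URL]/agentinfo?agentstate=ONLINE&agentwithmodules=true
```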
138.4. addagent
This service adds a new NXLog Agent to the list of existing Agents. This is a POST request with URL
[B_URL]/addagent that can take several additional parameters. The only mandatory parameter is agentname,
which is the name for the new agent. The optional parameter connectionmode can be used to change the
connection type of the Agent, from the default CONNECT_TO, to either UNMANAGED or LISTEN_FROM. The
connectionport parameter can be used to change the default port of the manager from 4041; this parameter
can only be used for managed connection types. The connectionaddress parameter can be used to set the IP
address the manager will either CONNECT_TO or LISTEN_FROM; the default value is localhost. The loglevel
parameter can be used to set the log level; values can be DEBUG, INFO, WARNING, ERROR, or CRITICAL. The
logtofiled parameter is used to enable the agent to use the local nxlog.log file. To create agent clones, the
agentname parameter can be specified more than once with unique agent names. Refer to the Agents chapter
for more information.
An agent template can be created by also passing the parameter agenttemplate=true. If multiple agent names are specified when creating a template, the first one will be the
name of the template and the rest will be agents based on this template. Refer to the Templates chapter for
more information.
Creating a new Agent, for example, can be done with this REST call:
[B_URL]/addagent?agentname=Justatest&connectionmode=LISTEN_FROM. This will return the following XML
message, which includes the Agent configuration in base64-encoded format.
Sample Output
<?xml version='1.0' encoding='UTF-8'?>
<result servicename="addagent">
<values>
<addagent>
<configuration># PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPGFnZW50PgogICAgPG5hbWU+
# SnVzdGF0ZXN0PC9uYW1lPgogICAgPG5zMTpnbG9iYWwtY29uZmlnIHhtbG5zOm5zMT0iaHR0cDov
# L2Nhc3Rvci5leG9sYWIub3JnLyI+CiAgICAgICAgPGNlcnQtaWQ+MTg8L2NlcnQtaWQ+CiAgICAg
# ICAgPGxvZy1sZXZlbCB4bWxuczp4c2k9Imh0dHA6Ly93d3cudzMub3JnLzIwMDEvWE1MU2NoZW1h
# LWluc3RhbmNlIgogICAgICAgICAgICB4bWxuczpqYXZhPSJodHRwOi8vamF2YS5zdW4uY29tIiB4
# c2k6dHlwZT0iamF2YTpqYXZhLmxhbmcuU3RyaW5nIj5JTkZPPC9sb2ctbGV2ZWw+CiAgICAgICAg
# PGlzLWxvZy10by1maWxlPnRydWU8L2lzLWxvZy10by1maWxlPgogICAgICAgIDxjb25uZWN0aW9u
# LW1vZGUKICAgICAgICAgICAgeG1sbnM6eHNpPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxL1hNTFNj
# aGVtYS1pbnN0YW5jZSIKICAgICAgICAgICAgeG1sbnM6amF2YT0iaHR0cDovL2phdmEuc3VuLmNv
# bSIgeHNpOnR5cGU9ImphdmE6amF2YS5sYW5nLlN0cmluZyI+TElTVEVOX0ZST008L2Nvbm5lY3Rp
# b24tbW9kZT4KICAgICAgICA8Y29ubmVjdGlvbi1wb3J0PjA8L2Nvbm5lY3Rpb24tcG9ydD4KICAg
# IDwvbnMxOmdsb2JhbC1jb25maWc+CjwvYWdlbnQ+Cg==
</configuration>
</addagent>
</values>
</result>
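The value of the configuration element is the agent's XML configuration, base64-encoded and wrapped across several "# "-prefixed lines. A client can recover it by stripping the markers and decoding. The sketch below uses an abbreviated, hypothetical payload rather than the full sample above:

```python
import base64

def decode_configuration(wrapped):
    """Strip the '# ' line markers and whitespace, then base64-decode."""
    b64 = "".join(line.lstrip("# ").strip() for line in wrapped.splitlines())
    return base64.b64decode(b64).decode("utf-8")

# Hypothetical, abbreviated payload wrapped the same way as the sample above.
config_xml = "<agent><name>Justatest</name></agent>"
encoded = base64.b64encode(config_xml.encode()).decode()
wrapped = "# " + encoded[:32] + "\n# " + encoded[32:]

print(decode_configuration(wrapped))  # <agent><name>Justatest</name></agent>
```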
138.5. modifyagent
This service modifies the configuration of an existing Agent. This is a PUT request with URL
[B_URL]/modifyagent. This service has the same parameters as the addagent service, except for the
agenttemplate parameter.
138.6. deleteagent
This service deletes an existing Agent. This is a DELETE request with URL [B_URL]/deleteagent. The only
parameter required for this service is the agentname parameter.
138.7. certificateinfo
This is a safe (read-only) service that retrieves certificate information from the NXLog Manager. This is a GET request with URL
[B_URL]/certificateinfo. Without any parameters, the service lists all certificate information. The expirein
parameter can be used to list only certificates that will expire within the given number of days.
As an example, this call will list certificates expiring in one month: [B_URL]/certificateinfo?expirein=30. If
no certificates are expiring in that time period, an empty result is returned.
Sample Output
<?xml version='1.0' encoding='UTF-8'?>
<result servicename="certificateinfo">
<values>
<ok>
<message>The result is empty!</message>
</ok>
</values>
</result>
138.8. createfield
This service creates fields in NXLog Manager. This is a POST request with URL [B_URL]/createfield and
several parameters. The name parameter is the name of the field and must be a unique identifier. The
type parameter is the field type and must be one of the following: STRING, INTEGER, BINARY, IPADDR,
BOOLEAN, or DATETIME. The description parameter is a short description of the field. The persist
and lookup parameters can be TRUE or FALSE. For more information, see the Fields chapter.
The following REST call creates a TEST field of type STRING with both persist and lookup enabled:
[B_URL]/createfield?name=TEST&type=STRING&description=Just a string&persist=true&lookup=true
Sample Output
<?xml version='1.0' encoding='UTF-8'?>
<result servicename="createfield">
<values>
<ok>
<message>OK</message>
</ok>
</values>
</result>
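Note that the description value in this REST call contains spaces, which must be URL-encoded in an actual request. A sketch of building the query string with Python's standard urllib, keeping the [B_URL] placeholder as-is:

```python
from urllib.parse import urlencode

# Build the createfield query from the example; urlencode escapes the spaces.
params = {
    "name": "TEST",
    "type": "STRING",
    "description": "Just a string",
    "persist": "true",
    "lookup": "true",
}
url = "[B_URL]/createfield?" + urlencode(params)
print(url)
# [B_URL]/createfield?name=TEST&type=STRING&description=Just+a+string&persist=true&lookup=true
```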
NXLog Add-Ons
Various add-ons are available for NXLog, which provide specialized integration with various software and
services.
Chapter 139. Amazon S3
This add-on can be downloaded from the nxlog-public/contrib repository, according to the license and terms
specified there.
NXLog can both receive events from and send events to Amazon S3 cloud storage. The NXLog Python modules
for input and output (im_python and om_python) are used for this, as well as Boto3, the AWS SDK for Python. For
more information about Boto3, see AWS SDK for Python (Boto3) on Amazon AWS.
NOTE The python2-boto3 package requires the installation of the EPEL repository.
~/.aws/config
[default]
region=eu-central-1
~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
NOTE
The region and credential configuration can also be hardcoded in the scripts, but this is not
recommended.
Both the input and output Python scripts interact with a single bucket on Amazon S3. The scripts will not create,
delete, or alter the bucket or any of its properties, permissions, or management options. It is the responsibility of
the user to create the bucket, provide the appropriate permissions (ACL), and further configure any lifecycle,
replication, encryption, or other options. Similarly, the scripts do not alter the storage class of the objects stored
or any other properties or permissions.
We selected a schema where events are stored in a single bucket, and each object has a key that references the
server (or service) name, the date, and the time the event was received. Though Amazon S3 uses a flat structure to store
objects, objects with similar key prefixes are grouped together, resembling the structure of a file system. The
following is a visual representation of the naming scheme used. Note that the key name at the deepest level
represents a time; however, Amazon S3 treats the colon (:) as a special character, so to avoid escaping it we
selected the dot (.) character as a substitute.
• MYBUCKET/
◦ SERVER01/
▪ 2018-05-17/
▪ 12.36.34.1
▪ 12.36.35.1
▪ 2018-05-18/
▪ 10.46.34.1
▪ 10.46.35.1
▪ 10.46.35.2
▪ 10.46.36.1
◦ SERVER02/
▪ 2018-05-16/
▪ 14.23.12.1
▪ 2018-05-17/
▪ 17.03.52.1
▪ 17.03.52.2
▪ 17.03.52.3
Events are stored in the Amazon S3 bucket with object key names comprised from the server name, date in
YYYY-MM-DD format, time in HH.MM.SS format, and a counter (since multiple events can be received during the
same second).
Example 711. Sending Events From File to S3
This configuration reads raw events from a file with im_file and uses om_python to forward them, without
any additional processing, to the configured S3 storage.
nxlog.conf
1 <Input file>
2 Module im_file
3 File "input.log"
4 # These may be helpful for testing
5 SavePos FALSE
6 ReadFromLast FALSE
7 </Input>
8
9 <Output s3>
10 Module om_python
11 PythonCode s3_write.py
12 </Output>
13
14 <Route file_to_s3>
15 Path file => s3
16 </Route>
The script keeps track of the last object retrieved from Amazon S3 by means of a file called lastkey.log, which
is stored locally. Even in the event of an abnormal termination, the script will continue from where it stopped.
The lastkey.log file can be deleted to reset that behavior (or edited if necessary).
2. Edit the BUCKET, SERVER, and POLL_INTERVAL variables in the code. The POLL_INTERVAL is the time the script
will wait before checking again for new events. The MAXKEYS variable should be fine in all cases with the
default value of 1000 keys.
3. Configure NXLog with an im_python instance.
Example 712. Reading Events From S3 and Saving to File
This configuration collects events from the configured S3 storage with im_python and writes the raw events
to file with om_file (without performing any additional processing).
nxlog.conf
1 <Input s3>
2 Module im_python
3 PythonCode s3_read.py
4 </Input>
5
6 <Output file>
7 Module om_file
8 File "output.log"
9 </Output>
10
11 <Route s3_to_file>
12 Path s3 => file
13 </Route>
Pickling Events
import gzip
import io
import pickle

# Collect all fields of the NXLog event into a dictionary
fields = {}
for field in event.get_names():
    fields.update({field: event.get_field(field)})

# Pickle the dictionary and gzip-compress it into an in-memory buffer
newraw = pickle.dumps(fields)
out = io.BytesIO()
with gzip.GzipFile(fileobj=out, mode="wb") as f:
    f.write(newraw)
gzallraw = out.getvalue()
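A consumer of such objects performs the reverse operation: decompress the gzip payload, then unpickle the dictionary. The self-contained sketch below uses a plain dictionary of hypothetical fields in place of an NXLog event object:

```python
import gzip
import io
import pickle

# Hypothetical event fields, standing in for event.get_field() values.
fields = {"EventTime": "2018-05-17 12:36:34", "Message": "test event"}

# Pickle and gzip-compress, as the output script does.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(pickle.dumps(fields))
gzallraw = buf.getvalue()

# Decompress and unpickle to recover the original dictionary.
with gzip.GzipFile(fileobj=io.BytesIO(gzallraw), mode="rb") as f:
    restored = pickle.loads(f.read())

print(restored == fields)  # True
```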
Chapter 140. Box
This add-on is available for purchase. For more information, please contact us.
The Box add-on can be used to pull events from Box using their REST API. Events will be passed to NXLog in
Syslog format with the JSON event in the message field.
2. Edit the configuration entries in the script as necessary, or use arguments to pass configuration to the script
as shown in the example below.
3. Configure NXLog to collect events with the im_exec module.
The script saves the current timestamp to a state file in order to properly resume when it is terminated. If the
state file does not exist, the script will collect logs beginning with the current time. To manually specify a starting
timestamp (in milliseconds since the epoch), pass it as an argument: ./box-pull.pl
--stream_position=1440492435762.
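The stream_position value is expressed in milliseconds since the Unix epoch. One way to compute it for a chosen start time (a sketch in Python, not part of the Perl add-on):

```python
from datetime import datetime, timezone

def stream_position(dt):
    """Milliseconds since the Unix epoch for a given aware datetime."""
    return int(dt.timestamp() * 1000)

# Hypothetical starting point: 2015-08-25 09:27:15 UTC.
start = datetime(2015, 8, 25, 9, 27, 15, tzinfo=timezone.utc)
print(stream_position(start))  # 1440494835000
```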
Example 713. Collecting Events From Box
This configuration uses the im_exec module to run the script, which connects to Box and returns Syslog-
encapsulated JSON. The xm_syslog parse_syslog() and xm_json parse_json() procedures are used to parse
each event into internal NXLog fields. Additional modification to the fieldset can be added, as required, in
the Input instance Exec block.
For the sake of demonstration, all internal fields are then converted back to JSON and written to file.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input box>
10 Module im_exec
11 Command /opt/nxlog/lib/nxlog/box-pull.pl
12 Arg --client_id=YEKigehUh0u4pXeKSgKzwTbfii2stCwU
13 Arg --client_secret=3VRiqMuPDuUYeTXA5Ds9R0B4TnL35WRy
14 Arg --enterprise_id=591376
15 Arg --oauthurl=https://api.box.com/oauth2/token
16 Arg --certkeyfile=privkey.pem
17 Arg --baseurl=https://api.box.com/2.0
18 Arg --pollinterval=5
19 Arg --statefile=/opt/nxlog/var/lib/nxlog/box-pull.dat
20 Arg --syslogpri=13
21 <Exec>
22 parse_syslog();
23 parse_json($Message);
24 </Exec>
25 </Input>
26
27 <Output file>
28 Module om_file
29 File '/tmp/output'
30 Exec to_json();
31 </Output>
Chapter 141. Cisco FireSIGHT eStreamer
This add-on is available for purchase. For more information, please contact us.
The eStreamer add-on can be used with NXLog to collect events from a Cisco FireSIGHT System. The Cisco Event
Streamer (eStreamer) API is used for communication between NXLog and the FireSIGHT System. This section
describes how to set up FireSIGHT and NXLog and start collecting events.
For more information about eStreamer, see FireSIGHT System eStreamer Integration Guide v5.4 on Cisco.com. To
download the full Firepower eStreamer SDK, see eStreamer SDK Version 6.1 on Cisco Community.
NOTE
Depending on the Cisco system, the location of the eStreamer configuration and client creation
page may differ. In other systems, the same page can be found under System → Local →
Registration → eStreamer.
2. Select the event types that should be sent and then click [ Save ].
3. Enter an IP address or a resolvable name in the Hostname field and optionally a password. Click [ Save ].
4. Click on the download arrow to download the certificate for the client. Place the PKCS12 certificate in the
same directory as the Perl client.
141.2. Configuring the eStreamer Script
The estreamer.pl client is based on Cisco’s ssl_test.pl reference client which is included in the FireSIGHT
eStreamer SDK.
1. Make sure the following required Perl modules, which are part of the FireSIGHT eStreamer SDK, are present
in the same directory: SFStreamer.pm, SFPkcs12.pm, SFRecords.pm, and SFRNABlocks.pm.
2. Edit the script and set the configuration options. The available options include the following.
◦ The server address and port
◦ The file name and password used for the PKCS12 certificate
◦ Enable/disable verbose output
◦ The start time for receiving events: using bookmarks (by setting to bookmark) ensures that no events will
be lost or duplicated.
◦ The output mode: typically this is set to \&send_nxlog; however, there is a \&send_stdout_raw output
where all data and metadata are printed to standard output (for debugging purposes).
3. In the $OUTPUT_PLUGIN section of the script, the type of event request can be customised. Refer to the
FireSIGHT System eStreamer Integration Guide for more information.
4. Finally, the output mode subroutine \&send_nxlog might require modification if the presentation of the data
needs to be altered or alternative data or metadata need to be included or excluded. The \&send_stdout
subroutine can be used to show the output sent to NXLog and the \&send_stdout_raw can be used to show
the full contents of the data stream. Remember to set the $conf_opt->{output} variable to the appropriate
subroutine.
Example 714. Collecting Events From eStreamer
This configuration uses the im_perl module to execute the Perl script. The resulting internal NXLog fields
are then converted to JSON format before being written to file with om_file.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Input estreamer>
6 Module im_perl
7 PerlCode /opt/nxlog/bin/estreamer.pl
8 </Input>
9
10 <Output file>
11 Module om_file
12 File '/tmp/output.log'
13 Exec to_json();
14 </Output>
15
16 <Route estreamer_to_file>
17 Path estreamer => file
18 </Route>
Output Sample
{
"EventTime": "2018-1-24 11:50:23.939847",
"AlertPriority": 3,
"SourceIp": "192.168.99.2",
"SourcePort": 0,
"DestinationIp": "192.168.98.2",
"DestinationPort": 0,
"EventMessage": "PROTOCOL-ICMP Echo Reply",
"EventReceivedTime": "2018-01-24 11:50:29",
"SourceModuleName": "perl",
"SourceModuleType": "im_perl"
}
{
"EventTime": "2018-1-24 11:50:34.499867",
"AlertPriority": 3,
"SourceIp": "192.168.98.2",
"SourcePort": 0,
"DestinationIp": "192.168.99.2",
"DestinationPort": 0,
"EventMessage": "PROTOCOL-ICMP Echo Reply",
"EventReceivedTime": "2018-01-24 11:50:35",
"SourceModuleName": "perl",
"SourceModuleType": "im_perl"
}
NOTE
An ICMP Echo Reply is not typically an intrusion detection event; however, it was a
convenient way to simulate one.
Chapter 142. Cisco Intrusion Prevention Systems
(CIDEE)
This add-on is available for purchase. For more information, please contact us.
The Cisco IPS add-on supports collection of alerts from an IPS-enabled device. The Security Device Event
Exchange (SDEE) API is used for communication between NXLog and the IPS.
142.1. Setup
1. Install the add-on.
2. Set the correct connection details in the script by editing the sdee("cisco","cisco","192.168.100.254",
"http","cgi-bin/sdee-server/","yes"); line in the read_data() subroutine. Set the appropriate
username, password, hostname or IP address, protocol, path, and force subscription.
◦ For username and password, a suitable user with the appropriate privilege level must be selected.
◦ The protocol can be http or https; however, HTTPS requires that the appropriate SSL options are
enabled further down in the sdee() subroutine.
◦ The default path for the SDEE service can be changed if necessary.
◦ We recommend using force subscription, but the default of yes can be changed to no if required.
3. Upon start-up, the script will open a connection to the device and request a subscription ID. It will then
periodically ask for new alerts. The interval that the device is queried for new alerts can be set by changing
the set_read_timer() NXLog function in the script.
Once alerts are available on the device the script will parse the XML source, format the alert, and pass it to
NXLog.
The script only collects alerts, but it can be modified to collect status and error messages too.
NOTE
The primary subroutine that processes the received information is idsmxml_parse_alerts(). If
the device uses a different CIDEE version, or to filter or modify information, modify the code there.
The final format of the alert messages is specified in the generate_raw_event() subroutine.
The configuration below collects IPS alerts from the configured Cisco IPS device. For simplicity, the output is
saved to a file in this example.
nxlog.conf
1 <Input perl>
2 Module im_perl
3 PerlCode /opt/nxlog/bin/cisco-ips.pl
4 </Input>
5
6 <Output file>
7 Module om_file
8 File '/tmp/output.log'
9 </Output>
10
11 <Route perl_to_file>
12 Path perl => file
13 </Route>
Input Sample
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
<env:Body>
<sd:events
xmlns:cid="http://www.cisco.com/cids/2003/08/cidee"
xmlns:sd="http://example.org/2003/08/sdee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://example.org/2003/08/sdee sdee.xsd
http://www.cisco.com/cids/2003/08/cidee cidee.xsd">
<sd:evIdsAlert eventId="15117815226791" vendor="Cisco" severity="medium">
<sd:originator>
<sd:hostId>R1</sd:hostId>
</sd:originator>
<sd:time offset="0" timeZone="UTC">1511781522011779176</sd:time>
<sd:signature description="SYN Flood DOS" id="6009" version="S593">
<cid:subsigId>0</cid:subsigId>
<cid:sigDetails>SYN Flood DOS</cid:sigDetails>
</sd:signature>
<cid:protocol>tcp</cid:protocol>
<cid:riskRatingValue>63</cid:riskRatingValue>
<sd:participants>
<sd:attacker>
<sd:addr>192.168.100.1</sd:addr>
<sd:port>53760</sd:port>
</sd:attacker>
<sd:target>
<sd:addr>192.168.99.10</sd:addr>
<sd:port>2717</sd:port>
</sd:target>
<sd:vrf_name>NONE</sd:vrf_name>
</sd:participants>
<sd:actions></sd:actions>
<cid:interface>Fa0/0</cid:interface>
<cid:vrf_name>NONE</cid:vrf_name>
</sd:evIdsAlert>
<sd:evIdsAlert eventId="15117815236793" vendor="Cisco" severity="informational">
<sd:originator>
<sd:hostId>R1</sd:hostId>
</sd:originator>
<sd:time offset="0" timeZone="UTC">1511781523475744440</sd:time>
<sd:signature description="Back Door Probe (TCP 1234)" id="9007" version="S256">
<cid:subsigId>0</cid:subsigId>
<cid:sigDetails>SYN to TCP 1234</cid:sigDetails>
</sd:signature>
<cid:protocol>tcp</cid:protocol>
<cid:riskRatingValue>18</cid:riskRatingValue>
<sd:participants>
<sd:attacker>
<sd:addr>192.168.100.1</sd:addr>
<sd:port>57422</sd:port>
</sd:attacker>
<sd:target>
<sd:addr>192.168.99.10</sd:addr>
<sd:port>1234</sd:port>
</sd:target>
<sd:vrf_name>NONE</sd:vrf_name>
</sd:participants>
<sd:actions></sd:actions>
<cid:interface>Fa0/0</cid:interface>
<cid:vrf_name>NONE</cid:vrf_name>
</sd:evIdsAlert>
</sd:events>
</env:Body>
</env:Envelope>
Output Sample
2017-11-28 22:29:41 UTC+0; eventid="15119009816528; hostId="R1"; severity="medium";
app_name=""; appInstanceId=""; signature="6009"; subSigid="0"; description="SYN Flood DOS";
attacker="192.168.100.1"; attacker_port="40784""; target="192.168.99.10"; target_port="4003;
protocol="tcp"; risk_rating="63"; target_value_rating=""; interface="Fa0/0";
interface_group=""; vlan=""↵
2017-11-28 22:29:44 UTC+0; eventid="15119009846539; hostId="R1"; severity="informational";
app_name=""; appInstanceId=""; signature="9007"; subSigid="0"; description="SYN to TCP 1234";
attacker="192.168.100.1"; attacker_port="43242""; target="192.168.99.10"; target_port="1234;
protocol="tcp"; risk_rating="18"; target_value_rating=""; interface="Fa0/0";
interface_group=""; vlan=""↵
NOTE The two samples are from different but similar alerts.
Chapter 143. Exchange (nxlog-xchg)
This add-on is available for purchase. For more information, please contact us.
Microsoft Exchange provides two types of audit logs: administrator audit logging and mailbox audit logging.
The nxlog-xchg add-on can be used to retrieve administrator audit logs and mailbox audit logs. These logs include
actions taken by users or administrators who make changes in the organization, mailbox actions, and mailbox
logins including access by users other than the mailbox owner. For more information, see Administrator audit
logging in Exchange 2016 and Mailbox audit logging in Exchange 2016 on TechNet.
nxlog-xchg periodically queries an Exchange server via Windows Remoting (WinRM) and writes the result to
standard output in JSON format for further processing by NXLog. The add-on is executed by NXLog via the
im_exec module, and may be configured on either the Exchange server itself or another system.
NOTE
The required steps may vary from those provided below based on the organization and domain
topology and configuration.
143.1. Requirements
Server side requirements include:
NOTE The server and client can reside on the same machine.
NOTE
WinRM remote login is only allowed for users in the local Administrator group or the Domain
Administrator group. The user created for login via WinRM must be a member of one of
these groups.
2. Windows Remoting (WinRM) will accept the connections from nxlog-xchg. By default, WinRM will listen on TCP
port 5985 for HTTP (insecure) requests. WinRM should be configured to listen for secure connections on
TCP/5986. Check if it is configured:
If the command above does not return any results, then on the Exchange server, from an elevated command
line (cmd), run the following command to enable WinRM HTTPS transport.
3. If there is an error message about the system not having an appropriate (server authentication) certificate,
issue one for the server or create a self-signed one. To create a self-signed certificate, open a PowerShell
window and run these commands.
NOTE
If you are having trouble creating a self-signed certificate (such as inaccessible private keys
in Windows 10 or Windows Server 2016), try using the Self-signed certificate generator from
Microsoft Script Center.
4. After the certificate has been prepared, open a PowerShell window and run:
5. After this it should not be necessary to run the quick config for WinRM and the HTTP listener can be removed
(assuming it is no longer needed otherwise).
6. The "Audit Logs" role most be added to the Active Directory user to access the "Search-AdminAuditLog" and
"Search-MailboxAuditLog" Exchange cmdlets.
PS> New-ManagementRoleAssignment -Name nxlog-xchg-mr -Role "Audit Logs" -User "Active Directory
User Name"
8. Mailbox audit logging can be enabled on a per user basis, using the Exchange Management shell. nxlog-xchg
respects the options configured in the Exchange server. To enable mailbox audit logging for a single user,
open an Exchange Management Shell and run:
To enable audit logging for all user mailboxes in the organization, run:
For more information about mailbox audit logging (including more logging options), see Enable or disable
mailbox audit logging for a mailbox on Microsoft Docs.
xchg in addition to those in the configuration file:
[WinRM]
Url=https://host.yourdomain.com:5986/wsman
User=winrmuser@yourdomain.com
Password=winrmuser_password
CheckCertificate=TRUE
[Exchange]
HostFQDN=exchange.yourdomain.com
ExchangeUser=ex_user@yourdomain.com
ExchangePassword=exuser_password
ExchangeAuth=KERBEROS
[Options]
SearchAdminLog=TRUE
SearchMailboxLog=TRUE
ResultSize=5000
Nxlog section:
SavePos
This optional boolean directive specifies whether the last record number should be saved when nxlog-xchg
exits. The default is TRUE.
PollInterval
This optional directive specifies the time (in seconds) between polls. Valid values are 3-3600; the default is 30
seconds.
WinRM section:
Url
This specifies the URL of the WinRM listener (for example,
https://exchangeserver.mydomain.com:5986/wsman).
User
This specifies the user that has permission to log on to the Exchange Server system.
Password
This should be set to the password of the user defined in User above.
Auth
The authentication method to use when establishing a WinRM connection (KERBEROS or NTLM). NTLM is the
default authentication method used if this is not set.
CheckCertificate
This optional boolean directive specifies whether the server certificate should be verified. The default is TRUE
(the certificate is validated).
Exchange section:
HostURI
This sets the full URI to use for the remote PowerShell connection (for example,
http://name.domain.tld/PowerShell/).
ExchangeUser
This specifies the user that has permission to query the Exchange Server.
ExchangePassword
This should be set to the password of the user defined in ExchangeUser above.
ExchangeAuth
The authentication method to use when establishing a connection to PowerShell on the Exchange server
(KERBEROS or NTLM). Kerberos is the default authentication method used if this is not set.
Options section:
QueryAdminLog
This optional boolean directive specifies whether the administrator audit log should be queried. The default is
TRUE (the administrator audit log is queried).
QueryMailboxLog
This optional boolean directive specifies whether the mailbox audit log should be queried. The default is
TRUE (the mailbox audit log is queried).
ResultSize
This optional directive specifies the maximum number of log entries to retrieve. The default is 5000 entries.
Example 716. Writing Exchange Logs to a File
This configuration uses the im_exec module to receive logs from nxlog-xchg, and writes them to file locally
with om_file.
nxlog.conf
1 <Input in>
2 Module im_exec
3 Command 'C:\Program Files (x86)\nxlog-exchange\nxlog-xchg.exe'
4 Arg -c
5 Arg C:\Program Files (x86)\nxlog-xchg\nxlog-xchg.cfg
6 </Input>
7
8 <Output out>
9 Module om_file
10 File "C:\\logs\\exchange_audit_log.txt"
11 </Output>
12
13 <Route ex>
14 Path in => out
15 </Route>
143.4. Performance
It is important to configure nxlog-xchg so that the server is neither polled too frequently (running nxlog-xchg too
often) nor too infrequently (requiring the collection of a very large result set). If PollInterval is properly
adjusted, there should not be any performance issues.
143.5. Troubleshooting
Nxlog-xchg does not launch from NXLog
Make sure the quotations in the im_exec block are correct. This can be tested by placing a simple batch script
(containing echo "Hello world", for example) into the same directory as nxlog-xchg.exe and calling that
batch file from im_exec.
No events received
If no events are being received, make sure the relevant logging is enabled in Exchange. For admin audit
logging run:
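The command referenced above is not reproduced here; the admin audit log configuration can be checked and, if necessary, enabled with the standard Exchange cmdlets (a sketch):

PS> Get-AdminAuditLogConfig | Format-List AdminAuditLogEnabled
PS> Set-AdminAuditLogConfig -AdminAuditLogEnabled $true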
Chapter 144. Microsoft Azure and Office 365
This add-on is available for purchase. For more information, please contact us.
This NXLog add-on can retrieve information about various user, admin, system, and policy actions and events
from Microsoft Azure and Office 365. Once configured, the add-on prints Syslog events, each with a JSON
payload, to standard output for processing by NXLog.
The add-on supports getting logs from the following reports, corresponding to the supported Microsoft REST-based
APIs:
• Azure Active Directory reports (based on Microsoft Graph API) – Sign-In events and directory audit log events.
• Office 365 Management Activity API – Azure Active Directory Audit events, Exchange Audit events, and
SharePoint Audit events using the Audit.AzureActiveDirectory, Audit.Exchange, and Audit.SharePoint parameters.
• Office 365 Service Communications API – Status, service, and message related events, using the CurrentStatus,
HistoricalStatus, Messages, and Services parameters.
For more information about the log sources, see the links below:
144.1. Prerequisites
To complete the steps in this section and collect logs from the above mentioned APIs, the prerequisites laid out
below will need to be met.
During the steps explained in this section you need to make a note of the following data:
• client_id
• tenant_domain <domainname>.onmicrosoft.com
• tenant_id
• certthumbprint
• certkeyfile <certkey.pem>
Some of the add-on arguments (parameters) require certain permissions set in MS Azure. They are listed in the
table below with a reference to the Microsoft documentation. Their configuration is detailed in the Parameters
section below.
Table 73. Required Permissions
• --enable-azure-ad-reports: uses the Microsoft Graph API v1.0; requires the AuditLog.Read.All and
Directory.Read.All Azure AD permissions (see the reference links in the Microsoft documentation).
• --service_communication_operations: requires an Office 365 license, or a license that includes it (see the
reference link).
• --management_activity_sources: requires an Office 365 license, or a license that includes it (see the
reference link).
For troubleshooting/debugging purposes, the list of active license SKUs can be retrieved through the
--license-details switch.
IMPORTANT: As Microsoft’s licensing information can be subject to change at any time, always double-check your
current requirements with the licensing/service plan documentation. The required licenses can be managed in the
Microsoft 365 admin center.
NOTE: The above table with the licensing requirements is for informational purposes only.
144.2.1. Installing the Microsoft Azure and Office 365 NXLog Add-On
1. Install the add-on with dpkg:
# dpkg -i nxlog-msazure-<version>.deb
# apt-get -f install
Once the new application has been registered, make note of the Application (client) ID (this will be the
client_id), as well as the Directory (tenant) ID (this will be tenant_id) on the Overview page for the new
application.
• AuditLog.Read.All
• Directory.Read.All
• ActivityFeed.Read
• ServiceHealth.Read
Once your permissions are set up and the Admin consent is granted, your permission list should look like the
one below.
The gencertkey.sh script depends on the openssl toolkit and the uuidgen program. Install the corresponding
packages if necessary.
On Debian-based platforms:
On Centos/RedHat platforms:
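The install commands for the two platform families above are, as a sketch (package names assumed: uuidgen ships in uuid-runtime on Debian-based systems and in util-linux on CentOS/RedHat):

# apt-get install openssl uuid-runtime    (Debian-based)
# yum install openssl util-linux          (CentOS/RedHat)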
Follow the steps below to generate the X.509 certificate and insert the relevant portion into the manifest file in
MS Azure:
1. Generate the certificate with the gencertkey.sh script on the computer where the add-on is installed.
$ ./gencertkey.sh
Generating a RSA private key
............+++++
................................................+++++
writing new private key to 'certkey.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:
ThumbPrint:0nFt3fB0JP7zuSmHaRQtmsFNYqo=
"keyCredentials": [
{
"customKeyIdentifier":"0nFt3fB0JP7zuSmHaRQtmsFNYqo=",
"keyId":"629ab88d-1059-454b-b258-4ca05b46dee4",
"type":"AsymmetricX509Cert",
"usage":"Verify",
"value":"MIIDXTCCAkWgAwIBAgIJAP+XrnwhAxjOMA0GCSqGSIb3DQEBCwUAMEUxCzAJB..."
}
],
Make note of the base64-encoded certificate fingerprint value after ThumbPrint: (certthumbprint), and
the keyCredentials portion (which will be used in the following steps).
2. In the App registration page in MS Azure, select Manifest on the left side and click Download.
3. Edit the downloaded manifest file and replace the "empty" keyCredentials section with the previously
generated output.
From
"keyCredentials": [],
To
"keyCredentials": [
{
"customKeyIdentifier":"0nFt3fB0JP7zuSmHaRQtmsFNYqo=",
"keyId":"629ab88d-1059-454b-b258-4ca05b46dee4",
"type":"AsymmetricX509Cert",
"usage":"Verify",
"value":"MIIDXTCCAkWgAwIBAgIJAP+XrnwhAxjOMA0GCSqGSIb3DQEBCwUAMEUxCzAJB..."
}
],
Follow the steps below to move the generated certificate files to their intended directory as well as make the
required permission changes:
1. Move the certificates you generated into the /opt/nxlog-msazure/conf directory. This directory is used
later on as a value for the --working_directory parameter.
$ mv cert* /opt/nxlog-msazure/conf/
2. Set the file ownership and permissions to be in agreement with the User and Group directives (NXLog runs
under the nxlog user and nxlog group by default).
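A typical way to do this (a sketch, assuming the default nxlog user and group and the key file from the earlier step):

# chown nxlog:nxlog /opt/nxlog-msazure/conf/cert*
# chmod 600 /opt/nxlog-msazure/conf/certkey.pem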
144.3. Parameters
Certain parameters need to be passed to the NXLog Microsoft Azure and Office 365 add-on as arguments in
order to achieve the desired outcome. These parameters can be passed to the add-on by using the Arg directive.
--client_id=
The Azure App registration Application (client) ID
--tenant_id=
The Azure App registration Directory (tenant) ID
--certthumbprint=
The certificate fingerprint value
--tenant_domain=
The domain name created in MS Azure AD <domainname>.onmicrosoft.com
--certkeyfile=
The certificate key file certkey.pem
--working_directory=
The path where the add-on is run, which is /opt/nxlog-msazure/conf by default
--enable-azure-ad-reports
Active Directory sign-in events and directory audit logs (based on Microsoft Graph API). This parameter does
not require any value to be passed to it.
--management_activity_sources=
Office 365 Management Activity API
--service_communication_operations=
Office 365 Service Communications API
--top=n
The top parameter works only with Azure Active Directory reports and events. It returns a subset of the
entries for the given report, consisting of the first n entries, where n is a positive integer. For example,
top=5 returns the 5 most recent audit report events. Where start_date and end_date can be used, they override
top, since top has lower priority.
--start_date=YYYY-MM-DDTHH:MM:SSZ|amonthago|aweekago|yesterday
--end_date=YYYY-MM-DDTHH:MM:SSZ|amonthago|aweekago|yesterday|now
The start_date and end_date parameters specify the time range of content to return. These parameters
work with all Office 365 reports and most of the Azure Active Directory reports. Where start/end ranges are
not supported, the add-on uses top. The amonthago, aweekago, yesterday, and now values are dynamic and
calculated in every loop.
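The dynamic date values can be illustrated with a short Python sketch (an illustration only, not the add-on's actual implementation; the 30-day month offset is an assumption):

```python
from datetime import datetime, timedelta, timezone

# Offsets for the dynamic keywords accepted by --start_date/--end_date.
# "amonthago" is approximated as 30 days for this sketch.
OFFSETS = {
    "now": timedelta(0),
    "yesterday": timedelta(days=1),
    "aweekago": timedelta(weeks=1),
    "amonthago": timedelta(days=30),
}

def dynamic_date(keyword):
    """Resolve a dynamic keyword to the YYYY-MM-DDTHH:MM:SSZ format."""
    moment = datetime.now(timezone.utc) - OFFSETS[keyword]
    return moment.strftime("%Y-%m-%dT%H:%M:%SZ")
```

Because the values are recomputed on each call, a looping collector re-evaluates them on every polling cycle, which matches the "calculated in every loop" behavior described above.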
--log_errors=path
For troubleshooting purposes, the --log_errors argument is available. The value of this parameter is a path
to a file where the add-on will write all its error messages.
NOTE: The Microsoft documentation lists API errors and responses in the Office 365 API errors and Microsoft
Graph error responses pages, respectively.
--add_syslog_header=true|false|yes|no
Enable or disable the Syslog header.
--infinity=true|false|yes|no
Indicates that the script should never stop and should pull logs in an endless loop. The default is true. This
can be set to false for special cases or debugging, when the script should run once and then exit.
--skip_state_file=true|false|yes|no
If true, the script will neither read from nor write to the state files. The default is false.
--sleep=n
The script will sleep n seconds between loops.
--verbose=true|false|yes|no
For debugging; if true, provides as much detail as possible about what the script is doing. The default is
false. In normal mode, the script should print only events, logs, and reports (data it retrieves from the APIs).
The script emits all diagnostic messages to standard error.
--license-details=true|false|yes|no
For troubleshooting/debugging purposes; if true, a list of active license SKUs will be retrieved. The default is
false.
Example 717. Azure Active Directory Events
This configuration collects all the Azure Active Directory report events, such as user creation, group
membership, permission changes and so on. The output provided by Microsoft is in JSON format.
nxlog.conf
1 <Input msazurepull>
2 Module im_exec
3 Command /opt/nxlog-msazure/bin/msazure-pull.sh
4 Arg --client_id=912497ba-9780-46bc-a6a6-3a56a4c14278
5 Arg --tenant_id=e681b493-14a8-438b-8bbf-d65abdc826c2
6 Arg --certthumbprint=D64Rm2IkRQxp26XK4Da7Bcbqu2o=
7 Arg --tenant_domain=contoso.onmicrosoft.com
8 Arg --certkeyfile=certkey.pem
9 Arg --working_directory=/opt/nxlog-msazure/conf
10 Arg --enable-azure-ad-reports
11 <Exec>
12 parse_syslog();
13 </Exec>
14 </Input>
Example 718. Office 365 Events
This configuration collects Office 365 related events, such as document creation, deletion, permission
changes and so on. The output provided by Microsoft is in JSON format.
nxlog.conf
1 <Input msazurepull>
2 Module im_exec
3 Command /opt/nxlog-msazure/bin/msazure-pull.sh
4 Arg --client_id=912497ba-9780-46bc-a6a6-3a56a4c14278
5 Arg --tenant_id=e681b493-14a8-438b-8bbf-d65abdc826c2
6 Arg --certthumbprint=D64Rm2IkRQxp26XK4Da7Bcbqu2o=
7 Arg --tenant_domain=contoso.onmicrosoft.com
8 Arg --certkeyfile=certkey.pem
9 Arg --working_directory=/opt/nxlog-msazure/conf
10 Arg --service_communication_operations=Services,CurrentStatus,HistoricalStatus,Messages
11 Arg --management_activity_sources=Audit.Exchange,Audit.SharePoint,Audit.AzureActiveDirectory
12 <Exec>
13 parse_syslog();
14 </Exec>
15 </Input>
The first NXLog configuration example above would look like the one below if it were invoked from a terminal
console. In this case, the received events would be continuously printed to the terminal.
$ /opt/nxlog-msazure/bin/msazure-pull.sh \
--client_id=912497ba-9780-46bc-a6a6-3a56a4c14278 \
--tenant_id=e681b493-14a8-438b-8bbf-d65abdc826c2 \
--certthumbprint=D64Rm2IkRQxp26XK4Da7Bcbqu2o= \
--tenant_domain=contoso.onmicrosoft.com \
--certkeyfile=certkey.pem \
--working_directory=/opt/nxlog-msazure/conf \
--enable-azure-ad-reports
Chapter 145. MSI for NXLog Agent Setup
This add-on can be downloaded from the nxlog-public/contrib repository, according to the license and terms
specified there.
This add-on provides an example for building an MSI package which can be used to bootstrap an NXLog agent on
a Windows system. Normally this would be used to set up the agent for management by NXLog Manager—it
installs a custom configuration and a CA certificate. The package can be installed alongside the NXLog MSI.
1. The Windows Installer XML Toolset (Wix) is required to build the custom MSI. Wix is free software available
for download from wixtoolset.org.
2. Install Wix. Make a note where the binary folder of Wix is located (containing the candle.exe and light.exe
executables, typically C:\Program Files (x86)\WiX Toolset v3.11\bin).
3. Save the add-on files in a folder of your choosing and make sure the path to the binary folder is correct in
the pkgmsi32.bat (or pkgmsi64.bat) script by editing the WIX_BUILD_LOCATION variable.
4. Add the custom agent-ca.pem and log4ensics.conf files in the folder.
6. Finally, execute either the pkgmsi32.bat or the pkgmsi64.bat script, depending on the targeted
architecture. While both of the resulting MSIs include platform-independent files, we strongly advise building
and installing the custom configuration MSI that matches the NXLog installation.
7. The script will proceed to build the MSI. Depending on the architecture selected, the result will be either
nxlog-conf_x86.msi or nxlog-conf_x64.msi.
8. The custom configuration MSI can now be deployed alongside the NXLog installer, using one of the same
methods (interactively, with Msiexec, or via Group Policy).
Chapter 146. Okta
This add-on is available for purchase. For more information, please contact us.
The Okta add-on can be used to pull events from Okta using their REST API. Events will be passed to NXLog in
Syslog format with the JSON event in the message field.
The script saves the current timestamp to a state file in order to properly resume when it is terminated. If the
state file does not exist, the script will collect logs beginning with the current time. To manually specify a starting
timestamp, pass it as an argument: ./okta-pull.pl --startdate="2014-10-29T17:13:24.000Z".
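The resume behavior described above can be sketched as follows (in Python, for illustration; the actual add-on is a Perl script, and the state-file name and format here are assumptions):

```python
import os
from datetime import datetime, timezone

def resolve_start_date(state_file, override=None):
    """Return the timestamp to start collecting events from."""
    if override is not None:
        # equivalent of passing --startdate on the command line
        return override
    if os.path.exists(state_file):
        # resume from the saved position
        with open(state_file) as f:
            return f.read().strip()
    # no state file: begin with the current time
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

def save_state(state_file, timestamp):
    """Persist the current position so the next run can resume."""
    with open(state_file, "w") as f:
        f.write(timestamp)
```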
This configuration uses the im_exec module to run the script, which connects to Okta and returns Syslog-
encapsulated JSON. The xm_syslog parse_syslog() and xm_json parse_json() procedures are used to parse
each event into internal NXLog fields. Additional modification to the fieldset can be added, as required, in
the Input instance Exec block.
For the sake of demonstration, all internal fields are then converted back to JSON and written to file.
nxlog.conf
1 <Extension _json>
2 Module xm_json
3 </Extension>
4
5 <Extension _syslog>
6 Module xm_syslog
7 </Extension>
8
9 <Input okta>
10 Module im_exec
11 Command /opt/nxlog-okta/bin/okta-pull.pl
12 <Exec>
13 parse_syslog();
14 parse_json($Message);
15 </Exec>
16 </Input>
17
18 <Output file>
19 Module om_file
20 File '/tmp/output'
21 Exec to_json();
22 </Output>
Chapter 147. Perlfcount
This add-on is available for purchase. For more information, please contact us.
The perlfcount add-on is a Perl script that can be used with NXLog to collect system information and statistics on
Linux platforms.
Chapter 148. Salesforce
This add-on is available for purchase. For more information, please contact us.
The Salesforce add-on provides support for fetching Event Log Files from Salesforce with NXLog. The script
collects Event Log Files from a Salesforce instance by periodically running SOQL queries via the REST API. The
events can then be passed to NXLog by different means, depending on how the data collection is configured.
For more information about the Event Log File API, see EventLogFile in the Salesforce SOAP API Developer Guide.
NOTE: The Event Logs feature of Salesforce is a paid add-on feature. Make sure this feature is enabled on the
Salesforce instance before continuing.
collect.conf.json
{
"log_level": "DEBUG",
"log_file": "var/collector.log",
"user": "user@example.com",
"password": "UxQqx847sQ",
"token": "ZsQO0k5gAgJch3mLUtEqt0K",
"url": "https://login.salesforce.com/services/Soap/u/39.0/",
"checkpoint": "var/checkpoint/",
"keep_csv": "True",
"output": "structured",
"header": "none",
"mode": "across",
"transport": "stdout",
"target": "file",
"limit": "5",
"delay": "3",
"request_delay": "3600"
}
A compact view of the command line options is shown below. Use salesforce.py -h to get help, including a
short explanation of the options.
salesforce.py usage
usage: salesforce.py [-h] [--config CONFIG] [--user USER]
[--password PASSWORD] [--token TOKEN] [--url URL]
[--checkpoint CHECKPOINT] [--keep_csv {True,False}]
[--output {json,structured}] [--header {none,syslog}]
[--mode {loop,across}] [--target TARGET] [--delay DELAY]
[--limit LIMIT] [--request_delay REQUEST_DELAY]
[--transport {file,socket,pipe,stdout}]
[--log_level {CRITICAL,ERROR,WARNING,INFO,DEBUG,NOTSET}]
[--log_file LOG_FILE]
148.2. Authentication and Data Retrieval
The user needs to set the authentication parameters (username, password, and token) so that the script can
connect to Salesforce and retrieve the Event Logs. The url parameter supplied with the sample configuration file
is correct at the time of writing but it may change in the future. The log_level and log_file parameters can be
used as an aid during the initial setup, as well as to identify problems during operation.
NOTE: It is not possible to find the security token of an existing profile. The solution is to reset it as
described in Reset Your Security Token on Salesforce Help.
Depending on your setup, the mode parameter can be set to loop so that the script will look for new events
continuously or to across so that once all the available events are retrieved the script will terminate. When in
loop mode, the request_delay can be configured for the script to wait the specified number of seconds before
requesting more events.
NOTE: Directories and files are created automatically when an event of that type is logged by Salesforce.
var/checkpoint/ApexExecution:
2018-02-08T00:00:00.000+0000.csv
2018-02-08T00:00:00.000+0000.state
var/checkpoint/LightningError:
2018-02-08T00:00:00.000+0000.csv
2018-02-08T00:00:00.000+0000.state
var/checkpoint/Login:
2018-02-08T00:00:00.000+0000.csv
2018-02-08T00:00:00.000+0000.state
var/checkpoint/Logout:
2018-02-08T00:00:00.000+0000.csv
2018-02-08T00:00:00.000+0000.state
var/checkpoint/PackageInstall:
2018-03-01T00:00:00.000+0000.csv
2018-03-01T00:00:00.000+0000.state
WARNING: If this directory structure is removed, the script will be unable to determine the state, and all
available events stored in your Salesforce instance will be retrieved and passed to NXLog again. However, after
testing and determining that everything is configured correctly, remember to delete the directory structure to
reset the state.
Once all the available events have been downloaded and the script determines that no other events have been
added, it will proceed to process them and produce the final output. The limit and delay parameters can be used
to throttle processing: limit caps the number of records per block, and delay sets the pause (in seconds)
between blocks.
The script will delete the CSV files once those are processed. However, the keep_csv parameter can be set to
True to preserve them.
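The limit/delay throttling described above can be pictured with a short sketch (illustrative only, not the add-on's actual code):

```python
import time

def process_throttled(records, limit, delay, handler):
    """Call handler on each record, in blocks of `limit` records,
    sleeping `delay` seconds between blocks."""
    for start in range(0, len(records), limit):
        for record in records[start:start + limit]:
            handler(record)
        if start + limit < len(records):
            time.sleep(delay)
```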
148.4. Data Format and Transport
The processed events can be presented in two different formats: either as structured output or as JSON. This can
be selected by setting the output parameter accordingly. Furthermore, a Syslog-style header can be added
before the event data by means of the header parameter. The output types are shown below.
Structured Output
CLIENT_IP="46.198.211.113" OS_NAME="LINUX"
DEVICE_SESSION_ID="33ddcf5f751fdaf4b6a010d73014710ed2f13e33" BROWSER_NAME="CHROME"
BROWSER_VERSION="64" USER_AGENT=""Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like
Gecko) Chrome/64.0.3282.186 Safari/537.36"" CLIENT_ID="" REQUEST_ID=""
SESSION_KEY="qomr/wgmbMU73iG6" DEVICE_ID="" CONNECTION_TYPE="" EVENT_TYPE="LightningError"
SDK_APP_VERSION="" SDK_APP_TYPE="" UI_EVENT_SOURCE="storage" SDK_VERSION="" UI_EVENT_SEQUENCE_NUM=""
LOGIN_KEY="5ujU+09kPSKatTxR" UI_EVENT_TYPE="error" PAGE_START_TIME="1519928816975" DEVICE_MODEL=""
USER_TYPE="Standard" ORGANIZATION_ID="00D1r000000rH0F" OS_VERSION=""
USER_ID_DERIVED="0051r000007NyeqAAC" UI_EVENT_ID="ltng:error" APP_NAME="one:one"
UI_EVENT_TIMESTAMP="1519928819334" USER_ID="0051r000007Nyeq" TIMESTAMP="20180301182702.187"
TIMESTAMP_DERIVED="2018-03-01T18:27:02.187Z" DEVICE_PLATFORM="SFX:BROWSER:DESKTOP"↵
JSON Output
{"CLIENT_IP": "Salesforce.com IP", "REQUEST_ID": "4GVCi4pxSjCESP-qby__7-", "SESSION_KEY": "",
"API_TYPE": "", "EVENT_TYPE": "Login", "SOURCE_IP": "46.198.211.113", "RUN_TIME": "143",
"LOGIN_KEY": "", "USER_NAME": "user@example.com", "CPU_TIME": "57", "BROWSER_TYPE": "Mozilla/5.0
(X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36",
"URI": "/index.jsp", "ORGANIZATION_ID": "00D1r000000rH0F", "USER_ID_DERIVED": "0051r000007NyeqAAC",
"DB_TOTAL_TIME": "47093446", "LOGIN_STATUS": "LOGIN_NO_ERROR", "USER_ID": "0051r000007Nyeq",
"TIMESTAMP": "20180302083919.878", "TLS_PROTOCOL": "TLSv1.2", "REQUEST_STATUS": "", "CIPHER_SUITE":
"ECDHE-RSA-AES256-GCM-SHA384", "TIMESTAMP_DERIVED": "2018-03-02T08:39:19.878Z", "URI_ID_DERIVED":
"", "API_VERSION": "9998.0"}
NOTE The samples above are not from the same event.
The formatted output can then be displayed in standard output, passed to another program by a named pipe,
saved to a file, or sent to another program using Unix Domain Sockets (UDS). This can be controlled by setting
the transport parameter to stdout, pipe, file, or socket respectively. When the transport is pipe, file, or socket the
target parameter can be used to set the name of the pipe, file, or socket.
A first scenario is that NXLog is running the script directly and consumes the data from the script. To do this, the
script should be running in loop mode, so that events are fetched periodically from Salesforce.
Example 721. Loop Mode
NXLog executes salesforce.py, which in turn collects events every hour, processes them, formats them as
JSON with a Syslog header, and forwards them to NXLog.
collect.conf.json
{
"log_level": "DEBUG",
"log_file": "var/collector.log",
"user": "user@example.com",
"password": "UxQqx847sQ",
"token": "ZsQO0k5gAgJch3mLUtEqt0K",
"url": "https://login.salesforce.com/services/Soap/u/39.0/",
"checkpoint": "var/checkpoint/",
"keep_csv": "True",
"output": "json",
"header": "syslog",
"mode": "loop",
"transport": "stdout",
"target": "file",
"limit": "100",
"delay": "3",
"request_delay": "3600"
}
nxlog.conf
1 <Extension _syslog>
2 Module xm_syslog
3 </Extension>
4
5 <Extension _json>
6 Module xm_json
7 </Extension>
8
9 <Input messages>
10 Module im_exec
11 Command ./salesforce.py
12 <Exec>
13 parse_syslog();
14 parse_json($Message);
15 </Exec>
16 </Input>
17
18 <Output out>
19 Module om_file
20 File "output.log"
21 </Output>
22
23 <Route messages_to_file>
24 Path messages => out
25 </Route>
A second scenario: set up NXLog to listen on a UDS for events and use either NXLog or an external scheduler to
run salesforce.py. In this case, salesforce.py runs in across mode.
WARNING: Be sure to provide ample time for the script to finish executing before the scheduler starts a new
execution, or use a shell script that prevents running multiple instances simultaneously.
collect.conf.json
{
"log_level": "DEBUG",
"log_file": "var/collector.log",
"user": "user@example.com",
"password": "UxQqx847sQ",
"token": "ZsQO0k5gAgJch3mLUtEqt0K",
"url": "https://login.salesforce.com/services/Soap/u/39.0/",
"checkpoint": "var/checkpoint/",
"keep_csv": "True",
"output": "structured",
"header": "none",
"mode": "across",
"transport": "socket",
"target": "uds_socket",
"limit": "100",
"delay": "3",
"request_delay": "3600"
}
nxlog.conf
1 <Extension exec>
2 Module xm_exec
3 <Schedule>
4 Every 1 hour
5 <Exec>
6 log_info("Scheduled execution at " + now());
7 exec_async("./salesforce.py");
8 </Exec>
9 </Schedule>
10 </Extension>
11
12 <Input messages>
13 Module im_uds
14 UDS ./uds_socket
15 UDSType stream
16 </Input>
17
18 <Output out>
19 Module om_file
20 File "output.log"
21 </Output>
22
23 <Route messages_to_file>
24 Path messages => out
25 </Route>
It is even possible to manually start salesforce.py in loop mode with a large request_delay and collect via
UDS (as shown above) without the xm_exec instance. Alternatively, set the transport to file and configure NXLog
to read events with im_file.
NOTE: Though events are captured in real time, Salesforce generates the Event Log Files during non-peak hours.