Cortex XSOAR Admin
5.5
paloaltonetworks.com/documentation
Contact Information
Corporate Headquarters:
Palo Alto Networks
3000 Tannery Way
Santa Clara, CA 95054
www.paloaltonetworks.com/company/contact-support
Copyright
Palo Alto Networks, Inc.
www.paloaltonetworks.com
© 2020 Palo Alto Networks, Inc. Palo Alto Networks is a registered trademark of Palo
Alto Networks. A list of our trademarks can be found at www.paloaltonetworks.com/company/
trademarks.html. All other marks mentioned herein may be trademarks of their respective companies.
Last Revised
May 5, 2020
Proxy
  Configure Proxy Settings
  Use NGINX as a Reverse Proxy to the Cortex XSOAR Server
    Use Engines Through the NGINX Reverse Proxy
    Install NGINX on Cortex XSOAR
    Generate a Certificate for NGINX
    Configure NGINX
Manage Data
  Reindex the Entire Database
  Reindex a Specific Index Database
  Reindex the Entire Database for a Distributed Database
  Reindex a Specific Index for a Distributed Database
  Free Up Disk Space with Data Archiving
  Migrate Data to Another Server
  Move Data Folders to Another Location on the Server
  Restore an Archived Folder
Users and Roles
  Users and Roles Overview
  Roles in Cortex XSOAR
    Define a Role
  Default Admin
  Self-Service Read-Only Users
    Configure the Server for Self-Service Read-Only Users
    Create the Self-Service Read-Only Users
    Create the Read-Only Dashboard
    Create the Read-Only Incident Type and Layout
  User Settings and Preferences
    Messages
    Details
    Preferences
    Notifications
  Shift Management
    Managing Shifts
  User Invitations
    Invite a User
  Integration Permissions
  Password Policy
    Create a Password Policy
    Edit a Default Password Policy
  Change the Administrator Password
  Authenticate Users with SAML 2.0
    Set Up Okta as the Identity Provider Using SAML 2.0
    Set Up Microsoft Azure as the Identity Provider
    Set Up ADFS as the Identity Provider Using SAML 2.0
  Configure User Notifications
  Set the Default Theme for New Users
  Configure a Remote Repository on the Production Machine
  Edit and Push Content to a Remote Repository
  Troubleshoot a Remote Repository Configuration
    Troubleshoot a Remote Repository Definition
    Troubleshoot Editing and Pushing Content
    Troubleshoot Content Issues
Engines
  Cortex XSOAR Engines Overview
    Engine Proxy
    Engine Architecture
    Engine Load-Balancing
    Engine Installation and Configuration
  Install Cortex XSOAR Engines
    Run the Engine as a Service on Windows
  Use an Engine in an Integration
  Manage Engines
  Configure Engines
    Edit the Engine Configuration
    Configure the Engine to Use a Web Proxy
    Configure the Engine to Call the Server Without Using a Proxy
    Configure the Number of Workers for the Server and Engine
    Configure Access to Communication Tasks Through an Engine
    Notify Users When an Engine Disconnects
    Remove the Cortex XSOAR Server From the Load-Balancing Group
  Remove an Engine
  Troubleshoot Cortex XSOAR Engines
    Troubleshoot Engine Upgrades
Docker
  Docker Installation
    Install Docker Enterprise Edition on Cortex XSOAR
    Install Docker Community Edition on Cortex XSOAR
    Install Docker Distribution for Red Hat on Cortex XSOAR
  Install Docker Images Offline
  Configure Python Docker Integrations to Trust Custom Certificates
  Docker Images in Cortex XSOAR
    Manage Docker Images
    Create a Docker Image in Cortex XSOAR
  Docker Hardening Guide
    Configure Memory Limit Support Without Swap Limit Capabilities
    Run Docker with Non-Root Internal Users
    Use a Docker Image for Python Scripts
    Configure the Memory Limitation
    Test the Memory Limit
    Limit Available CPU
    Configure the PIDs Limit
    Configure the Open File Descriptors Limit
    Troubleshoot Docker Networking Issues
  Run Docker with Non-Root Internal Users
Dashboards
  Dashboard Overview
  Create a Dashboard
  Add a Widget to a Dashboard
  Configure a Default Dashboard
  Share and Unshare a Dashboard
  Edit a Dashboard
Reports
  Reports Overview
  Chromium Installation for Reports
    Install Chromium on Fedora, RHEL, or CentOS
    Install Chromium on openSUSE and SUSE
    Install Chromium on Ubuntu or Debian
  Configure Cortex XSOAR to Use PhantomJS
  Create a Report
  Schedule a Report
    Schedule a Report Examples
  Create an Incident Summary Report
  Add a Widget to a Report
  Edit a Report
  Change the Report Logo
  Configure the Time Zone and Format in a Report
  Troubleshoot Reports
Widgets
  Widgets Overview
  Create a Widget in the Widgets Library
    Widget Parameters
  Create a Custom Widget Using a JSON File
    JSON File Widget Parameters
    JSON File Widget Example
  Create a Custom Widget Using an Automation Script
    Script-Based Widget Examples Using Automation Scripts
    Create a Widget from an Indicator
  Edit a Widget
  Create a Used Percentage Widget for a Disk Partition
  Saved by Dbot (ROI) Widget
    Customize the Currency Symbol in the Saved by Dbot Widget
Manage Indicators
  Understand Indicators
    Feed Integrations
    Indicators Page
    Indicator Reputation
    Indicator Types
    Indicator Fields
    Exclusion List
    Create a Feed-Triggered Job
    Manage the Indicator Timeline
  Auto Extract Indicators
    Auto Extract Modes
    How to Define Auto Extract
Incidents
  Incident Lifecycle
    Planning
    Configure Integrations
    Classification Mapping
    Pre-Processing
    Incident Created
    Running Playbooks
    Post-Processing
  Incidents Management
    Fetch Incidents from an Integration Instance
    Classification and Mapping
    Create a Search Query for Incidents
    Create a Widget From an Incident
    Customize Incident View Layouts
    Incident Investigation
    War Room Overview
    Work Plan
    Link Incidents
    Investigate Using the Canvas
    Incident Actions
    Evidence Handling
    Incident Tasks
    Incident Fields
    Incident De-Duplication
    Post-Processing for Incidents
    Incident Access Control Configuration
Playbooks
  Playbooks Overview
    Task Types
    Inputs and Outputs
    Field Mapping
  Manage Playbook Settings
  Playbook Inputs and Outputs
  Playbook Tasks
    Create a Conditional Task
    Communication Tasks
    Playbook Task Fields
  Extend Context
    Extend Context in a Playbook Task
    Extend Context Using the Command Line
  Generic Polling
    Prerequisites
    Inputs
    Generic Polling Example
    Limitations of Generic Polling
  Filters and Transformers
    Create Filters and Transformers in a Playbook
Lists
  Work With Lists
    Use Cases
    List Commands
    Create a List
  Set the List Separator Character
Agents
  Agents Overview
  Shared Agents
    Configure a Shared Agent Instance
    Install a Shared Agent
  D2 Agent
    Install a D2 Agent
  Troubleshoot a Remote Installation (Windows)
  Agent Tools
    Configure Cortex XSOAR to Use PowerShell
    D2 Agent Script Commands
    Return the Memory Dump File Script
    Run a Batch File Using Agent Tools
    View All Running Processes Script
Logs
  Audit Trail
  Send the Audit Trail to an External Log Service
Cortex XSOAR Overview
Cortex XSOAR combines security orchestration, incident management, and interactive
investigation into a seamless experience. The orchestration engine is designed to automate
security product tasks and weave in human analyst tasks and workflows. Cortex XSOAR is
powered by DBot, which learns from real-life analyst interactions and past investigations to
help SOC teams with analyst assignment suggestions, playbook enhancements, and the best next
steps for investigations. With Cortex XSOAR, security teams can build future-proof security
operations that reduce MTTR, create a consistent and audited incident management process, and
increase analyst productivity.
Cortex XSOAR Licenses
• Cortex XSOAR License Types
• Cortex XSOAR Users
• Cortex XSOAR Community Edition: General free usage, for evaluating Cortex XSOAR and for partner
development.
• Cortex XSOAR Threat Intel Management: Limited to customers who are migrating from MineMeld and do
not require case management for security orchestration, automation, and response (SOAR).
• Cortex XSOAR Starter Edition: Relevant for customers who require case management for SOAR.
• Cortex XSOAR: Relevant for customers with case management for SOAR and threat intelligence needs.
Add a License
Follow these steps to add a new license to your Cortex XSOAR instance.
Version 6.1 reaches end-of-life two months after the 6.5 release date. The estimated EOL date
for 6.1 is January 1, 2022.
General Guidelines
You should use the latest releases, as these include bug fixes, performance improvements, stability
enhancements, and may include security patches.
You can continue using a version that is end-of-life. However, when encountering an issue
that requires Customer Support involvement, you may be asked to upgrade to a supported
version before assistance can be provided.
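As a quick illustration of this policy, the sketch below checks a server version against its estimated end-of-life date, using the 6.1 date quoted above. The `ESTIMATED_EOL` mapping and function name are illustrative, not part of the product:

```python
from datetime import date

# Estimated end-of-life dates per version. Only the 6.1 date stated
# in this guide is included; add entries for other versions as needed.
ESTIMATED_EOL = {
    "6.1": date(2022, 1, 1),
}

def is_past_eol(version: str, today: date) -> bool:
    """Return True if the given server version is past its estimated EOL."""
    eol = ESTIMATED_EOL.get(version)
    if eol is None:
        return False  # no EOL recorded; treat as supported
    return today > eol

print(is_past_eol("6.1", date(2022, 6, 1)))  # True
```

A check like this can be run before opening a support case, since assistance may require first upgrading to a supported version.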
Layouts: All custom layouts and the incident fields being used.
Integrations: Metadata for all custom integrations. The integration script is not collected.
Integration instances: Metadata for all integration instances, such as the instance name, brand,
and category. Private information, such as credentials, is not collected.
Most-used commands: The command names of the most-used commands, per incident type.
Custom Fields: All custom fields, including incident fields, indicator fields, and evidence fields.
Incident Types: All custom incident types and corresponding data, such as the associated playbook.
Incidents: Metadata for all incidents, including the number of incidents per incident type and the
amount of time each incident stage took to resolve.
Incident Metadata: The number of incidents for each incident type and the average time of each stage.
Incident Actions: Incident creation, incident updates, whether the incident owner assignment
suggestion was used, file linkage, and files uploaded to the War Room.
Incident Cluster Usage: Modifications to the similarity filter and changes to the time frame.
Custom Indicators: All custom indicator types and corresponding data, such as type and related
incidents.
Indicator Reputations: All indicator types, including name, regex, reputation command, and
reputation script.
Reports: Metadata for all scheduled reports, including name, schedule time, tags, and paper
information.
Exclusion List: A summary of exclusion list rules and the exclusion count per indicator type.
Users: All user metadata. Sensitive user data, such as user name, email address, and phone number,
is hashed.
Canvas: The total number of canvases and the number of nodes and connections for each canvas.
User Actions: User updates, logins, updated credentials, login method, and color theme.
New Incident: Incident source, incident type, playbook name, and playbook ID.
Playbook Run: Incident source, incident type, playbook name, playbook ID, and whether it is a
sub-playbook.
Command Run: Incident source, incident type, command, integration brand, and trigger method
(manual or automatic).
Incident Close: Incident source, incident type, open duration, and timer fields and values.
Manual Task Start: Task type, incident type, playbook name, playbook ID, and task name.
Manual Task Completion: Task type, incident type, playbook name, playbook ID, and task name.
To-Do Task: The total number of To-Do tasks and whether the DBot suggestion was selected.
Incidents
A potential security threat that SOC administrators identify and remediate. There are several incident
triggers, including:
• SIEM alerts
• Mail alerts
• Security alerts from third-party services, such as SIEMs, mailboxes, data in CSV format, or the
Cortex XSOAR RESTful API
Cortex XSOAR includes several out-of-the-box incident types, and users can add custom incident types
with custom fields, as necessary.
Incident Fields
Incident fields are used to accept and populate incident data. You create fields for information that you
know will come from third-party integrations and that you want to map into incidents.
Incident Lifecycle
Cortex XSOAR is an orchestration and automation system used to bring all of the various pieces of your
security apparatus together. Using Cortex XSOAR, you can define integrations with your 3rd-party security
and incident management vendors. You can then trigger events from these integrations that become
incidents in Cortex XSOAR. Once the incidents are created, you can run playbooks on these incidents to
enrich them with information from other products in your system, which helps you complete the picture. In
most cases, you can use rules and automation to determine if an incident requires further investigation or
can be closed based on the findings. This enables your analysts to focus on the minority of incidents that
require further investigation.
Integrations
Third-party tools and services that the Cortex XSOAR platform uses to orchestrate and automate SOC
operations. In addition to third-party tools, you can create your own integration using the Bring Your Own
Integration (BYOI) feature.
The following lists some of the integration categories available in Cortex XSOAR. The list is not exhaustive,
and highlights the main categories:
• Analytics and SIEM
• Authentication
• Case Management
• Data Enrichment
• Threat Intelligence
• Database
• Endpoint
• Forensics and Malware Analysis
• IT Services
• Messaging
Integration Instance
A configuration of an integration. You can have multiple instances of an integration, for example, to connect
to different environments. Additionally, if you are an MSSP and have multiple tenants, you could configure a
separate instance for each tenant.
Playbooks
Cortex XSOAR Playbooks are self-contained, fully documented prescriptive procedures that query, analyze,
and take action based on the gathered results. Playbooks enable you to organize and document security
monitoring, orchestration, and response activities. There are several out-of-the-box playbooks that cover
common investigation scenarios. You can use these playbooks as-is, or customize them according to your
requirements. Playbooks are written in YAML file format using the COPS standard.
A key feature of Playbooks is the ability to structure and automate security responses, which were
previously handled manually. You can reuse Playbook tasks as building blocks for new playbooks, saving
you time and streamlining knowledge retention.
Automations
The Automation section is where you manage, create, and modify scripts. These scripts perform a specific
action and consist of commands associated with an integration. You write scripts in either Python or
JavaScript. Scripts are used as part of tasks, which are used in playbooks and as commands in the War Room.
Scripts can access all Cortex XSOAR APIs, including access to incidents, investigations, share data to the
War Room, and so on. Scripts can receive and access arguments, and you can password protect scripts.
The Automation section includes a Script Helper, which provides a list of available commands and scripts,
ordered alphabetically.
Commands
Cortex XSOAR has two different kinds of commands:
• System commands - Commands that enable you to perform Cortex XSOAR operations, such as clearing
the playground or closing an incident. These commands are not specific to an integration. System
commands are entered in the command line using a /.
• External commands - Integration-specific commands that enable you to perform actions specific to an
integration. For example, you can quickly check the reputation of an IP address. External commands are
entered in the command line using a !. For example, !ip.
War Room
The War Room is a collection of all investigation actions, artifacts, and collaboration pieces for an incident.
It is a chronological journal of the incident investigation. You can run commands and playbooks from the
War Room and filter the entries for easier viewing.
Playground
The playground is a non-production environment where you can safely develop and test automation scripts,
APIs, commands, and more. It is an investigation area that is not connected to a live (active) investigation.
To erase a playground and create a new one, in the Cortex XSOAR CLI run the /playground_create
command.
Jobs
You can create scheduled events in Cortex XSOAR using jobs. Jobs are triggered either by time-triggered
events or feed-triggered events. For example, you can define a job to trigger a playbook when a specified
TIM feed finishes a fetch operation that included a modification to the list.
Authentication
Top Use Cases:
• Use credentials from an authentication vault to configure instances in Cortex XSOAR
(save credentials in Settings > Integrations > Credentials). The integration should include the
isFetchCredentials parameter, and other integrations that use credentials from the vault should
have the ‘Switch to credentials’ option
• Lock/Delete Account – Give option to lock account (credentials), and unlock/undelete
• Reset Account - Perform a reset password command for an account
• List credential names – Do not post the actual credentials. (For example – Credential name: McAfee
ePO, do not show actual username and password.)
• Lock Vault – In case of an emergency (if the vault has been compromised), allow the option to lock +
unlock the whole vault
• Step-Up authentication - Enforce Multi Factor Authentication for an account
Authentication Integration Example: CyberArk AIM
Case Management
Top Use Cases:
• Create, get, edit, close a ticket/issue, add + view comments
• Assign a ticket/issue to a specified user
• List all tickets, filter by name, date, assignee
• Get details about a managed object, update, create, delete
• Add and manage users
Case Management/Ticketing Integration Example: ServiceNow
Email Gateway
Top Use Cases:
• Get message – Download the email itself, retrieve metadata, body
• Download attachments for a given message
• Manage senders – Block/ Allow specified mail senders
• Manage URLs – Block/ Allow the sending of specified URLs
• Encode/ Decode URLs in messages
• Release a held message (The gateway can place suspicious messages on hold, and sometimes they would
need to be released to the receiver)
Email Gateway Integration Example: MimeCast
Endpoint
Top Use Cases:
• Fetch Incidents & Events
• Get event details (from specified incident)
• Quarantine File
• Isolate and contain endpoints
• Update Indicators (Network, hashes, etc.) by policy (can be block, monitor) – deny list
• Add indicators to the exclusion list
• Search for indicators in the system (Seen indicators and related incidents/events)
• Download file (based on hash, path)
• Trigger scans on specified hosts
• Update .DAT files for signatures and compare existing .DAT file to the newest one on the server
• Get information for a specified host (OS, users, addresses, hostname)
• Get policy information and assign policies to endpoints
Endpoint Integration Examples: Cortex XDR, Tanium and Carbon Black Protection
Vulnerability Management
Top Use Cases:
• Enrich asset – get vulnerability information for an asset (or a group of assets) in the organization
• Generate/Trigger a scan on specified assets
• Get a scan report including vulnerability information for a specified scan and export it
• Get details for a specified vulnerability
• Scan assets for a specific vulnerability
Vulnerability Management Integration Example: Tenable.io
• Using the search box: searches for incidents, entries, evidence, investigations, and indicators in Cortex
XSOAR. The search box appears in the top right-hand corner of every page. You can either type free text
or use the search query format (use the arrow keys to assist you in the search). For example,
incident.severity:Low searches for all incidents that have low in the severity category.
• Using a general search: for example, searching for a table in the Users tab, a widget, or a task in a
playbook.
Input Description
Add text: Type any text. The results show all data where one of the words appears. For example, the search low virus returns all data where either the string low or the string virus appears.
and: Searches for data where all conditions are met. For example, status:Active and severity:High finds all incidents with an active status and a high severity.
"": An empty value.
-: Excludes from any search. For example, in the Incidents page, -status:closed -category:job searches for all incidents that are not closed and for categories other than jobs.
Relative time: Relative time in natural language can be used in search queries, for example "half an hour ago", "1 hour ago", "5 minutes ago", "10 days ago", "five days ago", "5 seconds ago", "two weeks ago", "a month ago", "a few months ago", "one year ago", or "a week ago". Time filters < and > can be used when referring to a specified time, such as dueDate:>="2018-03-05T00:00:00 +0200". When adding some fields, such as Occurred, you can enter the date from the calendar. You can also filter the date when the results are displayed.
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
Key Value
STEP 3 | When prompted, review the required permissions and click Allow.
System Requirements
Cortex XSOAR requires the following software and hardware. Ensure you meet all minimum system
requirements.
• Cortex XSOAR Server
• Cortex XSOAR Engine
• Web Browsers
• Required URLs
Amazon Linux 2
Hardware Requirements
Web Browsers
Cortex XSOAR supports the following web browsers:
Internet Explorer 11
Required URLs
You need to allow the following URLs for Cortex XSOAR to operate properly.
Disk Usage
The required disk space for each incident varies based on the number of integrations and the size and
complexity of the playbook. We simulated a range of incident counts and measured their respective sizes
on disk. For the simulation, we used out-of-the-box integrations and an example phishing playbook (see the
Simulation Incidents and Required Disk Size table).
The incidents were generated using genuine phishing emails of various sizes, which averaged 4KB.
The below values show the disk space for incidents after ingestion and playbook run, without Demisto
data compression. A plain incident before ingestion and playbook run averages 3KB in the file system, and
depends on the data received from the SIEM.
You should not compare disk usage tests between versions. Different tests are performed for
each version.
5.0 22 GB 90 GB 225 GB
Benchmark Results
The results are the average time for each test.
The numbers specified were processed without data compression. The results might vary based on
the machines' hardware specifications, system configurations, Docker version, and the type of actions
performed.
Cortex XSOAR utilizes free memory and available resources to enable faster system performance, including
caching, container manipulation, and more.
Single-server Tests
50 814ms 2m49s
50 0.5s 2m12s
Asset Path
Binaries /usr/local/demisto
Data /var/lib/demisto
Logs /var/log/demisto
Configuration /etc/demisto.conf (will not be created if defaults are selected during installation)
Installation log /tmp/demisto_install.log
Prerequisites
Verify the following information and requirements before you install Cortex XSOAR.
• Your deployment meets the minimum system requirements.
• You have root access.
• The production server has Python 2.7 or 3.x.
• (CentOS 8) Install the tar utility, which is required to extract the installation files, by running the sudo yum install tar command.
STEP 1 | Download the server package from the link that you received from Cortex XSOAR Support.
demistoserver.xxxx.sh
STEP 4 | Run the chmod +x demistoserver-xxxx.sh command to make the .sh file executable.
STEP 5 | In a web browser, go to https://<serverURL>:<port> to verify that Cortex XSOAR was
successfully installed.
Flag Description
STEP 1 | Create a local yum repo with the required dependencies for your deployment type.
STEP 3 | Run the command for your deployment to install Cortex XSOAR.
STEP 4 | Run the repoquery -a --installed command to verify that the required dependencies were
successfully installed.
Debian dependencies
The following dependencies are required for Debian and Ubuntu deployments.
• systemd-services
• smbclient
• xmlsec1
• rpm
• libcap2-bin
• file
• libfontconfig1
• libexpat1
• libpng12-0
• libfreetype6
• openjdk-8-jre
• git
Content
• Verify that your content is up to date. Navigate to the Playbooks section and click Check for new
content.
• Verify that you see automation scripts in the Automations section.
• Verify that you see playbooks in the Playbooks section.
• Verify that you see dashboard widgets in the My Dashboards section.
{
"Security":{
"CertFile":"",
"KeyFile":""
}
}
The Cortex XSOAR server does not support PKCS#8 encrypted PEM files. To validate that the file
is supported, check that the "DEK-Info" header exists.
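The header check can be scripted. A minimal sketch, which writes a sample traditional encrypted-key header to a temporary file for illustration (in practice you would grep your real key file, for example cert.key):

```shell
# Write a sample traditional ("RSA PRIVATE KEY") encrypted header to a
# temporary file for illustration; a real key would be checked in place.
key=$(mktemp)
printf '%s\n' \
  '-----BEGIN RSA PRIVATE KEY-----' \
  'Proc-Type: 4,ENCRYPTED' \
  'DEK-Info: AES-128-CBC,0123456789ABCDEF0123456789ABCDEF' > "$key"

# A traditional encrypted PEM carries a DEK-Info header; an encrypted PKCS#8
# key ("BEGIN ENCRYPTED PRIVATE KEY") does not.
if grep -q 'DEK-Info' "$key"; then
  echo 'traditional PEM: supported'
else
  echo 'no DEK-Info header: likely PKCS#8, not supported if encrypted'
fi
rm -f "$key"
```

An unencrypted PEM key also lacks the DEK-Info header, so this check only distinguishes the two encrypted formats.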
When using a Safari browser, the self-signed certificate must be added to the OS Keychain.
STEP 1 | In an SSH session to the Cortex XSOAR server, generate the private key by running the
following command.
openssl genrsa -out DemistoPrivateKey.key 2048
The RSA private key is generated.
STEP 2 | Generate the Certificate Signing Request (CSR) by running the following command.
openssl req -new -sha256 -key DemistoPrivateKey.key -out DemistoPrivateCert.csr
STEP 4 | Replace the existing internal certificate in /usr/local/demisto/cert.pem and key in
/usr/local/demisto/cert.key with the newly generated private certificate and key.
M5: m5.4xlarge, m5.12xlarge, m5.24xlarge
M4: m4.4xlarge, m4.10xlarge, m4.16xlarge
C5: c5.4xlarge, c5.9xlarge, c5.18xlarge
C4: c4.4xlarge, c4.8xlarge
R4: r4.4xlarge, r4.8xlarge, r4.16xlarge
Distributed Database Deployment
This multi-tier configuration enables you to scale your environment and manage load resources. Cortex
XSOAR supports two types of multi-tier configurations. In both multi-tier configuration types, there is a
single app server.
Each database server, main and nodes, must have its own disaster recovery server configured.
Although a distributed database deployment might enhance performance, there are various
factors that must be considered. This might not be the preferred deployment method for
you. Contact your Cortex XSOAR Customer Success manager before you implement a
distributed database deployment.
• You must ensure that ports 443 and 50001 are open from the app server to the database
servers. In addition, port 443 needs to be open while you are initially registering a
database node.
• Each database server, main and nodes, must have its own disaster recovery configured.
Main database
Database Node
STEP 1 | To install the database server, run the sudo ./demistoserver-X.sh -- -db-only -db-secret=<your_db_secret> -y command.
Parameter Description
demistoserver-X The name of the Cortex XSOAR installer, where X is the version and build number.
db-only The flag indicating that only the database server is installed.
db-secret A 10-character string that you define during this installation; you provide the same string when you install the app server.
y The flag that completes the installation silently by answering yes to the remaining
installation questions. Default settings are applied where applicable.
STEP 2 | To install the app server, run the sudo ./demistoserver-X.sh -- -server-only -db-secret=<your_db_secret> -db-address=<IP or hostname> -external-address=<IP or hostname> -y command.
Parameter Description
demistoserver-X The name of the Cortex XSOAR installer, where X is the version and build number.
server-only The flag indicating that only the application server is installed.
db-secret The 10-character string that you defined when you installed the database server.
db-address The database server's public IP address or hostname. Do not include the http or https
prefix.
external-address The app server’s public IP address or hostname. Do not include the http or https
prefix.
y The flag that completes the installation silently by answering yes to the remaining
installation questions. Default settings are applied where applicable.
STEP 3 | Copy the demistonode-5.0-X.sh file to the machine on which you want to install the node.
STEP 4 | Run the node installation using the following command: sudo ./demistonode-X.sh -- -external-address=<IP or hostname> -y.
Parameter Description
demistonode-X The name of the Cortex XSOAR database node installer, where X is the
version and build number.
external-address The node’s public IP address or hostname. This is the address that the
node uses to register with the cluster and that the app server uses to
communicate with the node. Do not include the http or https prefix.
y Answers all remaining installation questions with a yes (using default settings)
and enables you to continue the installation silently.
STEP 5 | In Cortex XSOAR, verify that the new database node appears in the list of remote databases.
Live Backup only backs up the database servers. As no information is stored on the
application servers, there is no need to back those up. In the event that an application server
fails, you can install another application server.
Server actions are mirrored in real-time. There may be pending actions due to high server load, connectivity
issues, and so on. Note the following:
• Live Backup uses a single active server and a single standby server.
• Active / Active configuration is not currently supported.
• Each host retains its own distinct IP address and hostname.
• Neither host is aware of which node is truly active. Therefore, failover is not dynamic,
meaning that an administrator must manually make a node active.
In the event of a server failover, engines dynamically reconnect to the active host.
If there is ever uncertainty about whether a host that is presently down or stopped was in
an active state before it went offline, it is recommended that you put the presently active
node into a standby state before starting the Cortex XSOAR service on the other host. You
can then make it active again after you have confirmed whether the host you are starting is
already in active mode.
STEP 2 | Select the database node for which you want to configure Live Backup and click Edit Live
Backup Configuration.
Trust server certificate On: certificates are not checked. Off: certificates
are checked.
STEP 6 | Copy the installation package to the machines on which you want to install the live backup
servers. Ensure that the backup server has a different IP address from the active server.
Parameter Description
do not start server The flag indicating that the server should not be
started.
STEP 8 | Verify that the passive server is accessible from the active server through port 443 (or any
other port configured as a listening port). Make sure that there are no firewalls that might drop
communication.
STEP 10 | Create a tarball file of the following necessary files and folders on the active server to be
copied to the passive server.
• /var/lib/demisto/data
• /var/lib/demisto/artifacts
• /var/lib/demisto/attachments
• /var/lib/demisto/systemTools
• /var/lib/demisto/d2_server.key
• /usr/local/demisto/cert*
• /usr/local/demisto/demisto.lic
To create the file, use the following command, which preserves demisto:demisto ownership and
file permissions:
tar --ignore-failed-read -pczf demistoBackup.tgz /var/lib/demisto/data /var/lib/demisto/artifacts /var/lib/demisto/attachments /var/lib/demisto/systemTools /var/lib/demisto/d2_server.key /usr/local/demisto/cert* /usr/local/demisto/demisto.lic
STEP 11 | Copy the created tarball file (demistoBackup.tgz) to the passive server using
either scp or a tool that you prefer. For example, scp demistoBackup.tgz
root@<yourBackupServerIPorHostname>:/root
STEP 12 | On the passive server, extract the backup tarball file with the following command (original file
permissions and ownership will be preserved): tar -C / -xzpvf demistoBackup.tgz
If the procedure was successful, you will see the following information populated in the table in Settings >
Advanced > Remote Databases.
Property Value
STEP 1 | Manually verify that the main database really is down (as opposed to a connectivity issue for a
particular app server). Make sure it will not come back online later.
1. Determine if the main database’s host machine is down and cannot be restarted.
2. If the main database’s host machine is up, log into it and check if the demisto service is down and
cannot be restarted.
3. If the demisto service is up, rule out a network or firewall issue by checking if the main database’s
machine is accessible from the app server's machine.
STEP 3 | Navigate to the URL of the passive server and log in as an administrator. You are presented
with the following page:
STEP 5 | In the Switch Hosts dialog window, type Switch Hosts in the text box.
The database monitoring dashboard should appear and the database server is now active.
When the system is back online, you will probably want to set up a new standby server for your main
database. See Configure a Live Backup for a Distributed Database Overview for details.
STEP 1 | On the application server, navigate to Settings > Advanced > Remote Databases.
STEP 2 | Select the database server you want to transition, and click Switch Host.
STEP 3 | In the Switch Hosts dialog window, type Switch Hosts in the text box.
STEP 4 | Click the Yes, switch hosts button to commit the change.
It may take several minutes for the switch to occur and be reflected in the page.
If you used this procedure to recover from a database failure, you will probably want
to set up a new standby server for that database. See Configure a Live Backup for a
Distributed Database Overview for details.
Parameter Description
Port The port through which you are connecting. Do not include
the http or https prefix.
DB-SECRET The 10-character string that you defined when you installed
the main database server.
Your Cortex XSOAR Engines are currently configured to connect to the previous
configuration of the Cortex XSOAR platform, which you have now converted to the database
server. You need to change the engine configuration so that it connects to the newly installed
Cortex XSOAR application server.
Parameter Description
demistoserver-X The name of the Cortex XSOAR installer, where X is the version and
build number.
db-only The flag indicating that only the database server is installed.
db-secret A 10-character string that you will need to provide for the app
server.
STEP 2 | Install the Cortex XSOAR app server using the sudo ./demistoserver-X.sh -- -server-only -db-secret=<your_db_secret> -db-address=<IP or hostname> -external-address=<IP or hostname> -y command.
demistoserver-X The name of the Cortex XSOAR installer, where X is the version and
build number.
db-secret A 10-character string that you will need to provide for the app server.
external-address The app server's public IP address or hostname. Do not include the
http or https prefix.
STEP 4 | Check the status of the engine using the sudo systemctl status d1 command.
The engine logs are available at /var/log/demisto/d1.log
STEP 1 | Log in to the app server and databases (main, node1, node2...) via SSH.
STEP 2 | Stop all Cortex XSOAR services in the following order: app server, node databases, main
database.
sudo service demisto stop
STEP 6 | When the main database is up, start the node databases.
sudo service demisto start
STEP 1 | Copy the installer to each of the machines in the distributed database environment.
STEP 2 | Stop the app server using the command sudo service demisto stop.
STEP 3 | Stop the database servers using the command sudo service demisto stop on the main
database and all secondary databases.
STEP 4 | To upgrade the database servers, run the sudo ./demistoserver-X.sh -- -y command on
the main database and all secondary databases.
STEP 5 | To upgrade the app server, run the sudo ./demistoserver-X.sh -- -y command on the app server.
STEP 6 | Start the database servers using the command sudo service demisto start on the main
database and all secondary databases.
STEP 7 | Start the app server using the command sudo service demisto start.
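The ordering in the steps above matters: the app server stops first and starts last. A minimal dry-run sketch of the sequence, where run() only echoes the planned commands; the hostnames (app1, maindb, node1, node2) and the installer name are placeholders, not values from this guide:

```shell
# Dry-run sketch: echo the planned command sequence instead of executing it.
run() { echo "$*"; }

APP=app1
DBS="maindb node1 node2"

run ssh "$APP" "sudo service demisto stop"          # 1. stop app server first
for db in $DBS; do run ssh "$db" "sudo service demisto stop"; done
for db in $DBS; do run ssh "$db" "sudo ./demistoserver-X.sh -- -y"; done
run ssh "$APP" "sudo ./demistoserver-X.sh -- -y"    # upgrade app server
for db in $DBS; do run ssh "$db" "sudo service demisto start"; done
run ssh "$APP" "sudo service demisto start"         # start app server last
```

Replacing the run() body with real execution (removing the echo) would perform the upgrade; the dry run lets you review the order first.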
Configure Proxy Settings
Proxy settings can be configured globally in Cortex XSOAR by adding them as a server configuration.
Generally, when you need a proxy for Cortex XSOAR, you also need a proxy for Docker. For
information about how to configure Docker to use a proxy, see the Docker documentation.
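For reference, on systemd-based hosts the Docker daemon typically reads its proxy settings from a service drop-in, per the Docker documentation. The sketch below prints the drop-in content; the proxy address is a placeholder, and writing the real file at /etc/systemd/system/docker.service.d/http-proxy.conf requires root:

```shell
# Print the systemd drop-in content Docker reads proxy settings from.
# The proxy address below is a placeholder, not a value from this guide.
cat <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
```

After installing the drop-in, run sudo systemctl daemon-reload and sudo systemctl restart docker so the daemon picks up the new environment.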
When using a BlueCoat proxy, ensure you encode the values correctly.
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
STEP 1 | Run one of the following commands according to your Linux system:
• RedHat/Amazon: sudo yum install nginx
• Ubuntu: sudo apt-get install nginx
STEP 2 | (Optional) Verify the NGINX installation by running the following command:
sudo nginx -v
STEP 1 | To use OpenSSL to generate a self-signed certificate run the following command:
sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/nginx/cert.key -out /etc/nginx/cert.crt
STEP 2 | When prompted, follow the on-screen instructions to complete the required fields.
STEP 1 | Open the following NGINX configuration file with your preferred editor:
/etc/nginx/conf.d/demisto.conf
upstream demisto {
server DEMISTO_SERVER:443;
}
server {
# Change the port if you want NGINX to listen on a different port
listen 443;
ssl_certificate /etc/nginx/cert.crt;
ssl_certificate_key /etc/nginx/cert.key;
ssl on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/demisto.access.log;
location / {
proxy_pass https://demisto;
proxy_read_timeout 90;
}
}
STEP 4 | Verify you can access Cortex XSOAR by browsing to the NGINX server host.
Reindex the Entire Database
Follow these steps to reindex the entire database.
STEP 1 | Stop the Cortex XSOAR service using the appropriate command for your OS.
• systemctl stop demisto
• sudo service demisto stop
STEP 5 | Start the Cortex XSOAR service using the appropriate command for your OS.
• systemctl start demisto
• sudo service demisto start
STEP 6 | Log in to your Cortex XSOAR instance and verify that the reindex process was successful.
All of your data should appear, for example, incidents, playbooks, automations, and so on.
STEP 1 | Stop the Cortex XSOAR service using the appropriate command for your OS.
• systemctl stop demisto
• sudo service demisto stop
STEP 3 | Run the server and be sure to specify the index as an argument.
sudo /usr/local/demisto/server -restore-index-name=indexName -public /usr/local/demisto/dist -stdout -conf /etc/demisto.conf
STEP 4 | Log in to your Cortex XSOAR instance and verify that the reindex process was successful.
STEP 5 | When a message appears stating Server up and running, good luck to us all or the
Cortex XSOAR UI displays, verify that all the data is present from the reindexing, and stop the
process by pressing Ctrl+C or Cmd+C.
STEP 7 | Start the Cortex XSOAR service using the appropriate command for your OS.
• systemctl start demisto
• sudo service demisto start
STEP 2 | Stop the demisto instance on the main database and all secondary databases.
sudo systemctl stop demisto
STEP 3 | Delete the index folder on all databases using the following command.
sudo rm -rf /var/lib/demisto/data/demistoidx
STEP 7 | Log in to your Cortex XSOAR instance and verify that the reindex process was successful.
All of your data should appear, for example, incidents, playbooks, automations, and so on. If there is a
problem, contact the Cortex XSOAR support team.
STEP 2 | Log in to the main database machine and all secondary database machines.
STEP 3 | Stop the demisto instance on the main database and all secondary databases.
sudo systemctl stop demisto
STEP 4 | On each database machine, run the server and be sure to specify the index as an argument.
sudo /usr/local/demisto/server -restore-index-name=indexName -public /usr/local/demisto/dist -stdout -conf /etc/demisto.conf
STEP 5 | Log in to your Cortex XSOAR instance and verify that the reindex process was successful.
STEP 6 | When a message appears stating Server up and running, good luck to us all or the
Cortex XSOAR UI displays, verify that all the data is present from the reindexing, and stop the
process by pressing Ctrl+C or Cmd+C.
STEP 8 | Start the main database machine and all secondary database machines.
sudo systemctl start demisto
Although the folders reside in /var/lib/demisto/data/, do not save the backup folders
under /var/lib/demisto/.
The following data folders and files can be found in this folder.
• demisto.db - The database for all playbooks and automations (everything not related to incidents and insights)
• demistoidx - The system's index data
• partitionsData - Data for incidents, insights, and entries, split up at month resolution
The following is an example of how the folders and filenames will appear in your system.
$ tree /var/lib/demisto/data
├── demisto.db
├── demistoidx
│   ├── accounts
│   │   ├── index_meta.json
│   │   └── store
...
│   ├── entries_082017
│   │   ├── index_meta.json
│   │   └── store
│   ├── entries_092017
│   │   ├── index_meta.json
│   │   └── store
│   ├── entries_102017
│   │   ├── index_meta.json
│   │   └── store
│   ├── evidences
│   │   ├── index_meta.json
│   │   └── store
│   ├── incidents_082017
│   │   ├── index_meta.json
│   │   └── store
│   ├── incidents_092017
│   │   ├── index_meta.json
│   │   └── store
│   ├── incidents_102017
│   │   ├── index_meta.json
│   │   └── store
│   ├── investigations_082017
│   │   ├── index_meta.json
│   │   └── store
│   ├── investigations_092017
│   │   ├── index_meta.json
STEP 1 | Stop the Cortex XSOAR service using the following command.
$ sudo service demisto stop
STEP 4 | Move the data you want to archive to the archive directory using the following command,
which moves all folders that have an mmyyyy suffix.
mv /var/lib/demisto/data/**/*_<date_to_archive>* /var/lib/demisto-archive/archived-2019
For example:
mv /var/lib/demisto/data/**/*_092019* /var/lib/demisto-archive/
STEP 5 | Create the compressed archive of your selected files and folders using the following tarball
command.
$ tar -cvzf demisto-2019-archive.tar.gz /var/lib/demisto-archive/archived-2019
STEP 6 | Start the Cortex XSOAR service using the following command.
$ sudo service demisto start
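The archive steps above can be collected into one hedged sketch for a single mmyyyy suffix. The commands are echoed for review rather than executed, and note that the ** glob in the real mv command requires bash with globstar enabled.

```shell
#!/bin/sh
# Sketch of the archive procedure above for one mmyyyy suffix.
# Commands are echoed, not executed; paths match the procedure.
archive_month() {
    suffix="$1"     # e.g. 092019
    dest="$2"       # e.g. /var/lib/demisto-archive/archived-2019
    echo "+ sudo service demisto stop"
    echo "+ mv /var/lib/demisto/data/**/*_${suffix}* ${dest}/"
    echo "+ tar -cvzf demisto-${suffix}-archive.tar.gz ${dest}"
    echo "+ sudo service demisto start"
}

archive_month 092019 /var/lib/demisto-archive/archived-2019
```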
STEP 3 | Copy the following files and directories from the old server to the new server.
• /var/lib/demisto
• cert.key and cert.pem under /usr/local/demisto
Make sure that ownership of the directories and files is set to demisto:demisto.
STEP 4 | Start the new server and wait for the server to complete the indexing process.
{
"Server":{
"HttpsPort":"443"
},
"folders":{
"lib":"{new path}/var/lib/demisto"
},
"DB":{
"DatabaseDir":"{new path}/var/lib/demisto/data",
"IndexDir":"{new path}/var/lib/demisto/data"
}
}
STEP 4 | Start the Cortex XSOAR service using the appropriate command for your OS.
sudo service demisto start
STEP 5 | (Optional) Verify that the process was performed successfully and remove the original directory
/var/lib/demisto.
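Moving the data directory as described above can be sketched as follows. The new root path and the use of rsync are illustrative assumptions, not part of the documented procedure, and the commands are echoed rather than executed.

```shell
#!/bin/sh
# Sketch of relocating the data directory to a new path matching the
# DatabaseDir/IndexDir configuration values. Path and rsync are illustrative.
relocate_plan() {
    new_root="$1"   # hypothetical new mount point, e.g. /mnt/bigdisk
    echo "+ sudo service demisto stop"
    echo "+ sudo rsync -a /var/lib/demisto/ ${new_root}/var/lib/demisto/"
    echo "+ sudo chown -R demisto:demisto ${new_root}/var/lib/demisto"
    echo "+ sudo service demisto start"
}

relocate_plan /mnt/bigdisk
```

The chown step reflects the requirement earlier in this section that directories and files keep demisto:demisto ownership.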
STEP 1 | Stop the Cortex XSOAR service using the appropriate command for your OS.
• systemctl stop demisto
• sudo service demisto stop
STEP 4 | Restore the folder using the following command, where folderName is the name of the
archive file to restore.
tar -C archive -xvzf folderName
STEP 5 | Move the idx data back to the original demistoidx folder using the following command.
mv archive/*2017 data/demistoidx
STEP 6 | Move the partitions back to the original partitionsData folder using the following command.
mv archive/*2017.db data/partitionsData
STEP 7 | Start the Cortex XSOAR service using the appropriate command for your OS.
• systemctl start demisto
• sudo service demisto start
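The restore steps above can be sketched as one script for a 2017 archive. The tarball name is illustrative, and the commands are echoed for review rather than executed.

```shell
#!/bin/sh
# Sketch of the restore procedure above. Commands are echoed, not executed.
restore_plan() {
    tarball="$1"    # illustrative archive name
    echo "+ sudo service demisto stop"
    echo "+ tar -C archive -xvzf ${tarball}"
    echo "+ mv archive/*2017 data/demistoidx"
    echo "+ mv archive/*2017.db data/partitionsData"
    echo "+ sudo service demisto start"
}

restore_plan demisto-2017-archive.tar.gz
```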
93
94 CORTEX XSOAR ADMINISTRATOR’S GUIDE | Users and Roles
© 2020 Palo Alto Networks, Inc.
Users and Roles Overview
Cortex XSOAR uses role-based access control (RBAC) for controlling user access. RBAC helps manage
access to Cortex XSOAR components, so that users, based on their roles, are granted minimal access
required to accomplish their tasks.
You can manage the following settings and roles in the USERS AND ROLES tab:
• View and manage different roles and access permissions in the Roles tab. You can add as many roles as
required and change their permission levels, as described in Roles in Cortex XSOAR.
• View and manage different users in the Users tab. You can view the user’s details such as name, email
address, last log in, whether they have been locked out, and so on. You can also manage the user’s
password, unlock their account, disable, enable, and remove their account.
• Invite users and manage invitations, as described in User Invitations. After the user has accepted the
invitation you can manage their role in Cortex XSOAR.
• Assign roles to commands at the integration instance level. This means if you have multiple instances of
the same integration, you can assign different roles (permission levels) for the same command in each
instance. For more information, see Integration Permissions.
• View details of actions taken in Cortex XSOAR in the Audit trail.
• Set a password policy, as described in Password Policy.
You can also authenticate users with SAML 2.0, using Okta, Azure, and so on, as described in Authenticate
Users with SAML 2.0.
You can add as many roles as you require by clicking New. To create a new role, see Define a Role;
follow the same steps when editing a role. When defining a role, you can add permissions, map SAML
and AD roles, define shift periods, and so on.
Permissions
You can view and change the following permission levels as required:
Permission Description
Shifts
If you want to manage shift periods for users, including who is on call and to whom to assign incidents,
you can define a role for a specific shift period and then assign that shift to a user.
Define a Role
Cortex XSOAR comes with three roles with default permissions. You can add as many new roles as
required and combine them with other roles, or map them to single sign-on groups.
STEP 2 | In the Role name field, type the name for the new role.
Incident table actions: Limits table actions in the Incidents page, such as
delete, edit, close, and so on.
Settings: You can set the permission level generally for all
settings or split them according to the following:
• Users: includes invitations and editing
permissions.
• Integrations: whether a user can add, edit, or
delete instances.
• Credentials: whether a user can add, edit, or
delete credentials.
STEP 4 | In the Page Access section, select the pages you want the user to have access to.
STEP 5 | To assign the role to an active directory group, in the AD Roles Mapping section, from the drop
down list, select the group as required.
STEP 6 | To assign a role to a single sign on group, in the SAML Roles Mapping section, from the drop
down list, select the group as required.
STEP 7 | If you want to associate the role with another role, in the Nested Roles section, from the drop
down list, select the nested role, as required.
The Nested Role overrides any settings you select in the Roles tab.
STEP 8 | To add a shift period of work to the role, in the Shifts field, click + Add Shift and define the
required period.
Weekly shifts start on Sunday and are specified in the UTC time zone.
STEP 2 | Select a user who has an Administrator role and click Roles.
STEP 6 | Enter the key dashboards.read.only.users and type a comma-separated list of the names
of the dashboards you created for the self service read-only users.
STEP 4 | In the Role Name field, type the name for the new role. You do not need to assign it any
permissions.
STEP 7 | Enter the user’s email address and select the role you created and click Invite.
STEP 8 | Click the Roles tab and delete the role you created.
Be sure to share the dashboard with the read-only user after you create it. For information about creating
dashboards, see Create a New Dashboard.
STEP 2 | Create an Incident Type for the self service read-only user. Be sure to configure the playbook
to run automatically.
STEP 5 | Hover over the tab you want the self service read-only user to view and click the gear icon.
STEP 7 | In the Viewing Permissions window, select the Permit users with no roles to view this tab
option.
STEP 8 | Customize this incident tab to only include widgets with data read-only users can access. Such
information can be Case Details, Timeline Information, Attachments, and War Room entries.
STEP 9 | Test and validate this incident tab layout as a read-only user to ensure it displays correctly and
without an insufficient permission error message.
Messages
The Messages tab is where you view all messages, notifications, incidents, and tags you have been
mentioned in. The date is located to the right of each message. At the top of the page is a search bar where
you can search for specific messages, notifications, and more.
Details
The Details tab is where you update your personal information, including the email address, phone number,
and password.
Drop any image here (or click to browse): The image to use in Cortex XSOAR.
Preferences
The Preference tab is where you customize your personal experience with Cortex XSOAR.
Script Editor Style: The style of the editor to use in Automation and BYOI.
Sign me out of all other sessions (this session remains open): When clicked, your account is
automatically signed out of all sessions it is signed in to, except the current session, which
remains signed in and open.
Sign me out of all sessions: When clicked, your account is automatically signed out of all active
sessions you are logged in to, including the current session, which is signed out and closed.
Notifications
The Notifications tab enables you to select which notifications to receive and the method for receiving the
notifications (email, mobile, or Slack).
If you want to consider only on-call users, run the getOwnerSuggestions command.
Managing Shifts
You can define shifts for various roles and then add them to automations, playbooks and incidents.
STEP 2 | In the Shifts field, click Add Shift and add the required period.
Weekly shifts start on Sunday and are specified in the UTC time zone format.
For example, we create a role called First Shift and add a shift starting on Sunday and ending Monday.
STEP 4 | (Optional) To add the role to a user, in the Users tab, select the user you want to add.
STEP 6 | From the drop down list select the role you created.
STEP 8 | (Optional) You can view on-call users details in a dashboard by adding the required widgets.
Action Description
Delete the invite: If the user has not accepted the invitation, you can delete it so that the user
cannot use this invitation to join the Cortex XSOAR environment.
Invite a User
You can invite one user at a time and add their roles as required. The invitation is sent by email to the user’s
email address, or you can copy the invite URL and send it directly. An invite is valid for one week.
Ensure that you configure an Email integration, such as Gmail, EWS, Mail Sender, and so on.
You may need to configure an external host name if users connect to Cortex XSOAR through one.
STEP 3 | Type the email address of the user you want to add.
STEP 4 | From the drop down list, select at least one role to assign to the user.
If you want to view or edit integration permissions, go to Settings > USERS AND ROLES > Integration
Permissions . You can see a list of all the enabled integrations in Cortex XSOAR. Under each integration you
can see the following:
• Commands: a list of all commands for the integration.
• Instance: a list of all instances for the integration.
• Permitted roles: a drop down list of roles you can assign to the command.
Users that do not have command permissions cannot do the following:
• Run the command from the CLI (!).
• Complete pending tasks in a Work Plan that run the command.
• Edit arguments for playbook tasks that have this command.
• Select the command when editing a playbook.
• Select a script if using a reputation command.
STEP 4 | When selecting unlock, choose one of the following options to unlock the user’s account:
• By Admin only: only administrators can manually unlock user accounts.
• Automatically: users can unlock themselves after a specified time.
Locked out users cannot use API keys. Cortex XSOAR has a delay mechanism for multiple failed logins.
However, unlike the lockout mechanism, this system is not suitable for preventing automated brute-
force attacks. It is useful for preventing accidental lockouts.
STEP 3 | Add the keys and values, as described in Default Password Policy Keys.
{
"Server":{
"HttpsPort":"443"
},
"db":{
"index":{
"entry":{
"disable":true
}
}
},
"limit":{
"docker":{
"memory":true
}
},
"password":{
"policy":{
"default":{
"Enabled":true,
"MinLowercaseChars":4,
"MinUppercaseChars":4,
"ExpireAfter":4,
"ExpireUnit":"day",
"PreventRepetition":true,
"MaxFailedLoginAttempts":4,
"SelfUnlockAfterMinutes":4
}
}
}
}
MinPasswordLength: Default is 8.
MinLowercaseChars: Default is 1.
MinUppercaseChars: Default is 1.
MinDigitsOrSymbols: Default is 1.
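As a minimal illustration, assuming the same nested key layout as the larger example earlier in this section, a policy that only tightens the length and digit requirements while leaving the other keys at their defaults might look like:

```json
{
    "password": {
        "policy": {
            "default": {
                "Enabled": true,
                "MinPasswordLength": 12,
                "MinDigitsOrSymbols": 2
            }
        }
    }
}
```

The specific values 12 and 2 are examples only; choose values that match your organization's password policy.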
{
"users": [{
"username": "newadmin",
"password": "veryStrongPassword!",
"email": "admin@example.com",
"phone": "+650-123456",
"name": "New Admin Dude",
"roles": {
"demisto": [
"Administrator"
]
}
}]
}
STEP 2 | Save the file and restart the Cortex XSOAR server by running the systemctl restart
demisto command.
The file is removed when Cortex XSOAR restarts.
STEP 3 | Log in to Cortex XSOAR by using the new administrator credentials, as created in step 1.
STEP 4 | Change the password for the current administrator and log out.
STEP 5 | Log in to Cortex XSOAR using the current administrator credentials, including the new
password.
STEP 5 | To add users to the group, select the group name you created in step 4.
STEP 4 | From the Create a New Application Integration window, in the Platform field, select Web.
STEP 7 | From the General Settings section, in the App name field, type a name for the application and
click Next.
STEP 12 | Continue with Configure the SAML 2.0 Integration for Okta.
Parameter Value
The Group Attribute Statement parameters define which groups to associate with Cortex XSOAR and which
groups are to be mapped to Cortex XSOAR roles. In this example, add a group called Everyone.
If you use memberof as a group attribute statement, do not also use memberof as a single-user
attribute statement. You cannot have both single user and group user attributes.
STEP 6 | To verify that the settings are successful, in the instance settings, click Get service provider
metadata.
Attribute Description
Service Provider Entity ID The URL of your Cortex XSOAR server (also known as an ACS URL).
In the format: https://yourdomain.com/saml
Idp metadata URL URL of your organization’s IdP metadata file. You can find this in
the Sign On tab in Okta or when defining an Okta application, as
described in Define the Okta Application to authenticate Cortex
XSOAR.
IdP metadata file Your organization’s IdP metadata file. You either need to add the IdP
metadata URL or the file.
IdP SSO URL The URL of the IdP application that corresponds to Cortex XSOAR.
You can copy and paste the IdP SSO URL in Okta, when clicking
View Setup Instructions.
Attribute to get username Attribute in your IdP for the user name.
Attribute to get email Attribute in your IdP for the user's email address.
Attribute to get first name Attribute in your IdP for the user's first name.
Attribute to get last name Attribute in your IdP for the user's last name.
Attribute to get phone Attribute in your IdP for the user's phone number.
Attribute to get groups Attribute in your IdP for the groups of which the user is a member.
Default role Role to assign to the user when they are not a member of any group.
RelayState Only used by certain IdPs. If your IdP uses relay state, you need to
supply the relay state.
Sign request and verify response Method for the IdP to verify the user sign-in request using the IdP
signature vendor certificate.
Identity Provider private key Private key for your IdP, in PEM format. Created locally by the user
who wants to use SAML. The matching public key is uploaded to
Okta.
Do not map SAML groups to Cortex XSOAR roles: SAML groups will not be mapped to Cortex XSOAR roles.
STEP 3 | In the SAML Roles Mapping field, specify one or more SAML groups to map to the Cortex
XSOAR role.
In the following example, you want to add the group, called “Everyone”, which has been defined in the
Group Attributes Statement field in Okta:
In Cortex XSOAR, in the SAML Roles Mapping field, add the following:
STEP 1 | From the home page, select Azure Active Directory > Enterprise applications > New
Application.
STEP 4 | If you have not created any users or groups, go to Home > Azure Active Directory and select
the following:
• For users, select Users > New user and create or invite a user.
• For groups, select Groups > New Group and type the group information as required.
STEP 5 | From the Name of the Application Overview window, in the Getting Started section, click
Assign users and groups.
STEP 6 | In the Name of the Application Users and groups window, click Add user.
STEP 4 | In the Basic SAML Configuration section, add the Identifier (Entity ID) and Reply URL
(Assertion Consumer Service URL).
Use the format https://<XSOAR Server FQDN>/saml
Ensure the attribute names match the names in Cortex XSOAR, when defining the instance.
STEP 8 | In the Advanced options section, select the Customize the name of the group claim check box.
STEP 12 | You can now add an instance in Cortex XSOAR, as described in Configure the SAML 2.0
Integration for Azure.
STEP 6 | To verify that the settings are successful, in the instance settings, click Get service provider
metadata.
STEP 7 | To map Azure groups, continue with Map Azure Groups to Cortex XSOAR Roles.
Attribute Description
Service Provider Entity ID The URL of your Cortex XSOAR server (also known as an ACS URL).
In the format: https://yourdomain.com/saml
Idp metadata URL URL of your organization’s IDP metadata file. You can copy this from
the App Federation Metadata URL in the SAML Signing Certificate
in Azure.
IdP metadata file Your organization’s IdP metadata file. You either need to add the Idp
metadata URL or the file.
IdP SSO URL The URL of the IdP application that corresponds to Cortex XSOAR.
You can copy this from the Login URL field in the SAML Signing
Certificate section.
Attribute to get username Attribute in your IdP for the user name. Value: nameIdentifier
Attribute to get email Attribute in your IdP for the user's email address. Value: Email
Attribute to get first name Attribute in your IdP for the user's first name. Value: FirstName
Attribute to get last name Attribute in your IdP for the user's last name. Value: LastName
Attribute to get phone Attribute in your IdP for the user's phone number. Value: Phone
Attribute to get groups Attribute in your IdP for the groups of which the user is a member.
Value: memberOf
Default role (for IdP users without groups): Role to assign to the user when they are not a member
of any group. For example, Analyst.
RelayState Only used by certain IdPs. If your IdP uses relay state, you need to
supply the relay state.
Use system proxy settings Select the check box to use proxy settings.
Compress encode URL (ADFS): (Mandatory) Select the check box to compress encode the URL (ADFS).
Otherwise, you may receive a Decoding Flat error during connection.
Service identifier (ADFS): Add the appid value, which can be found at the end
of the IdP metadata URL. For example, https://login.microsoftonline.com/934a6d32-9550be/federationmetadata/2007-06/federationmetadata.xml?appid=b0331331-f15b-4a32-9f48-19158beb0340.
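Since the appid sits at the end of the metadata URL's query string, it can be pulled out with a small string-handling helper; the URL below is illustrative (the tenant segment is a placeholder), and this is safe to run because it only parses text.

```shell
#!/bin/sh
# Extract the appid value from an IdP metadata URL (pure string handling).
appid_from_url() {
    # Print everything after "appid=" in the URL's query string.
    printf '%s\n' "$1" | sed -n 's/.*[?&]appid=\([^&]*\).*/\1/p'
}

appid_from_url "https://login.microsoftonline.com/tenant/federationmetadata/2007-06/federationmetadata.xml?appid=b0331331-f15b-4a32-9f48-19158beb0340"
# prints b0331331-f15b-4a32-9f48-19158beb0340
```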
Do not map SAML groups to Cortex XSOAR roles: SAML groups are not mapped to Cortex XSOAR roles.
Default roles are assigned and you can select them later.
STEP 3 | In the SAML Roles Mapping field, type the Object ID that appears in Azure.
For example, in Azure, we created a group, called Cortex Admins. Note the Object ID below:
STEP 2 | In the tree in the left panel, right-click Service and select Edit Federation Service Properties.
STEP 3 | Click the General tab and confirm that the DNS entries and certificates names are correct.
STEP 5 | The Add Relying Party Trust Wizard screen appears. Click Start.
STEP 10 | (Optional) In the Configure Certificate page, you can configure the claims encryption.
STEP 12 | In the Configure URL page, select Enable support for the SAML 2.0 Web SSO protocol, and
enter the Cortex XSOAR server URL followed by /SAML.
STEP 14 | In the Configure Identifiers page, add the Relying party trust identifier. The identifier can be a
friendly name, the same as the Display name, or the application URL. This identifier is used to
STEP 16 | In the Choose Access Control Policy page, select an access control policy for the
authentication portal. In this example, we choose .
STEP 18 | In the Ready to Add Trust page, verify that all the settings are correct.
STEP 1 | From the right menu pane of the Relying Party Trusts, click Edit Claim Issuance Policy
STEP 3 | In the Add Transform Claim Rule Wizard, select Transform an Incoming Claim from the drop
down list.
STEP 5 | In the Configure Claim Rule page, type the claim rule name WindowsAccountName, which
passes the user login name in AD, and select the Windows account name for the Incoming
and Outgoing claim types.
STEP 7 | Add another claim rule which will pass the AD user account attributes to Cortex XSOAR. This
step is required to map the user group membership, full name, email, phone and other LDAP
attributes.
1. From the right menu pane of the Relying Party Trusts, click Edit Claim Issuance Policy
2. Click Add Rule.
3. In the Add Transform Claim Rule Wizard, select Send LDAP Attributes as Claims from the drop
down list.
4. Click Next.
5. In the Configure Claim Rule page, type a claim rule name, select Active Directory from the Attribute
store drop down list and map the required fields. Note that the user group attribute is mandatory if
you wish to map the user group to the Cortex XSOAR user role.
STEP 8 | Open PowerShell and make sure the IDP Sign-on page is enabled
STEP 9 | Verify that the ADFS IDP Sign-on page is working by browsing to the ADFS service portal URL,
in our example: https://demistodev.local/adfs/ls/idpinitiatedsignon.aspx
STEP 10 | Continue with Configure the SAML 2.0 Integration for ADFS.
STEP 8 | To verify that the settings are successful, in the instance settings, click Get service provider
metadata.
Attribute Description
Service Provider Entity ID The URL of your Cortex XSOAR server (also known as an ACS URL).
In the format: https://yourdomain.com/saml
Attribute to get email Attribute in your IdP for the user's email address.
Attribute to get user name Attribute in your IdP for the user's user name.
Attribute to get first name Attribute in your IdP for the user's first name.
Attribute to get last name Attribute in your IdP for the user's last name.
Attribute to get groups Attribute in your IdP for the groups of which the user is a member.
Default role Role to assign to the user when they are not a member of any group.
Users can be assigned to a default role at Cortex XSOAR in case
there is no mapping between their AD group membership and a
Cortex XSOAR server role.
Service Identifier (ADFS) The ADFS relying party identifier to which Cortex XSOAR redirects
the user for the first SSO login.
STEP 3 | In the SAML Roles Mapping field, specify one or more SAML groups to map to the Cortex
XSOAR role.
STEP 3 | Select the items for which you want to receive a notification, and the method through which
you want to receive that notification, such as Email and Mobile.
Disaster Recovery and Live Backup Overview
Live Backup enables you to mirror your production server to a backup server. In a disaster recovery
situation, you can easily convert your backup server to be the production server.
Server actions are mirrored in real-time. There might be pending actions due to high server load,
connectivity issues, and so on. Note the following:
• Live Backup uses a single main server and a single standby server. Beyond these, additional servers are
not currently supported.
• Active/Active configuration is not currently supported.
• Each host retains its own distinct IP address and host name.
• Neither host has any awareness of which node is truly active. Therefore, failover is not dynamic,
meaning that making a node active must be done manually, by an administrator.
In the event of a server failover, engines dynamically reconnect to the active host.
If there is ever uncertainty about whether a host that is presently down or stopped was in
an active state before it went offline, it is recommended that you put the presently active
node into a standby state before starting the Cortex XSOAR service on the other host. You
can then make it active again after you have confirmed whether the host you are starting is
already in active mode.
To configure the live backup environment, see Configure the Live Backup Environment.
The following scenarios describe how to test the DR environment and deal with active server failures:
• DR Scenario: Testing the DR Environment
• DR Scenario: Unrecoverable Active Server Failure
• DR Scenario: Unrecoverable Standby Server Failure
When you install the Cortex XSOAR server and start it for the first time, you can use a configuration file
to transition between DR states, as described in Transition Between DR States Through the Configuration
File.
If you need to upgrade your live backup environment, see Upgrade the Live Backup Environment.
For details about the relationship between engines and disaster recovery, see Cortex XSOAR Engines and
Disaster Recovery. For information about host names, DNS, and disaster recovery, see Host Names, DNS,
and Disaster Recovery.
Host Names, DNS, and Disaster Recovery
Consider the following about host names, DNS and DR:
• When configuring Live Backup, each Cortex XSOAR server should have its own unique host name and IP
address.
• You may require analysts to always navigate to the same host name when accessing Cortex XSOAR.
In this scenario, configure a separate DNS record that points to the active Cortex XSOAR server. In the
event of a server failover, you must manually repoint this DNS record to the IP of the newly active
Cortex XSOAR server.
• It is critical that the TTL of the DNS record be set to a zero value. If it is higher, analysts are not able to
access the active server using the shared host name until the TTL of the record expires and the DNS
record is refreshed in the cache. This could take more than an hour.
• If you do not require a single URL to access Cortex XSOAR, when a server failover occurs, you might
point your browser to the URL of the newly-active Cortex XSOAR server.
Configure the Live Backup Environment
Live Backup enables you to mirror your production server to a backup server, and in disaster recovery
scenarios to easily convert your backup server to be the production server.
Before you start, ensure that you save the disaster recovery configurations before you copy all files.
STEP 1 | Go to Settings > About > Troubleshooting > Server Configurations and do the following:
1. Verify that the External Host Name is correct.
2. Click Add Server Configuration.
3. Add the following key and value.
Key Value
ui.livebackup True
STEP 2 | Go to Settings > Advanced > Backups and in the Live Backup field, select ON.
Parameters Value
Trust server certificate (unsecured): When ON, certificates are not checked. When OFF,
certificates are checked.
STEP 4 | On another machine with a different host name or IP address, install Cortex XSOAR using the
-- -dr -do-not-start-server flag, by typing the following command:
# ./demistoserver-xxxx.sh -- -dr -do-not-start-server
STEP 5 | Verify that the backup server is accessible from the production server through port 443 (or any
other port configured as a listening port). Ensure that there are no firewalls that might drop
communication.
STEP 6 | Stop the Cortex XSOAR server, by typing the following command:
sudo service demisto stop
STEP 7 | Create a tarball file of the necessary files and folders on the production server to be copied to
the backup server. Ensure that all files and folders have demisto:demisto ownership.
The following command preserves demisto:demisto ownership and file permissions:
# tar --ignore-failed-read -pczf demistoBackup.tgz /var/lib/demisto/data \
    /var/lib/demisto/artifacts /var/lib/demisto/attachments \
    /var/lib/demisto/systemTools /var/lib/demisto/d2_server.key \
    /usr/local/demisto/cert* /usr/local/demisto/demisto.lic
STEP 8 | Copy the necessary files and folders from the production server to the backup server. Ensure
all files and folders have demisto:demisto ownership.
• /var/lib/demisto/data
• /var/lib/demisto/artifacts
• /var/lib/demisto/attachments
• /var/lib/demisto/systemTools
• /var/lib/demisto/d2_server.key
• /usr/local/demisto/cert*
• /usr/local/demisto/demisto.lic
STEP 9 | Copy the created tarball file (demistoBackup.tgz) to the backup server using either scp or a tool
that you prefer, by typing the following:
# scp demistoBackup.tgz root@<yourBackupServerIPorHostname>:/root
STEP 10 | On the backup server, extract the backup tarball file with the following command (original file
permissions and ownership will be preserved).
# tar -C / -xzpvf demistoBackup.tgz
STEP 11 | Start the backup server and then the production server by typing the following command in
each environment:
sudo service demisto start
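After extracting the tarball on the backup server, it is worth confirming that everything under the copied paths is still owned by the expected user. A small helper, safe to run, using standard find:

```shell
#!/bin/sh
# List anything under a directory that is NOT owned by the expected user.
# Empty output means the ownership check passed.
check_ownership() {
    dir="$1"; owner="$2"
    find "$dir" ! -user "$owner" -print
}
```

For example, `check_ownership /var/lib/demisto demisto` should print nothing if all the copied files kept their demisto ownership.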
If the procedure is successful, Live Backup is ON:
If the server is active, Cortex XSOAR appears as usual when you connect. You can Transition an Active
Server to Standby Mode or Transition a Standby Server to Active Mode.
STEP 1 | Verify the External Host Name value matches the URL of the production server by going to
Settings > Troubleshooting > Server Configuration.
STEP 3 | On the disaster recovery (DR) server:
1. Create a SAML integration by repeating steps 2.1 to 2.3.
Ensure that the Service Provider Entity ID parameter matches the URL of the DR server. This is the
only valid value. For example, https://dr.demisto.com/saml.
2. Verify that you can access the production server.
STEP 4 | (Optional) For Dev/Test environments, switch to the DR server and test the SSO login.
STEP 1 | If you want to use Cortex XSOAR to backup the server, do the following:
1. On the live production server, select Settings > Advanced > Backups > Switch Hosts.
2. When prompted, complete the online Switch Hosts instructions.
Ensure that the production server is not live.
3. Go to the backup server and follow the on-screen instructions:
STEP 2 | If you want to use a configuration file to test the DR environment, do the following:
1. In the backup server, stop the server by typing:
sudo service demisto stop
2. Open the /etc/demisto.conf file and change the Server.dr.enabled property to false.
3. In the live server, open the /etc/demisto.conf file and change the Server.dr.enabled
property to true.
4. Access the backup server using its IP address to check that the server is up and running.
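The Server.dr.enabled toggle can be scripted against a copy of demisto.conf. This sketch assumes the property is stored as nested JSON ({"Server":{"dr":{"enabled":...}}}) and that python3 is available; verify both assumptions against your own /etc/demisto.conf before using it, and stop the server first as described above.

```shell
#!/bin/sh
# Flip the Server.dr.enabled property in a demisto.conf-style JSON file.
# The nested key layout is an assumption; confirm it in your own conf.
set_dr_enabled() {
    conf="$1"; value="$2"   # value: true or false
    python3 - "$conf" "$value" <<'EOF'
import json, sys

path, value = sys.argv[1], sys.argv[2] == "true"
with open(path) as f:
    conf = json.load(f)
# Create the Server.dr section if it does not exist yet.
conf.setdefault("Server", {}).setdefault("dr", {})["enabled"] = value
with open(path, "w") as f:
    json.dump(conf, f, indent=2)
EOF
}
```

For example, `set_dr_enabled /etc/demisto.conf false` on the backup server and `set_dr_enabled /etc/demisto.conf true` on the live server would match steps 2.2 and 2.3 above.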
STEP 3 | To revert to the original settings, repeat the steps above.
When following the instructions, remember that the backup server is now the production server
and the production server is now the backup server.
STEP 1 | On the standby server, follow the steps in Transition a Standby Server to Active Mode.
STEP 2 | If your analysts use a single, pivoting host name to connect to the active Cortex XSOAR server,
update your DNS record to re-point your Cortex XSOAR server host name to the now active
server. For more information about host names, see Host Names, DNS, and Disaster Recovery.
STEP 3 | If using engines, confirm that they are connected in Settings > Integrations > Engines.
If they have not reconnected and you have confirmed that network connectivity is good between the
engine and the now active (previously backup) server (i.e., it is reachable on TCP 443 or the port you
have configured), then follow the guidance in Host Names, DNS, and Disaster Recovery.
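A quick way to confirm the TCP reachability mentioned above is to attempt a raw connection from the engine host. This is a minimal sketch assuming bash is available; the server hostname and port in the commented usage line are placeholders for your backup server and TCP 443 (or your configured port).

```shell
# Minimal sketch: check TCP reachability from an engine host to a
# Cortex XSOAR server.
check_port() {
  # bash can open TCP sockets via /dev/tcp; success means the port answered
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo "reachable" || echo "unreachable"
}

# Demonstration against a local port that is almost certainly closed:
result=$(check_port localhost 1)
echo "localhost:1 is $result"

# Real usage (placeholder hostname): check_port backup-xsoar.example.com 443
```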
STEP 5 | Follow the procedure for Configure the Live Backup Environment using your now-active server
as the primary host, and copying its files and data to the newly-built Cortex XSOAR server.
Confirm that Live Backup is working.
STEP 6 | If appropriate for your environment (depends on whether you want to remain on the present
active node), transition the active node over to the newly-built host by following the procedure
Transition an Active Server to Standby Mode and confirming that Live Backup is again
operational.
STEP 8 | Re-point your shared DNS record, if applicable, back to the primary Cortex XSOAR server and
have analysts reconnect.
STEP 9 | Confirm that Cortex XSOAR is working by confirming that your integrations are working
properly, incidents are being created normally, and that analysts can log in and work normally in
Cortex XSOAR.
STEP 1 | Obtain the new server that will serve as your new standby server according to your
requirements.
Do not install until step 2.
STEP 2 | Follow the procedure for Configure the Live Backup Environment using the newly-active
server as the primary host, and by copying its files and data to the newly-built Cortex XSOAR
server. Confirm that Live Backup is working.
STEP 3 | Test the new server by following the steps in DR Scenario: Testing the DR Environment.
Transition an Active Server to Standby Mode
If you are performing a manual failover in a DR simulation (when both hosts are operational), remember to
always first put your active server into Standby mode before failing over.
STEP 1 | On the active server, navigate to Settings > Advanced > Backups.
STEP 3 | In the Switch Hosts dialog box, in the text box, type Switch Hosts.
Transition a Standby Server to Active Mode
If the server is active, Cortex XSOAR appears as normal when you connect to it. If the server is in
standby mode, the This is currently the Backup Server page appears after you log in.
STEP 1 | On the standby server being transitioned to active mode, from the This is currently the Backup
Server page, click Make this the production server.
STEP 2 | In the Switch Hosts dialog box, in the text box, type Switch Hosts.
Cortex XSOAR appears and the host is now active.
Transition Between DR States Through the
Configuration File
It is possible to transition a server between active and standby states via the /etc/demisto.conf
configuration file, when the server is new and starts for the first time. It cannot be used at any other time.
You need to set the server.dr.enabled configuration property to true or false. If set to true, the
server is in DR mode when it starts. If set to false, the server is in active mode when it starts.
/etc/demisto.conf
{
  "Server": {
    "HttpsPort": "443",
    "dr": {
      "enabled": true
    }
  },
  "db": {
    "index": {
      "entry": {
        "disable": true
      }
    }
  }
}
If the DR state of the server is ever transitioned through Cortex XSOAR, the setting is stored in the Cortex
XSOAR database rather than in the demisto.conf file. The database config setting for dr.enabled
always takes precedence over demisto.conf, and changing the server.dr.enabled setting in the
demisto.conf file has no effect.
In a real disaster recovery scenario, one in which the original production server is unrecoverable, you need
to convert the backup server to the new production server and then configure a new backup server.
Upgrade the Live Backup Environment
Follow this procedure when upgrading your Cortex XSOAR version.
STEP 1 | Stop the main Cortex XSOAR server and the DR Cortex XSOAR server by typing the following
command.
sudo service demisto stop
STEP 4 | If the DR Cortex XSOAR server did not start, restart the DR Cortex XSOAR server.
STEP 5 | If the main Cortex XSOAR server did not start, restart the main Cortex XSOAR server.
Cortex XSOAR Engines and Disaster Recovery
In the event of a failover between Cortex XSOAR servers, engines are capable of dynamically failing over
to the active node. This should happen automatically if the engine was deployed after DR was configured.
Assuming all is configured and working properly, it should not be necessary to change DNS
to affect an engine failover when a server failover occurs.
If engine failover is not working when failing over between Cortex XSOAR servers (i.e., the
engine displays Connected: false in Settings > Integrations > Engines), it is likely due to one of
the following causes:
• The file /var/lib/demisto/d2_server.key is not the same on each Cortex XSOAR
server. This can sometimes happen if Live Backup was previously configured using
Cortex XSOAR (Demisto) 4.0 and this file did not exist at the time that Cortex XSOAR was
first configured. Copy this file from the primary server to the backup server and restart the
backup server service.
• On the engine, the EngineURLs array property of /usr/local/demisto/d1.conf is
missing the IP or host name of the backup Cortex XSOAR server. Solutions are:
Simply redeploy the engine from Settings > Integrations > Engines. This should
automatically include both servers in the d1.conf file.
Modify the JSON in the conf file manually to add the other server to the EngineURLs
array, and restart the engine. If a syntax error is detected in the JSON, the engine
service refuses to start and may not log any error messages. The array should now look
something like:
"EngineURLs": [
  "wss://cortexxsoarserver1:443/d1ws",
  "wss://cortexxsoarserver2:443/d1ws"
],
• Host name resolution is broken from the engine to one of your servers. Use ping or
nslookup to confirm that the engine host can resolve the backup server, and that the IP
address is correct. If not, a change to your DNS environment may be required, or a
network or host firewall may be blocking connectivity from the engine to your backup
Cortex XSOAR server.
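Because a JSON syntax error in d1.conf prevents the engine service from starting without logging any error (as noted above), it is worth validating the file after any manual edit. A minimal sketch, using a temporary file as a stand-in for /usr/local/demisto/d1.conf:

```shell
# Minimal sketch: validate engine config JSON before restarting the engine.
# D1CONF is a stand-in; on a real engine it would be /usr/local/demisto/d1.conf.
D1CONF=$(mktemp)
cat > "$D1CONF" <<'EOF'
{
  "EngineURLs": [
    "wss://cortexxsoarserver1:443/d1ws",
    "wss://cortexxsoarserver2:443/d1ws"
  ]
}
EOF

if python3 -m json.tool "$D1CONF" > /dev/null 2>&1; then
  echo "d1.conf JSON is valid"
else
  echo "d1.conf JSON is INVALID - do not restart the engine yet"
fi
```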
Backup the Database
With Cortex XSOAR, you can perform both automated and manual backups, which store the entire
database of incidents, playbooks, scripts, and user-defined configurations. Cortex XSOAR stores daily,
weekly, and monthly backup files.
You can define whether you want Cortex XSOAR to create automatic backups, and the location to store the
backups. The database backup files are located in /var/lib/demisto/backup. In addition to automated
backups, manual backups are recommended before doing server operations and maintenance work. We also
recommend you set up backups for additional Cortex XSOAR folders listed in Step 3, scheduled for off-peak
hours, using your standard backup tools.
Restore the Database
Cortex XSOAR automatically backs up the database. If the database becomes corrupted or you need to
revert to an earlier version of your data, you can restore a database backup.
STEP 5 | Extract the .gzip backup file using tar -xzf <file-name> .
When you run the command, new sub-folders are created in the directory where you ran the command,
with the db files inside. If you use the default path, the files are in the var folder. For example, the following files are
generated:
STEP 6 | (Automatic Backup) Move the demisto_XXXXX.db files to the partitionsData folder.
Keep the demisto.db file in the /data parent folder.
If you back up manually, you do not need to move the files, as the required _XXXXX.db files are already
in the partitionsData folder.
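Steps 5 and 6 above can be sketched as follows, using a temporary directory and fabricated placeholder .db files in place of a real backup archive (all file names here are illustrative stand-ins):

```shell
# Minimal sketch of steps 5-6: extract the backup archive and move the
# partition db files into partitionsData, keeping demisto.db in the parent.
WORK=$(mktemp -d)
cd "$WORK"

# Fabricate a tiny archive standing in for the real .gzip backup file.
mkdir -p var/lib/demisto/data
touch var/lib/demisto/data/demisto.db var/lib/demisto/data/demisto_12345.db
tar -czf backup.tar.gz var
rm -rf var

# Step 5: extract (tar -xzf <file-name>)
tar -xzf backup.tar.gz

# Step 6: move the demisto_XXXXX.db files into partitionsData,
# leaving demisto.db in the parent data folder.
mkdir -p var/lib/demisto/data/partitionsData
mv var/lib/demisto/data/demisto_*.db var/lib/demisto/data/partitionsData/

ls var/lib/demisto/data
```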
Remote Repositories in Cortex XSOAR
> Remote Repositories Overview
> Configure a Remote Repository on a Development Machine
> Configure a Remote Repository on the Production Machine
> Edit and Push Content to a Remote Repository
> Troubleshoot a Remote Repository Configuration
Remote Repositories Overview
Cortex XSOAR supports the ability to work with separate repositories for development and production
environments. This enables you to develop and test all of your content in one location, and when it is ready,
you push the content to the remote repository. On your production environment, you pull the content as
you would all other content updates.
The development and production environments must be running on the same version of
Cortex XSOAR.
In addition, Cortex XSOAR content updates are only delivered to the development environment. This
enables you to determine which updates you want to push to production.
Working with remote repositories is git-based. Any service that supports this protocol can be
used, for example, GitHub, GitLab, Bitbucket, etc. In addition, on-premise repositories are
also supported.
How it Works
In the production environment, the content appears as a content update, just like any other, and you pull
the content from the remote repository into your working branch.
To work with remote repositories, you must have two separate Cortex XSOAR environments on two
separate machines. The development environment is used to write the following content:
• Automations
• Playbooks
• Integrations
• Classification
• Agent tools
• Incident fields
• Indicator fields
• Evidence fields
• Incident layouts
• Incident types
• Pre-processing rules
If you have more than two pre-processing rules in your Local Changes queue, you must
push all of those changes to the remote repository.
• Indicator types
• Reports
• Dashboards
• Widgets
On the production environment, it is not possible to edit these elements.
You need to configure a remote repository both on a development environment and a production
environment. After you develop your content, if you want it to be available as part of a content update for
the production environment, you must push the changes to the remote repository. If you experience issues,
learn how to troubleshoot remote repositories.
You cannot configure a Cortex XSOAR engine to manage communication to the remote
repository.
• When creating a repository in your remote Git platform, verify that the repository contains branches.
Defining the repository in Cortex XSOAR does not create the branches.
• Before toggling the remote repository feature on or off, or changing your repository configuration,
ensure that you back up your existing content to your local computer by navigating to Settings > About >
Troubleshooting > Custom Content and clicking Export.
• You need to install Cortex XSOAR on your development environment, as described in Single Server
Deployment.
If you are using a passphrase, only RSA private keys are supported.
If your SSH connection uses a port other than port 22 (the default SSH port), you must include the
ssh string and port number in the url. In the following example, we use port 20017:
ssh://git@content.demisto.com:20017/~/my-project.git
5. Select the active branch on which you will be working.
6. Click Save.
7. In the Migrate server changes screen, determine whether or not you want to keep the content that
is currently on the development server, or discard the changes and synchronize completely with the
remote repository.
8. Click Continue.
Content from the remote repository is installed.
This can take several minutes depending on the amount of content in the remote repository and your
hardware configuration. Your custom content is automatically backed up to the Cortex XSOAR server
any time you change one of the remote repository settings. The backup is located under /var/lib/
demisto/backups/content-backup-*.tar.gz.
Any content that exists in the production environment, but not on the remote repository,
will be deleted.
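When an SSH connection uses a non-default port, as in the ssh:// URL example above, a quick parse can confirm the host and port were encoded correctly. A minimal sketch using shell parameter expansion (the URL is the documentation's own example):

```shell
# Minimal sketch: extract host and port from an ssh:// repository URL to
# confirm a non-default SSH port is encoded correctly.
url="ssh://git@content.demisto.com:20017/~/my-project.git"

hostport="${url#ssh://}"     # strip the scheme
hostport="${hostport#*@}"    # strip the user
hostport="${hostport%%/*}"   # keep host[:port]
host="${hostport%%:*}"
port="${hostport##*:}"

echo "host=$host port=$port"
```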
If you are using a passphrase, only RSA private keys are supported.
If your SSH connection uses a port other than port 22 (the default SSH port), you must include the
ssh string and port number in the url. In the following example, we use port 20017:
ssh://git@content.demisto.com:20017/~/my-project.git
5. Select the active branch from which you pull content.
6. Click Save.
In the Discard server changes screen, you are presented with content that exists in your production
environment, but does not exist on the remote repository. This includes integrations, and their
instances and classifiers.
You should not manually push content to the remote repository. Use only the procedures
outlined in the documentation to ensure that your content is properly updated in the
production environment.
You can save versions and manage revisions locally using the Save Version button. Alternatively, you can
click and save the changes.
These options are only available for the following content types:
• Automations
• Playbooks
• Integrations
• Classifications
• Layouts
• Reports
• Dashboards
For all other content types, your changes are automatically saved to the local changes.
Cortex XSOAR Engines Overview
Cortex XSOAR engines are installed in a remote network and allow communication between the remote
network and the Cortex XSOAR server. Although you cannot run scripts, you can run integration
commands. It is possible to install a single engine, or multiple engines. An engine is used for the following
purposes:
• Proxy
• Load-Balancing
Engine Proxy
Cortex XSOAR engines enable the Cortex XSOAR server to access internal or external services that are
otherwise blocked by a firewall or a proxy, etc. For example, if a firewall blocks external communication and
you want to run the Rasterize integration, you need to install an engine to access the Internet.
Engine Architecture
Within the network, you need to allow the Engine to access the Cortex XSOAR server's IP address and
listening port (by default, TCP 443).
Engine Load-Balancing
Engines can be part of a load-balancing group, which enables distribution of the command execution load.
The load-balancing group is a group of engines that use an algorithm to efficiently share the workload for
integrations that the group is assigned to, thereby speeding up execution time. In general, heavy workloads
are caused by playbooks that run a high number of commands.
Before configuring an integration to run using multiple engines in a load-balancing group, we recommend
that you test the integration using a single engine in the load-balancing group.
By default, the Cortex XSOAR server is part of the load balancing group. It is recommended that you
Remove the Cortex XSOAR Server From the Load-Balancing Group when there are two or more engines in
the load-balancing group, or if you use engines to access integrations that are inaccessible from the Cortex
XSOAR server.
For Linux machines, you need to install Docker before installing an engine. If you use the
Shell installer, Docker is automatically installed.
Use DEB and RPM installation when shell installation is not available. You need to install
Docker and any dependencies.
• Zip: Used for Windows machines.
• Configuration: Configuration file for download. When you install one of the other options, this config file
(d1.conf) is installed on the engine machine.
When you create the engine, the python.engine.docker key is set to true. If Docker
is not available when the engine is created, the key is set to false. If this happens, in the
d1.conf file, you need to set the key to true.
Before you begin, check you have the required Cortex XSOAR Engine requirements.
After you install and deploy an engine, there are several ways that you can Manage Engines. For Linux
systems, you can run Python integrations on an engine. Ensure you have Python 2.7 or later installed on the
engine machine. Python integrations must be run through Docker.
STEP 7 | (Optional) If you experience performance issues you may need to Configure the Number of
Workers for the Server and Engine.
STEP 2 | Unzip the file and move the contents to the same directory you installed the engine.
STEP 3 | Open a command prompt as an administrator and type the following command:
.\nssm.exe install d1engine
Ensure that the nssm.exe file is in the same directory as the engine you want to run.
The NSSM service installer appears.
STEP 7 | (Optional) Go to the Task Manager and check that the service is running.
BindAddress (String): The port on which the engine listens for agent connection requests and
communication task responses. Set in the engine d1.conf file.
LogFile (String): Path to the d1.log file. If you change the name or location of the d1.log file,
you need to update this parameter. Set in the engine d1.conf file.
engine.allow.data.collection (String): Disables the option to send communication task forms
through the engine. Valid value: false. Set in the engine d1.conf file.
{"http_proxy": "http://proxy.host.local:8080",
"https_proxy": "https://proxy.host.local:8443"}
STEP 1 | On the computer where you have installed the engine, go to the directory containing the d1.conf file.
For RPM, DEB, and Shell installations, go to /usr/local/demisto.
Key Value
STEP 1 | To configure the number of workers for the Server, do the following:
1. Select Settings > About > Troubleshooting > Add Server Configuration.
2. Add the following key and value:
Task Value
STEP 2 | To configure the number of workers for the engine, do the following:
1. Go to Settings > Integrations > Engines.
2. Select the checkbox of the engine for which you want to define the number of workers.
3. Click Edit Configuration.
4. In the JSON formatted configuration field, add the following engine configuration in JSON format:
Parameter Value
workers.count.engines Defines the number of workers for all engines across the system.
This will override any other engine-worker configurations. Default is
4 workers per CPU core.
workers.per.cpu Defines the number of workers per engine CPU. By default, each
CPU has 4 workers, meaning that for an engine machine with 20
CPU, there will be 80 workers.
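The default sizing described above (4 workers per CPU) can be sanity-checked with a quick calculation. This sketch hard-codes the documentation's 20-CPU example; on a live host the CPU count could come from nproc instead.

```shell
# Minimal sketch: the default engine worker count is 4 workers per CPU core.
# Using the documentation's example of a 20-CPU engine machine:
cpus=20
workers_per_cpu=4
workers=$((cpus * workers_per_cpu))
echo "An engine with $cpus CPUs gets $workers workers by default"
# On a live host you could substitute: cpus=$(nproc)
```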
STEP 2 | In the Server Configuration section, add the following key and value.
Key Value
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
Key Value
engine.group.add.server.to.group False
Engine Errors
These are some common Cortex XSOAR engine errors.
443 Error
This error might occur when an engine tries to bind to port 443, because, by default, Linux does
not allow non-root processes to listen on privileged ports (below 1024).
Error Message
listen tcp :443: bind: permission denied
Solution
• In the d1.conf file, change the port number to a higher one, for example, 8443.
• Run this command: sudo setcap CAP_NET_BIND_SERVICE=+eip /path/to/binary. After
running this command, the server should be able to bind to low-numbered ports.
STEP 1 | Access the engine machine using a tool like SSH or PuTTY.
File Action
Docker Installation
Docker is used to run Python scripts and integrations in a controlled environment.
You can install Docker on the following Enterprise Linux platforms:
• Docker Enterprise Edition
• Docker Community Edition
• Red Hat Docker Distribution
Troubleshooting
• In some cases, the Docker-created veths are not correctly bridged and the Docker container can’t access
the network or internet. You should update systemd.
• To verify that the Cortex XSOAR OS user has necessary permissions and can run Docker containers, run
the following command from the OS command line.
If everything is configured properly, you receive the following output: Python 2.7.14.
After installing Docker, you may need to configure docker images and update settings.
STEP 5 | (Optional) If you installed Cortex XSOAR before Docker EE, you should perform the following
procedure.
You will receive an error message during the Cortex XSOAR installation. Acknowledge the error and
then proceed.
1. Stop the Cortex XSOAR server.
2. Install Docker EE.
3. Run the following commands:
sudo groupadd docker
sudo usermod -aG docker demisto
4. Start the Cortex XSOAR server.
5. Select Settings > About > Troubleshooting > Add Server Configuration.
6. Remove the following keys:
python.executable
$ uname -r
4.1.12-124.24.1.el7uek.x86_64
• Update Container-Selinux if you receive the Requires: container-selinux >= 2.9 message.
STEP 2 | After installation, start Docker daemon to fetch images during installation by running the
following command:
systemctl start docker
STEP 4 | (Optional) If you are using Docker for an Engine, Install an Engine.
Update Container-Selinux
When installing Docker, if you receive the message Requires: container-selinux >= 2.9, you
need to install a newer version of container-selinux.
STEP 2 | Find the latest version of container-selinux and copy the URL package.
STEP 4 | Install the latest version by running the following command (assuming the latest version is
2.74-1):
CentOS 7 provides a similar docker distribution package as part of the CentOS Extras
repository.
STEP 3 | Change ownership of the Docker daemon socket so members of the dockerroot user group
have access.
1. Edit or create the file /etc/docker/daemon.json
2. Enable OS group dockerroot access to Docker by adding the following entry to the /etc/
docker/daemon.json file. For example:
{
"group": "dockerroot"
}
3. Restart the Docker service by running the following command:
systemctl restart docker.service
4. Install Cortex XSOAR.
5. After Cortex XSOAR is installed, run the following command to add the demisto os user to the
dockerroot os group (Red Hat uses dockerroot group instead of docker):
usermod -aG dockerroot demisto
6. Restart the Cortex XSOAR server.
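The daemon.json change from steps 1–2 can be written and validated as follows. This is a minimal sketch: the file path is a temporary stand-in for /etc/docker/daemon.json, and restarting Docker is shown only as a comment.

```shell
# Minimal sketch: create a daemon.json granting the dockerroot OS group
# access to the Docker daemon socket, then validate the JSON.
# DAEMON_JSON is a stand-in for /etc/docker/daemon.json.
DAEMON_JSON=$(mktemp)
cat > "$DAEMON_JSON" <<'EOF'
{
  "group": "dockerroot"
}
EOF

# Validate before restarting Docker; a malformed file would prevent startup.
python3 -m json.tool "$DAEMON_JSON"
# Then, on a real host: systemctl restart docker.service
```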
STEP 1 | Download the Docker image by appending the download link you received from Cortex
XSOAR with the following parameters.
&downloadName=dockerimages
STEP 2 | Copy the downloaded Docker image to the Cortex XSOAR server.
STEP 3 | Stop the Cortex XSOAR service using the appropriate command for your OS.
• systemctl stop demisto
• sudo service demisto stop
STEP 5 | Start the Cortex XSOAR service using the appropriate command for your OS.
• systemctl start demisto
• sudo service demisto start
After you save the server configuration, Docker images that are launched by the Cortex XSOAR server will
contain the certificates file mounted in the following path:
certs_file = os.environ.get('SSL_CERT_FILE')
if certs_file:
# perform custom logic to trust certificates...
The Python SSL library will check the SSL_CERT_FILE environment variable only when using OpenSSL.
If you are using a Docker image that uses LibreSSL, the SSL_CERT_FILE environment variable will be
ignored.
You can specify in the Cortex XSOAR IDE the Python version (2.7 or 3.x). If 3.x is chosen,
the latest Cortex XSOAR Python 3 Docker image is selected automatically.
The selected Docker image is configured in the script or integration YAML file under the dockerimage
key.
Docker Images
Cortex XSOAR maintains a repository of Docker images, all of which are available in the Docker hub under
the demisto organization. The Docker image creation process is managed in the open-source project
demisto/dockerfiles. A search of the repository-info branch should be done prior to creating a new image.
The repository is updated nightly with all image metadata and os/python packages used in the images.
For security, images that are not part of the Cortex XSOAR organization in Docker hub
cannot be accepted.
When an engine needs a Docker image it pulls it either from Docker Hub or from a custom registry, if
defined in the server configuration: python.docker.registry.
From version 5.0, the engine can fetch Docker images directly from the Cortex XSOAR server. If the engine
fails to fetch the Docker image from the registry it tries to fetch it from the Cortex XSOAR server. The
server packages the image when running docker save, and sends it to the engine, which enables the
engine to obtain the required images, even if it does not have network access to the Docker Hub. The
engine can only obtain images that are available from the server.
If an existing image cannot be found, you can create a Docker image.
Package Requirements
Consider some of the following:
• Does the package have known security issues?
• Is the package licensed?
• What type of license is used?
Licensing
Security Concerns
Due diligence needs to be done on all approved packages, including verifying that the package name is correct.
In 2018 a scan of PyPI resulted in the detection of 11 “typo-squatted” packages which were found to be
malicious. See Detecting Cyber Attacks in the Python Package Index (PyPI).
Create a Docker Image in Cortex XSOAR
After due diligence has been completed and licenses checked, you can Create a Docker Image In Cortex
XSOAR.
Docker Files (Required for Production)
If the integration is for public release, push the Docker files into the dockerfiles repository.
Pushing into the repository will add an image (after the approval process) to the Docker hub Cortex XSOAR
organization. For more information, see Cortex XSOAR’s Dockerfiles and Image Build Management.
When modifying an existing Docker image, ensure the change does not disrupt other
integrations that may use the same package. All Docker images are created with unique
version tags, for which overriding is blocked.
Command Description
/docker_image_update Updates a specified Docker image, or all Docker images. Use this when
you change a Docker image that is used in a script, avoiding the
need to manually update the Docker image.
Key Value
python.docker.image The Docker image you want to define, as the base image. For
example:
myregistry.local:5000/demisto/python:1.0
python:1.4-alpine
Key Value
If the alternate registry requires authentication you will need to login into the registry with the Cortex
XSOAR OS user. Type the following:
sudo -u demisto docker login <registry server>
For more information about Docker login, see the Docker documentation.
Argument Description
dependencies New Docker image dependencies. Python libs like stix or requests, can
have multiple libs as comma separated: lib1,lib2,lib3.
packages New Docker image packages. OS packages like libxslt or wget, can have
multiple comma separated packages: pkg1,pkg2,pkg3.
base Base image to use for the new Docker image. Must be Ubuntu-based with
Python installed. Default is the demisto/python3-deb base image, with Python 3.x.
The following example creates a Docker image called example_name with the Python dependency
mechanize and the OS package wget:
/docker_image_create name=example_name dependencies=mechanize packages=wget
When the Docker image is created, the following dialog box appears.
STEP 3 | (Optional) If you need to update a Docker image, type the following command:
Argument Description
On RHEL and CentOS 7.x distributions with Docker CE or EE with version 17.06 and
later, ensure that your kernel fully supports kmem accounting or that it has been compiled
to disable kmem accounting. The kmem accounting feature in Red Hat’s Linux kernel
has been reported to contain bugs, which cause kernel deadlock or slow kernel memory
leaks. This is caused by a patch introduced in runc, which turns on kmem accounting
automatically when user memory limitation is configured, even if not requested by the Docker
CLI setting --kernel-memory (see: opencontainers/runc#1350). Users using Red Hat's
distribution of Docker based on version 1.13.1 are not affected as this distribution of Docker
does not include the runc patch. For more information see Red Hat’s Docker distribution
documentation.
If you do not want to apply Docker memory limitations, due to the note above, you should
explicitly set the advanced parameter: limit.docker.memory to false.
Swap Limit Support: Not all Linux distributions have the swap limit support enabled by default.
• Red Hat and CentOS distributions usually have swap limit support enabled by default.
• Debian and Ubuntu distributions usually have swap limit support disabled by default.
To check if your system supports swap limit capabilities, after logging into the Server machine console (ssh),
run the following command:
sudo docker run --rm -it --memory=1g demisto/python:1.3-alpine true
If you see the WARNING: Your kernel does not support swap limit capabilities or the
cgroup is not mounted. Memory limited without swap. message in the output (the message
may vary between Docker versions), you have two options:
• Configure swap limit capabilities by following the Docker documentation.
• Configure Memory Limit Support Without Swap Limit Capabilities.
If swap limit capabilities is enabled, Configure the Memory Limitation.
To test the memory, see Test the Memory Limit.
Limit Available CPU
It is recommended limiting each container to 1 CPU. See Limit Available CPU.
Limit PIDs
Unsuccessful output:
{"docker.run.internal.asuser": true,"limit.docker.cpu":
true,"limit.docker.memory": true,"python.pass.extra.keys": "--pids-
limit=256##--ulimit=nofile=1024:8192"}
STEP 1 | Set the docker run --memory-swap option to -1 (disables swap memory enforcement).
STEP 2 | In Cortex XSOAR, select Settings > About > Troubleshooting > Add Server Configuration.
Name Value
python.pass.extra.keys --memory=1g##--memory-swap=-1
If python.pass.extra.keys is already set up with a value, append the new value using
the ## separator.
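Appending to an existing python.pass.extra.keys value amounts to string concatenation with the ## separator. A minimal sketch, using flag values taken from the documentation's own examples:

```shell
# Minimal sketch: combine an existing python.pass.extra.keys value with new
# flags using the ## separator expected by the server configuration.
existing="--pids-limit=256"
new_flags="--memory=1g##--memory-swap=-1"

if [ -n "$existing" ]; then
  combined="${existing}##${new_flags}"
else
  combined="$new_flags"
fi
echo "$combined"
```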
STEP 1 | Configure Cortex XSOAR Server to execute containers as non-root internal users.
1. Select Settings > About > Troubleshooting > Add Server Configuration.
2. Add the following:
Key Value
docker.run.internal.asuser true
3. Click Save.
4. Reset the running containers using one of the following methods:
From the Cortex XSOAR CLI, type the /reset_containers command.
Alternatively, restart the Cortex XSOAR Server.
5. From the Cortex XSOAR CLI, type the following command to check if the container is running as non-
root internal user:
!py script="import os;print(os.getuid())"
If the server configuration was added successfully and the container is running with a non-root
internal user, the output is a non-zero UID.
Key Value
STEP 2 | In BYOI, set the Docker image to use to run the integration by expanding Script > Python.
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
Key Value
limit.docker.memory true
docker.memory.limit 1g
import os
import sys

def big_string(size):
    sys.stdin = os.fdopen(0, "r")
    s = 'a' * 1024
    while len(s) < size:
        s = s * 2
    print('completed creating string of length: {}'.format(len(s)))
The command returns an error when it fails to allocate 1 GB of memory. For example:
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
Key Value
limit.docker.cpu true
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
Key Value
python.pass.extra.keys --pids-limit=256
STEP 4 | (Optional) To test the PIDs limit, type the following command in the playground:
!py script="from multiprocessing import Pool; p=Pool(256); print('pool
started')"
When the limit is in place, the command fails with a Python OSError. For example:
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
Key Value
python.pass.extra.keys --ulimit=nofile=1024:8192
STEP 4 | (Optional) To test the file descriptors limit, run the following command in the playground:
!py script="import resource;print('file descriptor limit: ',
resource.getrlimit(resource.RLIMIT_NOFILE))"
The command prints the file descriptor limit (soft and hard). For example:
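Beyond the playground check above, the same limits can be read from any Python script using the standard library. The following is a minimal standalone sketch (run outside Cortex XSOAR) showing the same lookup with a simple guard; the 1024 threshold mirrors the soft limit configured above:

```python
import resource

# Read the file descriptor limit, as the playground command above does.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print('file descriptor limit (soft, hard):', (soft, hard))

# Guard before batch operations that open many files at once.
if soft != resource.RLIM_INFINITY and soft < 1024:
    print('soft limit below 1024; large batch operations may fail')
```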
STEP 1 | Configure Cortex XSOAR Server to execute containers as non-root internal users.
1. Select Settings > About > Troubleshooting > Add Server Configuration.
2. Add the following:
Key Value
docker.run.internal.asuser true
3. Click Save.
4. Reset the running containers using one of the following methods:
From the Cortex XSOAR CLI, type the /reset_containers command.
Alternatively, restart the Cortex XSOAR Server.
5. From the Cortex XSOAR CLI, type the following command to check whether the container is running as a non-
root internal user:
!py script="import os;print(os.getuid())"
If the server configuration was added successfully and the container is running with a non-root
internal user, the output is a non-zero UID.
If the server configuration was not added correctly and the container is running as the internal root user, the output is 0.
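The check the !py command performs can be reasoned about locally as well. This standalone sketch mirrors what the command above prints and how to interpret it:

```python
import os

# os.getuid() returns 0 for root; any other value is a non-root user,
# which is what you want to see inside the container.
uid = os.getuid()
print('running as root' if uid == 0 else 'running as non-root uid {}'.format(uid))
```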
CORTEX XSOAR ADMINISTRATOR’S GUIDE | Dashboards
© 2020 Palo Alto Networks, Inc.
Dashboard Overview
The dashboard consists of visualized data powered by fully customizable widgets, which enable you to
analyze data from inside or outside Cortex XSOAR in different formats, such as graphs, pie charts, or
text. For more information about widgets, see Widgets Overview.
When you first install Cortex XSOAR, the following dashboard tabs are created:
• Incidents: information relating to incidents, such as severity type, active incidents, unassigned incidents
and so on.
• Threat intelligence Management: information relating to threat intel management indicators.
• System Health: information relating to the Cortex XSOAR Server.
• My Dashboard: a personalized dashboard relating to your incidents, tasks, and so on.
• SLA: information relating to your Service Level Agreement.
You can change the order of the dashboards in the dashboard tab by clicking next to the
relevant dashboard, and then dragging and dropping the dashboard into the required location.
In every dashboard, you can set the date range from which to return data and the refresh rate. In the
DASHBOARDS tab, you can do the following:
• Create a Dashboard
• Edit a Dashboard
• Import and export a dashboard, which is useful in a test and production environment.
• Share and Unshare a Dashboard
• Delete or remove (if shared) a dashboard.
If you want to set up dashboards as a default for all existing and future users, see Configure a Default
Dashboard.
STEP 1 | From the homepage, in the DASHBOARD tab, click New Dashboard.
STEP 2 | From the Dashboards page, in the Widgets Library section, add the widgets to the dashboard.
STEP 3 | From the Date Range drop down list, set the date range for the dashboard.
Widgets can have their own date range, which may be different than the dashboard’s date range.
STEP 3 | In the Widgets Library section, search for the widget you want to add and click Add.
STEP 4 | To edit the widget, select the gear button > Edit Widget.
STEP 3 | To unshare a dashboard, from the drop down list, click Unshare.
STEP 4 | To remove a shared dashboard, select DASHBOARDS > gear icon > Remove.
Reports Overview
Reports contain statistical information, which enables you to analyze data in PDF, Word, and CSV
formats. A report contains widgets, which enable you to analyze data from inside or outside Cortex
XSOAR in different formats, such as graphs, pie charts, or text.
Cortex XSOAR uses Chromium or Chrome to generate reports. If your operating system does not have
Chrome or Chromium, you need to install it. Alternatively, you can use PhantomJS (deprecated), which is not
developer supported. You need to Configure Cortex XSOAR to Use PhantomJS.
Reports can be time bound or non-time related. Time bound reports are incident summaries over a period
of time. For example, incident summaries over the last 24 hours, or last 30 days. Non-time related reports
are filtered summaries of incident data. For example, open, late, and critical incidents.
Cortex XSOAR comes with out-of-the-box reports, such as Critical and High incidents, Daily incidents, Last 7
days incidents, and so on. These reports cannot be edited apart from the schedule time and who can receive
the report.
If you want to change these types of reports, go to the GitHub reports repository, download and
update the JSON file, and upload the report.
For custom reports, you cannot generate CSV reports unless you define the report in a JSON file and
import it to Cortex XSOAR.
You can run the report immediately or schedule a time as described in Schedule a report.
You can schedule a report directly from incident, as described in Create an Incident Summary Report.
STEP 1 | Install the Chromium package from the repository by typing $ sudo sh -c 'echo -e "[google-chrome]\nname=google-chrome - 64-bit\nbaseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64\nenabled=1\ngpgcheck=1\ngpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub" >> /etc/yum.repos.d/google-chrome.repo'.
STEP 3 | Install the stable version of Chromium by typing $ yum install google-chrome-stable.
STEP 1 | Install the Chromium package from the repository by typing $ sudo zypper ar http://dl.google.com/linux/chrome/rpm/stable/i386 Google-Chrome.
STEP 3 | Install the stable version of Chromium by typing $ sudo zypper install google-chrome-stable.
STEP 1 | Install the Chromium package from the repository by typing the following command:
$ sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
STEP 4 | Install the stable version of Chromium by typing the following command:
$ sudo apt-get install google-chrome-stable
STEP 1 | From the homepage, in the REPORTS tab, click New Report.
STEP 2 | From the Reports page, in the Widgets Library section, add the widgets as required, as
described in Add a Widget to a Report.
STEP 3 | From the Date Range drop down list, select the date range from which to generate the report.
Widgets can have their own date range, which can be different from the report’s date range
STEP 5 | To change the number of recipients or their details, in the Recipients field, click the number of
recipients.
STEP 6 | To change the format, orientation and paper size select the options as required.
It is recommended to use landscape to ensure that all information displays in the report.
STEP 7 | Before generating the report, click Preview to see a preview of the report. You can change the
size or arrange the widgets as required.
STEP 9 | To generate the report immediately, in the Reports tab, click Run Now.
The Report downloads.
Ensure that you enable pop-ups in your browser.
STEP 1 | In the Reports tab, select the report you want to schedule.
STEP 2 | In the Next Run field, click Disabled or the date it was last run.
If creating or editing a report, click next to the Schedule field.
STEP 4 | If you want to restrict the content of the report in accordance with a user’s authorization,
select Run with current user.
To change authorizations, go to Settings > USERS AND ROLES. For more information about users and
roles, see Reports Overview.
Number Description
00 00 in minutes
8 8am
1/1 Starting in January, and every month thereafter. If you want the report to
start on a different month, change 1/1 to the relevant month, such as 2/1 for
February, 3/1 for March and so on.
The reports run at 8am on January 1, 2020, February 1, 2020, March 1, 2020 and so on.
Cron calculates the next relevant date. If you want the report to run next month, provided that
date has passed in the current month, you do not need to specify the month. For example,
assume the date is 12 December. To run the report on 11 January at 8am, type 00 8 11 * *.
The report starts running on 11 January (and on 11th of each month thereafter). If the current
date is 10 December, the next run date would be 11 December.
Number Description
00 00 minutes
8 8am
The report runs at 0800 on 1 January 2020, 1 January 2021, 1 January 2022, and so on.
Number Description
00 00 in minutes
0 Midnight
* Any day
* Any month
1 Monday
The report runs on the first available Monday 16 December at midnight, and on 23 December, 30
December, 6 January, and so on.
Number Description
30 30 minutes
17 5pm
* Any day
Number Description
00 00 in minutes
6 6am
* Any day
* Any month
* Any day of the week. If you want to run from Monday to Friday, type 1-5. For
Sunday to Thursday, type 0-4.
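As a quick way to sanity-check a schedule such as 00 8 11 * *, the following standalone sketch brute-forces the next matching time for a fixed minute/hour/day pattern. The next_run helper is illustrative only; it is not part of Cortex XSOAR:

```python
from datetime import datetime, timedelta

def next_run(minute, hour, day, now):
    """Next time matching a fixed 'MM HH DD * *' cron entry (simplified sketch)."""
    t = now.replace(second=0, microsecond=0) + timedelta(minutes=1)
    while not (t.minute == minute and t.hour == hour and t.day == day):
        t += timedelta(minutes=1)
    return t

# Example from the text: '00 8 11 * *' evaluated on 10 December.
print(next_run(0, 8, 11, datetime(2019, 12, 10)))  # 2019-12-11 08:00:00
```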
STEP 1 | Go to the Incidents page and select the incident for which you want to create a report.
STEP 3 | To build a new report, from the Build Report tab, select the following:
• Format
• Orientation
It is recommended to use the landscape orientation to ensure that all information displays in the report.
• Paper Size
If you want to use the setting as a template, click the Save report as template check box.
STEP 4 | To use an existing template, from the Select a Template tab, select the template.
STEP 3 | In the Widgets Library section, search for the widget you want to add and click Add.
STEP 4 | To edit the widget, click the gear button and select Edit Widget.
STEP 1 | In the Reports page, locate the report you want to edit and click the edit button.
For most out-of-the-box reports, the time zone and time formats cannot be changed. For custom
reports, custom fields can be changed.
STEP 5 | Go to Settings > About > Troubleshooting and delete the server configuration you created.
Widgets Overview
Widgets are visual components that enable you to analyze data from inside or outside Cortex XSOAR,
in different formats such as graphs, pie charts, and text.
Cortex XSOAR comes with a number of out-of-the-box system widgets, such as Today’s New Incidents,
Late Incidents, and Saved by Dbot. You can edit these widgets when creating or editing a dashboard or
report.
Some out-of-the-box system widgets are not editable. If you want to change these widgets,
go to the GitHub widgets repository, download and update the JSON file, and upload it to the
Widgets Library.
You can create widgets from the following and then add them to a dashboard or report, as required:
• Widgets Library: create the widget in the Widgets Library which is then available for all users.
• Create a Widget From an Incident: create the widget from the incident page and then add it to a
dashboard or a report.
• Create a Widget from an Indicator: create the widget from the indicators page then add it to a
dashboard or a report.
• JSON file: these are static widgets and display relatively straightforward information, such as grouping
incidents severity by type, active incidents by type, and so on.
• Automation Script: you can create dynamic widgets using automation scripts for more complex
calculations, such as calculating the percentage of incidents that DBot closed. The automation script can
pull information from the Cortex XSOAR API. For examples, see Script Based Widgets Using Automation
Scripts Examples.
If you want to add a script based widget to a dashboard or report, you need to create a widget in the
Widgets Library. You can create or upload the script to the Automation page or you can directly upload
the script to the Widgets Library.
You can also add a custom widget in the War Room, so you can easily view the incident in a widget format,
such as severity in a bar chart.
If you have a significant number of widgets, performance may be affected. You should try to
keep widgets simple (no scripts) and refresh times higher than 1 minute whenever possible.
STEP 4 | From the drop down list, select one of the following data types:
• Incidents Data
• Indicators Data
• Script based
Relevant if you have created a script in the Automation page.
• Upload
You can upload either a JSON file or a script file.
You can change the data type when you edit the widget.
STEP 5 | In the Quick chart definitions window, select the Widget Parameters.
Widget Parameters
The following table describes the widget parameters in the Quick chart definitions window.
Parameter Description
Data Type The type of data you want to display. From the drop down list, you can select the
following:
• Incidents
• Indicators
• Scripts
When selecting Scripts, if your script does not appear you need to add it to the
Automation page and add the widget label.
For some widgets you cannot select the data type, such as task
widgets.
Data query A query in Lucene query syntax, relating to the dataType. For
example, when the dataType is incidents and the query is -status:closed and owner:"",
it returns all incidents that are not closed and do not have an owner.
When you add the widget, it automatically uses the date range of
the dashboard or report. You can change it by clicking the gear
icon and selecting Use widget’s date range. To revert, click the
gear icon again and select Use dashboard’s date range.
Entries by Filter the data according to the entry, such as the date created.
Table Views data in a table format. Click the gear icon to edit columns.
Text Views data in a text format, which can be used as a text summary of the displayed
data. You can use {0} to display a query value and {date} to display the date.
Markdown is supported.
STEP 1 | Create a JSON file, and add the JSON File Widget Parameters.
STEP 3 | In the Widget Library section, select the add button > Upload.
STEP 4 | Select the JSON file you created in step 1 and click Open.
Parameter Description
datatype The data source of the widget. Must be one of the following:
• incidents
• indicators
• messages
• entries
• scripts
Relevant only when you are creating an automation script.
• tasks
query A query in Lucene query syntax, relating to the
dataType. For example, when the dataType is incidents and the query is
-status:closed and owner:"", it returns all incidents that are not closed and
do not have an owner.
For script based widgets, the query is the name of the script.
sort Sorts the data, when displaying the widgetType (such as table, list, bar,
column, pie) according to the following:
• field: the field name for which to sort.
• asc: whether to sort data in ascending values. If true, the order is in
ascending value.
widgetType The type of widget you want to create. Must be one of the following:
• bar
• column
• pie
• number
• line
• table
• trend
• list
• duration
• image
size The maximum number of returning elements. Use 0 for the widgetType's
default. Note the following:
• Table/List: To change the size, go to Settings > About >
Troubleshooting > Add Server Configuration, add the
default.statistics.table.size key, and then add the value. The default is up to
13.
• Chart: Default is up to 10.
• Number and Trend: Ignores the size value.
category Adds a category name. The widget appears under a category instead of being
classified by dataType.
dataRange The time period for which to return data. The time period is overridden by
the dashboard or report time period. Default is all times.
• fromDate: The start date from which to return data in the format: “YYYY-
MM-DDTHH:MM:SSZ”. For example, "2019-01-01T16:30:00Z"
• toDate: The end date for which to return data in the format: "YYYY-MM-
DDTHH:MM:SSZ". For example, "2019-01-01T16:30:00Z".
• period: An object describing a period of relative time. If using the
fromDate/toDate parameters, this parameter is ignored.
• byTo: The to period unit of measurement. Values are ‘minutes', 'hours',
'days', 'weeks', 'months'.
• byFrom: The from period unit of measurement. Values are: 'hours',
'days', 'weeks', 'months'.
• toValue: The duration of the to period. Integer.
• fromValue: The duration of the from period. Integer. For example, last
7 days - { byFrom: 'days', fromValue: 7 }.
params Enriches the widget with specific parameters, mainly based on the
widgetType. Includes the following:
• groupBy: An array of field names for which to group the returned values.
Used when widget type is bar, column, line or pie. For example, ["type",
"owner"], groups results by type and owner, and returns a nested result
for each type with statistics according to owner.
legend An array of objects that consists of a name and color. The name must match
a group name. The color can be the name of the color, the hexadecimal
representation of the color, or the rgb color value. (V6.0+)
{
"name": "Incident Severity by Type",
"dataType": "incidents",
"widgetType": "bar",
"query": "-category:job and -status:archived and -status:closed",
"dateRange": {
"period": {
"byFrom": "days",
"fromValue": 30
}
},
"params": {
"groupBy": [
"severity",
"type"
]
}
}
You can see the incidents are grouped by severity, and the number of incidents is displayed by the length
of the bars, which are colored according to type.
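Before uploading a widget JSON file like the one above, it can help to check it against the allowed values listed in the parameters table. This standalone sketch performs that check; the validate_widget helper is illustrative, not a Cortex XSOAR API:

```python
import json

# Allowed values from the JSON file widget parameters table above.
VALID_DATA_TYPES = {"incidents", "indicators", "messages", "entries", "scripts", "tasks"}
VALID_WIDGET_TYPES = {"bar", "column", "pie", "number", "line", "table",
                      "trend", "list", "duration", "image"}

def validate_widget(raw):
    """Return a list of problems found in a widget JSON document."""
    widget = json.loads(raw)
    errors = []
    if widget.get("dataType") not in VALID_DATA_TYPES:
        errors.append("invalid dataType: {}".format(widget.get("dataType")))
    if widget.get("widgetType") not in VALID_WIDGET_TYPES:
        errors.append("invalid widgetType: {}".format(widget.get("widgetType")))
    return errors

good = '{"name": "Severity by Type", "dataType": "incidents", "widgetType": "bar"}'
print(validate_widget(good))  # []
```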
If you create or upload the script to the Automation page, you can use the script in any widget (rather than
uploading the script each time), use the script with a JSON file, and also add it to an Incident or an
Indicator page.
STEP 1 | Create a new script by uploading or creating a new script in the Automation page.
You can also upload the script in the Widgets Library. If creating a JSON file, the script is the widget
query and the script should return a value in the format of the widget type you want to use. For
example, number, or text.
STEP 2 | For dashboards and reports, after creating the script, you need to create a widget in the
Widgets Library, as described in Create a Widget in the Widgets Library.
STEP 3 | Select the Script based data type and then add the script.
You can also create a JSON file and upload the JSON file directly to the Widgets Library. For
information about JSON file parameters, see JSON File Widget Parameters.
{
"id": "1a2b3c4d",
"name": "GetOnlineUsers",
"dataType": "scripts",
"widgetType": "text",
"query": "GetOnlineUsers"
}
When creating or editing the widget in Cortex XSOAR, to add a page break, type /pagebreak in the text
box. When you generate a report, the widgets that follow the page break appear on a separate page.
Number
This example shows how to create a single item widget with the percentage of incidents that DBot closed.
In the automation script, type one of the following:
JavaScript
res = executeCommand("getIncidents", {
    'status': 'closed',
    'fromdate': args.from,
    'todate': args.to,
    'size': 0
});
Python
res = demisto.executeCommand("getIncidents", {
    "query": "status:closed and investigation.users:\"\"",
    "fromdate": demisto.args()["from"],
    "todate": demisto.args()["to"],
    "size": 0
})
closedByDbot = res[0]["Contents"]["total"]
res = demisto.executeCommand("getIncidents", {
    "status": "closed",
    "fromdate": demisto.args()["from"],
    "todate": demisto.args()["to"],
    "size": 0
})
overallClosed = res[0]["Contents"]["total"]
if overallClosed == 0:
    demisto.results(0)
else:
    result = round(closedByDbot * 100 / overallClosed)
    demisto.results(result)
{
"id": "closed-by-dbot-incidents-percentage",
"name": "Closed By Dbot",
"dataType": "scripts",
"widgetType": "number",
"query": "DBotClosedIncidentsPercentage"
}
Duration
In this example, create a script that queries and returns a time duration (specified in seconds), and displays
the data as a countdown clock. If using a JSON file, you must set widgetType to duration.
In the automation script, type one of the following return values:
JavaScript
Python
The return type should be a string (any name) and an integer. The time is displayed in seconds.
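For example, an SLA countdown script might compute the remaining seconds as in this standalone sketch. The sla_remaining_seconds helper and the deadline are illustrative assumptions; inside Cortex XSOAR the value would be returned with demisto.results:

```python
from datetime import datetime

def sla_remaining_seconds(sla_due, now):
    # Whole seconds left until the deadline; never negative.
    return max(0, int((sla_due - now).total_seconds()))

# 2.5 hours left on an assumed SLA deadline.
print(sla_remaining_seconds(datetime(2020, 5, 5, 12, 0), datetime(2020, 5, 5, 9, 30)))  # 9000
```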
(Optional) If using a JSON file, type the following:
{
"id": "1a2b3c4d687",
"name": "slaRemaining",
"dataType": "scripts",
"widgetType": "duration",
"query": "RemainingSLAScript"
}
After you have uploaded the script and created the widget, you can add the widget to the dashboard or
report. The following widget displays the time duration:
Trend
In this example, create a script that queries and returns the trend between two sums. If creating a JSON file,
set widgetType to trend.
In the automation script, type one of the following return values:
JavaScript
Python
The return value is an object that compares the current sum with the previous sum.
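As a minimal sketch of that shape, the script might serialize the two sums as follows. The currSum/prevSum field names are an assumption based on common trend-widget examples, not a confirmed schema:

```python
import json

def trend_result(current_sum, previous_sum):
    # The widget compares the current period's sum with the previous one.
    return json.dumps({"currSum": current_sum, "prevSum": previous_sum})

print(trend_result(85, 70))  # {"currSum": 85, "prevSum": 70}
```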
(Optional) If creating a JSON file, type the following:
{
"id": "1a2b3c4d55",
"name": "DailyTotalTrend",
"dataType": "scripts",
"widgetType": "trend",
"query": "DailyTotalTrendScript"
}
var data = [
    {name: "2018-04-12", data: [10], color: "blue"},
    {name: "2018-04-10", data: [3], color: "#029be5"},
    {name: "2018-04-17", data: [1], color: "rgb(174, 20, 87)"},
    {name: "2018-04-16", data: [34], color: "grey"},
    {name: "2018-04-15", data: [17], color: "purple"}
];
return JSON.stringify(data);
Python
data = [
    {"name": "2018-04-12", "data": [10], "color": "blue"},
    {"name": "2018-04-10", "data": [3], "color": "#029be5"},
    {"name": "2018-04-17", "data": [1], "color": "rgb(174, 20, 87)"},
    {"name": "2018-04-16", "data": [34], "color": "grey"},
    {"name": "2018-04-15", "data": [17], "color": "purple"}
]
demisto.results(json.dumps(data))
{
"id": "1a2b3c4dee",
"name": "DailyTotalSales",
"dataType": "scripts",
"widgetType": "pie",
After you have uploaded the script and created the widget you can add the widget to a dashboard or report.
The following widget displays the trend in a pie chart:
var data = [
    {name: "2018-04-12", data: [10], groups: [{name: "Unclassified", data: [10]}]},
    {name: "2018-04-10", data: [3], groups: [{name: "Unclassified", data: [2]}, {name: "Access", data: [1]}]},
    {name: "2018-04-17", data: [1], groups: [{name: "Unclassified", data: [1]}]},
    {name: "2018-04-16", data: [34], groups: [{name: "Unclassified", data: [18]}, {name: "Phishing", data: [14]}]},
    {name: "2018-04-15", data: [17], groups: [{name: "Access", data: [17]}]}
];
return JSON.stringify(data);
Python
data = [
    {"name": "2018-04-12", "data": [10], "groups": [{"name": "Unclassified", "data": [10]}]},
    {"name": "2018-04-10", "data": [3], "groups": [{"name": "Unclassified", "data": [2]}, {"name": "Access", "data": [1]}]},
    {"name": "2018-04-17", "data": [1], "groups": [{"name": "Unclassified", "data": [1]}]},
    {"name": "2018-04-16", "data": [34], "groups": [{"name": "Unclassified", "data": [18]}, {"name": "Phishing", "data": [14]}]},
    {"name": "2018-04-15", "data": [17], "groups": [{"name": "Access", "data": [17]}]}
]
demisto.results(json.dumps(data))
{
"id": "1a2b3c4de345",
"name": "EmployeeInfo",
"dataType": "scripts",
After you have uploaded the script and created a widget you can add the widget to a dashboard or report.
The following widget displays the employee information:
STEP 1 | In the Indicators page, from the drop down list select the date range.
STEP 2 | In the query field, type the query criteria as required and run the query.
By default, the widget inherits the date range that you specify when creating the widget,
but you can modify the date range when you create the dashboard or report. If the date
range for the report or dashboard does not include the widget date range, the data is
blank. To override the dashboard or report’s date range, click Use Widget’s date range.
In the Automation page, when adding or editing the script you want to use, ensure that you
add the dynamic-section label.
STEP 4 | From the Layout Builder window, in the Library section, drag the General Purpose Dynamic
Section into the layout area you want it to appear.
STEP 5 | In the General Purpose Dynamic Section, click the edit button.
STEP 7 | In the Automation script field, from the drop down list select the automation script you want
to add.
If the automation script does not appear, you need to add the dynamic-section label to the
script in the Automation page.
STEP 2 | To edit the widget in a dashboard or report, from the widget, select the gear icon > Edit
Widget.
If the widget is not in a dashboard or report, you need to add the widget.
STEP 3 | To edit the widget in the Widgets Library, search for the widget and then click the edit button.
STEP 4 | In the Quick chart definitions window, edit the Widgets Parameters as required.
Although the widget comes out of the box with Cortex XSOAR, you can add the Return on
Investment (ROI) widget in the Widgets Library; it is identical to the Saved by Dbot
widget.
The following parameters are used to calculate the amount saved by Dbot (ROI):
Parameter Description
Roundtrip The time it takes in minutes to run an integration task with any of the
integrated products. This can be a command within a script or inside the
War Room.
Script The time it takes to undertake an action that a script would do.
You can also change the currency symbol from Dollars to a currency of your choice.
{
"size":5,
"dataType":"roi",
"params":{
"currencySign":"€"
},
"query":"",
"modified":"2019-01-12T15:13:09.872797+02:00",
"shouldCommit":false,
"name":"Return On Investment (ROI)",
"shouldPush":false,
"dateRange":{
"fromDate":"0001-01-01T00:00:00Z",
"toDate":"0001-01-01T00:00:00Z",
"period":{
"by":"",
"byTo":"",
"byFrom":"days",
"toValue":null,
"fromValue":30,
"field":""
}
},
"commitMessage":"",
"isPredefined":true,
"version":13,
"id":"roi",
"shouldPublish":false,
"category":"others",
"sort":null,
"prevName":"Return On Investment (ROI)",
"widgetType":"number"
}
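To illustrate how the Roundtrip and Script parameters combine, here is a hedged, standalone sketch of the general ROI idea. The function name, the per-action minutes, and the hourly rate are all illustrative assumptions, not the product's internal formula:

```python
def estimated_savings(commands_run, scripts_run,
                      roundtrip_minutes=5, script_minutes=10, hourly_rate=70):
    """Rough dollars saved: automated actions times assumed human effort."""
    minutes_saved = commands_run * roundtrip_minutes + scripts_run * script_minutes
    return round(minutes_saved / 60 * hourly_rate, 2)

print(estimated_savings(commands_run=120, scripts_run=30))  # 1050.0
```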
Understand Indicators
Indicators are artifacts associated with incidents, and are an essential part of the incident management and
remediation process.
They help to correlate incidents, create hunting operations, and enable you to easily analyze incidents and
reduce MTTR.
Cortex XSOAR includes an Indicator repository, which collects and correlates indicators across all incidents,
alerts, and feeds flowing into Cortex XSOAR.
• Indicators Page
• Indicator Reputation
• Indicator Types
• Indicator Fields
Detect and ingest indicators
There are several methods by which indicators are detected and ingested in Cortex XSOAR.
Method Description
Integration • Feed: integrations that fetch indicators from a feed, for example TAXII,
AutoFocus, Office 365, and so on.
• Mail: integrations that consume emails with STIX or CSV files and add
the indicators to the indicator repository.
Feed Integrations
Cortex XSOAR has several out-of-the-box threat intelligence feed integrations.
• AutoFocus
• AWS
• Microsoft Azure
• Bambenek Consulting
• Blocklist_de
• Microsoft Office 365
• Palo Alto Networks PAN-OS EDL Service
• Proofpoint
• Recorded Future RiskList
Parameter Description
Name A meaningful name for the integration instance. For example, if you
have separate instances to fetch indicator types, you can include the
name of the indicator type that the instance fetches.
Fetch indicators Select this option for the integration instance to fetch indicators.
Some integrations can fetch indicators or incidents. Make sure you
select the relevant option for what you need to fetch in the instance.
Sub-Feeds Some feeds might have several lists or files that provide indicators.
The sub-feeds parameter enables you to select the specific list or file
from which to fetch indicators. For example, Bambenek Consulting
provides different lists for IPs and domains. Each of the Bambenek lists
are available as sub-feeds.
Fetch Interval How often the integration instance should fetch indicators from the
feed.
Indicator Reputation The indicator reputation to apply to all indicators fetched from this
integration instance.
Source Reliability The reliability of the source providing the threat intelligence data.
Indicator Expiration Method The method by which to expire indicators from this integration instance.
The default expiration method is the interval configured for the indicator
type to which this indicator belongs.
• Indicator Type: the expiration method defined for the indicator type
to which this indicator belongs (interval or never).
• Time Interval: expires indicators from this instance after the specified
time interval, in days or hours.
• Never Expire: indicators from this instance never expire.
• When removed from the feed: when the indicators are removed from
the feed they are expired in the system.
Bypass exclusion list When selected, the exclusion list is ignored for indicators from this feed.
This means that if an indicator from this feed is on the exclusion list, the
indicator might still be added to the system.
Use system proxy settings Runs the integration instance using the proxy server (HTTP or HTTPS)
that you defined in the server configuration.
Do not use by default Excludes this integration instance when running a generic command that
uses all available integrations.
Indicators Page
The Indicators page displays indicator dashboards, a table or summary view of all indicators, and enables
you to perform several indicator actions.
Indicator actions
You can perform the following actions on the Indicators page.
Action Description
Create incident Creates an incident from the selected indicators and populates relevant incident
fields with indicator data.
Edit You can edit a single indicator or select multiple indicators to perform a bulk
edit.
Delete and Exclude You can select to delete and exclude one or more indicators from all
indicator types or from a subset of indicator types.
If you select the Do not add to exclusion list check box, the selected indicators
are only deleted.
Upload a STIX file Uploads a STIX file and adds the indicators from the file to the system.
Indicator query
You can search for indicators using any of the available search fields, but there are several fields specific to
indicators that you can use to search for indicators.
Field Description
Indicator Reputation
An indicator’s reputation is assigned according to the reputation returned by the source with the highest
reliability. In cases where multiple sources with the same reliability score return a different reputation for
the indicator, the worst reputation is taken.
Indicator reputations
Indicators are assigned a reputation on a scale of 0 to 3.
0 None No color
1 Good Green
2 Suspicious Orange
3 Bad Red
Example 1
In this example, two 3rd-party integrations, VirusTotal and AlienVault, return a different reputation for the
same indicator. VirusTotal returns a reputation of Good, and AlienVault returns a reputation of Bad. The
indicator’s reputation will be Bad.
Example 2
In this example, two sources with different reliability scores return a different reputation for the same
indicator. The first source is a TAXII feed with a reliability score of C - Fairly reliable, and the second source
is a CSV feed with a reliability score of B - Usually reliable. The TAXII feed returns a reputation of Bad
and the CSV feed returns a reputation of Good. The indicator’s reputation will be Good because the CSV
reliability score is higher than that of the TAXII feed.
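Both examples follow one rule: among the sources with the highest reliability score, the worst reputation wins. This standalone sketch (helper names are illustrative) encodes that rule:

```python
# Reliability letters run from A (most reliable) downward; reputation
# ranks run from None (0) to Bad (3), where higher is worse.
REPUTATION_RANK = {"None": 0, "Good": 1, "Suspicious": 2, "Bad": 3}

def merged_reputation(sources):
    """sources: list of (reliability_letter, reputation) pairs."""
    best = min(rel for rel, _ in sources)            # highest-reliability letter
    candidates = [rep for rel, rep in sources if rel == best]
    return max(candidates, key=REPUTATION_RANK.get)  # worst reputation wins

# Example 1: equal reliability, worst reputation wins.
print(merged_reputation([("B", "Good"), ("B", "Bad")]))   # Bad
# Example 2: TAXII (C) says Bad, CSV (B) says Good.
print(merged_reputation([("C", "Bad"), ("B", "Good")]))   # Good
```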
Source reliability
The reliability of an intelligence-data source influences the reputation of an indicator and the values for
indicator fields when merging indicators.
Indicator fields are merged according to the source reliability hierarchy. This means that when there are two
different values for a single indicator field, the field will be populated with the value provided by the source
with the highest reliability score.
In rare cases, two sources with the same reliability score might return different values for the same indicator
field. In these cases, the field will be populated with the value from the most recent source.
For the field types Tags and Multi-select, all values are appended, nothing is overridden.
For example: C - Fairly reliable, E - Unreliable.
Indicator expiration
Indicators can have the status Active or Expired, which is determined by the expirationStatus field. When
indicators expire, they still exist in Cortex XSOAR, meaning they are still displayed and you can still search
for them. A job runs every hour to check for newly expired indicators.
By default, indicators are expired according to either the expiration interval configured for the indicator
type to which the indicator belongs, or to never expire.
This is the hierarchy by which indicators are expired.
Method Description
Manual A user manually expires an indicator. This method overrides all other
methods.
Feed integration The expiration method configured for an integration instance, which
overrides the method defined for the indicator type.
Indicator type The expiration method defined for the indicator type to which this indicator
belongs (interval or never). This is the default expiration method for an
indicator.
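The expiration hierarchy above amounts to a simple fall-through, sketched here with illustrative names (not the product's internals):

```python
def expiration_method(manual=None, feed_instance=None, indicator_type="never"):
    """Return the effective expiration method for an indicator.
    Each argument is a method such as 'never' or ('interval', days),
    or None if nothing is set at that level."""
    # Manual expiration by a user overrides all other methods.
    if manual is not None:
        return manual
    # Next, the method configured on the feed integration instance.
    if feed_instance is not None:
        return feed_instance
    # Finally, the indicator type's method (interval or never) is the default.
    return indicator_type

print(expiration_method())                                      # never
print(expiration_method(feed_instance=("interval", 7)))         # ('interval', 7)
print(expiration_method(manual="expired",
                        feed_instance=("interval", 7)))         # expired
```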
STEP 1 | Go to the Automation page and locate the script you want to edit.
STEP 2 | Click Copy Automation and modify an existing reputation script, such as DataURLReputation.
In the following example, we redefine the values for each reputation:
STEP 4 | To add the script to the indicator, go to Settings > Indicator Types.
STEP 5 | Select the indicator type to which you want to add the script and click Edit.
STEP 6 | In the Reputation Script field, select the script you modified in step 2.
Indicator Types
Indicators are categorized by indicator type, which determines the indicator layout (fields) that is
displayed and which scripts run on indicators of that type.
There are several system-level indicator types.
• IP Address
• Registry Path Reputation
• File
• Email
• Username
• Hostname
• Domain
• File Enhancement Scripts
Table 1: Settings
Field Description
Regex The regular expression (regex) by which to identify indicators for this
indicator type.
Formatting Script The script that runs on the indicator and modifies how it displays in Cortex
XSOAR, such as in the War Room, reports, and so on. For example,
the UnescapeURLs script extracts URLs that are redirected by security
tools or unescapes URLs that are escaped for safety (e.g., hxxps://
www[.]CortexXSOAR[.]com).
Reputation Script User-created scripts that either override the Cortex XSOAR command
algorithm or run on top of the data returned from the command. For
these scripts to be available in the drop-down menu, they must have
the reputation tag applied in the Automation page.
Indicator Expiration Method The method by which to expire indicators of this type. The expiration
method that you select is the default expiration method for indicators
of this indicator type.
The expiration can also be assigned when configuring a feed
integration instance, which overrides the default method.
• Never Expire: indicators of this type never expire.
• Time Interval: indicators of this type expire after the specified
number of days or hours.
Context path for reputation When an indicator is auto-extracted, the entry data from the
value (Advanced) command is mapped to the incident context. This path defines the
context key that the indicator reputation is mapped to.
Context value of reputation The value of this field defines the actual data that is mapped to the
(Advanced) context path.
Cache expiration in minutes The amount of time (in minutes) after which the cache for indicators
(Advanced) of this type expire. The default is 4,320 minutes (three days).
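As an illustration of the Regex field above, a custom indicator type might use a pattern like the following to recognize IPv4 addresses (a simplified sketch; the built-in IP type ships with its own, stricter regex):

```python
import re

# Simplified IPv4 pattern: four dot-separated octet-like groups
# (no 0-255 range validation, purely illustrative).
IPV4_REGEX = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

text = "Connections observed from 203.0.113.7 and 198.51.100.22."
print(IPV4_REGEX.findall(text))  # ['203.0.113.7', '198.51.100.22']
```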
Formatting scripts for out-of-the-box indicator types are now system level. This means that
the formatting scripts for these indicator types are not configurable. To create a formatting
script for an out-of-the-box indicator type, you need to disable the existing indicator type and
create a new (custom) indicator type. If you configured a formatting script before this change
and updated your content, this configuration will revert to content settings (empty).
File Indicators
Cortex XSOAR uses a single File indicator for file objects. Files appear with their SHA256 hash, and all other
hashes associated with the file (MD5, SHA1, and SSDeep) are listed as properties of the same indicator.
Also, when ingesting an incident through an integration, all file information is presented as one object.
For example, when viewing an incident, you can see a file indicator with a Bad Reputation value:
When clicking the indicator, you can see additional information for that indicator, including all of the other
known hashes associated with this file:
The new File indicator only affects new indicators ingested to the Cortex XSOAR platform.
Indicators that were already in Cortex XSOAR continue to appear as their respective hash-
related indicators.
If you want to have each file hash appear as its own indicator, do the following:
1. Go to Settings > Advanced > Indicator Types.
2. Select the File indicator and click Disable.
3. Select the following required hashes:
• File SHA-256
• File SHA-1
• File MD5
• SSDeep
4. Click Enable.
Indicator Fields
After you create a custom indicator field, you can add it to the indicator layout for the indicator types to
which you associated the field.
• Create a Custom Indicator Field
• Map Custom Indicator Fields
Field Description
Case Sensitive If selected, the field is case sensitive, which affects searching for
the field in Cortex XSOAR.
Field Name A meaningful display name for the field. After you type a name,
you will see below the field that the Machine name is automatically
populated. The field’s machine name is applicable for searching and
the CLI.
Field Description
Add to indicator types By default, the Associate to all option is selected, which means this
field is available in all indicator types.
Clear the check box to associate this field with a subset of indicator
types.
Make data available for search The values for this field can be returned in searches.
STEP 2 | Select the check box for the indicator for which to map the custom fields.
STEP 5 | (Optional) In the Indicator Sample panel, enter an indicator relevant to the indicator type to load
sample data.
STEP 6 | Click Choose data path to map the custom field to a data path.
1. (Optional) Click the curly brackets to map the field to a context path.
2. (Optional) From the Indicator Sample panel, select a context key to map to the field.
Exclusion List
Indicators added to the exclusion list are ignored by the system and are not considered indicators. You can
still manually enrich IP addresses and URLs that are on the exclusion list, but the results are not posted to
the War Room.
There are several methods by which to add indicators to the exclusion list.
If you want to trigger a job after a feed completes a fetch operation, and the feed does
not change frequently, you can select the Reset last seen option in the feed integration
instance. The next time the feed fetches indicators, it will process them as new indicators in
the system.
Parameter Description
Playbook The playbook that will run when the conditions for the job are met.
Tags Add tags to apply to the job, which you can use as a search
parameter in the system.
indicator.timeline.worker.enabled true or false Enables you to add timeline comments through
content integrations.
• Out of band - Indicators are enriched in parallel (or asynchronously) to other actions. The enriched data
is available within the incident; however, it is not available for immediate use in task inputs or outputs
because the information is not available in real time.
By design, domains are extracted only from URLs and email addresses. Otherwise, the
amount of incorrect extractions would be huge and every <text>.<text> would be considered
as a domain indicator. So, for example, google.com will not be extracted, but https://
google.com will.
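That behavior can be sketched in a few lines (an illustrative approximation, not the Cortex XSOAR extraction engine): take domains only from URLs, never from bare `<text>.<text>` tokens.

```python
import re
from urllib.parse import urlparse

# Match URLs only; bare dotted words are deliberately ignored.
URL_REGEX = re.compile(r"https?://\S+")

def extract_domains(text):
    """Extract domains from URLs in the text, never from bare tokens."""
    return [urlparse(match).hostname for match in URL_REGEX.findall(text)]

print(extract_domains("See google.com for details"))       # [] - bare token ignored
print(extract_domains("Link: https://google.com/search"))  # ['google.com']
```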
STEP 3 | Under Reputation command, enter the command to execute when auto extracting indicators
of this type.
STEP 4 | Under Exclude these integrations for the reputation command, select which integrations
should not be used when executing the reputation command.
STEP 5 | Under Reputation Script, select the script to run when enriching indicators of this indicator
type. The script overrides the reputation command.
STEP 1 | Navigate to the Playbooks page and search for the Process Email - Generic playbook.
This playbook parses the headers in the original email used in a phishing attack. It is important to parse
the original email used in the Phishing attack and not the email that was forwarded to make sure that
you are only extracting and enriching the email headers from the malicious email and not the one your
organization uses to report phishing attacks.
Incident Lifecycle
Cortex XSOAR is an orchestration and automation system used to bring all of the various pieces of your
security apparatus together.
Using Cortex XSOAR, you can define integrations with your 3rd-party security and incident management
vendors. You can then trigger events from these integrations that become incidents in Cortex XSOAR. Once
the incidents are created, you can run playbooks on these incidents to enrich them with information from
other products in your system, which helps you complete the picture.
In most cases, you can use rules and automation to determine if an incident requires further investigation
or can be closed based on the findings. This enables your analysts to focus on the minority of incidents that
require further investigation.
The following diagram explains the incident lifecycle in Cortex XSOAR.
Planning
Before you begin configuring integrations and ingesting information from 3rd parties, you should plan
ahead.
Phase Description
Create fields Used to display information from 3rd-party integrations and playbook
tasks when an incident is created or processed. For more information, see
Incident Fields.
Create incident types Classify the different types of attacks with which your organization deals.
Create incident layouts Customize your layouts for each incident type to make sure the most
relevant information is shown for each type. For more information, see
Customize Incident Layouts.
Configure Integrations
You configure integrations with your 3rd-party products to start fetching events. Events can be potential
phishing emails, authentication attempts, SIEM events, and more.
Classification Mapping
Once you configure the integrations, you have to determine how the events ingested from those
integrations will be classified as incidents. For example, for email integrations, you might want to classify
items based on the subject field, but for SIEM events, you will classify by event type. In addition, you have
to map the information coming from the integrations into the fields that you created in the planning stage.
For more information, see Classification and Mapping.
Pre-Processing
Pre-processing rules enable you to perform certain actions on incidents as they are ingested into Cortex
XSOAR directly from the UI. Using the rules, you can select incoming events on which to perform actions,
for example, link the incoming event to an existing incident, or based on configured conditions, drop the
incoming incident altogether. For more information, see Create Pre-Process Rules for Incidents.
Incident Created
Based on the definitions you provided in the Classification and Mapping stage, as well as the rules you
created for pre-processing events, incidents of various types are created. The incidents all appear in the
Incidents page of the Cortex XSOAR user interface, where you can start the process of investigating them.
Running Playbooks
Playbooks are triggered either when an incident is created or when you run them manually as part of an
investigation. When triggered as part of an incident that was created, the playbooks for the type of incident
that was classified will run on the incident. Alternatively, if you are manually running a playbook, you can
select whichever playbook is relevant for the investigation. For example, playbooks can take IP address
information from one integration and enrich that IP address with information from additional integrations or
sources.
Post-Processing
Once the incident is complete and you are ready to close it out, you can run various post-processing actions
on the incident. For example, send an email to the person who opened the incident informing them that
their incident has been resolved, or close an incident in a ticketing system.
To view the REST API documentation, select Settings > INTEGRATIONS > API Keys > View
Cortex XSOAR API.
If you turn off fetching for a period of time and then turn it on, or disable the instance and
enable it, the instance remembers the "last run" timestamp and pulls all events that occurred
while it was off. If you don't want this to happen, verify that the instance is enabled and
then click Reset the “last run” timestamp in the settings window. Also, note that "last run" is
retained when an instance is renamed.
You set the objects to be fetched and their mapping in Settings > INTEGRATIONS > Classification &
Mapping.
Classification
Classification determines the type of incident that is created for events ingested from a specific integration.
You can classify events in the following ways:
• Defining an integration
Select the incident type that is created. When configured, this becomes the default incident type. If
you do not classify the event through classification and mapping, it is set to the type you define here.
• Setting a classification key
Use the classification engine to determine the incident type. This overrides whatever you configured in
the integration settings.
STEP 1 | Open the Classification & Mapping window for the Integrations instance.
1. Go to Settings > Integrations > Servers & Services and next to the integration instance, click
Mapping
2. In the Classification & Mapping tab, from the dropdown menu, select the integration instance.
STEP 2 | In the Values to Identify column, drag values from the Unmapped Values column or type your
own value.
STEP 7 | Drag any unmapped value to the Values to Identify column for the incident type to which you
want to classify it. For any unmapped values that you do not classify, an incident of the type
defined in the integration settings is created.
You can map multiple values to an incident type, but you cannot map an unmapped value to multiple
incident types.
STEP 2 | In the Mapping Wizard, in the Map to column, click Choose data path.
STEP 3 | Click the event attribute to which you want to map. You can further manipulate the field using
filters and transformers.
The connectivity behavior that exists between third-party applications may trigger a fetch
failure, which will send a notification to an administrator and users. The notification may no
longer be relevant because the fetch might operate correctly just after the notification was
sent.
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
Key Value
message.ignore.failedFetchIncidents false
STEP 3 | (Optional) Administrators who have multiple mail sender instances configured and want to
receive only one email notification need to add the following key and value:
Key Value
STEP 2 | From the drop down list, select the date range for which you want to search.
By default, it is the last 7 days.
STEP 3 | If you want to customize the table summary view, click the gear icon above the table.
STEP 4 | If you want to customize the chart panel, go to one of the charts and from the drop down list
select the chart as required.
In this example, you need to search for all incidents according to the following criteria:
• Status is not closed
• Category is not job
• Type is phishing
• Opened within the last 7 days
In addition, add the Created column to the table summary.
STEP 1 | In the Incidents page, from the drop down list select the date range.
STEP 2 | In the query field, type the query criteria as required and run the query.
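Assuming the standard Cortex XSOAR query syntax, the first three criteria above could be combined into a single query (the exact field names depend on your configuration), with the last-7-days criterion set through the date range drop-down rather than in the query itself:

```
-status:closed -category:job type:Phishing
```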
By default, the widget inherits the date range that you specify when creating the widget,
but you can modify the date range when you create the dashboard or report. If the date
range for the report or dashboard does not include the widget date range, the data is
blank. To override the dashboard or report’s date range, click Use Widget’s date range.
STEP 2 | Type the name (Closed Job Incidents with Access Investigation (past 6 months)) and save
the query results as a widget:
There are several Cortex XSOAR system layout sections and fields that you cannot remove,
but you can rearrange them in the layout and modify their queries and filters.
STEP 2 | Select the incident type check box that you want to customize.
View Description
Incident Quick View The fields and sections when displaying the
incident quick view.
STEP 5 | From the Cortex XSOAR Sections area of the Library section, drag and drop the following
sections as required.
Section Description
Cortex XSOAR out of the box sections Out of the box sections such as Attachments,
Evidence, and so on.
General Purpose Dynamic Section Enables you to assign a script to this section.
For example, assign a script that calculates the
total number of entries that exist for an incident,
and it dynamically updates when new entries are
added to the incident.
STEP 2 | Select the incident type whose layout you want to edit and click Edit Layout.
You are presented with the current layout, which is populated with demo data so you can see how the
fields fit.
3. Enter a descriptive name for the button and select the script that you want to run when the button is
clicked.
4. Click Save.
In the Automation page, when adding or editing the script you want to use, ensure that you
add the dynamic-section label.
STEP 3 | Select the incident type in which you want the widget to appear and click Edit layout.
STEP 4 | From the Layout Builder window, in the Library section, drag the General Purpose Dynamic
Section into the layout area in which you want it to appear.
STEP 5 | In the General Purpose Dynamic Section, click the edit button.
STEP 7 | In the Automation script field, from the drop down list select the automation script you want
to add.
If the automation script does not appear, you need to add the dynamic-section label to the
script in the Automation page.
The following example shows how to add the Indicator Widget Bar to all Phishing incident types in the
Case info tab.
1. In the Incidents Types tab, select Phishing.
2. Click Edit Layout.
3. After adding the General Purpose Dynamic Section into the layout area, edit the widget, by adding the
name and script.
STEP 1 | Select the incident type to which you want to add the General Purpose Dynamic Section by
completing steps 1 to 3 in Customize Incident View Layouts.
STEP 2 | In the Incident Summary tab, drag and drop the General Purpose Dynamic Section onto the
page.
You can also select this section in the Incident Quick View tab.
STEP 3 | Select the General Purpose Dynamic Section and click Edit section settings.
STEP 4 | In the Name and Description fields, add a meaningful name and a description for the dynamic
section that explains what the script displays.
STEP 5 | In the Automation script field, from the drop down list, select the script that returns data for
the dynamic section.
For the script to appear, the script needs the dynamic-section tag assigned in the Automation page.
commonfields:
  id: ShowLastNoteUserAndDate
version: -1
name: ShowLastNoteUserAndDate
script: |2
  function getLastNote(incidentID) {
    var body = {pageSize: 1, categories: ['notes']};
    var res = executeCommand('demisto-api-post', {uri: '/investigation/' + incidentID, body: body});
    if (isError(res[0])) {
      throw 'demisto-api-post failed for incident #' + incidentID + '\nbody is ' + JSON.stringify(body) + '\n' + JSON.stringify(res);
    }
    if (!res[0].Contents.response.entries) {
      return null;
    }
    // Return the most recent note entry (user, modified, contents).
    return res[0].Contents.response.entries[0];
  }

  var md = '';
  var lastNote = getLastNote(incidents[0].id);
  if (lastNote) {
    md = `#### Update by ${lastNote.user} on ${lastNote.modified.split('T')[0]}\n`;
    md += `\n---\n`;
    md += lastNote.contents + '\n';
  }
STEP 2 | Select the incident type to which you want to add the script, by completing steps 1 to 4 in
Add a Dynamic Section to an Incident Layout.
STEP 3 | In the Automation script field, select the automation added in step 1.
STEP 4 | Go to the incident for which you want to view the note information.
You can see note information, containing the last user and date.
STEP 1 | Navigate to the Automation page and duplicate the hideFieldsOnNewIncident automation.
1. Give the script a descriptive name.
2. Enter a useful description.
3. Under Tags, make sure that the field-display tag appears.
This tag must be applied for the script to be available to be used on the field.
4. Save the automation.
STEP 4 | Implement the Assign To field in the relevant layouts. For more information, see Customize
an Incident Type Layout.
incident = demisto.incidents()[0]
field = demisto.args()['field']
if incident.get('owner') == 'admin':
demisto.results({'hidden': False, 'options': ['jane','joe', 'bob']})
else:
demisto.results({'hidden': False, 'options': ['mark','jack', 'christine']})
where
• demisto.incidents() returns the incidents in which this script is running; we take the first one.
• incident.get('owner') reads the owner field of the incident.
• demisto.results returns whether to hide the field and which values should appear in it.
When the owner is admin, the values are Jane, Joe, and Bob. When the owner is anyone else, the
values are Mark, Jack, and Christine.
5. We navigate to Settings > Advanced > Fields and click +New Field.
• We’ll call the field Assign To:.
The Values field in the Basic Settings tab has been left blank because we hard-coded the values in
our script.
• Under the Attributes tab, in the Field display script field, select the changeAsigneesPerOwner
script we created above.
• Fill in the rest of the field definitions as desired and click Save.
incident = demisto.incidents()[0]
field = demisto.args()['field']
formType = demisto.args()['formType']
if incident["id"] == "":
    # This is a new incident, hide the field
    demisto.results({"hidden": True, "options": []})
else:
    # This is an existing incident. Show the field, and determine
    # which values to display.
    options = []
    # The field type includes the word select, such as Single select
    # or Multi select
    if "Select" in demisto.get(field, "type"):
        # take the options from the field definition
        options = demisto.get(field, "selectValues")
    demisto.results({"hidden": False, "options": options})
data = {
    "Type": 17,
    "ContentsFormat": "bar",
    "Contents": {
        "stats": [
            {
                "data": [1],
                "groups": None,
                "name": "high",
                "label": "incident.severity.high",
                "color": "rgb(255, 23, 68)"
            },
            {
                "data": [1],
                "groups": None,
                "name": "medium",
                "label": "incident.severity.medium",
                "color": "rgb(255, 144, 0)"
            },
            {
                "data": [2],
                "groups": None,
                "name": "low",
                "label": "incident.severity.low",
                "color": "rgb(0, 205, 51)"
            },
            {
                "data": [8],
                "groups": None,
                "name": "unknown",
                "label": "incident.severity.unknown",
                "color": "rgb(197, 197, 197)"
            }
        ],
        "params": {
            "layout": "vertical"
        }
    }
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout.
The following widget displays:
data = {
    "Type": 17,
    "ContentsFormat": "bar",
    "Contents": {
        "stats": [
            {
                "data": [1],
                "groups": None,
                "name": "high",
                "label": "incident.severity.high",
                "color": "rgb(255, 23, 68)"
            },
            {
                "data": [1],
                "groups": None,
                "name": "medium",
                "label": "incident.severity.medium",
                "color": "rgb(255, 144, 0)"
            },
            {
                "data": [2],
                "groups": None,
                "name": "low",
                "label": "incident.severity.low",
                "color": "rgb(0, 205, 51)"
            },
            {
                "data": [8],
                "groups": None,
                "name": "unknown",
                "label": "incident.severity.unknown",
                "color": "rgb(197, 197, 197)"
            }
        ],
        "params": {
            "layout": "horizontal"
        }
    }
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout.
The following widget displays:
Stacked Bar
In this example, create a script in Python that displays a stacked bar showing the success and failures on
specific dates.
data = {
    "Type": 17,
    "ContentsFormat": "bar",
    "Contents": {
        "stats": [
            {
                'name': 'time1',
                'groups': [
                    {
                        'name': 'Successes',
                        'data': [7],
                        'color': 'rgb(0, 205, 51)'
                    },
                    {
                        'name': 'Failures',
                        'data': [3],
                        'color': 'rgb(255, 144, 0)'
                    }
                ]
            },
            {
                'name': 'time2',
                'groups': [
                    {
                        'name': 'Successes',
                        'data': [9],
                        'color': 'rgb(0, 205, 51)'
                    },
                    {
                        'name': 'Failures',
                        'data': [4],
                        'color': 'rgb(255, 144, 0)'
                    }
                ]
            }
        ]
    }
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout.
The following widget displays:
Pie
In this example, create a script in Python that queries and returns a pie chart.
data = {
    "Type": 17,
    "ContentsFormat": "pie",
    "Contents": {
        "stats": [
            {
                "data": [1],
                "groups": None,
                "name": "high",
                "label": "incident.severity.high",
                "color": "rgb(255, 23, 68)"
            },
            {
                "data": [1],
                "groups": None,
                "name": "medium",
                "label": "incident.severity.medium",
                "color": "rgb(255, 144, 0)"
            },
            {
                "data": [2],
                "groups": None,
                "name": "low",
                "label": "incident.severity.low",
                "color": "rgb(0, 205, 51)"
            }
        ]
    }
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout.
The following widget displays indicator severity as a pie chart:
Duration
In this example, create a script in Python that queries and returns a time duration (specified in seconds), and
displays the data as a countdown clock.
data = {
    "Type": 17,
    "ContentsFormat": "duration",
    "Contents": {
        "stats": 60 * (30 + 10 * 60 + 3 * 60 * 24),
        "params": {
            "layout": "horizontal",
            "name": "Lala",
            "sign": "@",
            "colors": {
                "items": {
                    "#00CD33": {
                        "value": 10
                    },
                    "#FAC100": {
                        "value": 20
                    },
                    "green": {
                        "value": 40
                    }
                }
            },
            "type": "above"
        }
    }
}
demisto.results(data)
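For reference, the stats expression in the example above encodes 3 days, 10 hours, and 30 minutes, first summed in minutes and then converted to seconds:

```python
# 30 minutes + 10 hours + 3 days, expressed in minutes, then in seconds.
minutes = 30 + 10 * 60 + 3 * 60 * 24   # 30 + 600 + 4320 = 4950 minutes
seconds = 60 * minutes
print(seconds)  # 297000
```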
After you have uploaded the script and created the widget, you can add the widget to an incident layout.
The following widget displays the time duration:
Number
This example shows how to create a single item widget that displays a number.
data = {
    "Type": 17,
    "ContentsFormat": "number",
    "Contents": {
        "stats": 53,
        "params": {
            "layout": "horizontal",
            "name": "Lala",
            "sign": "@",
            "colors": {
                "items": {
                    "#00CD33": {
                        "value": 10
                    },
                    "#FAC100": {
                        "value": 20
                    },
                    "green": {
                        "value": 40
                    }
                }
            },
            "type": "above"
        }
    }
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout.
The following widget displays:
data = {
    "Type": 17,
    "ContentsFormat": "number",
    "Contents": {
        "stats": {"prevSum": 53, "currSum": 60},
        "params": {
            "layout": "horizontal",
            "name": "Lala",
            "sign": "@",
            "colors": {
                "items": {
                    "#00CD33": {
                        "value": 10
                    },
                    "#FAC100": {
                        "value": 20
                    },
                    "green": {
                        "value": 40
                    }
                }
            },
            "type": "above"
        }
    }
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout.
The following widget displays:
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
(Multi-Tenant) When changing the display name of security incidents, the URL link which
contains /incident may not work properly. For example, when changing the incident
to case, sometimes the links are formed with the /incident URL and not with the /case
URL. This can usually be corrected by clearing the browser cache and reloading the page.
STEP 4 | In the Value field enter the value for the term.
Value Term
0 Incidents (default)
1 Cases
2 Alerts
3 Events
4 Plays
5 Tickets
6 Issues
After an incident is created, it is assigned a Pending status in the incident table. When
you start to investigate an incident the status changes automatically to Active, which
starts the remediation process.
• CLI: If you want to open an incident in the CLI, type /investigate id=<incidentID#>.
Incidents page
When you open an incident, you see the following tabs, which assist you in investigating the incident:
Tab Description
Incident/Case Info A summary of the incident, such as case details, work plan, evidence, and so on.
Most of the fields are for information only, although you can add the following:
• Evidence: A summary of data marked as evidence. You can add evidence in
this tab or in the Evidence Board.
• Notes: Displays any notes that have been entered. For example, understand
specific actions taken by the analyst and the underlying reasons, see chats
between analysts to highlight how they arrived at a certain decision, etc. You
can also see the thought process behind identifying key evidence and learn
about similar incidents in the future.
You can also add notes in the War Room.
• Tasks: View tasks to complete as part of an investigation. You can add tasks
in this tab or Create a To-Do Task.
You can send a permalink to a specific Investigation Summary by copying its URL.
Work Plan A visual representation of the running playbook that is assigned to the incident.
Evidence Handling View any entity which has been designated as evidence. The Evidence board
stores key artifacts for current and future analysis. You can reconstruct attack
chains and piece together key pieces of verification for root cause discovery.
Related Incidents A visual representation of incidents that share similar characteristics, such as
malicious indicators, or part of a phishing campaign.
Canvas Visually maps an incident, its elements, correlated investigation entities, and the
progression path of the incident, combining analyst intelligence with machine
learning.
The Related Incidents page is oriented towards exploration and searching for
similar data. The Canvas maps incidents and indicators, enabling you to decide
what you want to include in a layout of your choice.
You can Link Incidents, edit the incident, add a child incident, add tasks, notes, and so on. For more
information, see incident actions.
If you want to customize incident layouts, see Customize Incident View Layouts.
Action Description
Edit You can edit, format or delete your own entries. If an entry has been changed, a History
link will appear where you can view all changes to the entry.
Mark as Opens the Mark as evidence window where you specify the evidence details to be
Evidence saved in the Evidence Board. The Evidence Board stores key artifacts for current and
future analysis. You can add evidence in Case Info tab, the Evidence Board, or the War
Room.
Mark as note Marks the entry as a note. Notes can help the analyst understand why a certain action
was taken and assist future decisions. You can also add notes in the Case Info tab.
Download Downloads an artifact according to the entry type, such as txt files for text, json for a
artifact JSON entry, etc.
Add tags Add any relevant tags to use, which helps you find relevant information.
You can run various commands in the CLI, by typing the following:
• !: includes adding evidence, assigning an analyst, etc.
• /: includes adding notes, close an investigation, etc.
• @: send notifications to administrators, teams, analysts, etc.
You can edit incidents, create a report, add child incidents, and so on, as described in Incident Actions.
Filter Entities
You can filter entries by clicking the filter button. Add a filter by selecting its check box, or clear the
check box to remove it. The filter menu contains three types of War Room entities by which you can filter:
• Actions
• Tags
• From
Use the And/Or toggle between the Actions, Tags, and From sections.
• And: Entries must match all of the selected filters.
• Or: Entries that match any of the selected filters are shown.
You can save the filter by clicking Add. You can also retrieve Saved filters.
Cortex XSOAR does not index notes, chats, and pinned as evidence entries. If you want to
index these entries, see Index War Room Entries.
Depending on the number of cases in your system and server hardware, the re-indexing
operation can take a significant amount of time, during which the Cortex XSOAR server is
inaccessible. It is recommended to undertake this procedure when it has a minimal impact
on your organization. After completion, you should review your Cortex XSOAR server, as it
may have some impact on performance.
STEP 1 | Log in to your Cortex XSOAR server as root or an account with sudo privileges.
STEP 2 | Stop the Cortex XSOAR service by running the following command:
systemctl stop demisto
STEP 3 | Make a backup copy of your demisto.conf file by running the following command:
cp /etc/demisto.conf /etc/demisto.conf.bak
STEP 4 | Edit the /etc/demisto.conf file on all databases by adding the following entries:
"server.entries.restore": true,
"db.index.entry.disable": false,
"DB": {
    "IndexEntryContent": true
},
"granular": {
    "index": {
        "entries": 7
    }
}
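Before restarting the service, you can sanity-check the edited file. The following stand-alone sketch (the keys are copied from the step above; the rest is illustrative) confirms that an equivalent fragment parses as well-formed JSON:

```python
# Validate a demisto.conf-style fragment: json.loads raises ValueError on
# any syntax error, such as a missing comma between sections.
import json

conf = """
{
  "server.entries.restore": true,
  "db.index.entry.disable": false,
  "DB": { "IndexEntryContent": true },
  "granular": { "index": { "entries": 7 } }
}
"""
parsed = json.loads(conf)
print(parsed["DB"]["IndexEntryContent"])
print(parsed["granular"]["index"]["entries"])
```

Running a similar check against the real /etc/demisto.conf (for example, with python -m json.tool) catches syntax mistakes before they stop the server from starting.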
STEP 6 | Delete the relevant War Room entries index on all databases by running the following
command on each database machine:
rm -rf /var/lib/demisto/data/demistoidx/entries_MMYYYY
For example, to re-index March 2020, run the following command:
rm -rf /var/lib/demisto/data/demistoidx/entries_032020
STEP 7 | Start Cortex XSOAR from the command line by running one or more of the following
commands:
• For the current month:
# sudo -u demisto -g demisto -- /usr/local/demisto/server -stdout -restore-index-name=entries_MMYYYY
For example, to re-index March 2020, run the following command:
sudo -u demisto -g demisto -- /usr/local/demisto/server -stdout -restore-
index-name=entries_032020
• For multiple months, add the dates as CSV values:
sudo -u demisto -g demisto -- /usr/local/demisto/server -stdout -restore-
index-name=entries_MMYYYY,entries_MMYYYY,entries_MMYYYY
For example, to re-index January, February, March 2020, run the following command:
sudo -u demisto -g demisto -- /usr/local/demisto/server -stdout -restore-
index-name=entries_032020,entries_022020,entries_012020
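The index names follow a fixed entries_MMYYYY pattern, so the CSV list for several consecutive months can be generated rather than typed by hand. This helper is a convenience sketch, not part of the product:

```python
from datetime import date

def index_names(start, months):
    """Build entries_MMYYYY index names, counting backwards from 'start'."""
    names = []
    y, m = start.year, start.month
    for _ in range(months):
        names.append("entries_{:02d}{}".format(m, y))
        m -= 1
        if m == 0:
            m, y = 12, y - 1
    return names

# Three months ending March 2020, matching the example above:
print(",".join(index_names(date(2020, 3, 1), 3)))
# entries_032020,entries_022020,entries_012020
```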
A number of entries related to indexing appear, similar to below:
2019-03-21 19:00:45.651 info DB restoring 419 keys into index entries from
investigations-264/ (source: /home/circleci/.go_workspace/src/
github.com/demisto/server/repo/complexRepo/repo.go:1330)
When the re-indexing has completed, these console messages cease and the Cortex
XSOAR server continues running.
STEP 8 | Confirm that you can search your case comments through the search bar.
STEP 9 | Stop the service by using CTRL-C, as the Cortex XSOAR server is running locally from the
command line.
In the following example, you add a custom widget that shows the severity of the indicators in
an incident as a bar chart.
Use the following script:
commonfields:
  id: ee3b9604-324b-4ab5-8164-15ddf6e428ab
  version: 49
script: |
  # Indicator verdict scores as stored by Cortex XSOAR
  HIGH = 3
  SUSPICIOUS = 2
  LOW = 1
  NONE = 0
  scores = {HIGH: 0, SUSPICIOUS: 0, LOW: 0, NONE: 0}
  incident_id = demisto.incidents()[0].get('id')
  foundIndicators = demisto.executeCommand("findIndicators",
      {"query": 'investigationIDs:{}'.format(incident_id), 'size': 999999})[0]['Contents']
  # Tally the indicators in this incident by verdict score
  for indicator in foundIndicators:
      score = indicator.get('score', NONE)
      if score in scores:
          scores[score] += 1
  data = {
      "Type": 17,
      "ContentsFormat": "bar",
      "Contents": {
          "stats": [
              {
                  "data": [scores[HIGH]],
                  "groups": None,
                  "name": "high",
                  "label": "incident.severity.high",
                  "color": "rgb(255, 23, 68)"
              },
              {
                  "data": [scores[SUSPICIOUS]],
                  "groups": None,
                  "name": "medium",
                  "label": "incident.severity.medium",
                  "color": "rgb(255, 144, 0)"
              },
              {
                  "data": [scores[LOW]],
                  "groups": None,
                  "name": "low",
                  "label": "incident.severity.low",
                  "color": "rgb(0, 205, 51)"
              },
              {
                  "data": [scores[NONE]],
                  "groups": None,
                  "name": "unknown",
                  "label": "incident.severity.unknown",
                  "color": "rgb(197, 197, 197)"
              }
          ],
          "params": {
              "layout": "horizontal"
          }
      }
  }
  demisto.results(data)
type: python
tags:
- dynamic-section
enabled: true
scripttarget: 0
subtype: python3
runonce: false
dockerimage: demisto/python3:3.7.3.286
runas: DBotWeakRole
Create a new automation by adding the script and then in the War Room run the !IndicatorWidgetBar
command.
The custom widget appears in the War Room.
Work Plan
The Work Plan is a visual representation of the running playbook that is assigned to the incident. Playbooks
enable you to automate many of your security processes, including, but not limited to, handling your
investigations and managing your tickets. Work Plans enable you to monitor and manage a playbook
workflow, and add new tasks to tailor the playbook to a specific investigation.
When you select the Follow checkbox, you can see the playbook executing in real time.
In the Work Plan you can do the following:
• View inputs and outputs of a playbook.
• View, create, and edit a playbook task for each required step.
When you create a task, add a name, automation, and description. The name and description should be
meaningful so that the task corresponds to the data that you are collecting.
For each task you can do the following:
Link Incidents
You can link or unlink incidents through the following:
• In the Related Incidents tab.
• A pre-process rule, so that as soon as an incident is ingested into Cortex XSOAR you can link incidents.
• Using the CLI.
After you link the incident, you can view linked incidents in the Case Info tab.
In the map, the shape of each related incident indicates its status: pending, active, or closed.
• The map has a time spectrum. Incidents on the right side of the map are newer than the current incident,
and the incidents on the left are older. Related incidents are spread across the spectrum according to
the time the incident was created. The time scope is 30 days before and 30 days after the currently
investigated incident. You can modify the range by using the Date Range.
• Use the Similarity Scale to display related incidents that are more similar or less similar to the current
incident.
• Hover over a related incident to view detailed information.
• Click an incident to view a comparison of the two incidents, which shows instances of similar indicators
between the incidents. You can select multiple incidents by using Ctrl + click or Command +
click. In the Similarities window, you can mark the pair as Linked or as Duplicate. The incident then appears
as linked in the Linked Incidents table in the Case Info tab.
If you want to build your own layout of related incidents and indicators, use the Canvas. The
Related Incidents page is oriented towards exploration and searching for similar data.
You can configure an allow list or an ignore list for which incident fields to use for related incidents, as
described in Configure Incident Fields for Related Incidents.
Configure Incident Fields for Related Incidents
You can configure an allow list or an ignore list for which incident fields to use for related incidents. If you
define an allow list, related incidents only use specified fields for calculation. If you define an ignore list,
related incidents are calculated without the specified fields.
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
You can Edit Dbot Incident and Indicator Suggestions in the Entity Library.
Key Features
You can do the following:
• Auto populate the Canvas with related incidents, suspicious URLs, and so on, by using machine learning.
The closer an entity appears to the center, the more closely related it is to the investigated incident.
• View an incident or indicator: view details of an incident or indicator, including various actions, in the
Dbot Suggestions: Quick View window.
• Connect incidents: link incidents to one another, and communicate important information with team
members by adding notes to the connectors between entities.
STEP 1 | Go to the Canvas tab of the incident you are investigating and click Auto populate.
STEP 2 | If you want to customize the canvas, click Customize and select the following:
• Whether, and how many, related incidents appear.
• The maximum distance over which items are included in the canvas, in the Similarity Max Distance
field.
By default, the distance is set to 0.8. The closer the score is to 1, the less related an entity is to the
incident.
• Linked incidents
• Bad and suspicious common indicators
• Configure the threshold above which an indicator is ignored in the Indicators Ignore Threshold.
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
Key Description
Incident Actions
In an incident, you can undertake a number of actions, such as edit the incident, add a child incident, add
tasks, notes, and so on.
When you click Actions, you can perform the following actions:
Action Description
Report Create a report to capture investigation specific data and share it with team
members.
Quick View You can see a summary of the incident, timeline information, labels, and indicators.
Systems Details of any D2 Agents that are deployed to perform forensic tasks on machines.
Context Data View context data. The context is a map (dictionary) that is created for each
incident and is used to store structured results from the integration commands
and automation scripts. The context keys are strings and the values can be strings,
numbers, objects, and arrays.
You can use context data to:
• Pass data between playbook tasks.
• Capture the important structured data from automations and display the data in
the incident summary.
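As a sketch of what this looks like in practice (the keys below are hypothetical examples, not a fixed schema), context data is a nested dictionary that playbook tasks read with dot-notation paths:

```python
# Hypothetical incident context: string keys; values that are strings,
# numbers, objects (dicts), or arrays (lists).
context = {
    "IP": [
        {"Address": "192.0.2.1", "Malicious": {"Vendor": "ExampleVendor"}},
    ],
    "File": {"Name": "invoice.pdf", "Size": 48231},
}

def get_path(ctx, path):
    """Minimal resolver for paths such as 'IP.Address' (toy illustration;
    the real context engine matches across all list items)."""
    cur = ctx
    for part in path.split("."):
        if isinstance(cur, list):
            cur = cur[0]
        cur = cur[part]
    return cur

print(get_path(context, "IP.Address"))
print(get_path(context, "File.Name"))
```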
You can also edit or add actions in the Case Info/Incident Info field.
Evidence Handling
You can view or designate any entity as evidence, which enables you to reconstruct attack chains and piece
together key evidence for root cause discovery.
In the War Room you can mark any entity as evidence by clicking the flag next to each entry. You can view
the evidence in the War Room or open the evidence entry from the Evidence Board. When adding evidence,
you need to add a description, which should contain enough detail for future reference.
Incident Tasks
Incident tasks are tasks for users to complete as part of an investigation, which are split according to the
following:
• Playbook Tasks: you can view, assign an owner, complete, and set a due date for playbook tasks that
require attention.
• To Do Tasks: create tasks for users to complete as part of an investigation, and which are not attached
to the incident's playbook. A playbook can finish running and an incident can be closed even if the
incident contains open To-Do tasks.
You can Create a To-Do Task directly from the incident's Case Info tab or in the To Do Task
section.
Alternatively, you can create To Do tasks from the command line.
STEP 1 | From the incident, click the add icon and then click Incident Task.
Parameter Description
Task Description A meaningful description for the task that provides sufficient information for
the assignee to complete the task.
Assignee The user to assign to the task. You can only assign a single user per task.
Set due date The due date for the task. If the task is not completed by this date, it is marked
as overdue but is not a roadblock for the investigation.
Incident Fields
Use incident fields to accept or populate incident data. You create fields for information that arrives
from third-party integrations and that you want to capture. The fields are added to Incident Type layouts
and are mapped using the Classification and Mapping feature.
Incident Fields can be populated by the incident team members during an investigation, at the beginning of
the investigation, or prior to closing the investigation.
Creating Incident Fields is an iterative process in which you continue to create fields as you
gain a better understanding of your needs and the information available in the third party
integrations that you use.
You can set and update all system incident fields using the setIncident command, where each field is
a command argument.
Basic Settings
The following table lists the fields that appear in the Basic Settings page, and their descriptions. The Basic
Settings page is available for the following field types:
• Long text
• Multi select
• Short text
• Single select
• Tags
Name Description
Placeholder Define the text that appears in the field before users enter a value.
Values A comma-separated list of valid values for the field.
Timer/SLA Fields
The following table lists the fields specific to Timer/SLA fields, and their descriptions.
Name Description
SLA Determine the amount of time in which this item needs to be resolved. If
no value is entered, the field serves as a counter.
Risk Threshold Determine the point in time at which an item is considered at risk of not
meeting the SLA. By default, the threshold is 3 days, which is defined in
the global system parameter.
Run on SLA Breach In the Run on SLA Breach field, select the script to run when the SLA
time has passed. For example, email the supervisor or change the
assignee.
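The timer semantics above can be pictured with a toy model (not product code; the three-day risk threshold mirrors the default described above):

```python
from datetime import datetime, timedelta

def sla_state(now, due, risk_threshold=timedelta(days=3)):
    """Classify an SLA timer: breached at or after the due time, at risk
    inside the risk-threshold window, ok otherwise."""
    if now >= due:
        return "breached"  # this is when a Run on SLA Breach script would fire
    if due - now <= risk_threshold:
        return "at risk"
    return "ok"

due = datetime(2020, 5, 10, 12, 0)
print(sla_state(datetime(2020, 5, 1, 12, 0), due))   # ok
print(sla_state(datetime(2020, 5, 8, 12, 0), due))   # at risk
print(sla_state(datetime(2020, 5, 11, 12, 0), due))  # breached
```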
Name Description
Script upon change The script that dynamically changes the field value when script conditions
are met. For a script to be available, it must have the field-change-
triggered tag, when defining an automation. For more information, see
Field Trigger Scripts.
Field display script Determines which fields display in forms, as well as the values that are
available for single-select and multi-select fields. For more information,
see Create Dynamic Fields in Incident Forms.
Add to incident types Determines for which incident types this field is available. By default,
fields are available to all incident types. To change this, clear the
Associate to all checkbox and select the specific incident types to which
the field is available.
Default display on Determines at which point the field is available. For more information,
see Incident Field Examples.
Edit Permissions Determines whether only the owner of the incident can edit this field.
Make data available for Determines if the values in these fields are available when searching.
search
In most cases, Cortex XSOAR recommends that you
select this checkbox so values in the field are available
for indexing and querying. However, in some cases, to
avoid adverse effects on performance, you should clear
this checkbox. For example, if you are ingesting an email
to an email body field, we recommend that you not index
the field.
Add as optional graph Determine if you can create a graph based on the contents of this field.
This field does not appear for all field types.
SLA Fields
The following SLA field can be used to trigger a notification when the status affecting the SLA of an incident
changes. If the SLA is breached, we have configured the field so that an email is sent to the owner's
supervisor.
STEP 1 | Select Settings > ADVANCED > Fields > + New Field.
Depending on the field type, you can determine if the field contents are case-sensitive, as well as if the
field is mandatory.
STEP 2 | In the Field Type field, select the incident field type from the dropdown list.
Field Description
Field Name A descriptive name indicating the information that the field contains.
STEP 7 | To add the field to the incident, go to Settings > ADVANCED > Incident Types, select the
incident, and click Edit Layout.
STEP 8 | In the Library dialog box, in the Cortex XSOAR Sections tab, drag and drop + New Section on
to the required tab.
STEP 9 | In the Incident field tab, drag and drop the field that you have created into the New Section.
STEP 1 | Select Settings > ADVANCED > Fields > + New Field.
Parameter Description
Tooltip (Optional) A brief descriptive message that explains what the field is and
how to use it.
User can add rows (Optional) Enables users to add/remove rows in the grid.
STEP 4 | In the Grid tab, add or remove the required rows and columns.
How you design the grid determines how it appears to users. Users who are allowed to add rows can add
rows, but not columns.
STEP 5 | Configure each column by selecting the required field types, such as short text, boolean, URL,
etc, for each column.
For example, we want to define three mandatory columns: Name, Location, and Date. If you select the
Lock checkbox, the value for that field is static (not editable). If you do not select the Lock checkbox
(default), users can perform in-line editing.
If you select the Lock checkbox for a column, only a script can populate the values for that
column. If a column is unlocked (default), the column values can be entered manually (by
users), or by a script. For a script to be available in the Script upon change drop-down menu,
it must have the field-change-triggered tag.
fieldCliName = demisto.args().get('field')
currentValue = demisto.incidents()[0]["CustomFields"].get(fieldCliName)
if currentValue is None:
    currentValue = [json.loads(demisto.args().get('row'))]
else:
    currentValue.append(json.loads(demisto.args().get('row')))
# Persist the updated grid back to the incident field
demisto.executeCommand("setIncident", {fieldCliName: currentValue})
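The append logic in the snippet can be exercised on its own, outside XSOAR. This stand-alone sketch uses hypothetical row values:

```python
import json

def append_row(current_value, row_json):
    """Append a JSON-encoded row to a grid value, which is a list of dicts
    or None while the grid is still empty."""
    row = json.loads(row_json)
    if current_value is None:
        return [row]
    current_value.append(row)
    return current_value

grid = append_row(None, '{"name": "host-1", "location": "DC-2"}')
grid = append_row(grid, '{"name": "host-2", "location": "DC-1"}')
print(len(grid))          # 2
print(grid[0]["name"])    # host-1
```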
The ExtraHop Reveal(x) Content Pack contains a field trigger script, which only tracks the
incident if it is from an ExtraHop Detection.
A common use case is the following automation, which ensures that only changes made by a playbook take
effect, and reverts changes made manually by a user.
args = demisto.args()
user = args["user"]  # empty when the change was made by a playbook
if user:
    # a user changed the field manually - revert it to its old value
    demisto.executeCommand("setIncident", {args["name"]: args["old"]})
The automation checks who made the change using the user field. The name argument returns the field
name, so that it can be attached to multiple incident fields, and block changes to them, without the need to
have a different automation for each field.
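The decision the automation makes can be separated out and tested on its own. In this sketch (argument values are hypothetical), an empty user means the change came from a playbook and is kept, while a username means a manual edit, which is reverted:

```python
def manual_change_rollback(args):
    """Return the setIncident arguments needed to revert a manual edit,
    or None when the change came from a playbook (empty 'user')."""
    if args.get("user"):
        return {args["name"]: args["old"]}
    return None

print(manual_change_rollback({"user": "jdoe", "name": "severity", "old": 1}))
# {'severity': 1}
print(manual_change_rollback({"user": "", "name": "severity", "old": 2}))
# None
```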
Automation Arguments - Related Information
When an automation is triggered on a field, it has the following triggered field information available as
arguments (args):
Argument Description
associatedToAll Whether the field is associated with all incident types or only some. Value: true or false.
associatedTypes An array of the incident types, with which the field is associated.
cliName The name of the field when called from the command line.
isReadOnly Specifies whether the field is non-editable. Value: true or false.
ownerOnly Specifies that only the creator of the field can edit it. Value: true or
false.
selectValues If this is a multi select type field, these are the values the field can take.
validationRegex The regex, if any, used to validate the values the field can hold.
Script Limitations
• Trigger scripts cannot close incidents.
• Post-processing scripts can modify an incident, but if a modified field has a trigger script, it is not called.
• Incident modifications executed within a trigger script are only saved to the database after the
modifications are completed.
Incident De-Duplication
In the lifecycle of incident management, there are cases when incidents are duplicated. Cortex XSOAR
provides the following de-duplication capabilities:
• Manual De-Duplication: You can manually de-duplicate incidents from the Incidents page or the Related
Incidents page. To de-duplicate incidents manually, see Manually De-Duplicate Incidents.
• Automatic De-Duplication: You can automate incident de-duplication by using Pre-Process Rules and
Scripts.
• Automations: You can create an automation that creates child incidents from duplicates.
• Playbooks: Identify, review or close duplicate incidents using playbooks.
Pre-Process Rules
Pre-Process rules enable you to perform certain actions on incidents as soon as they are ingested into
Cortex XSOAR directly from the user interface. Through these rules, you can select incoming events on
which to perform actions, for example, link the incoming incident to an existing incident, or under pre-
configured conditions, drop the incoming incident altogether.
You can de-duplicate incidents by selecting the Link and Close action in the Pre-Process Rules tab. To
create a pre-process rule, see Create Pre-Process Rules for Incidents.
The Link and Close action creates an entry in the Linked Incidents table of the existing incident to which
you link, and closes the incoming incident. If an existing incident matching the defined criteria is not found,
a new incident is created for the incoming event.
Playbooks
There are several out-of-the-box playbooks you can run to identify and close duplicate incidents.
Alternatively, you can use these playbooks as the basis for customized de-duplication playbooks. For
example, instead of automatically closing the duplicate incidents, include a manual review of the duplicate
incidents.
DeDup incidents -ML You can set the threshold for the duplicate
incidents. If duplicate incidents are found, they are
closed as duplicates.
FindSimilarIncidentsByText
• Identifies similar incidents based on text similarity. For this script you specify incident keys, labels, or
custom fields.
• The comparison is based on the TF-IDF method.
• A score between 0 and 1 is calculated for each candidate, and incidents are considered duplicates when
the score exceeds the threshold. The default threshold is 98%.
!FindSimilarIncidentsByText textFields=name,details
maximumNumberOfIncidents=1000 threshold=0.95 timeFrameHours=24
ignoreClosedIncidents=no
This command example checks for duplicate incidents using the following methodology:
1. Query for duplicate candidates:
• Incidents created in the previous 24 hours [timeFrameHours=24].
• Includes closed incidents [ignoreClosedIncidents=no].
• Maximum number of incidents to check is 1,000 [maximumNumberOfIncidents=1000].
2. For each candidate, concatenate name and details incident fields [textFields=name,details] into a text
document.
3. Compare the current incident text with all candidates using the TF-IDF method.
4. Check if there is at least one similar candidate:
• Candidates with a TF-IDF score of at least 95% [threshold=0.95]. If there is at least one such candidate,
the incident is announced as a duplicate.
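The TF-IDF comparison in step 3 can be illustrated with a toy stand-in (this is not the product implementation, and real scores will differ): each text becomes a term-weight vector, and a candidate's score is the cosine similarity between the two vectors, compared against the threshold.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Toy TF-IDF: term count weighted by a smoothed inverse document frequency."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    n = len(docs)
    return [{t: c * (math.log((1 + n) / (1 + df[t])) + 1)
             for t, c in Counter(doc).items()} for doc in tokenized]

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

v1, v2 = tfidf_vectors(["suspicious login from new device",
                        "suspicious login from unknown device"])
score = cosine(v1, v2)
print(score >= 0.95)  # similar texts, but below a 95% threshold
```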
!FindSimilarIncidents similarIncidentKeys="type,severity"
similarLabelsKeys="Email/from,Email/subject:*,Email/text:5"
ignoreClosedIncidents="yes" maxNumberOfIncidents="1000" hoursBack="48"
timeField="created" maxResults="10"
This command example checks for duplicate incidents using the following methodology:
1. Query for duplicate candidates:
• Incidents created in the 48 hours [hoursBack="48", timeField=created] before the original incident
• Excludes closed incidents [ignoreClosedIncidents=yes]
• Maximum number of incidents to check is 1,000 [maxNumberOfIncidents=1000]
• Filters by the same incident type and severity [similarIncidentKeys=type,severity]
2. Check for candidates with the same Email/from label, or a similar Email/subject label:
• The candidate's Email/subject contains, or is contained in, the original incident's Email/subject label,
and the Email/text label is similar
• The Email/text label is equal to, or differs by at most 5 words from, the original incident's Email/text
label [similarLabelsKeys="Email/from,Email/subject:*,Email/text:5"]
3. If duplicate incidents are found, store the results in the context:
• Maximum of 10 [maxResults="10"]
GetDuplicatesMI
• Identifies duplicate incidents based on a machine learning (ML) algorithm, which uses ML techniques
with predefined data. Alternatively, you can use data from the local environment.
• This script takes several features into consideration: labels comparison, email labels (relevant for
phishing scenarios), incident time difference, and shared indicators (which you can customize with
arguments).
Rules are applied in descending order, and only one rule is applied per incident.
STEP 1 | Select Settings > Integrations > Pre-Process Rules > New Rule.
STEP 2 | In the Rule Name field, type a name for the rule.
It is recommended that you give rules meaningful names that help you identify what each rule does when
viewing the list of rules.
STEP 4 | In the Action section, from the drop-down list, select which action to take if the incoming
incident matches the rule.
STEP 6 | (Optional) To test the rules to ensure they are effective and efficient, click Test.
Testing is useful for ensuring that you are receiving the desired results before putting a rule in
production. It is recommended that you provide an existing incident as a sample incident against which
the rule can run.
In most cases, in a phishing campaign, the email subject is similar. In section 1, we create a condition for
incoming incidents whose email subject contains "phishing". For example, this is a phishing email.
As this is a campaign, we want to drop the incoming incident and link (update) it to an existing incident.
You need to Create a Post-Processing Script and then Add a Post-Processing Script to the Incident Type.
Arguments Exposed in the Post-Processing Script
These arguments are exposed in the post-processing script:
• closed (closed time)
• status
• openDuration
• closeNotes
• Custom fields are set at closure either explicitly (through the CLI) or implicitly (through Cortex XSOAR).
STEP 2 | Type a name for the post-processing script and click Save.
STEP 3 | In the Tags field, from the drop down list select Post-processing.
The following script example requires the user to verify all To Do tasks before closing an incident. Before
you start, you need to configure a Cortex XSOAR REST API instance.
inc_id = demisto.incidents()[0].get('id')
tasks = list(demisto.executeCommand("demisto-api-get",
    {"uri": "/todo/{}".format(inc_id)})[0]['Contents']['response'])
if tasks:
    for task in tasks:
        if not task.get("completedBy"):
            # return_error stops the script, which blocks the incident from closing
            return_error("Please complete all ToDo tasks before closing the incident")
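The completion check itself is plain Python and can be exercised outside XSOAR with hypothetical task data:

```python
def incomplete_tasks(tasks):
    """Return the To Do tasks with no 'completedBy' value; closing the
    incident should be blocked while this list is non-empty."""
    return [t for t in tasks if not t.get("completedBy")]

tasks = [
    {"title": "Collect logs", "completedBy": "jdoe"},
    {"title": "Notify legal", "completedBy": ""},
]
print(len(incomplete_tasks(tasks)))         # 1
print(incomplete_tasks(tasks)[0]["title"])  # Notify legal
```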
STEP 2 | Select the incident type to which you want to add the post-processing script.
STEP 4 | In the Post process using field, from the drop-down list, select the script.
STEP 1 | In the Incidents page, select the incident to which you want to restrict access.
STEP 3 | To check that the role was assigned to the incident, click the War Room tab.
Restrict an Investigation
You can restrict an investigation to the incident owner and the team associated with the investigation.
Playbooks Overview
Playbooks are at the heart of the Cortex XSOAR system. They enable you to automate many of your
security processes, including, but not limited to, handling your investigations and managing your tickets. You
can structure and automate security responses that were previously handled manually. For example, you
can use playbook tasks to parse the information in the incident, whether it be an email or a PDF attachment.
You can interact with users in your organization using communication tasks, or remediate an incident by
interacting with a 3rd party integration.
Playbooks have different task types for each of the actions you want to take along the way. There are
manual tasks where an analyst might have to confirm information or escalate an incident, and there are
conditional tasks with a loop to check if certain information is present so you can proceed with your
investigation. The playbook tasks can open tickets in a ticketing system, such as Jira, or detonate a file using
a sandbox.
As you are building out your playbook, keep in mind the following:
• What actions do you need to take?
• Which conditions might apply along the way? Are these conditions manual or automatic?
• Do you need to include looping?
• Are there any time-sensitive aspects to the playbook?
• When is the incident considered remediated?
Task Types
The answers to the above questions will determine what kind of task you will need to create. Playbooks
support the following task types:
• Standard tasks - these range from manual tasks like creating an incident or escalating an existing
incident, to automated tasks such as parsing a file or enriching indicators. Automated tasks are based
on scripts that exist in the system. These scripts can be something that was created by you, the user, or
come pre-packaged as part of an integration. For example, the !file command enables you to enrich a file
using any number of integrations that you have installed in your system. Alternatively, the !ADGetUser
command is specific to the Active Directory integration.
• Conditional tasks - these tasks are used as decision trees in your flow chart. For example, were
indicators found? If yes, you can have a task to enrich them; if not, you can proceed to determine
that the incident is not malicious. Or, you can use conditional tasks to check if a certain integration is
available and enabled in your system. If it is, you can use that integration to perform an action, and if not,
you can continue to a different branch in the decision tree.
Conditional tasks can also be used to communicate with users through a single question survey, the answer
to which determines how a playbook will proceed.
• Data collection - these tasks are used to interact with users through a survey. The survey resides on an
external site that does not require authentication, thereby allowing survey recipients to respond without
restriction.
All responses are collected and recorded in the incident's context data, whether you receive responses
from a single user or multiple users. This enables you to use the survey questions and answers as input for
subsequent playbook tasks.
You can collect responses in custom fields, for example, a Grid field.
Field Mapping
You can map output from a playbook task directly to an incident field. This means that the value for an
output key populates the specified field per incident. This is a good alternative to using a task with a set
incident command.
STEP 1 | From the Playbooks page, click the playbook that you want to manage.
In the image above, we see a playbook that is triggered based on context data, meaning an incident. The
first two inputs are the SrcIP, which comes from the incident.src key, and DstIP, which is retrieved
from incident.dst.
In addition, the playbook itself creates an output object whose entries serve the tasks throughout the
playbook.
Notice that the input for this task is Account.Manager, which is the output we highlighted in the playbook
inputs, above.
STEP 3 | In the Task Name field, type a meaningful name for the task that corresponds to the data you
are collecting.
Option Description
Built-in Creates a logical statement using an entity from within the playbook. For
example, in an access investigation playbook, you can determine that if the
Asset ID of the person whose account was being accessed exists in a VIP list,
set the incident severity to High. Otherwise, proceed as normal.
Manual Creates a conditional task which must be manually resolved. For example, in
an access incident investigation, you might ask the user if they attempted to
access their account. A manual task checks if the user responded.
Choose automation Creates a conditional task based on the result of a script. For example, check if
an IP address is internal or external using the IsIPInRanges automation. When
using an automation, the Inputs and Outputs are defined by the automation
script.
STEP 5 | Complete the task configuration in the remaining tabs. Some configurations are required, and
some are optional.
Communication Tasks
Communication tasks enable you to send surveys to users, both internal and external, to collect data for an
incident. The collected data can be used for incident analysis, and also as input for subsequent playbook
tasks. For example, you might want to send a scheduled survey requesting analysts to send specific incident
updates, or send a single (stand-alone) question survey to determine how an issue was handled.
To allow users outside the Cortex XSOAR server network to access the communication task link, you need
to configure access to the communication task through an engine.
Ask task
The conditional Ask task is a single question survey, the answer to which determines how a playbook
will proceed. If you send the survey to multiple users, the first answer received is used, and subsequent
responses are disregarded.
Users interact with the survey directly from the message, meaning the question appears in the message and
they click an answer from the message.
The survey question and the first response is recorded in the incident's context data. This enables you to
use this response as the input for subsequent playbook tasks.
Since this is a conditional task, it's important to remember to create a condition for each of the answers.
For example, if the survey answers include, Yes, No, and Maybe, there should be a corresponding condition
(path) in the playbook for each of these answers.
You can collect responses in custom fields, for example, a Grid field.
STEP 3 | Enter a meaningful name for the task, which corresponds to the data that you are collecting.
STEP 5 | (Optional) To customize the look and feel of your email message, click Preview.
You can determine the color scheme and how text in the message header and body appear.
STEP 2 | Add the key messages.html.formats.externalAskSubmit, and add the following HTML
with your customizations as the value.
STEP 3 | Enter a meaningful name for the task that corresponds to the data you are collecting.
STEP 4 | Determine how the message will appear to users and how the message or survey will be sent.
The survey does not appear in the message. A link to the survey is automatically placed at the bottom of
the message.
STEP 6 | (Optional) To customize the look and feel of your email message, click Preview.
You can determine the color scheme and how text in the message header and body appear, as well as
the appearance and text of the button the user clicks to submit the survey.
STEP 2 | Add the key messages.html.formats.externalFormSubmit, and add the following HTML
with your customizations as the value.
STEP 2 | Add the key soc.name, and add the display name of your SOC as the value.
This name is used in the default message and email of the communication tasks, and the web survey for
all communication tasks.
Name Description
Only the assignee can complete the task: Stop the playbook from proceeding until the task assignee completes the task. By default, in addition to the task assignee, the default administrator can also complete the blocked task. You can also block tasks until a user with an external email address completes the task.
Set task reminder: Define a reminder for the task, in weeks, days, or hours.
Field Mapping
Map output from a playbook task directly to an incident field.
The output value is dynamic and is derived from the context at the time that the task is
processed. As a result, parallel tasks that are based on the same output might return
inconsistent results.
Advanced Fields
Name Description
Ignore outputs: When selected, the results from the Extend context field overwrite the existing output.
Only the assignee can complete the task: Stop the playbook from proceeding until the task assignee completes the task. By default, in addition to the task assignee, the default administrator can also complete the blocked task. You can also block tasks until a user with an external email address completes the task.
Mark results as note: Select to make the task results available as a note. Notes are viewable in the War Room.
Skip this branch if this automation/playbook is unavailable: Select to enable the playbook to continue executing if an instance of the automation, playbook, or sub-playbook is not available.
Name Description
Tag the result with: Add a tag to the task result. You can use the tag to filter entries in the War Room.
Timers Fields
Name Description
Select timer field: Select the field on which the timer is applied.
Field Description
Message body: The text that displays in the body of the message. Although this field is optional, if you don't write the survey question in the Subject field, you should include it in the message body. This is a long-text field.
Timing Fields
The configuration options in the Timing tab define the frequency at which the message and survey are resent to
recipients before the first response is received, as well as the task SLA.
Field Description
Task SLA: Define the deadline for the task, in weeks, days, or hours.
Questions Fields
Stand-alone questions
Field Description
Answer Type: The field type for the answer field. Valid values are:
• Short text
• Long text
• Number
• Single select - requires you to define a reply option.
• Multi select - requires you to define a reply option.
• Date picker
• Attachments
Help Message: The message that displays when users hover over the question mark help button for the survey question.
Field-based questions
Field Description
Field associated with this question: The field associated with the question automatically takes all the parameters from the field definition, unless otherwise defined.
Help Message: The message that displays when users hover over the question mark help button for the survey question.
STEP 2 | In the Extend Context field, enter the name of the field in which you want the information to
appear and the value you want to return. For example, using the ad-get-user command, you
could enter user=attributes.displayName to place the user's name in the user key.
To include more than one field, separate the fields with a double colon. For example:
user=attributes.displayName::manager=attributes.manager
STEP 3 | To output only the values for Extend context and ignore the standard output for the command,
select the Ignore Outputs checkbox.
While this will improve performance, only the values that you request in the Extend Context field will be
returned. In addition, you cannot use Field Mapping as there is no output to which to map the fields.
STEP 1 | Run your command with the extend-context flag: !<commandName> <argumentName> <value>
extend-context=contextKey=JsonOutputPath.
For example, to add the user and manager fields to context use the ad-get-user command, as follows:
!ad-get-user username=${user.manager.username} extend-
context=manager=attributes.manager::user=attributes.displayName
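The extend-context syntax pairs each context key with a JSON output path, with multiple pairs separated by a double colon. As a sketch, the expression could be parsed like this in Python (the function name is illustrative, not part of Cortex XSOAR):

```python
def parse_extend_context(expression):
    """Parse an extend-context expression of the form
    key1=path1::key2=path2 into a dict mapping context keys
    to JSON output paths."""
    result = {}
    for pair in expression.split("::"):      # pairs are separated by a double colon
        key, _, path = pair.partition("=")   # key and path are separated by '='
        result[key] = path
    return result

mapping = parse_extend_context(
    "manager=attributes.manager::user=attributes.displayName"
)
# mapping == {"manager": "attributes.manager", "user": "attributes.displayName"}
```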
Example
By default, offenses pulled from QRadar to Cortex XSOAR return 11 fields, including event count, offense
type, description, and more. In the following example, we use extended context to show which additional
information is available and how to map it to a field:
• Run the command !qradar-offenses raw-response="true". You see that approximately 20
additional fields are retrieved.
• Identify the fields that you want to add and run your command. For example, to retrieve the
number of devices affected by a given offense, as well as the domain in which those devices
reside, run the following command: !qradar-offenses extend-context=device-
count=device_count::domain-id=domain_id
Prerequisites
• Start command: The command that fetches the initial state of the process and saves it to the context.
This command usually starts the process that should be polled. For example:
Detonation: Submits a sample for analysis (detonated as part of the analysis). For example, joe-
analysis-submit-sample.
Scan: Starts a scan for specified asset IP addresses and host names. For example, nexpose-start-
assets-scan
Search: Searches in QRadar using AQL. For example, qradar-searches.
• Polling command: The command that polls the status of the process and saves it to the context. The
command input must be checked as Is array, as this allows the playbook to poll more than a single
process at once. For example:
Detonation: Returns the status of the analysis execution. For example, joe-analysis-info.
Scan: Returns the specified scan. For example, nexpose-get-scan.
Search: Gets a specific search id and status. For example, qradar-get-search
Inputs
Input Description
PollingCommandArgName: Argument name of the polling command. The argument should be the name of the process identifier (usually an ID).
Timeout: The amount of time to wait for the process to finish. After this time has passed, the playbook stops waiting and finishes.
AdditionalPollingCommandArgNames: If the polling command has more than a single argument, you can add their names via this input, for example: arg1,arg2,....
AdditionalPollingCommandArgValues: If the polling command has more than a single argument, you can add their values via this input, for example: value1,value2,....
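The start/poll pattern described by these inputs can be sketched outside a playbook as a simple loop. The function names, statuses, and stub callables below are illustrative assumptions, not actual Cortex XSOAR APIs:

```python
import time

def poll_until_done(start, poll, is_done, timeout_minutes=10, interval_seconds=30):
    """Generic polling sketch: start a process, then poll its status
    until it finishes or the timeout expires."""
    process_id = start()                      # e.g. the detonation/scan/search start command
    deadline = time.time() + timeout_minutes * 60
    while time.time() < deadline:
        status = poll(process_id)             # e.g. the polling command
        if is_done(status):
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("process %s did not finish in time" % process_id)

# Usage with stub callables standing in for the integration commands:
result = poll_until_done(
    start=lambda: "webid-1",
    poll=lambda pid: "finished",
    is_done=lambda status: status == "finished",
    interval_seconds=0,
)
# result == "finished"
```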
• Start command: The joe-analysis-submit-sample command starts a new analysis of a file in Joe
Security.
• Polling command: The joe-analysis-info command returns the status of the analysis execution.
• Argument name: The webid argument name of the polling command.
• Context path to store poll results: Joe.Analysis
ID context path: webid stores the ID of the process to be polled.
Status context path: Status stores the status of the process.
• Possible values returned from polling command: starting, running, finished.
• DT: We want a list of IDs of the processes that are still running. The expression
Path.To.Object(val.Status !== 'finished').ID gets the objects that have a status other than
finished and returns their IDs.
In Cortex XSOAR version 5.5 and below, if you want to filter more than one object, such as
nested objects, you need to either create a top-level key using the Set command or use a
transformer as a filter.
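The DT expression above keeps only objects whose Status is not finished and collects their IDs. The same selection can be sketched in plain Python (the sample data is illustrative):

```python
# Sample context data resembling Path.To.Object in the example
objects = [
    {"ID": "101", "Status": "running"},
    {"ID": "102", "Status": "finished"},
    {"ID": "103", "Status": "starting"},
]

# Equivalent of Path.To.Object(val.Status !== 'finished').ID
still_running = [obj["ID"] for obj in objects if obj["Status"] != "finished"]
# still_running == ["101", "103"]
```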
Transformers
Transformers enable you to take one value and transform or render it to another value. For example,
converting a date in non-Unix format to Unix format. Another example is applying the count transformer,
which renders the number of elements.
To create a transformer, see Create a Transformer. When you have more than one transformer, they apply in
the order in which they appear. You can reorder them by clicking and dragging.
In Cortex XSOAR version 5.5 and below you cannot filter nested objects unless you use the
Set command, or use a filter as a transformer.
STEP 2 | In the field you want to add a filter or transformer, click and then select Filters and
Transformers.
STEP 3 | In the Get field, type or select the data you want to filter or transform.
STEP 6 | (Optional) To test the filter or transformation, click Test and select the investigation or add it
manually.
STEP 1 | In the Get field, type the data you want to use. For example, EWS.Email.
STEP 1 | Add the nested context data to a top level key by typing the following command in the
Playground:
!set key=<data to add> value=<value of the data to add>
For example, to filter URL data, type !set key=URLData value=${URL.Data}. You can see that
URLData has been added to the top level:
Format Example
RFC1123Z Wed, 02 Jan 2019 15:04:05 -0700 // RFC1123 with numeric zone
RFC3339 2019-01-02T15:04:05Z07:00
RFC3339Nano 2019-01-02T15:04:05.999999999Z07:00
Kitchen 3:04PM
Left value; Right value => Result
Mon, 02 Jan 2006 15:04:05 MST; Mon, 02 Jan 2003 15:04:05 MST => True
Mon, 02 Jan 2003 15:04:05 MST; Mon, 02 Jan 2006 15:04:05 MST => False
a, b, c; a => True
a, b, c; d => False
• String: Determines the relationship between the left-side string value and the right-side string value,
such as starts with, includes, in list, and so on. The string filter returns partial matches as True.
The following table shows an example of the Matches (regex) string filter.
Left value; Right value => Result
8; 2 => True
8; 8 => False
• Unknown: Miscellaneous filter category
Transformers Operators
Transformers enable you to take one value and transform or render it to another value. When you have
more than one transformer, you can reorder them using click-and-drag.
Note the following:
• Transformers try to cast the transformed value (and arguments) to the necessary type. Tasks fail if
casting fails, for example { "some": "object" } To upper case => Error.
• Some transformers are applied to each item of the result. For example, a, b, c To upper case => A,
B, C.
• Some transformers operate on the entire list. For example, a, b, c count => 3.
• Some transformers are implemented as automations (custom automations tagged with the
transformer tag). You can find examples in the automation description. For more information about
creating custom transformers, see Create Custom Filters and Transformers Operators.
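The difference between per-item and whole-list transformers can be illustrated in Python (a sketch of the behavior, not the Cortex XSOAR implementation):

```python
values = ["a", "b", "c"]

# A per-item transformer (like "To upper case") maps each element
per_item = [v.upper() for v in values]      # ["A", "B", "C"]

# A whole-list transformer (like "count") consumes the entire list
whole_list = len(values)                    # 3
```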
Transformer Categories
Date: Transforms the date. For example:
Date to string: Converts any date to a specified string format.
format: The string format to convert to. By default RFC822 (02 Jan 06 15:04 MST).
Example: Mon, 02 Jan 2006 15:04:05 MST => 02 Jan 06 15:04 MST
Date to Unix: Converts any date to Unix format. See the Filter Operators for a list of supported time and date formats.
Example: Mon, 02 Jan 2006 15:04:05 MST => 1136214245
General: Includes general transformers, such as sort, splice, stringify, etc. The following table describes the
General examples:
Slice: Returns part of a specified list in a range of from index (included) through to index (not included).
from: Zero-based index at which to begin extraction (default: 0).
to: Zero-based index before which to end extraction (default: list length).
Example: a, b, c, d from: 1, to: 3 => b, c
Slice by item: Returns part of a list specified in a range of from item (included) through to item (not included).
from: Item from which to begin the extraction. If not specified, extracts from the beginning of the list.
to: Item before which to end the extraction. If not specified, extracts from the end of the list.
Example: a, b, c, d from: b, to: c => b, c
Index of: Returns the first index of the element in the array, or -1 if not found.
item: Item to locate in the array.
fromLast: true to get the index from the last occurrence (default is false).
Examples: a, b, a, c, d, a, b, item: b => 1; a, b, a, c, d, a, b, item: a, fromLast: true => 5
Get field: Extracts a given field from the given object.
field (required): The field to extract from the result.
Example: {"name": "john", "color": "white"} field: "color" => "white"
String: Transforms strings. To make a regex case-insensitive, use the (?i) prefix, for example
(?i)yourRegexText. The following table describes string examples.
Substring: Returns a subset of a string between one index and another, or through the end of the string.
from (required): An integer between 0 and the length of the string, specifying the offset into the string of the first character to include in the returned substring.
to (optional): An integer between 0 and the length of the string, which specifies the offset into the string of the first character not to include in the returned substring.
Example: pluto is not a planet from: 4 to: 10 => o is n
Split: Splits a string into an array of strings, using a specified delimiter string to determine where to make each split.
delimiter: Specifies the string which denotes the points at which each split should occur (default delimiter is ",").
Example: hello world,bye bye world => hello world, bye bye world
Split & trim: Splits a string into an array of strings and removes whitespace from both ends of each resulting string, using a specified delimiter string to determine where to make each split.
delimiter: Specifies the string which denotes the points at which each split should occur (default delimiter is ",").
Example: hello & world delimiter: & => hello, world
From string: Returns a subset of a string from the first occurrence of the from string.
from (required): String to substring from.
Example: pluto is not a planet from: pluto is => not a planet
To string: Returns a subset of a string until the first occurrence of the to string.
to (required): String to substring until.
Example: pluto is not a planet to: a planet => pluto is not
concat: Returns a string concatenated with a given prefix and suffix.
prefix: A prefix to concat to the start of the argument.
suffix: A suffix to concat to the end of the argument.
Examples: night prefix: good => good night; night suffix: shift => night shift
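The Substring from/to semantics match zero-based slicing, with from included and to excluded. For example, in Python:

```python
text = "pluto is not a planet"

# Substring with from: 4, to: 10 -> characters at indices 4 through 9
assert text[4:10] == "o is n"

# Omitting "to" extracts through the end of the string
assert text[4:] == "o is not a planet"
```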
Ceil: Returns the lowest integer greater than or equal to the number.
Example: 1.2 => 2
Round: Returns the nearest integer, rounding halfway cases away from zero.
Examples: 7.68 => 8; 2.43 => 2; 2.5 => 3
Decimal precision: Truncates the number of digits after the decimal point, according to the by argument.
by: Number of digits to keep after the decimal point, default is 0.
Example: 8.6666 by: 2 => 8.66
Modulus (remainder): The modulo operator (%) returns the division remainder.
by (required): Modulo by, default: 0.
Example: 20 by: 3 => 2
Quadratic equation: Returns the result of the quadratic formula.
b (required): The b number of ax^2 + bx + c = 0, default is 0.
c (required): The c number of ax^2 + bx + c = 0, default is 0.
Examples: 1 b: 3 c: 2 => -1.00, -2.00; 3 b: 2 c: 4 => (-0.333 +1.106i), (-0.333 -1.106i)
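The Quadratic equation transformer's results, including the complex roots in the second example, can be reproduced with the standard quadratic formula (a Python sketch, not the transformer's actual implementation):

```python
import cmath

def quadratic_roots(a, b=0, c=0):
    """Solve ax^2 + bx + c = 0; cmath.sqrt handles negative discriminants."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + disc) / (2 * a), (-b - disc) / (2 * a))

r1, r2 = quadratic_roots(1, b=3, c=2)   # real roots -1 and -2
c1, c2 = quadratic_roots(3, b=2, c=4)   # roughly -0.333 +/- 1.106i
```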
STEP 2 | Type a meaningful name for the Automation script, and click Save.
Argument Description
left Mark as mandatory. This argument defines the left-side value of the
transformer operation. In this example, this is the value being checked if
it falls within the range specified in the right-side value.
right Mark as mandatory. This argument defines the right-side value of the
transformer operation. In this example, this is the range to check if the
left-side value is in.
Argument Description
value Mark as mandatory. The value to transform. In this example, this is the UNIX
epoch timestamp to convert to ISO format.
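The transformer logic itself, converting a UNIX epoch timestamp to ISO format, could be sketched in Python as a plain function rather than a full Cortex XSOAR automation (the function name is illustrative):

```python
from datetime import datetime, timezone

def unix_to_iso(value):
    """Convert a UNIX epoch timestamp (in seconds) to an ISO 8601 UTC string."""
    return datetime.fromtimestamp(int(value), tz=timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%SZ"
    )

unix_to_iso(1136214245)
# -> '2006-01-02T15:04:05Z'
```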
STEP 5 | Go to the filters and transformers window and select the operator.
positiveUrl: Gets the Entry parameter from the War Room and checks each Reputation Tool to determine if the URL in this Entry parameter is malicious or non-malicious. You can change the threshold by changing the Thresholds dictionary. The function returns true if the URL is safe, otherwise returns false. Argument: Entry parameter.
positiveFile: Gets the Entry parameter from the War Room and checks each Reputation Tool to determine if the file in this Entry parameter is malicious or non-malicious. You can change the threshold by changing the Thresholds dictionary. The function returns true if the file is safe, otherwise returns false. Argument: Entry parameter.
vtCountPositives: Gets the entry parameter and checks how many good URLs are hosted on the IP in this entry. Argument: Entry parameter.
shortUrl: Gets the Entry parameter from the War Room and checks it in the Reputation tools (adding information to context and formatting the response to the War Room) when checking the URL. Argument: Entry parameter.
shortFile: Gets the Entry parameter from the War Room and checks it in the Reputation tools (adding information to context and formatting the response to the War Room) when checking the file. Argument: Entry parameter.
FormatADTimestamp: Gets the Entry parameter from the War Room and formats the timestamp returned from AD. Argument: Entry parameter.
formatCell: Gets the JSON parameter string and formats it to a regular string that can be used in a table. Argument: JSON parameter string.
flattenCell: Gets the JSON parameter string and converts it to a string that can be used in a table. Also supports tables containing sub cells. Argument: JSON parameter string.
flattenRow: Gets a key and data and adds it to the context. Checks if the key already exists. If it exists, it creates an array in the existing key. Arguments: Key and data, and optionally dedup=False to not add duplicate items.
fileResult: Creates a new file that contains the data and displays the file in the War Room. Arguments: Filename and data.
tableToMarkdown: Converts a Cortex XSOAR table in JSON format to a Markdown format table. Arguments: Table name, JSON object, and the headers to display.
setSeverity: Sets the severity of an incident. The incident must be related to the current investigation. Argument: An arg that has 2 keys: 'id' (the incident id) and 'severity' (the new severity value: Critical, High, Medium, etc.).
setIncident: Sets fields of the incident. The incident must be related to the current investigation and be the only incident in it. Argument: Dictionary of args that has 5 optional keys, among them type and severity.
createNewIncident: Creates a new incident with the fields specified. This is only carried out if an incident with the same name does not exist as an active incident. Argument: Dictionary of args that has 5 optional keys: type, severity, details, name, and the incident systems.
setOwner: Sets the owner of the incident. The incident must be related to the current investigation. Argument: Owner user name.
positiveUrl: Gets the Entry parameter from the War Room and checks each Reputation Tool to determine if the URL in this Entry parameter is malicious or non-malicious. You can change the threshold by changing the Thresholds dictionary. The function returns true if the URL is safe, otherwise returns false. Argument: Entry from War Room.
positiveFile: Gets the Entry parameter from the War Room and checks each Reputation Tool to determine if the file in this Entry parameter is malicious or non-malicious. You can change the threshold by changing the Thresholds dictionary. The function returns true if the file is safe, otherwise returns false. Argument: Entry from War Room.
positiveIP: Gets the Entry parameter from the War Room and checks each Reputation Tool to determine if the IP in this Entry parameter is malicious or non-malicious. Argument: Entry from War Room.
shortCrowdStrike: Formats the response from CrowdStrike to pretty Markdown. Argument: Entry from War Room.
shortUrl: Gets the Entry parameter from the War Room and checks it in the Reputation tools (adding information to context and formatting the response to the War Room) when checking the URL. Argument: Entry from War Room.
shortFile: Gets the Entry parameter from the War Room and checks it in the Reputation tools (adding information to context and formatting the response to the War Room) when checking the file. Argument: Entry from War Room.
shortIp: Gets the Entry parameter from the War Room and checks it in the Reputation tools (adding information to context and formatting the response to the War Room) when checking the IP. Argument: Entry from War Room.
CORTEX XSOAR ADMINISTRATOR’S GUIDE | Work with SLAs
© 2020 Palo Alto Networks, Inc.
SLA Overview
Cortex XSOAR supports specific fields for managing SLAs and timers.
SLAs are an important aspect of case management. You can incorporate SLA fields in your cases so you can
view how much time is left before the SLA becomes past due, as well as configure actions to take in the
event that the SLA does pass.
In addition, you can now view the number of cases that are at risk of passing the SLA, or are already late,
using pre-configured widgets. The widgets present information based on the default threshold, which can
be configured globally.
Present SLAs in Incident Summary Layouts
Once you have configured the SLA fields and timers, your incident summary screens will display information
about the status of the SLA, if any of the SLAs are past due, and if so, by how much.
In the image above, for example, we see that the timers for several of the fields are in various states.
Detection SLA is past due, while Remediation SLA has nearly 5 days remaining.
Customize CSV Reports for SLA Fields
You can add SLA specific information to your CSV reports. Edit the table columns field in the JSON report
to include the SLA data that you want.
For example, assuming that you have an existing timer field named myslatimer, we can use the following
options as CSV columns:
• myslatimer: displays a summary of the timer status and SLA.
• myslatimer.runStatus: displays a run status of the current timer.
• myslatimer.totalElapsed: displays the total elapsed time, in seconds, of the current timer. If the timer has
ended, it displays the total duration.
STEP 5 | Define a duration for the SLA of this field. If no value is entered, the field serves as a counter.
STEP 6 | Determine the risk threshold for this timer. When the timer falls below this threshold, it is
considered at risk. By default, the threshold is 3 days, which is defined in the global system
parameter.
STEP 7 | Under Run on SLA Breach, select the script to run when the SLA time has passed. For example,
email the supervisor or change the assignee.
Only scripts to which you have added the SLA tag appear in the list of scripts that you can select.
When defining the values for the slaField and timer commands, all values must be in
lowercase and cannot have any spaces.
• resetTimer - Resets a timer. This command should be used to enable a timer that was stopped.
Example
The following example shows you how to pause a timer for a specific field in the current incident:
!pauseTimer timerField=timetodetection
You can specify the incidentID to change the timer for a different incident.
STEP 2 | Select the playbook to which you want to add the timer and click Edit.
STEP 3 | Click the + symbol to add a new task or click an existing task to edit the task.
STEP 4 | In the Timers tab, select the action that you want the timer to perform for the given task. Valid
options are:
• Start - Starts the timer.
dueDate (Date): The date by which the SLA for this timer is due.
sla (INT, in minutes): The period defined as the SLA for this timer. This is the value that you defined in the timer field.
lastPauseDate (Date): The last date at which the SLA timer was paused.
startDate (Date): The date at which the SLA timer was started.
accumulatedPause (INT, in seconds): The total number of seconds that the timer was in a paused state.
totalDuration (INT, in seconds): The total number of seconds that the timer was running. This property is populated after the timer is stopped.
runStatus (String): Represents the current status of the timer. Values are: idle, running, paused, ended.
The SLA status is not defined unless the timer is in a stopped mode, meaning, either
paused or ended.
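Given the timer properties above, the time remaining until an SLA breach for a running timer could be computed along these lines. This is a sketch, assuming dates are epoch seconds and sla is in minutes as documented; the function name and dictionary access are illustrative:

```python
import time

def sla_seconds_remaining(timer, now=None):
    """Return seconds left until the timer's SLA is breached.
    A negative value means the SLA is already past due."""
    now = time.time() if now is None else now
    # Time actually counted = wall-clock elapsed minus accumulated pauses
    elapsed = (now - timer["startDate"]) - timer["accumulatedPause"]
    return timer["sla"] * 60 - elapsed       # sla is defined in minutes

timer = {"startDate": 1_000_000, "accumulatedPause": 600, "sla": 60}
remaining = sla_seconds_remaining(timer, now=1_000_000 + 1800)
# 1800 s elapsed minus 600 s paused = 1200 s counted; 3600 - 1200 = 2400
```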
• based on an SLA field
• based on a timer field
For example, you can search for all of the timer fields that are currently running, or you can search for all
incidents with a specific SLA status.
STEP 2 | To search for an incident whose timer is still active, enter the following:
• The name of the field
• The run status
• The due date. This is required for queries whose run status is neither ended nor paused, to improve
query performance.
STEP 3 | To search for an incident whose timer is no longer active, enter the SLA Status.
Examples
In the following example, we are searching for all incidents that have an SLA timer called slatimer and fulfill
the following criteria:
• The run status is neither ended nor paused AND the due date is later than now, meaning, the due date
has not yet passed.
OR
• Incidents whose run status is ended or paused and the SLA status is within the allotted time.
In the following example, we are searching for all incidents that fulfill the following criteria:
• The run status is either ended or paused AND the due date is earlier than now, meaning, the due date
has already passed.
OR
• Incidents whose run status is ended or paused and the SLA status is late.
In the following example, we are searching for all incidents that fulfill the following criteria:
• The run status is neither ended nor paused AND the due date is between now and 5 hours. The 5 hours
represents our risk threshold.
Machine Learning Models Overview
Machine learning models enable Cortex XSOAR to analyze and predict behavior through incident types
and fields. The model uses past incidents that have already been classified to classify incoming events
automatically.
Machine learning models are used mainly for phishing incidents. You can train a model to automatically
recognize, for example, phishing emails, legitimate emails, and spam.
Machine learning models enable you to do the following:
• Use the model's prediction as part of a scoring/severity set.
• Close incidents automatically, more accurately than by manually defining a threshold.
• Handle only incidents that the classifier marks as malicious.
You train models by inputting data through incident types and fields. Cortex XSOAR returns all the incidents
containing the specified field. You can then map these field values into different verdicts. The verdicts
determine what the model predicts, so you should make the verdict definitions meaningful.
By default, Cortex XSOAR trains models from input data contained in an Email body, Email HTML, and
Email subject. You can change the name of the fields containing the subject and body. Cortex XSOAR then
trains a model and returns the accuracy of the model against each category.
To create a machine learning model, see Create a Machine Learning Model.
The machine learning model for phishing can be used as follows:
• As part of the Phishing Investigation - Generic v2 playbook, when adding the
DBotPredictPhishingWords command, or when creating a playbook. When Cortex XSOAR runs the
playbook, it uses the machine learning model that you have defined.
• By running the !DBotPredictPhishingWords command in the War Room or in the Machine
Learning page, by typing: !DBotPredictPhishingWords modelName="name"
emailBody="body" emailBodyHTML="email body html" emailSubject="email subject".
For examples, see Phishing Command Examples Using a Machine Learning Model
You can run a phishing classifier demo, without the need to create a machine learning model.
STEP 1 | Select Settings > Advanced > ML Models > New Model.
STEP 2 | In the Model name field, type the name of the model that you want to create.
STEP 3 | (Optional). In the Description field, type a meaningful description for the machine learning
model.
STEP 4 | In the Incident type field, from the drop-down list, select the incident type on which you want
the model to be trained, such as Phishing.
STEP 5 | In the Incident field, from the drop-down list, select the incident field that you want the
model to learn to predict. The model trains using these fields as a label. For example, Email
Classification.
STEP 6 | Select the date range over which you want to train the model. The more incidents, the
better the results, so it is recommended to use a longer period.
STEP 7 | In the Maximum number of incidents to test field, type the number of incidents used
to train the model. Reduce the number only if the number of incidents is too large and causes
performance problems. Use a higher number if you have more samples in your environment.
Default is 3000. The results appear in the Field Mapping field.
STEP 8 | In the Verdict fields, define the name of the verdict for which to map your data.
Verdicts are groups of labels; each verdict includes one or more labels. You must map all existing
labels into 2 or 3 different verdicts. The model is trained using these verdicts. All labels that are
mapped into the same verdict are treated as if they have the same label. You can choose any name for
your verdict field, but the model is calculated based on the verdict, so it should be a
meaningful name.
STEP 9 | In the Field Mapping field, drag and drop the Field Mapping data into Verdict fields.
You need a minimum of 50 results returned. For an example, see Machine Learning Model Example.
STEP 10 | If you want to change the fields where email body and email subject are stored in the
incident, in the Argument Mapping select the equivalent fields for Email body, Email HTML
and Email subject.
By default, the machine learning model trains the Email body, Email HTML and Email subject.
4. Drag and drop the data from Field Mapping into the relevant Verdict fields.
The returned data shows that it found 3 categories together with the percentage scores, which reflect the
precision of the results.
You can now use the machine learning model in the Phishing Investigation - Generic v2 playbook, in the
Machine Learning page, or in the War Room. For examples of how to use it in the War Room, see Phishing
Command Examples Using a Machine Learning Model.
For an example of how to create the machine learning model, see Machine Learning Model Example.
After running the command, Cortex XSOAR returns the following information:
• TextTokensHighlighted: The text of the email message with the highlighted positive words (if found).
• Label: The predicted label found by the model.
• Probability: The prediction probability.
• PositiveWords: Words that encouraged the model to make the prediction.
• NegativeWords: Words that are in general not correlated with the predicted class and reduced the
model’s confidence in its prediction.
In the War Room, run the following commands:
!DBotPredictPhishingWords modelName="demoModel" emailBody=”Your email account
was LOGIN today by Unknown IP address: 10.240.180.228, click on UPDATE
The main purpose is to demonstrate how the phishing classifier feature works, so that
you learn how to train a classifier using your own data. We do not recommend using it for
production.
To run the phishing classifier, in the War Room, type !DBotPredictOutOfTheBox, and add the relevant
parameters.
The output parameters are the same as the output of DBotPredictPhishingWords. The
DBotPredictPhishingWords automation allows you to get a prediction for a phishing
incident, using a model trained with your own classifier. For more information, see Machine
Learning Models Overview.
DBotPredictOutOfTheBox Parameters
The following table describes the DBotPredictOutOfTheBox parameters.
emailBody: The plain text of the email body for which you want to get the prediction.
emailBodyHTML: The HTML of the email for which you want to get the prediction. If the email body is filled, this field can be left empty.
emailSubject: The plain text of the email subject for which you want to get the prediction.
labelProbabilityThreshold: All predictions are given a confidence value between 0 and 1. If this parameter is set to 0, all model predictions are returned. If it is greater than 0, only predictions with confidence higher than this value are returned.
minTextLength: Minimum length of text (subject and body) required for getting a prediction.
wordThreshold: The lower this value is, the more words are highlighted in the results.
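The labelProbabilityThreshold behavior described above can be sketched in plain Python. The data shape below is illustrative only; verify the actual output shape on your system:

```python
def apply_threshold(predictions, label_probability_threshold):
    # A threshold of 0 returns every prediction; otherwise only
    # predictions with confidence above the threshold are returned.
    if label_probability_threshold == 0:
        return predictions
    return [p for p in predictions
            if p["Probability"] > label_probability_threshold]

predictions = [
    {"Label": "Malicious", "Probability": 0.97},
    {"Label": "Spam", "Probability": 0.40},
]
print(apply_threshold(predictions, 0.8))  # only the Malicious prediction
```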
DBotPredictOutOfTheBox Outputs
The following table describes the DBotPredictOutOfTheBox outputs.
TextTokensHighlighted: The text of the email message with the highlighted positive words (if found).
Label: The predicted label found by the model.
Probability: The prediction probability between 0 and 1. The higher this value, the more confident the classifier is in its prediction.
PositiveWords: Words that encouraged the model to make the prediction.
NegativeWords: Words that are in general not correlated with the predicted class and reduced the model's confidence in its prediction.
DBotPredictOutOfTheBox Examples
The following examples describe the parameters and output of the DBotPredictOutOfTheBox
automation.
Run the following command in the War Room:
!DBotPredictOutOfTheBox emailBody="<Message>".
Label: Malicious
Message: Your email account was LOGIN today by Unknown IP address: 10.240.180.228, click on UPDATE <http://helpd.moonfruit.com/> to validate and verify your email account now to avoid Outlook Web App been disabled for user

Label: Spam
Message: Your Outlook Exceeded its storage limit Click here <https://docs.google.com/forms/d/e/1FAIpQLSckF75SUgErVFmTEfHhhFkiX2-4V2tgC0nssDvpkqZnPz4pkQ/viewform> fill and SUBMIT for more space or you wont be able to send Mail.

Label: Malicious
Message: Your email password expires in 2 days to retain email password and details. CLICK HERE https://docs.google.com/forms/d/e/1FAIpQLSewQbYraWXtr4atKnGGyNncumJFKy-En54dvjVK6-Mxlu5G-A/viewform to update immediately

Label: Spam
Message: lose 22.5lbs in 3 weeks! flush fat away forever! free 30-day supply **http://www.adclick.ws/p.cfm?o=423&s=pk19.** to unsubscribe, click below: http://u2.azoogle.com/?z=93-1090346-62llc4
428 CORTEX XSOAR ADMINISTRATOR’S GUIDE | Lists
© 2020 Palo Alto Networks, Inc.
Work With Lists
A list is a collection of one or more items of the same type, for example plain text, JSON, or HTML, that you
can use in scripts, playbooks, or any other place where the context button appears (double-curly brackets).
Use cases
These are some common use cases for creating and using lists in Cortex XSOAR.
• A list of allowed executable files against which to check potentially malicious executable files.
• An HTML template that you can define to use as part of a Communication task.
• Store a data object, for example JSON, that you can use as input for scripts and playbooks.
• Use the getList or addToList commands in a script to take action based on the list data. For example,
res = demisto.executeCommand("getList", {"listName": demisto.args()["listName"]})
will return all list entries in the script.
List commands
You can use the following list commands in scripts and playbook tasks.
getList
Retrieves the contents of the specified list. The command has the following required arguments.
• listName: the name of the list for which to retrieve the contents.
createList
Creates a list with the supplied data. The command has the following required arguments.
• listName: the name of the list to create.
• listData: the data to add to the new list.
addToList
Appends the supplied items to the specified list. If you add multiple items, make sure you use the same list
separator that the list currently uses, for example a comma or a semicolon. The command has the following
required arguments.
• listName: the name of the list to which to append items.
• listData: the data to add to the specified list. The data will be appended to the existing data in the list.
setList
Adds the supplied data to the specified list and overwrites the existing list data. The command has the following
required arguments.
• listName: the name of the list in which to set the data.
• listData: the data to add to the specified list. The data will overwrite the existing data in the list.
removeFromList
Removes a single item from the specified list. The command has the following required arguments.
• listName: the name of the list from which to remove an item.
• listData: the item to remove from the specified list.
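The separator rule that addToList follows can be sketched in plain Python. The demisto.executeCommand API itself is only available inside Cortex XSOAR, so this models just the append behavior; the function name and sample data are illustrative:

```python
def add_to_list(existing, new_items, separator=","):
    # addToList appends items using the separator the list already
    # uses, so entries stay consistently delimited.
    if not existing:
        return separator.join(new_items)
    return existing + separator + separator.join(new_items)

print(add_to_list("allowed1.exe,allowed2.exe", ["allowed3.exe"]))
# prints: allowed1.exe,allowed2.exe,allowed3.exe
```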
STEP 6 | (Optional) To modify the list, select the list and click the edit icon.
Cortex XSOAR Enterprise Mobile App
Overview
The Cortex XSOAR Enterprise mobile app provides a set of features that enable you to make incident and
task-based decisions on a mobile device.
The App sends the same notifications as the web application. It enables you to do the following:
• View system dashboards and incidents
• Assign analysts to incidents
• View, send, and receive messages, and upload and download files and attachments in the War Room
• Update incident types and severity
• Modify incident tasks
• Close incidents
• View, assign, and mark your tasks as complete
System Requirements
The Cortex XSOAR Enterprise mobile app has OS-specific and app-specific requirements.
Ensure you meet all system requirements before downloading and installing the App.
Download
Download the App from Google Play (Android) or the App Store (iOS).
After downloading the App, you can use it to access some of the same features as the web application.
Before you add the CA-signed certificate to Cortex XSOAR server, ensure the certificate contains the full
certificate chain. If the certificate does not contain the full certificate chain, you need to Obtain the Full
Certificate Chain for a Certificate.
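One quick way to check whether a PEM file carries the full chain is to count its certificate blocks; a bundle with only one block usually lacks the intermediates. A minimal sketch (the helper name and sample bundle are illustrative):

```python
def chain_length(pem_text):
    # Each certificate in a PEM bundle starts with its own BEGIN marker;
    # a server certificate shipped with its chain contains more than one.
    return pem_text.count("-----BEGIN CERTIFICATE-----")

bundle = (
    "-----BEGIN CERTIFICATE-----\nMIIB...server...\n-----END CERTIFICATE-----\n"
    "-----BEGIN CERTIFICATE-----\nMIIB...intermediate...\n-----END CERTIFICATE-----\n"
)
print(chain_length(bundle))  # prints: 2
```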
Users
If you do not use a trusted, official CA-signed SSL certificate, or you do not use the Android APK, you need to
Configure the Mobile Device for Users.
Check whether you can connect to Cortex XSOAR through your browser, even if you cannot
connect through the Cortex XSOAR app. If you cannot connect to the server through your
browser, there could be other issues, such as VPN connectivity into the organization’s
private network.
This procedure enables you to deploy the Android APK file in an environment with a self-signed certificate
and an MDM, or other internal distribution mechanism. You do this by manually changing the Android APK file
and distributing the APK to your users through a direct link to the APK or the MDM of your choice.
STEP 2 | Place the privately issued certificate (.crt file) that you wish to deploy in the Android app on
the same computer referred to in step 1.
STEP 6 | Distribute the APK to your users (by a direct link to the APK or the MDM of your choice) and verify
that connectivity is established.
STEP 7 | (Optional) If the MDM environment issues an error (for example, the APK is not zip aligned,
the APK signature is invalid or does not exist, or similar), you need to re-run the script
with zipalign and jarsigner enabled.
1. Ensure that you install zipalign, which is part of Android Studio.
2. Ensure that you install jarsigner, which is part of JDK.
Ensure your machine’s path is set correctly to include the jarsigner tool.
3. Run the script in step 4 and add the following options:
-z, --zipalign: The path to the zipalign tool.
-k, --keystore: The path to the keystore to use for jarsigning the APK.
-a, --alias: The alias.
If the MDM environment issues an "Upload a new apk file with different package" error, or a
similar error, contact Customer Support.
STEP 8 | Repeat the process for every build of the apk that you wish to deploy.
Relevant for Administrators only. The following instructions are for Android 9 Pie. For other
Android versions, the navigation path to find trusted credentials is different.
STEP 2 | To upload the CA-signed certificate to a Cortex XSOAR server, follow the instructions in
HTTPS with a Signed Certificate.
STEP 4 | (Optional) Locate the root certificate and verify that it includes the -----BEGIN CERTIFICATE-----
header and the -----END CERTIFICATE----- footer.
STEP 7 | (Optional) Paste the entire certificate chain directly under the root certificate in the cert.pem
file.
STEP 1 | In Cortex XSOAR go to Settings > About > Troubleshooting > Security and download the
certificate.
STEP 3 | Copy the .crt file to the /sdcard folder on your mobile device.
STEP 4 | On your mobile device, go to Settings > Security & location > Advanced > Encryption &
credentials and when prompted, enter the certificate name for the .crt file.
Switch Accounts in Multi-Tenant Deployments
Similar to Cortex XSOAR, when you log in to a Multi-Tenant deployment, the Cortex XSOAR Enterprise
mobile app displays the main account, where you can see dashboards and incidents for all the accounts you
manage.
STEP 2 | In the Account field, from the drop down list, select the account to manage.
Manage Dashboards in the Cortex XSOAR Enterprise Mobile App
In the Home tab, you view your dashboard. To view all dashboards, tap and select the dashboard
you want. Like the web application, the mobile app remembers and displays the most recently viewed
dashboard.
Some widgets in the dashboard open the incident list, which is automatically filtered according to the
widget's filter. For example, tapping the Today's New Incidents widget takes you to the Incidents list,
showing only incidents opened today.
Some dashboard widgets are not supported in the mobile app. To view full dashboards, go to the web
application. Use the Dashboard Builder (in the web app) to create a custom dashboard for the mobile app.
• War Room Chat
You can view, send, and receive messages, and upload and download files and attachments.
• Tasks
You can assign analysts to the task, choose options to complete the task, and manually mark the task as
complete.
To perform an action on the incident, tap the action menu. Supported incident actions include:
• Assign an analyst to the incident
• Change the incident severity
• Change the incident type
• Close the incident
Agents
> Agents Overview
> Shared Agents
> D2 Agent
> Agent Tools
Agents Overview
Agents enable you to transfer files and execute commands on remote machines.
You can create a D2 Agent for a specific incident or you can create a shared agent for a number of
incidents.
You need to install the D2 Content Pack, which enables you to use commands and automations for D2 and
Shared Agents. You can use the out-of-the-box automation scripts or configure and create scripts
in Agent Tools.
Most D2 automations and commands are relevant for both D2 Agents and Shared Agents.
After installation, you can run PowerShell commands directly from Cortex XSOAR on common applications,
such as Office 365 and Active Directory. You need to Configure Cortex XSOAR to Use PowerShell before
running PowerShell commands.
STEP 3 | Create and configure a new integration instance by clicking Add instance.
Ciphers (Linux only): You can change the cipher mechanism used by SSH to install the agent.
SMB protocol version: For Windows remote installation, the SMB protocol is used (port 445). In case of SMB errors, you may need to specify SMB 2 or SMB 3.
STEP 4 | (Optional) Configure Agent Tools that invoke existing forensic applications.
If you want to create agents for more than one incident, create a shared agent.
D2 Agents are usually installed on Windows, as UNIX systems have different solutions, such
as SSH. If you cannot access a target machine, you might need to set up a Cortex XSOAR
engine before you can install and run agents on that machine.
Install a D2 Agent
Install a D2 agent to assist you when performing an investigation in the War Room.
Before you begin, do the following:
• (Windows) Ensure you have at least Power User credentials on the target machine.
• (Windows) Enable the Server Message Block protocol on the target machine.
• (Remote installations) Ensure firewall port 445 (SMB) is open on the target machine.
You can install the D2 agent manually or remotely. When port 445 is open, you can install the D2 agent
remotely (from the Cortex XSOAR server) the first time you communicate with it. If you experience issues
during installation on Windows machines, see Troubleshoot a Remote Installation (Windows).
3. On the target machine, unzip the agent zip file and run the agent.
4. (Optional) Type the following command to test the agent installation:
!D2Exec cmd=`cmd /c dir` using=<agent-instance-name>
STEP 4 | (Optional) Configure Agent Tools that invoke existing forensic applications.
NT_STATUS_LOGON_FAILURE: Verify that the username and password are correct in the integration instance configuration settings.
NT_STATUS_DUPLICATE: This error is related to a DNS issue. If you are using Amazon, use the actual IP address and not the URL.
NT_STATUS_CONNECTION_RESET: The target machine might not support an SMB 1 connection. Make sure SMB 2 is active on the target machine and specify the SMB argument value as 2. For more information about how to detect, enable, and disable different versions of SMB for Windows and Windows Server, see SMB assistance.
Agent is installed but unresponsive: Verify that the base URL for D2 agents and engines is correct and reachable from the network segment where the agent is installed. Go to Settings > About > Troubleshooting and verify that you defined the external IP address or base URL of your Cortex XSOAR server.
param([string]$myarg = "")
Write-Host "This is my argument: " $myarg
1. Zip up the file. In this example, we will name the zip file script.
Important to note:
command.push("powershell.exe"): Runs PowerShell.
command.push("'" + which("printorg.ps1") + "'"): The absolute path of the executable script.
//+script/printorg.ps1: An annotation that tells the agent which tools to send to the Windows
machine: the name of the zip file (script) and the script name (printorg.ps1).
The Cortex XSOAR server comes with a few example agent scripts. These will help you become more
acquainted with the functions. You can copy the scripts, change them, and check the results.
pwd
function string pwd();
Returns the absolute path of the working folder.

which
function string which(path string);
Returns the absolute path for a given path or executable.
Example:
console.log(which('ls')); // /bin/ls
console.log(which('syslog')); // /usr/bin/syslog

pack
function null pack(content object, contentformat string[optional]);
Returns the content as an entry on the investigation. Content can be a JSON object or a string value. contentformat may be one of the following: 'table', 'text', or 'json'. If not provided, the format will be determined according to the type of content.

pack_file
function null pack_file(path string, content string[optional]);
Returns the path as a file entry on the investigation. If content is provided, it will be attached to the file.

files
function []FileInfo files(folder string, recurse bool[=false], hashes bool[=false], regex string[=""]);
Retrieves a list of files from the folder. If recurse is true, sub-folders will be included. If hashes is true, it will compute hashes for each file. If regex is provided, it will return only file names matching the regex.
Returns an array of: {Created int, CreatedStr string, Accessed int, AccessedStr string, Changed int, ChangedStr string, Path string, Type string, Size int, Mode string, MD5 string, SHA1 string, SHA256 string, SHA512 string, SSDeep string}
Example:
console.log(JSON.stringify(files('/tmp',true,true)));

copy
function int copy(src string, dest string, overwrite bool[=false], regex string[=""]);
Copies the source (src) to the destination (dest). If overwrite is false, it will throw an exception if the destination exists. If regex is provided, it will copy only files matching the regex. This function is not recursive.
Returns: The number of items copied.

move
function int move(src string, dest string, overwrite bool[=false], regex string[=""]);
Same as copy, but also deletes the source files.

del
function int del(file string, regex string[=""]);
Deletes the file. If the file is a folder, and regex is not empty, it will remove only the files matching regex from that folder.

grep
function []GrepMatch grep(path string, regex string, recursive bool[=false]);
Searches the given path for files matching regex. If recursive is true, it will dive into the sub-folders.
Returns an array of: {Path string // path to the matching file, Offsets [][]int // the matching indexes on the line}

strings
function []string strings(path string, min int[=4], max int[=1024]);
Searches for strings contained in the file provided by path. Use min and max to control the sizes of the strings that are captured.
Example:
console.log(JSON.stringify(strings('/bin/ls')));

bytes
function string bytes(file string, offset int[=0], size int[=1024]);
Returns a size-byte chunk of the file starting at offset.
Example:
console.log(JSON.stringify(bytes('ddb',0,15)))

mkdir
function bool mkdir(path string);
Returns 'true' if a folder was created. Throws an exception otherwise.

rmdir
function bool rmdir(path string);
Removes the folder provided by path.
Returns: 'true' if a folder was removed. Throws an exception otherwise.

join_path
function string join_path(part1, part2... string);
Joins the paths provided by part1 to partN.
Returns: Path string.
Example:
console.log(join_path("/tmp","one","two","three.file")); // /tmp/one/two/three.file

http
function HTTPResponse http(url string, arg object);
Performs an HTTP GET call to the URL with the provided arg as a request body.
Returns object: {StatusCode int // HTTP response code, Status string // HTTP status as text, Cookies []http.Cookie, Body string, Headers string[][]}
http.Cookie object: {Name string, Value string, Path string // optional, Domain string // optional, Expires time.Time // optional, RawExpires string // for reading cookies only, MaxAge int // MaxAge=0 means no 'Max-Age' attribute specified, MaxAge<0 means delete cookie now (equivalently 'Max-Age: 0'), MaxAge>0 means the Max-Age attribute is present and given in seconds, Secure bool, HttpOnly bool, Raw string, Unparsed []string // raw text of unparsed attribute-value pairs}
Example:
console.log(JSON.stringify(http("http://www.google.com/lala")));

read_file
function string read_file(path string);
Returns the entire content of the path. Throws an exception if it does not exist.

wait
function string wait(seconds int);
Sleeps for the number of defined seconds.

registry
function Object[] registry(path string);
Gets all values under the registry path provided by path as a set of JSON objects. This function is always recursive if a key name is provided.
The key name must start with one of the following: "HKEY_CLASSES_ROOT", "HKEY_CURRENT_USER", "HKEY_LOCAL_MACHINE", "HKEY_USERS" or "HKEY_CURRENT_CONFIG".

ifconfig
function Object[] ifconfig();
Returns a list of all interface adapters and their configurations.

accounts
function Object[] accounts();
Returns a list of all defined user accounts.
STEP 3 | In the //+winpmem/winpmem_2.0.1.exe line in the script, change the line to the file you want to
run. For example, //+New-collectorD2/New-collectorD2.bat
STEP 4 | In the var exename = 'winpmem_2.0.1.exe'; line, enter the file you want to execute.
//+New-collectorD2/New-collectorD2.bat
// {
if (env.OS !== 'windows') {
  throw ('script can only run on Windows');
}
var arch = wmi_query('select OSArchitecture from win32_operatingsystem')[0].OSArchitecture;
var exename = 'Testd2.bat';
var dumpFile = env.TEMP + '\\New-collectorD2.bat';
var output = execute('cmd /c dir /s ' + env.TEMP, 30); // 30-second timeout
pack(output);
//if (output.Success) {
//  pack_file(dumpFile);
//  del(dumpFile);
//} else {
//  throw output.Error;
//}
// pack('Winpmem failed: ' + ex);
//}
cd c:\
dir
commonfields:
id: 9a18460a-e72f-488a-8112-044c9a7be76a
version: 13
name: D2Run
script: |-
//+TestBatch/TestBatch.bat
STEP 1 | Go to the incident where you want to run the D2 Processes automation.
Audit Trail
The audit trail displays a log of all administrative user interactions with Cortex XSOAR. The log is sorted by
date and covers which users interacted with system objects, how they interacted, and the associated data. The audit trail
does not include actions performed in the War Room; those actions are documented in the War Room itself.
You can search the audit trail log for user interactions based on free text.
To view an audit trail, navigate to Settings > Users and Roles > Audit Trail.
To customize which columns are visible in the audit trail log, click the table settings button.
To export the audit trail log, use the GetAudits command from the Cortex XSOAR REST API. See the Cortex
XSOAR REST API documentation.
Extract a Day’s Audit Trail
You can write a script that runs daily to extract that day's audit trail and upload it to your SIEM with
an uploader program. The following is an example of a curl command that fetches all audits from June 22,
2017 onward, up to 10,000 actions.
curl -k -X POST https://<IP>:<PORT>/settings/audits \
  -H 'accept: application/json' \
  -H 'authorization: <API KEY>' \
  -H 'content-type: application/json' \
  -d '{"size": 10000, "query": "modified:>2017-06-22T00:00:00"}'
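The same request can be built with Python's standard library. The helper below mirrors the curl example; the base URL and API key are placeholders you must supply, and the helper name is illustrative:

```python
import json
import urllib.request

def build_audit_request(base_url, api_key, since="2017-06-22T00:00:00", size=10000):
    # Mirrors the curl example: POST /settings/audits with a size limit and
    # a modified-date query. Send with urllib.request.urlopen(req); to match
    # curl's -k flag, pass an unverified SSL context to urlopen.
    body = json.dumps({"size": size, "query": "modified:>" + since}).encode()
    return urllib.request.Request(
        base_url + "/settings/audits",
        data=body,
        headers={
            "accept": "application/json",
            "authorization": api_key,
            "content-type": "application/json",
        },
        method="POST",
    )

req = build_audit_request("https://demisto.example.com:443", "<API KEY>")
```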
Purge Audit Entries
You can define the retention period of the audit trail. By default, audit entries are retained forever. To
purge periodically, add server settings in Settings > About > Troubleshooting with the following keys:
• demisto.audits.purge: true starts the purging process.
• demisto.audits.purge.retention: The value is the number of days to keep the log. The default is
365.
To define how often to check the audit trail log, in Settings > About > Troubleshooting
add demisto.audits.purge.delay where the value is how often to run the retention
(demisto.audits.purge.retention). The default is every 24 hours.
Purging can also be done manually. The following is an example of a curl command that purges all audits
from July 22, 2017 to July 30, 2017.
curl -k -X POST https://<IP>:<PORT>/settings/audits/purge \
  -H 'accept: application/json' \
  -H 'authorization: <API KEY>' \
  -H 'content-type: application/json' \
  -d '{"page": 0, "size": 100, "fromDate": "2017-07-22T09:01:08.462954465+03:00", "toDate": "2017-07-30T12:23:08.462954597+03:00", "period": {"by": "", "toValue": null, "fromValue": null, "field": ""}, "fromDateLicense": "0001-01-01T00:00:00Z"}'
The following table describes components and actions.
Component Actions
account • block
• unblock
• add
• delete
• stop
• start
APIKeys • delete
• add
AppServer • restart
backup • edit
Canvas • add
• edit
• delete
classifier • add
• copy
• edit
content • install
credentials • add
• edit
• delete
Dashboard • add
• delete
• edit
engine • add
• edit
entry • restore
• delete
• removeentrypermanently
• edit
execute • add
host • delete
• downloadconf
• add
HyperProcess(reputation) • add
• delete
incident • edit
• close
• execute
• delete
• duplicate
• notcreated
• add
incidentField • add
• edit
• delete
IncidentType • disable
• enable
• delete
• edit
• add
indicator • edit
• add
• delete
integrations • add
• delete
• edit
integrationsConfig • add
• edit
• delete
• upload
investigation • close
• reopen
• edit
• add
invite • add
• utilized
• delete
Jobs • add
• edit
• disable
• enable
• delete
• pause
• resume
• runnow
• abort
Layout • add
• copy
License • invalid
List • edit
• add
• delete
LiveBackup • switch
• add
• delete
login • failure
• in
• out
• outall
• outmyself
• outmyselfothersessions
• outuser
logout • failure
PasswordPolicy • edit
playbook • add
• edit
• attach
• detach
• upload
• copy
• delete
PreprocessRule • edit
• add
PropagationLabel • delete
• add
• edit
RemoteDB • download
• enable
• disable
• add
• create
role • add
• edit
• delete
script • copy
• upload
• edit
• add
• delete
ServerConfiguration • edit
task • add
• copy
Telemetry • edit
user • edit
• lockout
• unlock
• add
• enable
• setpassword
whitelist • delete
• batchcreate
• add
Widget • edit
• add
• reset
STEP 1 | Select Settings > About > Troubleshooting > Add Server Configuration.
Filter Example
In this example, we want to match audit trail entries for login success and login failure. To
accomplish this, set the syslog.filter parameter to login/.*.
Sample Syslog
CEF:0|Demisto|Demisto Enterprise|3.6.0-master.27665.da330b76ddbdf9bbf8e1dab82978550f2b5446c8|login|in|3|suser=john startTime=1521835934123052 cs1=john cs1Label=identifier cs2=host/ip: [::1]:62296\nUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36 cs2Label=details client: 127.0.0.1:57284
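When forwarding these entries to a SIEM, the seven pipe-delimited CEF header fields can be split off before the key=value extension. A minimal sketch (it ignores escaped pipes, which full CEF allows; the function name is illustrative):

```python
def parse_cef_header(line):
    # CEF header: CEF:version|vendor|product|device_version|signature_id|name|severity|extension
    parts = line.split("|", 7)
    keys = ["version", "vendor", "product", "device_version",
            "signature_id", "name", "severity"]
    header = dict(zip(keys, parts[:7]))
    header["version"] = header["version"].replace("CEF:", "")
    header["extension"] = parts[7] if len(parts) > 7 else ""
    return header

sample = "CEF:0|Demisto|Demisto Enterprise|3.6.0|login|in|3|suser=john"
print(parse_cef_header(sample)["signature_id"])  # prints: login
```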