
Module 2.4


12
Mitigating Vulnerabilities
In this chapter you will learn:

■ How common attacks may threaten your organization

■ Best practices for securing environments from


commonly used attacks

■ Common classes of vulnerabilities

■ Mitigating controls for common vulnerabilities


12.1 Attack Types
• Auditing is the process of examining the actions of an entity, such as an individual
user, with the goal of conclusively tying the actions taken in a system, or on a
resource, to that entity.
• Auditing directly supports accountability and nonrepudiation, in that individuals
and entities can be held accountable for their actions, and they cannot dispute that
they took those actions if auditing is properly configured.
• Auditing would not be possible without the process of logging. Logging is the activity
that takes place at the lower level to ensure that we can audit actions.
• Logging is the process of recording these actions, as well as specific data related to
those actions.
Injection Attacks
In injection attacks, attackers execute
malicious operations by submitting untrusted
input into an interpreter that is then
evaluated. Injection vulnerabilities, though
sometimes obscure, can be very easy to
discover and exploit.
Remote Code Execution (RCE)
• Remote code execution (RCE) describes an attacker’s ability to execute malicious code
on a target platform, often following an injection attack.
• RCE is widely considered one of the most dangerous types of computer vulnerabilities,
because it may allow for arbitrary command execution and does not require physical
connectivity.
http://victimwebsite.com/?code=system('whoami');
• Here are some common tools and techniques for directly mitigating RCE:
Input validation and sanitization
Application firewalls
Runtime application self-protection (RASP)
Containerization and virtualization
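As a sketch of the first mitigation in the list, input validation with a strict allowlist keeps untrusted input out of any interpreter. The command names and helper below are hypothetical, not part of any specific product:

```python
import subprocess

# Hypothetical allowlist: only these exact diagnostic commands may run.
ALLOWED_COMMANDS = {"whoami", "hostname", "uptime"}

def run_diagnostic(user_input: str) -> str:
    """Validate untrusted input against a strict allowlist before execution."""
    cmd = user_input.strip()
    if cmd not in ALLOWED_COMMANDS:
        # Anything that is not an exact allowlisted token is rejected outright.
        raise ValueError(f"rejected input: {cmd!r}")
    # shell=False plus a list argument means no shell ever interprets the
    # input, so metacharacters like ; or && have no effect.
    return subprocess.run([cmd], shell=False, capture_output=True, text=True).stdout
```

Contrast this with the vulnerable pattern in the URL above, where attacker-supplied text reaches `system()` unvalidated.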
Extensible Markup Language Attack
• Extensible Markup Language (XML) injection is a class of attack that relies on
manipulation or compromise of an application by abusing the logic of an XML parser to
cause some unwanted action.
• XML injections that target vulnerable parsers generally take two forms: XML bombs
and XML External Entity (XXE) attacks.
• An XML bomb is an attack designed to cause the XML parser or the application it
supports to crash by overloading it with data.
• XXE can be used to initiate denial of service, conduct port scanning, and perform server-side request forgery (SSRF) attacks. The attack works through abuse of the XML parser to execute functions on behalf of the attacker, using a reference to an external entity.
Extensible Markup Language Attack
(cont.)

• Here are some common tools and techniques for XXE mitigation:
Input validation and sanitization
Disabling external entities and DTDs
Using a secure XML parser
Containerization and virtualization
Using a web application firewall
Regularly updating software
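One way to combine input validation with disabled DTDs is to reject any document that declares a DOCTYPE or entity before it ever reaches the parser, since entities cannot be defined without a DTD. A minimal Python sketch (the function name is illustrative):

```python
import xml.etree.ElementTree as ET

def parse_untrusted_xml(xml_text: str) -> ET.Element:
    """Reject documents containing DTDs before parsing (blocks XXE and XML bombs)."""
    # A DOCTYPE declaration is required to define entities, so refusing it
    # up front disables both external entities and entity-expansion bombs.
    if "<!DOCTYPE" in xml_text or "<!ENTITY" in xml_text:
        raise ValueError("DTDs and entity declarations are not allowed")
    return ET.fromstring(xml_text)
```

Production code would typically use a hardened parser library instead of a string check, but the pre-parse rejection illustrates the control.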
Structured Query Language Injection
(SQLi)
• SQL injection (SQLi) is a popular form of injection in which an attacker injects arbitrary
SQL commands to extract data, read files, or even escalate to an RCE.
• These attacks are not particularly sophisticated, but the consequences of their
successful usage are particularly damaging, because an attacker can obtain, corrupt,
or destroy database contents.
• Attackers can use several different types of SQL injection attacks to compromise a web
application.
• If you are examining logs, you should be able to recognize why a query such as SELECT * FROM table WHERE id=1-SLEEP(15) looks suspicious: an injected SLEEP() call is a classic time-based blind SQLi probe.
Structured Query Language Injection
(cont.)
• The following are some common tools and techniques for SQLi mitigation:
Input validation and sanitization
Parameterized queries
Least privilege access
Database firewalls (DBFWs) and proxies
Regularly updating software
Database encryption
Database activity monitoring
Using stored procedures
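Parameterized queries are the most direct SQLi control in the list above. A minimal sketch using Python's built-in sqlite3 driver (the table and data are hypothetical):

```python
import sqlite3

# Hypothetical in-memory database standing in for a real back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [("1", "alice"), ("2", "bob")])

def find_user(user_id: str):
    # The ? placeholder hands user_id to the driver strictly as data, so
    # input like "1 OR 1=1" or "1-SLEEP(15)" is matched literally as a
    # value and is never interpreted as SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE id = ?", (user_id,))
    return cur.fetchall()
```

The same placeholder style (with driver-specific syntax) applies to every mainstream database library.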
Cross-Site Scripting
• Cross-site scripting (XSS) is a type of injection attack that leverages a user's browser to execute malicious code that can access sensitive information such as passwords and session information.
• XSS comes in two forms: persistent
(stored) and nonpersistent
(reflected).
• Another type of attack, called a
DOM-based XSS attack, occurs
when an attacker injects a malicious
script into the client-side HTML
being parsed by a browser.
Cross-Site Scripting (cont.)
• The following are several tools and techniques for XSS prevention and mitigation:
Input validation and sanitization
Output encoding
Contextual output encoding
Content Security Policy (CSP)
HTTPOnly cookies
X-XSS-Protection header
Web application firewall
Security testing
Regularly updating software
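Output encoding, the second item above, can be as simple as escaping untrusted text before placing it into HTML. A sketch using Python's standard html module (the render_comment helper is illustrative):

```python
import html

def render_comment(comment: str) -> str:
    """Output-encode untrusted input before inserting it into an HTML page."""
    # html.escape converts <, >, &, and quotes into entities, so an injected
    # <script> tag is displayed as harmless text instead of being executed.
    return f"<p>{html.escape(comment, quote=True)}</p>"
```

Contextual output encoding means choosing the escaping rules for where the data lands (HTML body, attribute, JavaScript, URL), which real template engines handle automatically.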
Cross-Site Request Forgery (CSRF)
• Cross-site request forgery (CSRF) is an attack that exploits the trust a website has in a
user’s browser.
• The attack works by tricking a user into performing an action on a website without
their knowledge or consent. This is achieved by a malicious actor crafting a request
that is sent to the target website and is designed to mimic a legitimate request.
• When the victim interacts with the website, the browser will send the crafted request,
causing the website to perform an unintended action.
• The victim will be unaware of the attack, as the attacker is able to piggyback on the
victim’s previously authenticated session.
Cross-Site Request Forgery (cont.)
• Here are some tools and techniques for preventing and mitigating CSRF attacks:
HTTP Referer
SameSite attribute
CAPTCHAs
CSRF tokens
CSRF protection frameworks
User re-authentication
Multifactor authentication
Web application firewall
Regularly updating software
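CSRF tokens work by tying each form submission to a secret the attacker cannot read. A minimal sketch using Python's secrets and hmac modules (the session dict stands in for real server-side session storage):

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Generate a per-session anti-CSRF token and store it server side."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embedded in the form as a hidden field

def verify_csrf_token(session: dict, submitted: str) -> bool:
    """Compare the submitted token with the stored one in constant time."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)
```

A forged cross-site request cannot include the correct token, because the attacker's page cannot read the victim's form.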
Directory Traversal
• A directory traversal attack enables an attacker to view, modify, or execute files in a
system that they wouldn’t normally be able to access.
• For web applications, these files normally reside outside of the web root directory and
should not be viewable.
• However, if the server has poorly configured permissions, a user may be able to view
other assets on the server.
• If an attacker determines a web application is vulnerable to directory traversal attack,
they may use one or more explicit Unix-compliant directory traversal character
sequences (../) or an encoded variation of it to bypass security filters and access files
outside of the web root directory.
Directory Traversal (cont.)
• Here are some common tools and techniques for preventing and mitigating directory
traversal attacks:
Input validation
Principle of least privilege
Filename sanitization
Secure coding practices
File-handling libraries
Reverse proxy
Process isolation
Web application firewall
Regularly updating software
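Filename sanitization against traversal usually means resolving the requested path and confirming it still sits under the web root, so that ../ sequences cannot escape it. A Python sketch (the web root path is hypothetical):

```python
from pathlib import Path

WEB_ROOT = Path("/var/www/html").resolve()  # hypothetical web root

def safe_resolve(requested: str) -> Path:
    """Resolve a user-supplied relative path and confirm it stays in the web root."""
    candidate = (WEB_ROOT / requested).resolve()
    # resolve() collapses any ../ sequences; a path that escaped the web
    # root will no longer have WEB_ROOT as an ancestor.
    if candidate != WEB_ROOT and WEB_ROOT not in candidate.parents:
        raise PermissionError(f"traversal attempt: {requested!r}")
    return candidate
```

Encoded variants (%2e%2e%2f and similar) must be decoded before this check, which web frameworks normally do for you.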
Server-Side Request Forgery (SSRF)
• SSRF, or server-side request forgery, refers to a vulnerability that arises when a web
application allows a user to specify a URL to fetch a remote resource, without properly
validating the URL.
• This can allow an attacker to send a crafted request to an unexpected destination,
potentially bypassing firewalls, virtual private networks (VPNs), and other network
access controls.
• Common tools and techniques for prevention and mitigation of SSRF are as follows:
Input validation
Principle of least privilege
URL validation and sanitization
Secure coding practices
Web application firewall
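URL validation for SSRF typically combines a scheme check, a block on internal IP literals, and a host allowlist. A simplified Python sketch (the allowlisted host is hypothetical; a real deployment must also validate the address the hostname actually resolves to):

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_HOSTS = {"images.example.com"}  # hypothetical fetch allowlist

def _is_internal(host: str) -> bool:
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # not an IP literal; the allowlist check below applies
    # Blocks loopback, RFC 1918 ranges, and link-local targets such as
    # the cloud metadata address 169.254.169.254.
    return ip.is_private or ip.is_loopback or ip.is_link_local

def validate_fetch_url(url: str) -> str:
    """Allow only http(s) URLs pointing at approved, non-internal hosts."""
    parts = urlparse(url)
    if parts.scheme not in ("http", "https"):
        raise ValueError("scheme not allowed")
    host = parts.hostname or ""
    if _is_internal(host) or host not in ALLOWED_HOSTS:
        raise ValueError(f"blocked fetch target: {host!r}")
    return url
```

DNS rebinding and redirects can still bypass a purely syntactic check, which is why network-level egress controls remain important.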
Buffer Overflow
Vulnerabilities
Attackers often will write malware that takes
advantage of some quality or operation of
main memory.
Buffer Overflow
• The temporary space that a program has allocated to perform operating system or
application functions is referred to as the buffer.
• Buffers usually reside in main memory, but they may also exist in hard drive and
cache space.
• When the volume of data exceeds the capacity of the buffer, the result is buffer
overflow. If this occurs, a system may attempt to write data past the limits of the
buffer and into other memory spaces.
• Buffer overflows affect nearly every type of software and can result in unexpected
results if not managed correctly.
Buffer Overflow (cont.)
• Here is a list of prevention and mitigation techniques for buffer overflow
vulnerabilities:
Input validation
Principle of least privilege
Runtime application self-protection (RASP)
Secure coding practices
Stack canaries
Address space layout randomization (ASLR)
Data execution prevention (DEP)
Code signing
Heap randomization
Stack-Based Attacks
• Stack-based buffer overflows work by overwriting key areas of the stack with too much
data to enable custom code, located elsewhere in memory, to be executed in place of
legitimate code.
• The first widely distributed Internet worm was made possible through a successful
stack-based buffer attack.
• The Morris Worm, written by graduate student Robert Tappan Morris from Cornell
University in the late 1980s, took advantage of a buffer overflow vulnerability in a
widely used version of fingerd, a daemon for a simple network protocol used to
exchange user information.
• The stack is a very structured, sequential memory space, so the relative distance between any two local variables in memory is guaranteed to be relatively small.
Heap-Based Attacks
• Attacks targeting the memory heap are usually more difficult for attackers to
implement because the heap is dynamically allocated.
• In many cases, heap attacks involve exhausting the memory space allocated for a
program.
Integer Attacks
• An integer overflow takes advantage of the fixed, architecture-defined memory regions associated with integer variables.
• When the result of an arithmetic operation exceeds the maximum value an integer type can represent, the value wraps around, which attackers can exploit to bypass length checks or trigger undersized memory allocations.
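Because Python integers are arbitrary precision, the fixed-width wraparound behind integer attacks has to be modeled explicitly. A small illustrative sketch:

```python
# Model a 32-bit unsigned register; all names here are illustrative.
U32_MAX = 2**32 - 1

def add_u32(a: int, b: int) -> int:
    """32-bit unsigned addition: results silently wrap at 2**32."""
    return (a + b) % (U32_MAX + 1)

# A size check such as `if length + header_size < limit` can be bypassed
# when the sum wraps past U32_MAX and compares as a very small number,
# leading to an undersized allocation and a subsequent overflow.
```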
Broken Access
Control
Broken access control is a term used to
describe the failure of access controls to
prevent unauthorized access to sensitive data
or functionality.
Broken Access Control

• This vulnerability can occur when an application fails to properly enforce access
controls, such as authentication or authorization.
• The impact of broken access control can be severe, allowing attackers to access
sensitive information, modify data, or execute unauthorized actions on behalf of
legitimate users.
• Broken authentication attacks attempt to gain control of one or more accounts by
granting the attacker the same privileges as the victim. Authentication is "broken"
when attackers are able to assume user identities via compromising passwords, keys
or session tokens, user account information, and other details.
• Common examples of broken access control vulnerabilities include vertical privilege escalation, horizontal privilege escalation, insecure direct object references, and lack of function-level access control.
Broken Object Level Authorization

• When exposing services via APIs, some servers fail to authorize on an object basis,
potentially creating the opportunity for attackers to access resources without the
proper authorization to do so.
• In some cases, an attacker can simply change a URI to reflect a target resource and
gain access.
• Broken object level authorization (BOLA) checks should always be implemented and
access granted based on the specific role of the user.
Broken Object Level Authorization (cont.)
• Here is a list of common tools and techniques for mitigating and preventing broken
object level authorization:
Role-based access control (RBAC)
Attribute-based access control (ABAC)
Least privilege
Access control testing
Proper error handling
Session management
Regular software updates
Web application firewall
Object level authorization (BOLA) checks
Function level authorization (BFLA) checks
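An object-level authorization check simply verifies, on every request, that the authenticated user is entitled to the specific object referenced. A minimal sketch with hypothetical invoice records:

```python
# Hypothetical in-memory records keyed by the object ID exposed in the API.
RECORDS = {
    "inv-1001": {"owner": "alice", "total": 250},
    "inv-1002": {"owner": "bob", "total": 900},
}

def get_invoice(session_user: str, invoice_id: str) -> dict:
    """Authorize at the object level: the caller must own the record."""
    record = RECORDS.get(invoice_id)
    if record is None or record["owner"] != session_user:
        # Returning the same error for "missing" and "not yours" avoids
        # leaking whether the object exists.
        raise PermissionError("not found")
    return record
```

Without the ownership comparison, an attacker who changes inv-1001 to inv-1002 in the URI reads another user's data, which is exactly the BOLA scenario.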
Broken User Authentication

• When authentication mechanisms fail to validate credentials correctly, allow


credentials that are too weak, accept credentials in an insecure manner, or allow brute
forcing of credentials, they create conditions that an attacker can take advantage of.
• This vulnerability arises when user authentication mechanisms are improperly
implemented or fail to provide adequate protection against authentication-related
attacks.
• Attackers exploit these vulnerabilities to gain unauthorized access to sensitive data or
functionality, compromising user accounts and other critical assets.
• One of the most common examples of broken user authentication is when weak
passwords are used, making it easier for attackers to guess or brute force their way
into user accounts.
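Defending stored credentials against guessing and brute forcing starts with salted, deliberately slow password hashing. A sketch using PBKDF2 from Python's standard library (the iteration count is illustrative and should be tuned for your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    """Salted PBKDF2: slows offline guessing against stolen hash databases."""
    salt = salt if salt is not None else os.urandom(16)  # unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(hash_password(password, salt)[1], digest)
```

Hashing complements, rather than replaces, password policy, MFA, and rate limiting.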
Broken Function Level Authorization
(BFLA)
• Broken function level authorization (BFLA) is a critical security risk that occurs when
an application fails to properly restrict access to certain functionality based on the
user’s role or privileges.
• This can lead to unauthorized access to sensitive data or functionality, including the
ability to modify or delete data, execute commands, or perform other malicious
actions.
• The vulnerability arises when an application relies solely on client-side checks to
enforce access control policies or when access control policies are not properly
designed or implemented.
• Attackers can exploit this vulnerability through a variety of means, including direct manipulation of URLs or parameters, forging HTTP requests, and exploiting known weaknesses in the application.
Cryptographic
Failures
Cryptographic failures are vulnerabilities or
weaknesses in a system’s cryptographic
algorithms, protocols, or key management.
Cryptographic Failures

• Attackers can exploit these weaknesses to bypass or break cryptographic protections,


such as encryption or digital signatures, and gain unauthorized access to sensitive
information or systems.
• One of the main causes of cryptographic failures is the improper implementation or
use of cryptographic techniques.
• For example, weak or outdated encryption algorithms, poor key management
practices, and using the same key for multiple purposes can all create vulnerabilities
in a system’s cryptographic protections.
Cryptographic Failures (cont.)
• Here is a list of common mitigations and prevention techniques for cryptographic
failures:
Algorithm selection
Key management
Password storage
Secure random number generators
Certificate pinning
Digital signatures
Application firewalls
Regularly updating software and firmware
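For secure random number generation specifically, the key point is to draw from a cryptographically secure source rather than a general-purpose generator. In Python, for example:

```python
import secrets

# secrets draws from the operating system's CSPRNG; the random module is a
# predictable Mersenne Twister and must never be used for keys or tokens.
session_token = secrets.token_urlsafe(32)  # unguessable, URL-safe session ID
api_key = secrets.token_hex(16)            # 128 random bits as 32 hex characters
```

Using a predictable generator for tokens or keys is itself a cryptographic failure, even if the surrounding algorithm choices are sound.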
Data Poisoning
Data poisoning is an attack that involves the
manipulation of training data used to create
machine learning (ML) or artificial intelligence
(AI) models.
Data Poisoning

• The goal of the attacker is to introduce biased or malicious data into the training set,
which can then result in incorrect or harmful decisions by the model.
• Attackers can conduct data poisoning attacks through a variety of methods, including
injecting biased or malicious data directly into the training set, manipulating data
sources or sensors to produce biased data, and manipulating human input used to
label or classify data.
• The impact of a successful data poisoning attack can be severe, ranging from incorrect
or inaccurate decisions by the model to potentially dangerous actions taken based on
the model’s output.
Data Poisoning (cont.)
• The following are some tools and techniques that can be used in the prevention and
mitigation of data poisoning attacks:
Data quality controls
Data monitoring
Outlier detection
Model validation and testing
Data preprocessing
Access controls
Regular software updates
Input validation
Principle of least privilege
Privilege
Escalation
Privilege escalation is simply any action that
enables a user to perform tasks they are not
normally allowed to do.
Privilege Escalation

• This often involves exploiting a bug, implementation flaw, or misconfiguration.


• Escalation can happen in a vertical manner, meaning that a user gains the privileges
of a higher privilege user.
• Alternatively, horizontal privilege escalation can be performed to get the access of
others in the same privilege level.
• Attackers will use these privileges to modify files, download sensitive information, or
install malicious code.
• Jailbreaking and rooting mobile phones are common examples of privilege escalation.
Privilege Escalation (cont.)
• Here is a list of common mitigations and protections against privilege escalation:
Least privilege
Role-based access control (RBAC)
Attribute-based access control (ABAC)
Segmentation and isolation
Logging and monitoring
Penetration testing
Application firewalls
Regular software updates
Identification and
Authentication
Attacks
In securing authentication systems, the main
challenge lies in identifying and
communicating just the right amount of
information to the authentication system to
make an accurate decision.
Identification and Authentication Attacks
• If an attacker is clever enough to fabricate a user’s information sufficiently, they are
effectively the same person in the eyes of the authentication system.
• Some prevention techniques for identification and authentication attacks are as
follows:
Strong password policy
Multifactor authentication (MFA)
CAPTCHA
Rate limiting
Session management
Encryption
Access control
User education
Password Spraying
• Password spraying is a type of brute-force technique in which an attacker tries a single password against a system and then iterates through multiple systems on a network using the same password.
• Detecting spraying attempts is much easier if there is a unified interface, such as that provided by a SIEM solution.
• Common techniques to detect password spraying include the following:
High number of authentication attempts within a defined period of time across
multiple systems
High number of bad usernames, or usernames that don’t match company
standard
High number of account lockouts over a defined period of time
Multiple successful logins from a single IP in a short time frame
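The first two detection heuristics above can be sketched as a small log-analysis routine. The event format below is a hypothetical normalized tuple, such as a SIEM might produce after parsing authentication logs:

```python
from collections import defaultdict

def detect_spray(events, threshold=5):
    """Flag source IPs whose failures span many distinct usernames.

    `events` is an iterable of (src_ip, username, success) tuples drawn
    from a hypothetical normalized authentication log.
    """
    failed_targets = defaultdict(set)
    for src_ip, username, success in events:
        if not success:
            failed_targets[src_ip].add(username)
    # One password tried everywhere shows up as one IP touching many
    # distinct usernames, unlike ordinary typo-driven failures.
    return {ip for ip, users in failed_targets.items() if len(users) >= threshold}
```

A real detection would also apply the time window and lockout-count heuristics listed above.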
Credential Stuffing

• Credential stuffing is a type of brute-force attack in which credentials obtained from a


data breach of one service are used to authenticate to another system in an attempt
to gain access.
• Recent data breaches have each exposed hundreds of thousands to millions of
credentials.
• If even a small fraction of a credentials list can be used to gain access to accounts, it’s
worth it from the attacker’s vantage point, especially with the use of automation.
• For organizations, mandating MFA is effective in slowing the effectiveness of attacks,
especially those that are automated.
Impersonation

• Sometimes attackers will impersonate a service to harvest credentials or intercept


communications.
• Fooling a client can be done one of several ways.
• First, if the server key is stolen, the attacker appears to be the server without the
client possibly knowing about it.
• Additionally, if an attacker can somehow gain trust as the certificate authority (CA)
from the client, or if the client does not check to see if the attacker is actually a
trusted CA, then the impersonation will be successful.
Man-in-the-Middle

• Essentially, man-in-the-middle, or monkey-in-the-middle (MITM), attacks are


impersonation attacks that face both ways: the attacker impersonates both the client
to the real server and the server to the real client.
• Acting as a proxy or relay, the attacker will use their position in the middle of the
conversation between parties to collect credentials, capture traffic, or introduce false
communications.
• In the case of HTTPS, the client browser establishes a Secure Sockets Layer (SSL)
connection with the attacker, and the attacker establishes a second SSL connection
with the web server.
Session Hijacking

• Session hijacking is a class of attacks by which an attacker takes advantage of valid


session information, often by stealing and replaying it.
• HTTP traffic is stateless and often uses multiple TCP connections, so it uses sessions to
keep track of client authentication.
• Session information is just a string of characters that appears in a cookie file, the URL
itself, or other parts of the HTTP traffic.
• An attacker can get existing session information through traffic capture, an MITM
attack, or by predicting the session token information.
• By capturing and repeating session information, an attacker may be able to take over,
or hijack, the existing web session to impersonate a victim.
Local File
Inclusion/Remote File
Inclusion Attacks
Local file inclusion (LFI) and remote file
inclusion (RFI) attacks are a common
vulnerability found in web applications.
LFI/RFI Attacks
• LFI occurs when an attacker is able to include a file located on the server in a web
page, allowing them to view sensitive information or execute arbitrary code.
• RFI is similar to LFI, but instead of including a file on the server, an attacker is able to
include a file located on a remote server, giving them even more control over the
targeted system.
• Both can happen when the application doesn’t properly validate user input or restrict
access to sensitive files or remote resources.
• Attackers can leverage these vulnerabilities to access sensitive files or execute
malicious code on the server.
LFI/RFI Attacks (cont.)
• Some common mitigations and prevention techniques for LFI/RFI attacks are as
follows:
Input validation
Access controls
Filename sanitization
Secure coding practices
Web application firewall
Regularly updating software
File-handling libraries
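For LFI/RFI, mapping the user-supplied page parameter through an allowlist, instead of into a file path, removes the vulnerability class entirely. A minimal sketch with hypothetical template names:

```python
# Hypothetical mapping from ?page= parameter values to includable templates.
PAGES = {
    "home": "templates/home.html",
    "about": "templates/about.html",
    "contact": "templates/contact.html",
}

def resolve_include(page_param: str) -> str:
    """Map the page parameter through an allowlist, never into a raw path."""
    try:
        return PAGES[page_param]
    except KeyError:
        # Input such as ../../etc/passwd or http://evil.example/shell.txt
        # never reaches the filesystem or the network.
        raise ValueError(f"unknown page: {page_param!r}")
```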
Rootkits
Rootkits are among the most challenging types
of malware because they are specially
designed to maintain persistence and root-
level access on a system without being
detected.
Rootkits
• As with other types of malware, rootkits can be introduced by an attacker leveraging
vulnerabilities to achieve privilege escalation and clandestine installation.
• Alternatively, they may be presented to a system as an update to BIOS or firmware.
• Rootkits are difficult to detect because they sometimes reside in the lower levels of an
operating system, such as in device drivers and in the kernel, or even in the computer
hardware itself, so the system cannot necessarily be trusted to report any
modifications it has undergone.
Rootkits (cont.)
• A few protection and mitigation techniques for rootkits are as follows:
Regular system updates
Endpoint protection
Secure boot process
Kernel patch protection
Insecure Design
Vulnerabilities
Attackers often use software vulnerabilities to
get around an organization’s security policies
intended to protect its data.
Insecure Design Vulnerabilities
• Here are some common prevention and mitigation techniques for insecure design:
Secure development lifecycle (SDLC)
Component analysis (software composition analysis [SCA])
Static application security testing (SAST) / Dynamic application security testing
(DAST)
Runtime application self-protection (RASP)
Access controls
Error handling
Configuration management
Logging and monitoring
Application firewalls
Regular software updates
Improper Error Handling
• Error handling is an important and normal function in software development.
• Improper error handling can be a security concern when too much information is
disclosed about an exception to outside users.
• Sometimes error messages don’t reveal a lot of detail, but they can still provide clues
to help attackers.
• As part of a secure coding practice, policies on error handling should be documented,
including what kind of information will be visible to users and how this information is
logged.
Dereferencing
• A dereferencing vulnerability, or null pointer dereference, is a common flaw that occurs when software attempts to access a value stored in memory that does not exist.
• A null pointer is a value used in programming to indicate that a pointer does not refer to a valid object in memory.
• Often, dereferencing results in an immediate crash and subsequent instability of an
application.
• Attackers will try to trigger a null pointer dereference in the hopes that the resulting
errors enable them to bypass security measures or learn more about how the program
works by reading the exception information.
Insecure Object Reference
• Insecure object reference vulnerabilities occur when the object identifiers in requests
are used in a way that reveals a format or pattern in underlying or back-end
technologies, such as files, directories, database records, and URLs.
• Rather than referencing sources directly, developers can avoid exposing resources by
using an indirect reference map.
• With this method, a random value is used in place of a direct internal reference,
preventing inadvertent disclosure of internal asset locations.
• Enforcing access controls at the object level also addresses the primary issue with this
vulnerability: insufficient or missing access check.
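The indirect reference map described above can be sketched in a few lines: clients receive random tokens, and the server keeps the only mapping back to internal identifiers. Names below are illustrative:

```python
import secrets

class IndirectReferenceMap:
    """Hand clients random tokens instead of raw internal identifiers."""

    def __init__(self):
        self._by_token = {}

    def expose(self, internal_id: str) -> str:
        token = secrets.token_urlsafe(16)
        self._by_token[token] = internal_id
        return token  # safe to place in URLs or hidden form fields

    def resolve(self, token: str) -> str:
        # An unknown or guessed token resolves to nothing (KeyError).
        return self._by_token[token]
```

The random token reveals no format or pattern, so attackers cannot enumerate back-end files, records, or URLs; object-level access checks are still required at resolve time.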
Race Condition
• A race condition vulnerability is a defect in code that creates an unstable quality in the
operation of a program arising from timing variances produced by programming logic.
• A subclass of race condition vulnerabilities, time-of-check to time-of-use (TOCTOU), is
often leveraged by attackers to cause issues with data integrity.
• A TOCTOU flaw arises when software checks the state of a resource before using it, because the state can change in the gap between the time of check and the time of use.
• CVE-2016-7098 describes such a possibility in wget, a popular command-line tool that allows file retrieval from servers: the race could allow bypassing restrictions imposed by an access control list (ACL).
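The classic TOCTOU fix is to stop checking before using and instead attempt the operation directly, handling failure. A Python sketch contrasting the two patterns:

```python
import os
import tempfile

def read_config_unsafe(path: str) -> str:
    """TOCTOU-prone: the file can change between the check and the open."""
    if os.path.exists(path):        # time of check
        with open(path) as f:       # time of use: a gap an attacker can race
            return f.read()
    return ""

def read_config_safe(path: str) -> str:
    """Race-free: attempt the open directly and handle the failure case."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return ""
```

The safe version performs the check and the use as a single operation from the program's point of view, leaving no window for the resource to be swapped.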
Sensitive Data Exposure
• Sensitive data exposure vulnerabilities occur when an application or system does not
adequately protect data from access to unauthorized parties.
• Data can include authentication information, such as logins, passwords, and tokens, as
well as personally identifiable information (PII), protected health information (PHI), or
financial information.
• At the minimum, all organizations can follow a set of practices to ensure baseline
protections against exploits of data exposure vulnerabilities.
• The process begins with identifying which data is sensitive according to privacy laws,
regulatory requirements, or your organization’s own definitions.
• In terms of data storage, it’s important to collect and store only the minimum amount
of data needed to fulfill a business requirement.
Insecure Components
• Though modern software development practices make use of open source and
external components to speed up the software development process, overreliance on
these components, especially when they’re not fully examined from a security aspect,
can lead to dangerous exposure of your organization’s data.
1.2 Operating System Concepts
• Although most people are familiar with the four major variations of operating systems,
such as Windows, UNIX, Linux, and macOS, there are countless other variations, including
embedded and real-time operating systems as well as older operating systems, such as
BSD, OS/2 Warp, and so on.
• Operating System Characteristics:
The OS is in charge of managing all the hardware and software resources on the
system
The OS also provides abstraction layers between the end user and the hardware
The operating system also serves as a mediator for applications installed on the
system to interact with both the user and system hardware.
• At its basic level, the operating system consists of critical core system files and also provides the ability to execute applications on the system.
Windows Registry

• Every single configuration element for the Windows operating system is contained in its registry. The registry is the central repository database for all configuration settings in Windows, whether they are simple desktop color preferences or networking and security configuration items.
• The registry is the most critical portion of the operating system, other than its core
executables.
• It is one of the first places an analyst goes to look for issues, along with the log files, if
Windows is not functioning properly, or if the analyst suspects the operating system
has been compromised.
• The Windows registry is a hierarchical database, which is highly protected from a security perspective. Only specific programs, processes, and users with high-level permissions are able to access it directly.
Windows Registry
Hives
The five hives are shown in the
attached image.
Although the registry stores all
configuration details for the Windows
operating system and installed
applications, configuration changes
routinely are not made to the registry
itself. They are usually made through
other configuration utilities that are
part of the operating system and its
applications. For instance, you would
not make changes to group policy
directly in the registry; you would
simply use the group policy editor,
which would update the registry.
Linux Configuration Settings

• Rather than storing configuration settings in a proprietary hierarchical database, as


Windows does, Linux configuration settings are stored in simple text files.
• Most of these configuration text files are stored in the /etc directory and its related
subdirectories. These text files are not integrated, although you will often see
references to other configuration files in them.
• It’s important to note that configuration files in Linux are well protected and only
certain applications, daemons, and users, such as root, have direct access to them.
• On the exam, you may be asked to answer questions regarding scenarios that test your
understanding on the fundamentals of both Windows and Linux configuration settings.
At minimum, you should understand the very basic concepts of configuration settings
such as the registry.
System Hardening
• Systems shipped from a vendor typically come with default configuration settings. An example is the common use of the default credentials admin:admin.
• Before a system is put into operation, it should go through a process known as
hardening, which means that the configuration of the system is made more secure
and locked down. This can be done manually or automated using hardening tools or
scripts.
• Typical actions taken for system hardening include the following:
Updates with the most recent operating system and application patches
Unnecessary services turned off
Unnecessary open network ports closed
Password changes for all accounts on the system to more complex passwords
New accounts created with very restrictive privileges
Installation of antimalware, host-based intrusion detection, and EDR software
File Structure
• The file structure of an operating system dictates how files are stored and accessed on
storage media, such as a hard drive. The file structure is operating system dependent.
• Most modern operating systems organize files into hierarchical logical structures,
resembling upside-down trees, where a top-level node in the tree is usually a directory
and is represented as a folder in the GUI.
• Most often files have individual extensions that are not only somewhat descriptive of
their function but also make it easier for the applications that use those files to access
them. Examples: .docx, .pdf, .jpeg, .mkv.
• Windows uses the NTFS file system, proprietary to Microsoft, and most modern Linux
variants use a file system known as ext4.
• Files typically have unique signatures that are easily identifiable using file verification
utilities. This can help a cybersecurity analyst determine whether a file has been
modified or tampered with.
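File verification of this kind is typically done with cryptographic hashes; a minimal sketch using only Python's standard library:

```python
import hashlib

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_file(path: str, expected_digest: str) -> bool:
    """Compare the current digest against a known-good baseline."""
    return file_sha256(path) == expected_digest
```

Comparing the current digest against a baseline recorded when the file was known to be good reveals any modification, since even a one-bit change produces a completely different digest.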
System Processes
• Processes are ongoing activities that execute to carry out a multitude of tasks for
operating systems and applications.
• As a cybersecurity analyst, you should be familiar with the basic processes for
operating systems and applications that you work with on a daily basis.
• Windows processes can be viewed using a utility such as Task Manager.
• In Linux and other UNIX-based systems, processes are referred to as being spawned by
Linux services, or daemons. One of the basic tools for viewing processes in Linux is the
ps command.
• You should understand how to view and interact with processes for both Windows and
Linux for the exam, using Task Manager and the ps command, respectively.
[Figure: Viewing processes in Linux using the ps command.]
[Figure: Viewing processes in the Windows Task Manager.]
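Beyond viewing processes interactively, analysts often post-process `ps` output in scripts. A small sketch that parses `ps -e`-style text into records (the sample output is illustrative):

```python
def parse_ps(output: str) -> list:
    """Parse ps-style output (PID TTY TIME CMD) into a list of dicts."""
    lines = output.strip().splitlines()
    headers = lines[0].split()
    # Split each row into at most len(headers) fields so the final
    # command column survives even if it contains spaces.
    return [dict(zip(headers, line.split(None, len(headers) - 1)))
            for line in lines[1:]]

sample = """\
  PID TTY          TIME CMD
    1 ?        00:00:03 systemd
  812 ?        00:00:00 sshd
 1304 pts/0    00:00:00 bash
"""
procs = parse_ps(sample)
```

A real collector would feed `subprocess.run(["ps", "-e"], capture_output=True)` output into the same parser.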
Hardware Architecture
• Hardware architecture refers not only to the logical and physical placement of hardware
in the network architecture design but also the architecture of how the hardware itself
is designed with regard to the CPU, trusted computing base, memory allocation, secure
storage, boot verification processes, and so on.
• Some hardware architectures are better suited from a security perspective than others;
many hardware architectures have security mechanisms built in, such as those that use
Trusted Platform Modules (TPMs) and secure boot mechanisms, for instance.
• The TPM is a secure, tamper-resistant location for storing encryption keys and
performing highly trusted cryptographic operations on Windows and Linux devices.
With Apple devices, it’s called the Secure Enclave; on Samsung Android devices, the
equivalent is Knox.
• Secure boot prevents unauthorized operating systems (or any other software) from
loading during startup.
1.3 Network Architecture
• A network architecture refers to the nodes on a computer network and the manner in
which they are connected to one another.
• There is no universal way to depict this, but it is common to, at least, draw the various
subnetworks and the network devices (such as routers, firewalls) that connect them. It
is also better to list the individual devices included in each subnet, at least for valuable
resources such as servers, switches, and network appliances.
• The most mature organizations draw on their asset management systems to provide a
rich amount of detail as well as helpful visualizations.
• It should not just be an exercise in documenting where things are but in placing them
deliberately in certain areas.
Hybrid Network Architecture
• In most cases, network architectures are hybrid constructs incorporating physical,
software-defined, virtual, and cloud assets.
On-premises Architecture
• The most traditional network architecture is a physical one. In a physical network
architecture, we describe the manner in which physical devices such as workstations,
servers, firewalls, and routers relate to one another.
• Along the way, we decide what traffic is allowed, from where to where, and develop the
policies that will control those flows. These policies are then implemented in the
devices themselves—for example, as firewall rules or access control lists (ACLs) in
routers.
• Most organizations use a physical network architecture, or perhaps more than one.
Network Segmentation
• Network segmentation is the practice of breaking up networks into smaller
subnetworks.
• Segmentation enables network administrators to implement granular controls over the
manner in which traffic is allowed to flow from one subnetwork to another.
• Some of the goals of network segmentation are to thwart an adversary’s efforts,
improve traffic management, and prevent spillover of sensitive data.
• Network segmentation can be implemented through physical or logical means.
• Physical network segmentation uses network devices such as switches and routers.
• Logical segments are implemented through technologies such as virtual local area
networking (VLAN), software-defined networking, end-to-end encryption, and so on.
• The reason why you might want to segment hosts from others on the network is to
protect sensitive data.
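Subnet planning for segmentation is straightforward with Python's `ipaddress` module; the address ranges below are illustrative:

```python
import ipaddress

# Split a /16 corporate block into /24 segments, e.g. one per department.
corporate = ipaddress.ip_network("10.10.0.0/16")
segments = list(corporate.subnets(new_prefix=24))

servers = segments[0]       # 10.10.0.0/24 - sensitive server segment
workstations = segments[1]  # 10.10.1.0/24 - general workstation segment

# Policy check: does a given host belong to the sensitive server segment?
host = ipaddress.ip_address("10.10.0.45")
in_server_segment = host in servers
```

Membership tests like this are the same logic a firewall rule or ACL encodes when it permits or denies traffic between segments.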
Zero Trust
• Zero Trust simply means that hosts, applications, users, and other entities are not
trusted by default, but once they are trusted, through strong identification and
authentication policies and processes, that trust must be periodically reverified.
• Zero Trust is a security concept in which organizations assume that attackers have
already breached their network perimeter defenses and, as a result, do not
automatically trust any user or device that is inside the perimeter. Instead, zero trust
networks require all users and devices to be authenticated and authorized before they
are allowed to access resources on the network.
• Never trust, Always Verify.
• Zero Trust can be implemented through Zero trust network access (ZTNA). In a ZTNA
architecture, access to network resources is controlled through the use of software-
defined perimeters that are created around specific resources. These perimeters are
dynamically established and enforced by a central control plane.
Software-Defined Networking
• Software-defined networking (SDN) is a network architecture in which software
applications are responsible for deciding how best to route data (the control layer)
and then for actually moving those packets around (the data layer).
• One of the most powerful aspects of SDN is that it decouples data forwarding
functions (the data plane) from decision-making functions (the control plane),
allowing for holistic and adaptive control of how data moves around the network.
Secure Access Service Edge (SASE)
• SASE combines the concepts of software-defined wide area networking and Zero
Trust, and its services are delivered through cloud-based deployments.
• SASE is identity based; in other words, access is allowed based on the proper
identification and authentication of both users and devices.
• Another key characteristic is that SASE is entirely cloud-based, with both its
infrastructure and security mechanisms delivered through the cloud.
• SASE is designed to be globally distributed; secure connections are established close
to users wherever they are located.
Cloud Service Models
Cloud computing enables organizations to access on-demand network, storage, and
compute power, usually from a shared pool of resources.
• Software as a Service (SaaS): Google Apps, Dropbox, Salesforce, Office 365, and
iCloud are all examples of SaaS.
• Platform as a Service (PaaS): AWS Lambda, Microsoft Azure, Google App Engine,
Apache Stratos, AWS Elastic Beanstalk, Heroku.
• Infrastructure as a Service (IaaS): DigitalOcean, Linode, Rackspace, AWS, Cisco
Metapod, Microsoft Azure, Google Compute Engine (GCE).
Software as a Service (SaaS)
• SaaS allows users to connect to and use cloud-based apps over the Internet.
• Organizations access applications and functionality directly from a service provider
with minimal requirements to develop custom code in-house.
• The vendor provides the service and all of the supporting technologies beneath it.
• Any security problems that arise occur at the data-handling level.
• The most common types of SaaS vulnerabilities exist in one or more of three spaces:
visibility, management,
Platform as a Service (PaaS)
• PaaS provides customers a complete cloud platform for developing, running, and
managing applications without the cost, complexity, and inflexibility that often come
with building and maintaining that platform on premises.
• PaaS solutions are optimized to provide value focused on software development.
• PaaS is designed to provide organizations with tools that interact directly with what
may be the most important company asset: its source code.
• Service providers assume responsibility for securing the underlying platform.
Infrastructure as a Service (IaaS)
• IaaS is internet access to 'raw' IT infrastructure (physical servers, virtual machines,
storage, networking, and firewalls) hosted by a cloud provider. IaaS eliminates the
cost and work of owning, managing, and maintaining on-premises infrastructure.
• The organization provides its own application platform and applications.
• Remember that SaaS typically only offers applications, PaaS generally offers a
configured host with the operating system only, and IaaS usually offers a base server
on which the organization installs its own operating system and applications.
Security as a Service
• SECaaS is a cloud-based model for
service delivery by a specialized
security service provider. SECaaS
providers usually offer services such as
authentication, antivirus, intrusion
detection, and security assessments.
• SECaaS serves as an extension of MSSP
capabilities, providing incident
response, investigation, and recovery.
• Examples include: identity and access management, antivirus management, data loss
prevention (DLP), continuous monitoring, Firewall as a Service (FWaaS), and
vulnerability scanning.
Cloud Deployment Models
• Public cloud: shared infrastructure that supports all users who want to make use of a
computing resource.
• Private cloud: infrastructure is owned and maintained by a single organization. Public
cloud providers can also emulate a private cloud within a public cloud (Virtual
Private Cloud).
• Hybrid cloud: an organization makes use of interconnected private and public cloud
infrastructure.
• Community cloud: multiple organizations with a common interest in how data is
stored and processed share computing resources.
Cloud Access Security Broker (CASB)
• A CASB sits between each user and each cloud service, monitoring all activity,
enforcing policies, and alerting you when something seems to be wrong.
• Four pillars of CASBs:
Visibility
Threat Protection
Compliance
Data Security
• A CASB mediates access between internal clients and cloud-based services. It is
normally installed and managed jointly by both the client organization and the CSP.
1.4 Infrastructure Concepts
• Infrastructure is more than networks and networking components. It also includes the
host devices, their operating systems and applications, management structures,
processes, architectures, and so on.
• A cybersecurity analyst should become intimately familiar with their organization’s
infrastructure.
• They also need to be aware of some of the specific technologies their organization
uses, including serverless architecture, virtualization, containerization, and so on, and
how they interact with the infrastructure.
Virtualization
• Virtualization is technology that you can use to create virtual representations of
servers, storage, networks, and other physical machines. Virtual software mimics the
functions of physical hardware to run multiple virtual machines simultaneously on a
single physical machine.
• Virtualization technologies have vastly reduced the hardware needed to provide a wide
array of services and network functions.
• For the average user, virtualization has proven to be a low-cost way to gain exposure to
new software and training.
Hypervisors
• The use of hypervisors is the most
common method of achieving
virtualization.
• A hypervisor manages the physical hardware
and performs the functions necessary
to share those resources across
multiple virtual instances.
• Classifications of hypervisors:
 Type 1 (bare-metal): VMware ESXi,
Microsoft Hyper-V, and Kernel-based
Virtual Machine (KVM).
 Type 2 (hosted): VMware Workstation,
Oracle VM VirtualBox, and Parallels
Desktop.
Containerization
• A software deployment option that
involves packaging up software code
and its dependencies so that it is easier
to deploy across computing
environments.
• Containerization is simply running a
particular application in a virtualized
space, rather than running the entire
guest operating system.
• Each container operates in a sandbox,
with the only means to interact being
through the user interface or API
calls.
• The benefits of containers include
faster deployment, less overhead,
easier migration, greater scalability,
and more fault tolerance.
Serverless Architecture
• Serverless computing is a model
where backend services are provided
on an as-used basis.
• Serverless architectures rely on the
concepts of containerization and
virtualization to run small pieces of
microcode in a virtualized
environment to provide very specific
services and functions.
1.5 Identity and Access Management (IAM)
• IAM is a broad term that encompasses the use of different technologies and policies to
identify, authenticate, and authorize users through automated means.
• Identification describes a method by which a subject (user, program, or process)
claims to have a specific identity (username, account number, or e-mail address)
• Authentication is the process by which a system verifies the identity of the subject,
usually by requiring a piece of information that only the claimed identity should have.
• Authorization is a check against some type of policy to verify that this user has
indeed been authorized to access the requested resource and perform the requested
actions.
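The three steps can be sketched as distinct checks; the user store, salt, and hashing parameters below are purely illustrative:

```python
import hashlib
import hmac

# Illustrative user store: username -> (salted password hash, permissions).
SALT = b"demo-salt"  # real systems use a unique random salt per user

def _hash(pw: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), SALT, 100_000)

USERS = {"alice": (_hash("correct horse"), {"read_reports"})}

def authenticate(username: str, password: str) -> bool:
    record = USERS.get(username)          # identification: look up the claimed identity
    if record is None:
        return False
    # authentication: constant-time comparison of the password hashes
    return hmac.compare_digest(record[0], _hash(password))

def authorize(username: str, action: str) -> bool:
    # authorization: check the requested action against the user's permissions
    record = USERS.get(username)
    return record is not None and action in record[1]
```

Note how authentication and authorization are separate decisions: a correctly authenticated user can still be denied an action they have no permission for.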
Multifactor Authentication (MFA)
• Authentication is still commonly based on credentials consisting of a username and a
password.
• Multifactor authentication is the preferred modern authentication method. MFA just
means that more than one authentication factor is used.
• Most Common authentication factors:
Something you know (knowledge factor)
Something you are (biometric or inherence factor)
Something you have (possession factor)
• Other factors can be included with multifactor authentication to further secure the
process, including temporal factors (such as time of day) and location (such as logical
IP address, hostname, or even geographic location).
• Passwordless authentication is essentially any method of authentication that does not
rely on a password.
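A common possession factor is the one-time password generated by an authenticator app; a minimal HOTP sketch following RFC 4226, using only the standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                       # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The TOTP codes that authenticator apps display are just HOTP with the counter set to the current Unix time divided by a 30-second step (RFC 6238).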
Single Sign-On (SSO)
• Single sign-on (SSO) enables users to
authenticate only once and then be able
to access all their authorized resources
regardless of where they are.
• Because SSO centralizes the authentication
mechanism, that system becomes a
critical asset and thus a target for attacks.
Compromise of the SSO system, or loss
of availability, means loss of access to
the entire organization’s suite of
applications that rely on the SSO
system.
• SAML, a widely used method of
implementing SSO, provides access and
authorization decisions using XML-based
assertions exchanged between an identity
provider and a service provider.
Federation
• Federated identity is the concept of
using a person’s digital identity
credentials to gain access to various
services, often across organizations.
• Federation services can be considered
SSO but applied across multiple
organizations.
• The user’s identity credentials are
provided by a broker known as the
Federated Identity Manager or Identity
Provider (IdP).
• Many popular platforms, such as Google,
Amazon, and Twitter, take advantage of
their large memberships to provide
federated identity services for third-party
websites and applications.
OpenID
• OpenID is an open standard for user
authentication by third parties.
• Designed with web and mobile
applications in mind.
• SAML is mainly used for Enterprise and
Government applications
• OpenID defines three roles:
End user
Relying party
OpenID provider
• Commonly used on websites where
the user is asked to authenticate using
IdPs such as Google, Twitter, Apple, or
Facebook.
Privileged Access Management (PAM)
• As users change roles or move from one department to another, they often are
assigned more and more access rights and permissions. This is commonly referred to
as authorization creep.
• Enforce least privilege on user accounts.
• Because of the power privileged accounts have, they are frequently among the first
targets for an attacker.
• Here are some best practices for managing privileged accounts:
 Minimize the number of privileged accounts.
 Ensure that each administrator has a unique account (that is, no shared accounts).
 Elevate user privileges when necessary, after which the user should return to regular
account privileges.
 Maintain an accurate, up-to-date account inventory.
1.6 Encryption
• Encryption is a method of transforming readable data, called plaintext, into a form
that appears to be random and unreadable, which is called ciphertext.
• Encryption enables the transmission of confidential information over insecure channels
without unauthorized disclosure.
• The science behind encryption and decryption is called cryptography, and a system
that encrypts and/or decrypts data is called a cryptosystem.
• The two main pieces of any cryptosystem are the algorithms and the keys.
• Algorithms used in cryptography are complex mathematical formulas that dictate the
rules of how the plaintext will be turned into ciphertext, and vice versa.
• A key is a string of random bits that will be used by the algorithm to add to the
randomness of the encryption process.
Symmetric Encryption
• The sender and receiver use the
same key for encryption and
decryption.
• Also known as private key
cryptography, it is often used for
high-volume data processing
where speed, efficiency, and
complexity are important.
• Some symmetric encryption
algorithms include DES, 3DES, AES,
Blowfish, and Twofish.
• Some drawbacks are:
 If the secret key is
compromised, all messages
ever encrypted with that key
can be decrypted and read by
the attacker.
 Securely distributing the secret
key to all parties is difficult.
Asymmetric Encryption
• Also known as public-key
cryptography, it uses two keys,
designated as a public and a
private key.
• The key pairs are mathematically
related to each other in a way that
enables anything that is encrypted
by one to be decrypted by the
other.
• Only the public key is shared with
anyone; the private key is
maintained securely by its owner.
• Solves the key distribution
problem.
• Some asymmetric encryption
algorithms include RSA, Diffie-Hellman,
and ECC.
Symmetric vs. Asymmetric Cryptography
• The two approaches differ in typical key lengths and in encryption/decryption time.
• Symmetric encryption algorithms are significantly faster than asymmetric ones.
• The synergistic use of both symmetric and asymmetric encryption together is what
makes Public Key Infrastructure, along with digital certificates and digital signatures,
possible.
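The public/private key relationship can be illustrated with textbook RSA on toy numbers (for illustration only; real RSA uses 2048-bit or larger moduli and padding schemes):

```python
# Textbook RSA with tiny primes -- never use numbers this small in practice.
p, q = 61, 53
n = p * q                    # modulus: 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent: modular inverse of e

def encrypt(m: int) -> int:
    """Encrypt with the public key (e, n)."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Decrypt with the private key (d, n)."""
    return pow(c, d, n)

ciphertext = encrypt(65)
plaintext = decrypt(ciphertext)
```

Because the exponents are inverses modulo phi, whatever one key transforms the other reverses, which is exactly the property PKI, digital certificates, and digital signatures build on.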
1.7 Public Key Infrastructure (PKI)
• Identities can be ascertained informally through a decentralized web of trust, but PKI
ensures a formal process for verifying identities.
• Certificate authorities (CAs) verify someone’s identity and then digitally sign that public
key, packaging it into a digital certificate or a public key certificate (X.509).
• A certificate revocation list (CRL), which is maintained by the issuing certificate
authority (CA), is the authoritative reference for certificates that are no longer trustworthy.
• Many organizations disable CRL checks because they can slow down essential business
processes.
Digital Signatures
• Digital signatures are short
sequences of data that prove that a
larger data sequence (say, an e-mail
message or a file) was created by a
given person and has not been
modified by anyone else after being
signed.
• In practice, digital signatures are
handled by email and other
cryptographic-enabled applications,
so all the hashing and decryption are
done automatically, not by the end
user.
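The underlying "hash, then apply the private key" mechanics can be sketched with textbook RSA on toy numbers (illustration only; real signature schemes use large keys and padding such as PSS):

```python
import hashlib

# Toy RSA key pair (textbook-only illustration; see any RSA example
# with p=61, q=53: n=3233, e=17, d=2753).
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    # Hash the message, then apply the private key to the digest.
    # The digest is reduced mod n only because the toy modulus is tiny.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Apply the public key to recover the digest and compare.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest
```

Verification fails if the message was altered after signing (the digests no longer match) or if the signature was produced with a different private key, which is how signatures provide both integrity and authenticity.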
1.8 Sensitive Data Protection
• There are some types of data that need special consideration with regard to storage
and transmission.
• Unauthorized disclosure of the following types of data may have serious, adverse
effects on the associated business, government, or individual.
Personally Identifiable Information /
Personal Health Information
• PII is data that can be used to identify an individual.
• PII is sometimes referred to as sensitive personal information.
• This data requires protection because of the risk of personal harm that could result
from its disclosure, alteration, or destruction.
• PHI is any data that relates to an individual’s past, present, or future physical or mental
health conditions.
• The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a law that
establishes standards to protect individuals’ personal health information (PHI).
Cardholder Data
• Mandates by the European Union’s General Data Protection Regulation (GDPR), for
example, have introduced a sweeping number of protections for the handling of
personal data, which includes financial information.
• The Gramm-Leach-Bliley Act (GLBA) of 1999, for example, covers all US-regulated
financial services corporations.
• The Federal Trade Commission’s Financial Privacy Rule governs the collection of
customers’ personal financial information and identifies requirements regarding privacy
disclosure on a recurring basis.
• PCI DSS is an example of an industry policing itself and was created by the major credit
card companies such as Visa, MasterCard, and so on, to reduce credit card fraud and
protect cardholder information.
• You should implement security mechanisms that meet regulatory requirements for
protecting cardholder data.
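PCI DSS requires masking the primary account number (PAN) when it is displayed; a small sketch of Luhn validation and masking (the number used in the test is a standard test PAN, not a real card):

```python
def luhn_valid(pan: str) -> bool:
    """Luhn checksum used to sanity-check a primary account number."""
    digits = [int(c) for c in pan if c.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def mask_pan(pan: str) -> str:
    """Show at most the first six and last four digits, a common PCI DSS display rule."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]
```

A Luhn check only catches typos and obviously fabricated numbers; it is a format check, not proof that a card exists.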
Data Loss Prevention
• Data loss prevention (DLP) comprises the actions that organizations take to prevent
unauthorized external parties from gaining access to sensitive data.
• Many DLP solutions work in a similar fashion to IDSs by inspecting the type of traffic
moving across the network, attempting to classify it, and making a go or no-go decision
based on the aggregate of signals.
• Some SaaS platforms feature DLP solutions that can be enabled to help your
organization comply with business standards and industry regulations.
• Data loss prevention is both a security management activity and a collection of
technologies.
Data Inventories
• A good first step is to find and characterize all the data in your organization before you
even look at DLP solutions.
• Understanding data flows at the intersection between business and IT is critical to an
effective DLP implementation.
Implementing DLPs
• Network DLP (NDLP) applies data protection policies to data in motion. NDLP
products are normally implemented as appliances that are deployed at the perimeter of
an organization’s networks.
• Endpoint DLP (EDLP) applies protection policies to data at rest and data in use. EDLP
is implemented in software running on each protected endpoint.
• EDLP provides a degree of protection that is normally not possible with NDLP. The
reason is that the data is observable at the point of creation.
• Another approach to DLP is to deploy both NDLP and EDLP across the enterprise.
Obviously, this approach is the costliest and most complex. For organizations that can
afford it, however, it offers the best coverage.
• Most modern DLP solutions offer both network and endpoint DLP components, making
them hybrid solutions.
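The classification step that DLP products perform can be approximated with pattern matching; the patterns below are illustrative and far simpler than what real products use:

```python
import re

# Illustrative detection patterns for common sensitive-data types.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> set:
    """Return the set of sensitive-data categories found in the text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

def allow_transfer(text: str) -> bool:
    """Go/no-go decision: block when any sensitive category is detected."""
    return not classify(text)
```

Real DLP engines add context (file type, destination, user role) and fingerprinting of known documents on top of this kind of content matching, which keeps false positives manageable.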
Secure Sockets Layer and Transport Layer
Security Inspection
• The process of breaking certificate-based network encryption is known as SSL inspection.
• SSL/TLS inspection is the process of interrupting encrypted session between an end
user and a secure web-based server for the purposes of breaking data encryption and
inspecting the contents of the message traffic that is transmitted and received between
the user and the web server.
• The primary reason organizations may engage in SSL/TLS inspection is to prevent
sensitive data loss or exfiltration.
• In the practical world, SSL has been deprecated and is no longer considered secure. TLS
is the preferred replacement for SSL applications.