Module 2.3


11 Analyzing and Prioritizing Vulnerabilities
In this chapter you will learn:

■ How the Common Vulnerability Scoring System works
■ How to validate vulnerabilities and separate actual vulnerabilities from false positives
■ How the different contexts of vulnerabilities can affect their severity
11.1 Common Vulnerability Scoring System
• CVSS is a framework designed to standardize the severity ratings for vulnerabilities.
• This system ensures accurate qualitative measurement so that users can better
understand the impact of these weaknesses.
• CVSS only provides a qualitative measurement for vulnerability severity and should
not be used as a measurement for overall risk.
• CVSS uses three metric groups: base, temporal, and environmental.
• The CVSS is managed by FIRST.org, a professional organization focused on assisting computer security incident response teams.
11.1.1 Base Metric Group
• The base metric group of the CVSS is the group that generates the overall numerical
score indicating the severity of a particular vulnerability.
• This group describes characteristics of a vulnerability that do not change over time
and are consistent across user environments.
• It consists of three sets of metrics: exploitability metrics, scope, and impact metrics.
• Exploitability metrics describe the details of how exploitable a particular
vulnerability may be. It includes Attack Vector, Attack Complexity, Privileges Required,
and User Interaction.
• Impact metrics describe how the standard three goals of security—confidentiality,
integrity, and availability—are affected.
• The Scope metric describes whether exploiting one vulnerability can affect components beyond the vulnerable one.
Attack Vector
• The Attack Vector (AV) metric describes how exploitation of a given vulnerability could
happen.
• The possible values for the AV metric are as follows:
Network (N): The vulnerability is connected in some way to the network stack
and can be exploited remotely.
Adjacent (A): The vulnerability is connected to the network stack but is limited
to the protocol used or to a logically adjacent network.
Local (L): The vulnerability is not connected to the network stack, and the
attack vector is executed via permission or privileges assigned to the resource, such
as the ability to read, write, or execute locally, or via a remote connection.
Physical (P): For the vulnerability to be exploited, the attacker must physically
interact with the vulnerable component.
Attack Complexity
• The Attack Complexity (AC) metric characterizes any factors that are not within the
attacker’s control and must be favorable for the exploit to be successful.
• There are only two values for the AC metric.
• A value of Low (L) means that there are generally no external unfavorable conditions
that exist that might prevent the attacker from being successful.
• The other value is High (H), which means that a successful attack depends on factors
that the attacker cannot control or affect but may have to navigate in order to be
successful.
Privileges Required
• Privileges Required (PR) is the metric that indicates the level of privileges an attacker
will require before they are successful in exploiting a vulnerability.
• Metric values here are:
• None (N), which indicates that the attacker requires no special privileges;
• Low (L), indicating that the attacker requires only basic user or resource owner
privileges; and
• High (H), which indicates that the attacker requires significant privileges, such as
administrative rights, to execute the attack.
• As a general rule, the fewer privileges required for the attacker to exploit a
given vulnerability, the higher the potential base score.
User Interaction
• The User Interaction (UI) metric describes the requirement for a user to interact with
the exploit, either intentionally or inadvertently, to cause a successful exploitation of
the vulnerability.
• The possible metric values are:
• None (N), indicating that the vulnerability could be exploited without any user action
• Required (R), indicating that user interaction is required for the successful
exploitation of the vulnerability.
• Note that if no user interaction is required, this results in a much higher base score.
Scope
• The scope metric indicates whether exploiting a vulnerability in one component can affect resources beyond that component’s own security scope.
• This metric tells you whether or not a vulnerability is isolated to only its own security
context.
• If a vulnerability in a particular application could affect other components or resources
of another application or system, then its scope increases.
• The two possible values for the scope metric are
• Unchanged (U) and
• Changed (C). A value of Changed, indicating that the vulnerability can affect
components outside of its own security scope, causes the base score to be higher.
Impact
• CVSS measures impact in the base metric group with regard to confidentiality,
integrity, and availability
• Confidentiality and integrity values measure impacts that affect the data itself.
• The availability values refer to the actual operation of a system or service, not the
availability of the data itself.
• The confidentiality metric, measuring the impact on information access or disclosure
by unauthorized entities, can have a value of High (H), Low (L), or None (N).
• The integrity metric measures any change of data integrity within the system. As with
confidentiality, the metric values are High (H), Low (L), and None (N).
• The availability metric applies to the loss of availability of the service or system
affected by the vulnerability and can have a value of High (H), Low (L), or None (N).
11.1.2 Temporal Metric Group
• The temporal metric group describes vulnerability characteristics that may change
over time.
• This group consists of three metrics: Exploit Code Maturity (E), Remediation Level (RL),
and Report Confidence (RC).
Exploit Code Maturity
• The Exploit Code Maturity (E) metric describes the likelihood of the vulnerability being
attacked and successfully exploited, in relation to the availability of exploitable code.
• The possible values for the E metric are as follows:
High (H): Functional exploit code exists, is reliable, and is widely available, or no
exploit is required (the vulnerability can be triggered manually), and the code works in every situation.
Functional (F): There is functional exploit code available, and the code
commonly works in systems where the vulnerability exists.
Proof-of-Concept (P): Proof-of-concept exploit code is available but may or may
not work in all situations and may require substantial modification, depending on the
system and its environment.
Unproven (U): The exploit is only theoretical, or there is no known exploit code
available.
Remediation Level
• Remediation Level (RL) refers to the availability of a workaround, temporary fix, or
official vendor solution, such as a patch, to remediate a vulnerability.
• The five possible values for RL are as follows:
Unavailable (U): There is no known remediation solution available, or it cannot
be applied.
Workaround (W): There is a temporary fix available, but it’s not officially
supported by the vendor and may come from nonvendor sources.
Temporary Fix (T): A temporary fix, officially released by the vendor, is
available. This may take the form of a temporary hotfix, tool, or other workaround.
Official Fix (O): An officially released vendor solution is available, typically in the
form of an official patch or upgrade.
Not Defined (X): There is insufficient information available to select one of the other values.
Report Confidence
• The Report Confidence (RC) metric indicates the degree of confidence that the
vulnerability actually exists and the veracity of its technical details.
• The four possible metric values for RC include the following:
Confirmed (C): Detailed information on the vulnerability exists that indicates the
vulnerability, and its exploitation can be reproduced; vulnerability research can be
independently verified, or the vendor has verified the vulnerability.
Reasonable (R): Significant details regarding the vulnerability have been
researched and published, but the root cause is not completely known, or researchers
do not have full access to the source code to confirm all the details that may lead to
exploitation of the vulnerability.
Unknown (U): There are anecdotal indications that the vulnerability is present
and can be exploited, but those assertions are unverifiable and uncertain.
Not Defined (X): There is insufficient information to select one of the other values.
11.1.3 Environmental Metric Group
• The environmental metric group allows the end user to modify the CVSS base score as
it applies to their unique environment.
• The key to this modification is how the organization values confidentiality, integrity,
and availability, which are indicated in the score as CR, IR, and AR, respectively.
• The values assigned to each of the three elements in this metric group are High (H),
Medium (M), Low (L), and Not Defined (X).
• These values are assigned based on the effect that a loss of confidentiality, integrity,
or availability would have on the organization.
• Note that both temporal group and environmental group metrics are optional in CVSS
scoring.
11.1.4 CVSS Scores
• Overall CVSS scores are derived by expressing what is called a vector, which
essentially describes all of these factors in the shorthand notation detailed previously.
• For example, a vulnerability with base metric values of “Attack Vector: Local, Attack
Complexity: High, Privileges Required: High, User Interaction: None, Scope: Changed,
Confidentiality: Low, Integrity: Low, Availability: None” and no expressed
temporal or environmental group metrics would be expressed as the following vector:

CVSS:3.1/AV:L/AC:H/PR:H/UI:N/S:C/C:L/I:L/A:N
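A vector like this maps to a numerical base score via the equations in the CVSS v3.1 specification. The following sketch transcribes the metric weight tables and the Roundup function from that specification; the function name `cvss31_base_score` is chosen here for illustration and is not part of any standard API:

```python
import math

# Metric value tables from the CVSS v3.1 specification.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    # Privileges Required depends on Scope: (unchanged, changed)
    "PR": {"N": (0.85, 0.85), "L": (0.62, 0.68), "H": (0.27, 0.50)},
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},  # shared by C, I, and A
}

def roundup(x: float) -> float:
    """Ceiling to one decimal place, as defined in CVSS v3.1 Appendix A."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def cvss31_base_score(vector: str) -> float:
    # Parse "CVSS:3.1/AV:N/AC:L/..." into a metric -> value mapping.
    m = dict(part.split(":") for part in vector.split("/")[1:])
    changed = m["S"] == "C"
    pr = WEIGHTS["PR"][m["PR"]][1 if changed else 0]
    exploitability = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                      * pr * WEIGHTS["UI"][m["UI"]])
    # Impact Sub-Score combines the three C/I/A impact metrics.
    iss = 1 - ((1 - WEIGHTS["CIA"][m["C"]])
               * (1 - WEIGHTS["CIA"][m["I"]])
               * (1 - WEIGHTS["CIA"][m["A"]]))
    if changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    raw = min(1.08 * (impact + exploitability), 10) if changed else min(impact + exploitability, 10)
    return roundup(raw)

# A fully network-exploitable vulnerability with high impact across C/I/A:
print(cvss31_base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
```

Note how the Scope value changes both the Privileges Required weight and the impact formula, which is why a Changed scope tends to raise the base score.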
11.2 Validating Vulnerabilities
• Vulnerability scanners do not always pick up every single vulnerability, nor do they
always accurately detect vulnerabilities.
• It is up to the analyst to review and make sense of vulnerability data and findings
before passing that information on to others in the organization.
• The two most important outcomes of the review process are to determine the validity
of reported vulnerabilities and to determine exceptions to policies that may affect how
vulnerabilities are mitigated.
True Positives
• A true positive is a piece of data that can be verified and validated as correct.
• Distinguishing true positives from false positives, as well as their opposites, true and
false negatives, can be a tricky part of vulnerability remediation and prioritization.
• Once you receive data from vulnerability scan reports, you may find that verifying the
results is a straightforward process.
• The goal of major databases such as the National Vulnerability Database (NVD) is to
publish common vulnerabilities and exposures (CVEs) for public awareness. These
databases are incredibly useful but do not always have complete information because
many vulnerabilities are still being researched. Therefore, you should use
supplemental sources in your research, such as Bugtraq, OWASP, and CERT.
False Positives
• A false positive (sometimes called a Type I error) is data that shows that a particular vulnerability
exists, but in fact, it does not. Usually, these issues arise from a misconfigured system or application,
or even from the vulnerability scanning tool itself. An example of this would be the tool reporting a
misconfiguration in the Apache web server on a system, when in reality Apache is not installed on
that system. This may happen when, for instance, some other service is listening on the standard
HTTP port and the scanner fingerprints the system as running a web server even though none is installed.
False Positives (cont.)
• Although vulnerability scanners have improved over the years, operating system (OS) and
software detection in vulnerability scanners isn’t perfect. This makes detection in environments
with custom operating systems and devices particularly challenging. Many devices use lightweight
versions of Linux and Apache web server that are burned directly onto the device’s read-only
memory. You should expect to get a higher number of alerts in these cases because the
vulnerability scanner may not be able to tell exactly what kind of system it is. However, you should
also take care not to dismiss alerts immediately on these systems either. Sometimes a well-known
vulnerability may exist in unexpected places because of how the vulnerable software packages
were ported over to the new system.
True Negatives
• A true negative is an odd instance to understand. This is when the vulnerability scanner does not
detect an issue, and it’s because the issue truly doesn’t exist. So why would this be of concern?
Based on research and other indicators from the system, you may suspect that a vulnerability is
there, so you test for it.
• When the vulnerability scanner detects no actual vulnerability, it may cause you to wonder if the
tool is reporting accurately. You may be experiencing issues from the system, such as too much
bandwidth consumption or high processor utilization. While troubleshooting these issues, it’s not
uncommon to scan using a vulnerability tool. The true negative rules out the possibility that any
vulnerabilities you suspect exist are actually on the system. This is when you should start looking
at other causes—possibly hardware or software failures.
False Negatives
• A false negative, also referred to as a Type II error, is a result that indicates that no vulnerability is
present when, in fact, a vulnerability does actually exist. Reasons behind this outcome include a
lack of technical capability to detect the vulnerability.

• It could well be that a vulnerability is too new and that detection rules for the scanner do not yet
exist. Or perhaps an incorrect type of vulnerability scan was initiated by the analyst.
False Negatives (cont.)
• As troublesome as false positives are in terms of the effort expended to prove they are false, a
false negative is far worse, because in this case a vulnerability actually exists but is undetected, so
it will not be remediated and will be possibly exploitable. If other indicators lead you to believe that
there is an actual vulnerability present but the scanning tool does not detect it, you may have to do
further research and look at other indicators that could confirm the existence of the vulnerability,
such as critical file changes, processor utilization, bandwidth utilization, changes to privileges, and
so on. The worst false negative is the one that you never even know exists.
False Negatives (cont.)
• Exam Tip: When validating vulnerabilities, keep in mind that true positives are actual
vulnerabilities and should be included in your remediation strategy. False positives may use up a
lot of your time and energy simply to prove that they aren’t real vulnerabilities. True negatives
occur because there actually isn’t a vulnerability, but you may be led to believe that there is one
due to performance issues related to hardware or software. False negatives are the worst of all
vulnerabilities because there may in fact be a vulnerability that exists in the system that cannot be
detected through normal vulnerability scanning. You’ll likely see many scenarios on the exam that
focus on false positives, so understanding how to interpret data to determine if a false positive
exists is important.
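The four outcomes above form a simple truth table: what the scanner reported versus whether the vulnerability is actually present. As a minimal sketch (the function name and boolean inputs are hypothetical, for illustration only):

```python
def classify_finding(reported: bool, present: bool) -> str:
    """Map a scanner result against ground truth to one of the four outcomes."""
    if reported and present:
        return "true positive"   # real vulnerability, correctly flagged: remediate it
    if reported and not present:
        return "false positive"  # Type I error: flagged, but no real vulnerability
    if not reported and present:
        return "false negative"  # Type II error: real vulnerability the scan missed
    return "true negative"       # nothing flagged, and nothing is actually there

print(classify_finding(True, False))  # false positive
```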
Examining True Positives
• Compare to Best Practices or Compliance
• Reconcile Results
• Review Related Logs and/or Other Data Sources
• Determine Trends

Gaining as much information as possible about true-positive vulnerabilities will assist you in making
better remediation strategy decisions for them. Of particular importance is determining how a
vulnerability affects your compliance with governance requirements, since regulations or policy may require
that you mitigate a vulnerability faster, regardless of severity, system criticality, or other factors.
Context Awareness

• Context awareness is another way of saying that you should weigh the likelihood of the
vulnerability actually being exploited, given the situation, versus its impact if it is exploited.
Infrastructure architecture and design weigh heavily on the situation in this case. Whether the
system that may contain the vulnerability is an internal system, an external system, or even
physically or logically isolated, the risk should be weighed in prioritizing vulnerability remediation.
Context Awareness

Internal
• Typically, internal systems contain the most sensitive information and are the most critical to
business operations. It only makes sense, then, that they would also be the most well protected.
This means that, if we go back to basic risk concepts, the impact to the system if it were
compromised by exploiting a vulnerability residing on that system would likely be very high.
• But the likelihood of an attacker being able to exploit the system may be very low due to
mitigations such as strong authentication and encryption controls, restrictive permissions, multiple
layers of defense, and so on. These factors should be considered in the situation when deciding
how quickly a vulnerability must be remediated.
Context Awareness

External
• External systems likely contain less sensitive information and may be less critical to business
operations. This may not be the case, however, for publicly available web servers where
customers input transactions such as ordering goods and services, for instance. Therefore, these
external systems should also be highly protected, given the sensitivity of the information that is
processed on them, as well as the criticality to key business processes.
• These systems may require much faster vulnerability remediation as well since they are more
exposed to attacks than internal systems and may be reasonably expected to be compromised a
bit faster. Again, it really depends on two primary factors: information sensitivity and criticality to
business processes.
Context Awareness

Isolated
• Isolated systems are those that are logically and physically segmented or otherwise separated
from other networks and hosts. They may be completely physically separated, with no external
connections, or architected such that they only have minimum unidirectional connections to the
internal network and are separated by internal firewalls or other security devices, VLANs, or other
logical means.
• These segments are typically harder for an attacker to reach, except for a malicious insider, and
may not fall victim to the typical network-based attacks. Due to their isolation, the risk is much
lower, because the likelihood of a vulnerability being exploited on one of these hosts is much
lower. Vulnerabilities on these hosts may be prioritized at a lower remediation priority than others
since they are not easily exploitable, but they should still be mitigated as soon as practical.
Exploitability and Weaponization

• Another consideration in determining how quickly you need to remediate a vulnerability is how
easily the vulnerability is exploitable and weaponized. This relates somewhat to attack complexity,
so the CVSS Attack Complexity metric and other research you perform about the vulnerability will
help inform this decision. Generally, the more complex the attack required for exploitation is, the
less likely it is to succeed, and the less likely the vulnerability is to be exploited.
Exploitability and Weaponization

Asset Value
• Asset value is a very important consideration in the decision-making process for vulnerability
remediation and prioritization. Obviously, the higher value the asset, the more you will focus on
remediating vulnerabilities that are found in the system and its software. Asset value, remember, is
related to two important factors: sensitivity and criticality.
• Sensitivity refers to the confidentiality of the information or system, in that the more sensitive the
information, the more you will want to protect it from unauthorized access.
• Criticality means that the asset is very important to your business processes. Most of the time,
assets will be both sensitive and critical, but this is not always the case. For example, personal
information that resides on the system may be collected incidentally to the business but is not very
critical to its processes. However, that information still has to be stringently protected.
Exploitability and Weaponization

Asset Value (Cont.)


• Asset sensitivity and criticality, along with vulnerability severity, are likely the most important
factors in making vulnerability remediation decisions. These are risk-based decisions that must
account for the possibility of unauthorized access to sensitive information and the unavailability of
systems needed for critical business processes. Either of these factors could drastically increase
the priority of remediating vulnerabilities, but it has to be balanced with the need for users to
access systems while they are down for remediation.
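The factors discussed in this section (severity, exposure context, and asset value) can be combined into a rough prioritization score. A minimal sketch, with entirely hypothetical weights and names; a real program would tune these to organizational policy rather than use fixed numbers:

```python
# Hypothetical exposure multipliers reflecting the contexts discussed above:
# external systems are most reachable, isolated systems least.
EXPOSURE_WEIGHT = {"external": 1.0, "internal": 0.7, "isolated": 0.4}

def remediation_priority(cvss_score: float, exposure: str, asset_value: int) -> float:
    """Scale CVSS severity by exposure context and asset value (here, 1-5)."""
    return cvss_score * EXPOSURE_WEIGHT[exposure] * asset_value

findings = [
    {"id": "web-01",   "cvss": 7.5, "exposure": "external", "asset_value": 5},
    {"id": "scada-03", "cvss": 9.8, "exposure": "isolated", "asset_value": 5},
    {"id": "hr-02",    "cvss": 7.5, "exposure": "internal", "asset_value": 3},
]

# Sort findings into a remediation queue, highest priority first.
for f in sorted(findings,
                key=lambda f: remediation_priority(f["cvss"], f["exposure"], f["asset_value"]),
                reverse=True):
    print(f["id"])
```

Note that the exposed, high-value web server outranks the isolated system despite the latter's higher CVSS score, which is exactly the context-over-raw-severity trade-off described above.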
Zero Day

• A zero day refers to either a vulnerability or an exploit never before seen in public. A zero-day vulnerability is a
flaw in a piece of software that the vendor is unaware of and has not issued a patch or advisory
for. The code written to take advantage of this flaw is called the zero-day exploit.

• Zero-day vulnerabilities must be carefully and constantly monitored because often they are of
sufficient severity to warrant immediate remediation, especially when discovered on sensitive or
critical systems.
