
APPLICATIONS SECURITY

Mr. MASSOUD, IT SECURITY EXPERT


MODULE 4: Website Basics and
Footprinting
• SECURITY TESTING METHODOLOGIES
• SOFTWARE VULNERABILITIES REVIEW
• LINUX BASICS
• WEB APPLICATION OVERVIEW
SECURITY TESTING METHODOLOGIES
1. SECURITY TESTING METHODOLOGIES

• OSSTMM
• NIST
• PTES
• ISSAF
• OWASP
1.1 PTES (Penetration Testing Execution Standard)
The PTES (Penetration Testing Execution Standard) recommends a structured
approach to a penetration test. On one hand, it guides testers through the
phases of a penetration test, beginning with the communication, information
gathering, and threat modeling phases. On the other hand, it has penetration
testers acquaint themselves with the organization's processes, which helps
them identify the areas most vulnerable to attack.
1.1 PTES (Penetration Testing Execution Standard)
PTES provides guidelines to the testers for post-exploitation testing. If required,
they can validate the successful fixing of previously identified vulnerabilities.
The standard defines seven phases, with recommendations to rely on at each,
to help ensure a successful penetration test.
1.2 OSSTMM (Open Source Security
Testing Methodology Manual)
The OSSTMM (Open Source Security Testing Methodology Manual) is a recognized
framework that details industry standards. The framework provides a scientific
methodology for network penetration testing and vulnerability assessment. It is a
comprehensive guide to the network development team and penetration testers
to identify security vulnerabilities present in the network.
1.2 OSSTMM (Open Source Security
Testing Methodology Manual)
The OSSTMM methodology enables penetration testers to perform customized
testing that fits the technological and specific needs of the organization. A
customized assessment gives an overview of the network’s security, along with
reliable solutions to make appropriate decisions to secure an organization’s
network.
1.3 OWASP (Open Web Application
Security Project)
The OWASP (Open Web Application Security Project) is another recognized
standard that empowers organizations to control application vulnerabilities. This
framework helps identify vulnerabilities in web and mobile applications. At the
same time, the OWASP also helps uncover logical flaws arising from unsafe
development practices.
1.3 OWASP (Open Web Application
Security Project)
The updated OWASP testing guide provides over 66 controls to identify and assess
vulnerabilities across the numerous functionalities found in the latest applications
today. In addition, it equips organizations with the resources to secure their
applications and prevent potential business losses. By leveraging the OWASP
standard in a security assessment, the penetration tester can reduce the
application's vulnerabilities to a minimum. Besides, the standard yields realistic
recommendations tied to the specific features and technologies in the applications.
SOFTWARE VULNERABILITIES REVIEW
2.1 Top 10 Web Application Security
Risks
1. Injection. Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data is
sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the
interpreter into executing unintended commands or accessing data without proper authorization.
2. Broken Authentication. Application functions related to authentication and session management are
often implemented incorrectly, allowing attackers to compromise passwords, keys, or session tokens, or
to exploit other implementation flaws to assume other users’ identities temporarily or permanently.
3. Sensitive Data Exposure. Many web applications and APIs do not properly protect sensitive data, such
as financial, healthcare, and PII. Attackers may steal or modify such weakly protected data to conduct
credit card fraud, identity theft, or other crimes. Sensitive data may be compromised without extra
protection, such as encryption at rest or in transit, and requires special precautions when exchanged
with the browser.
4. XML External Entities (XXE). Many older or poorly configured XML processors evaluate external entity
references within XML documents. External entities can be used to disclose internal files using the file
URI handler, internal file shares, internal port scanning, remote code execution, and denial of service
attacks.
5. Broken Access Control. Restrictions on what authenticated users are allowed to do are often not
properly enforced. Attackers can exploit these flaws to access unauthorized functionality and/or data,
such as access other users’ accounts, view sensitive files, modify other users’ data, change access rights,
etc.
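To ground risk #1, here is a minimal sketch of OS command injection, with a made-up input value: the application interpolates untrusted input into a shell command string, so a `;` in the input smuggles in a second command.

```shell
# Untrusted, attacker-controlled input (hypothetical value):
user_input='foo; id'

# Vulnerable pattern: the input is interpolated, unsanitized, into a
# command string that is handed to a shell interpreter.
sh -c "echo Searching for $user_input"
# The shell now sees two commands: `echo Searching for foo` and `id`,
# so the attacker's `id` runs with the application's privileges.
```

The fix, as the text says, is to keep untrusted data out of the interpreter: validate input and pass it as data (for example, as a quoted argument or a parameterized query), never by string concatenation.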
2.1 Top 10 Web Application Security
Risks
6. Security Misconfiguration. Security misconfiguration is the most commonly seen issue. This is commonly a
result of insecure default configurations, incomplete or ad hoc configurations, open cloud storage,
misconfigured HTTP headers, and verbose error messages containing sensitive information. Not only must all
operating systems, frameworks, libraries, and applications be securely configured, but they must be
patched/upgraded in a timely fashion.
7. Cross-Site Scripting (XSS). XSS flaws occur whenever an application includes untrusted data in a new web page
without proper validation or escaping, or updates an existing web page with user-supplied data using a browser
API that can create HTML or JavaScript. XSS allows attackers to execute scripts in the victim’s browser which can
hijack user sessions, deface web sites, or redirect the user to malicious sites.
8. Insecure Deserialization. Insecure deserialization often leads to remote code execution. Even if
deserialization flaws do not result in remote code execution, they can be used to perform attacks, including
replay attacks, injection attacks, and privilege escalation attacks.
9. Using Components with Known Vulnerabilities. Components, such as libraries, frameworks, and other software
modules, run with the same privileges as the application. If a vulnerable component is exploited, such an attack
can facilitate serious data loss or server takeover. Applications and APIs using components with known
vulnerabilities may undermine application defenses and enable various attacks and impacts.
10. Insufficient Logging & Monitoring. Insufficient logging and monitoring, coupled with missing or ineffective
integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to
more systems, and tamper, extract, or destroy data. Most breach studies show time to detect a breach is over
200 days, typically detected by external parties rather than internal processes or monitoring.
LINUX BASICS
3.1 Linux Key Components

Basically, the Linux operating system has three primary components:

• Kernel – The kernel is the core part of Linux. It is responsible for all major
activities of the operating system. It consists of various modules and interacts
directly with the underlying hardware. The kernel provides the required abstraction
to hide low-level hardware details from system and application programs.

• System Library – System libraries are special functions or programs through which
application programs and system utilities access the kernel's features. These
libraries implement most of the functionality of the operating system and do not
require the kernel module's code access rights.

• System Utility – System utility programs are responsible for performing
specialized, individual-level tasks.
3.1 Linux Key Components

The diagram below provides the overall design view of the Linux components.

[Figure: layered view of the Linux components]
3.1 Linux Key Components

Linux Operating System Architecture generally consists of the following layers:

• Hardware layer – Hardware consists of all peripheral devices (RAM/HDD/CPU,
etc.).

• Kernel – Core component of the operating system; interacts directly with
hardware and provides low-level services to upper-layer components.

• Shell – An interface to the kernel, hiding the complexity of the kernel's
functions from users. Takes commands from the user and executes the kernel's
functions.

• Utilities – Utility programs that give the user most of the functionality of an
operating system.
3.1 Linux Key Components

Following are some of the important features of the Linux operating system:

• Portable – Portability means the software can work on different types of
hardware in the same way. The Linux kernel and application programs support
installation on any kind of hardware platform.

• Open Source – Linux source code is freely available, and it is a
community-based development project. Multiple teams work in collaboration to
enhance the capabilities of the Linux operating system, and it is continuously
evolving.

• Multi-User – Linux is a multi-user system, meaning multiple users can access
system resources, like memory/RAM/application programs, at the same time.
3.1 Linux Key Components

• Multiprogramming – Linux is a multiprogramming system, meaning multiple
applications can run at the same time.

• Hierarchical File System – Linux provides a standard file structure in which
system files and user files are arranged.

• Shell – Linux provides a special interpreter program which can be used to
execute commands of the operating system. It can be used to do various types of
operations, call application programs, etc.

• Security – Linux provides user security using authentication features, like
password protection, controlled access to specific files, and encryption of data.
3.2 Linux Shell

The shell command interpreter is the command line interface between the user
and the operating system. It is what you will be presented with once you have
successfully logged into the system.

Linux Shell allows you to enter commands that you would like to run, and also
allows you to manage the jobs once they are running. The shell also enables you
to make modifications to your requested commands.
3.2 Linux Shell

Different types of Shell

The Bourne-Again shell (bash) is not the only shell command interpreter available.
Indeed, it is descended from the Bourne shell (sh), written by Steve Bourne of Bell
Labs. The Bourne shell is available on all Unix variants and is the most suitable for
writing portable shell scripts.
3.2 Linux Shell

The TC shell (tcsh) is an extension of the C shell (csh).

A very popular shell on most commercial variants of Unix is the Korn shell (ksh).
Written by David Korn of Bell Labs, it includes features from both the Bourne
shell and the C shell.

Last, but not least, one of the most powerful and interesting shells, although one
that hasn't been standardized on any particular distribution, is the Z shell (zsh).
It combines the best of what is available from the csh line of shell utilities with
the best of the Bourne/bash line.
3.3 Linux File System Hierarchy

The Linux file system is organized in a hierarchy similar to the one depicted
below. You may not see this entire structure if you are working with a simulated
Linux environment; however, the layout presented below is the general one and
is considered the universal structure.
3.3 Linux File System Hierarchy

[Figure: tree diagram of the Linux file system hierarchy]

• The “/” directory is known as the root of the file system, or the root directory (not to be
confused with the root user though).

• The “/boot” directory contains all the files that Linux requires in order to bootstrap the
system; this is typically just the Linux kernel and its associated driver modules.

• The “/dev” directory contains all the device file nodes that the kernel and system would
make use of.

• The “/bin”, “/sbin” and “/lib” directories contain critical binary (executable) files which
are necessary to boot the system up into a usable state, as well as utilities to help repair the
system should there be a problem.

• The “/bin” directory contains user utilities which are fundamental to both single-user and
multi-user environments. The “/sbin” directory contains system utilities.
3.3 Linux File System Hierarchy

• The “/usr” directory was historically used to store “user” files, but its use has changed in
time and is now used to store files which are used during everyday running of the machine, but
which are not critical to booting the machine up. These utilities are similarly broken up into
“/usr/sbin” for system utilities, and “/usr/bin” for normal user applications.

• The “/etc” directory contains almost all of the system configuration files. This is probably
the most important directory on the system; after an installation the default system
configuration files are the ones that will be modified once you start setting up the system to suit
your requirements.

• The “/home” directory contains all the users’ data files.

• The “/var” directory contains the user files that are continually changing.

• The “/usr” directory contains the static user files.
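Most of the directories above can be inspected directly on any Linux system; a quick sketch (a minimal or simulated environment may lack some entries, such as /boot):

```shell
# List a few of the standard top-level directories and their permissions.
ls -ld / /bin /etc /usr /var
# Count the entries directly under the root directory.
ls / | wc -l
```

Running this on your own machine is a good way to confirm how closely it follows the standard hierarchy.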


3.4 Some Linux Commands and their usage

pwd : Use the pwd command to find out the path of the current working
directory (folder) you’re in. The command will return an absolute (full) path,
which is basically a path of all the directories that starts with a forward slash (/).
An example of an absolute path is /home/username.
cd : To navigate through the Linux files and directories, use the cd command. It
requires either the full path or the name of the directory, depending on the
current working directory that you’re in.
ls : The ls command is used to view the contents of a directory. By default, this
command will display the contents of your current working directory
cat : cat is one of the most frequently used commands in Linux. It is used to list
the contents of a file on the standard output (stdout). To run this command, type
cat followed by the file’s name and its extension. For instance: cat file.txt.
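A short session tying pwd, cd, ls, and cat together (the directory and file names are invented for the example):

```shell
mkdir -p /tmp/cmd_demo              # scratch directory for the example
cd /tmp/cmd_demo                    # cd: move into it

pwd                                 # prints the absolute path: /tmp/cmd_demo

printf 'hello\nworld\n' > file.txt  # make a small file to look at
ls                                  # lists the directory contents: file.txt
cat file.txt                        # prints the file contents on stdout
```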
3.4 Some Linux Commands and their usage

cp : Use the cp command to copy files from the current directory to a different
directory. For instance, the command cp scenery.jpg /home/username/Pictures
would create a copy of scenery.jpg (from your current directory) into the
Pictures directory.
mv : The primary use of the mv command is to move files, although it can also be
used to rename files.
mkdir : Use the mkdir command to make a new directory — if you type mkdir Music it
will create a directory called Music.
rmdir : If you need to delete a directory, use the rmdir command. However,
rmdir only allows you to delete empty directories.
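The four commands above can be combined in a small sketch (the paths and file names are illustrative):

```shell
cd /tmp                              # work somewhere writable
mkdir -p demo_docs                   # mkdir: create a new directory
printf 'sample\n' > demo_docs/a.txt
cp demo_docs/a.txt demo_docs/b.txt   # cp: copy a.txt to b.txt
mv demo_docs/b.txt demo_docs/c.txt   # mv: here used to rename b.txt to c.txt
ls demo_docs                         # shows: a.txt  c.txt
rm demo_docs/a.txt demo_docs/c.txt   # empty the directory first, because...
rmdir demo_docs                      # ...rmdir only removes empty directories
```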
3.4 Some Linux Commands and their usage

• rm
• touch
• locate
• find
• head
• tail
• grep
• chmod
• chown
• jobs
• kill
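The list above gives no usage, so here is a brief sketch exercising several of these commands (the file name is invented; locate is omitted because it depends on a prebuilt index):

```shell
touch /tmp/log_demo.txt                     # touch: create an empty file
printf 'alpha\nbeta\ngamma\n' > /tmp/log_demo.txt
head -n 1 /tmp/log_demo.txt                 # first line: alpha
tail -n 1 /tmp/log_demo.txt                 # last line: gamma
grep -n beta /tmp/log_demo.txt              # pattern search, prints: 2:beta
chmod 600 /tmp/log_demo.txt                 # restrict permissions to the owner
find /tmp -maxdepth 1 -name log_demo.txt    # find the file by name under /tmp
rm /tmp/log_demo.txt                        # rm: delete the file
```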
WEB APPLICATION OVERVIEW
4.1 HTTP protocol

The underlying protocol that carries web application traffic between the web
server and the client is known as the Hypertext Transfer Protocol (HTTP).
HTTP/1.1, the most common implementation of the protocol, is defined in RFCs
7230-7237, which replaced the older version defined in RFC 2616. A newer
version, HTTP/2, was published in May 2015, and it is defined in RFC 7540. The
first release, HTTP/1.0, is now considered obsolete and is not recommended.
4.1.1 Knowing an HTTP request and
response
An HTTP request is the message a client sends to the server in order to get some
information or execute some action. It has two parts separated by a blank line:
the header and body. The header contains all of the information related to the
request itself, response expected, cookies, and other relevant control
information, and the body contains the data exchanged. An HTTP response has
the same structure, changing the content and use of the information contained
within it.
4.1.2 The request header

Here is an HTTP request captured using a web application proxy when browsing
to www.bing.com:
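The original slide showed the capture as a screenshot, which is not reproduced here; the following is a reconstructed sketch of what such a request looks like (the header values are illustrative, not the actual capture):

```http
GET / HTTP/1.1
Host: www.bing.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0
Accept: text/html,application/xhtml+xml
Accept-Language: en-US,en;q=0.5
Cookie: SRCHD=AF=NOFORM; SRCHUID=V=2
Connection: keep-alive
```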
4.1.2 The request header

Host: This specifies the host and port number of the resource being requested. A web server may
contain more than one site, or it may contain technologies such as shared hosting or load
balancing. This parameter is used to distinguish between different sites/applications served by
the same infrastructure.
User-Agent: This field is used by the server to identify the type of client (that is, web browser)
which will receive the information. It is useful for developers in that the response can be adapted
according to the user's configuration, as not all features in the HTTP protocol and in web
development languages will be compatible with all browsers.
Cookie: Cookies are temporary values exchanged between the client and server and used, among
other reasons, to keep session information.
Content-Type: This indicates to the server the media type contained within the request's body.
Authorization: HTTP allows for per-request client authentication through this parameter. There
are multiple modes of authenticating, with the most common being Basic, Digest, NTLM, and
Bearer.
4.1.3 The response header

Upon receiving a request and processing its contents, the server may respond
with a message such as the one shown here:
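The response screenshot is likewise not reproduced, so here is a hand-written example of the kind of response the fields below describe (all values are illustrative, and the body is elided):

```http
HTTP/1.1 200 OK
Cache-Control: private, max-age=0
Content-Type: text/html; charset=utf-8
Set-Cookie: SESSIONID=1a2b3c4d; Path=/; HttpOnly
Content-Length: 2391

<!DOCTYPE html>
...
```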
4.1.3 The response header

Status code: There is no field named status code, but the value is passed in the
header. The 2xx series of status codes are used to communicate a successful operation
back to the web browser. The 3xx series is used to indicate redirection when a server
wants the client to connect to another URL when a web page is moved. The 4xx series
is used to indicate an error in the client request and that the user will have to modify
the request before resending. The 5xx series indicates an error on the server side, as
the server was unable to complete the operation. In the preceding header, the status
code is 200, which means that the operation was successful. A full list of HTTP status
codes can be found at https://developer.mozilla.org/en-US/docs/Web/HTTP/Status.
Set-Cookie: This field, if defined, will establish a cookie value in the client that can
be used by the server to identify the client and store temporary data.
Cache-Control: This indicates whether or not the contents of the response (images,
script code, or HTML) should be stored in the browser's cache to reduce page loading
times and how this should be done.
4.1.3 The response header

Server: This field indicates the server type and version. As this information may
be of interest to potential attackers, it is good practice to configure servers to
omit it from their responses, as is the case in the example header shown above.
Content-Length: This field will contain a value indicating the number of bytes in
the body of the response. It is used so that the other party can know when the
current request/response has finished.
4.2 HTTP methods

• GET
• POST
• TRACE
• PUT
• DELETE
• HEAD
• OPTIONS
4.2.1 GET method

The GET method is used to retrieve whatever information is identified by the URL
or generated by a process identified by it. A GET request can take parameters
from the client, which are then passed to the web application via the URL itself
by appending a question mark ? followed by the parameters' names and values.
As shown in the following header, when you send a search query for web
penetration testing in the Bing search engine, it is sent via the URL:
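The captured URL from the slide is not reproduced; a reconstructed sketch of such a request line, with the query passed after the `?`, would look like this:

```http
GET /search?q=web+penetration+testing HTTP/1.1
Host: www.bing.com
```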
4.2.2 POST method

The POST method is similar to the GET method, but instead of passing parameters
in the URL, it sends the data to the server in the body of the request. Since the
data is passed in the body of the request, it becomes more difficult for an
attacker to detect and attack the underlying operation. As shown in the following
POST request, the username (login) and password (pwd) are not sent in the URL
but rather in the body, which is separated from the header by a blank line:
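The referenced capture is missing from this copy, so here is a hypothetical POST request illustrating the point (the path, host, and credentials are invented):

```http
POST /login.php HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 24

login=admin&pwd=P@ssw0rd
```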
4.3 Cookies

In HTTP communication, a cookie is a single piece of information with a name,
value, and some behavior parameters, stored by the server in the client's
filesystem or web browser's memory. Cookies are the de facto standard
mechanism through which the session ID is passed back and forth between the
client and the web server. When using cookies, the server assigns the client a
unique ID by setting the Set-Cookie field in the HTTP response header. When the
client receives the header, it stores the value of the cookie (that is, the
session ID) in a local file or the browser's memory, and it associates it with
the website URL that sent it. When a user revisits the original website, the
browser sends the cookie value across, identifying the user.
4.3.1 Cookie parameters

In addition to the name and value of the cookie, there are several other
parameters set by the web server that define the reach and availability of the
cookie, as shown in the following response header:
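The response header from the slide is not reproduced, so here is a hypothetical one showing the parameters described below (the domain and values are invented, matching the Path example in the text):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Set-Cookie: SESSIONID=1a2b3c4d; Domain=email.com; Path=/mail; Secure; HttpOnly; Expires=Wed, 09 Jun 2027 10:18:14 GMT
```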
4.3.1 Cookie parameters

The following are details of some of the parameters:

Domain: This specifies the domain to which the cookie will be sent.
Path: To lock down the cookie further, the Path parameter can be specified. If
the domain specified is email.com and the path is set to /mail, the cookie would
only be sent to the pages inside email.com/mail.
HttpOnly: This parameter is set to mitigate the risk posed by Cross-Site
Scripting (XSS) attacks, as JavaScript won't be able to access the cookie.
Secure: If this is set, the cookie will only be sent over secure communication
channels, namely SSL and TLS.
Expires: The cookie will be stored until the time specified in this parameter.
THANKS
