CNS endsem
Advantages of NAT:
1. IP Address Conservation: Reduces the need for a large number of public IP
addresses, conserving the global IP address space.
2. Security: Hides internal IP addresses from external networks, providing a layer of
security against external threats.
3. Flexibility: Allows internal network changes without affecting external connections.
Disadvantages of NAT:
1. Performance Overhead: Introduces a slight delay due to the address translation
process.
2. Compatibility Issues: Some protocols and applications may not work well with NAT,
particularly those that embed IP address information within the payload.
Version: The first 4-bit header field informs about the current IP version in use,
which, in this case, is IPv4
Internet Header Length (IHL): The IHL has four bits that specify the number of 32-
bit words in the header – the minimum header length is 20 bytes, so the minimum
value of this field is five
Service Type: This field indicates how the IP packet should be queued and prioritized
during transmission (type of service / differentiated services)
Total Length: This is the total size of the header and data in bytes; the minimum
value is 20 bytes (a header with no data) and the maximum value is 65,535 bytes
Identification: If the IP datagram is fragmented (broken into smaller pieces), the ID
field helps identify the fragments and determine to which IP datagram they belong
IP Flags: This is a 3-bit field that uses a few possible configuration combinations of
control flags for fragmentation:
Bit 0 is reserved and always set to 0
Bit 1 represents the Don’t Fragment (DF) flag, which indicates that this packet
should not be fragmented
Bit 2 represents the More Fragments (MF) flag, which is set on all fragmented
packets except the last one
Fragmentation Offset (Fragment Offset): The Fragment Offset field takes up 13
bits and indicates where in the original packet the particular fragment belongs, i.e.,
the position of the fragment's data relative to the start of the original datagram
Time to Live (TTL): TTL limits the datagram's lifetime to prevent packets from
looping endlessly in the internet; the field is decremented at each hop, and
undeliverable datagrams are discarded automatically when it reaches zero
Protocol: This 8-bit field defines which protocol is used in the data portion of the
packet
Header Checksum: If there are any communication errors in the header, the Header
Checksum field detects them
Source IP Address: The 32-bit IPv4 address of the sender of the packet
Destination IP Address: The 32-bit IPv4 address of the receiver of the packet
Options: This optional feature is used when the value in the Internet Header Length
is greater than five, hence the header length field increases (it may contain Time
Stamp, Record Route or another optional field)
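The field layout above can be summarized as a C struct for reference. This is a minimal
sketch, not a production definition: it assumes the compiler adds no padding, and it
packs the 4-bit Version and IHL fields into a single byte instead of using bit-fields.

#include <stdint.h>

struct ipv4_header {
    uint8_t  version_ihl;     /* Version (4 bits) | IHL (4 bits, in 32-bit words) */
    uint8_t  type_of_service; /* Service Type / DSCP                              */
    uint16_t total_length;    /* header + data, in bytes                          */
    uint16_t identification;  /* groups fragments of one datagram                 */
    uint16_t flags_frag_off;  /* 3 flag bits | 13-bit fragment offset             */
    uint8_t  ttl;             /* Time to Live, decremented at each hop            */
    uint8_t  protocol;        /* protocol in the data portion, e.g. 6=TCP, 17=UDP */
    uint16_t header_checksum; /* checksum over the header only                    */
    uint32_t source_addr;     /* 32-bit source IPv4 address                       */
    uint32_t dest_addr;       /* 32-bit destination IPv4 address                  */
    /* Options follow here when IHL > 5.                                          */
};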
Class B
IP addresses belonging to class B are assigned to medium-sized to large-sized
networks.
The network ID is 16 bits long.
The host ID is 16 bits long.
The higher-order bits of the first octet of class B addresses are always set to 10. The
remaining 14 bits are used to determine the network ID. The 16 bits of host ID are used
to determine the host in any network. The default subnet mask for class B is 255.255.0.0.
Class B has a total of:
2^14 = 16,384 network addresses
2^16 – 2 = 65,534 host addresses per network
IP addresses belonging to class B range from 128.0.0.0 – 191.255.255.255.
Class C
IP addresses belonging to class C are assigned to small-sized networks.
The network ID is 24 bits long.
The host ID is 8 bits long.
The higher-order bits of the first octet of class C addresses are always set to 110. The
remaining 21 bits are used to determine the network ID. The 8 bits of host ID are used to
determine the host in any network. The default subnet mask for class C is 255.255.255.0.
Class C has a total of:
2^21 = 2,097,152 network addresses
2^8 – 2 = 254 host addresses per network
IP addresses belonging to class C range from 192.0.0.0 – 223.255.255.255.
Class D
IP addresses belonging to class D are reserved for multicasting. The higher-order bits of
the first octet of class D addresses are always set to 1110. The remaining bits form the
multicast group address that interested hosts recognize.
Class D does not possess any subnet mask. IP addresses belonging to class D range from
224.0.0.0 – 239.255.255.255.
Class E
IP addresses belonging to class E are reserved for experimental and research purposes.
IP addresses of class E range from 240.0.0.0 – 255.255.255.255. This class doesn’t have
any subnet mask. The higher-order bits of the first octet of class E are always set to
1111.
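Since each class is identified purely by the value of the first octet, the classification can
be expressed as a small C function. The sketch below simply encodes the ranges listed
above; the function name is illustrative.

#include <stdio.h>

char ip_class(unsigned first_octet) {
    if (first_octet <= 127) return 'A';   /* 0xxxxxxx                */
    if (first_octet <= 191) return 'B';   /* 10xxxxxx                */
    if (first_octet <= 223) return 'C';   /* 110xxxxx                */
    if (first_octet <= 239) return 'D';   /* 1110xxxx, multicast     */
    return 'E';                           /* 1111xxxx, experimental  */
}

int main(void) {
    printf("%c\n", ip_class(192));   /* prints C */
    printf("%c\n", ip_class(130));   /* prints B */
    return 0;
}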
V) Mobile IP
Mobile IP is a protocol that enables devices (mobile nodes) to maintain seamless
connectivity and reachability while moving across different networks. It ensures that the
mobile node retains the same IP address, regardless of its physical location, enabling
uninterrupted communication.
Key Components:
1. Mobile Node (MN): The device that moves between different networks, such as a
smartphone or laptop.
2. Home Agent (HA): A router in the mobile node's home network that keeps track of
its current location and forwards data to the mobile node.
3. Foreign Agent (FA): A router in the visited network that provides services to the
mobile node while it is away from its home network.
4. Care-of Address (CoA): A temporary address assigned to the mobile node while it is
in the foreign network. This address identifies the node’s current location.
How Mobile IP Works:
1. Registration:
o When the mobile node moves to a foreign network, it registers its Care-of Address
(CoA) with its Home Agent (HA).
o The foreign agent assists in this process.
2. Tunneling:
o The home agent encapsulates data packets destined for the mobile node and
forwards them to the care-of address via a tunnel.
o The foreign agent decapsulates the packets and delivers them to the mobile node.
3. Data Flow:
o Data from the mobile node is sent directly to the correspondent node (a device
communicating with the mobile node).
o Data destined for the mobile node first goes to the home agent, which forwards it to
the care-of address.
Advantages:
1. Seamless Mobility: Maintains uninterrupted communication as the mobile node
moves across networks.
2. Transparency: Applications and devices do not require reconfiguration when the
mobile node changes networks.
3. Scalability: Works in large, distributed networks.
Disadvantages:
1. Triangular Routing Problem: Packets follow an indirect route through the home
agent, leading to inefficiencies.
2. Latency: Increases due to tunneling and additional routing.
3. Security Concerns: Vulnerable to attacks like impersonation or session hijacking.
10. For the address 192.168.5.71/26, find: i) the subnet mask, ii) the first IP
address of the block, and iii) the last IP address of the block.
To solve the problem, let’s analyze the given IP address 192.168.5.71 / 26.
1. Subnet Mask:
o CIDR notation /26 means the first 26 bits are the network portion.
o This corresponds to the binary subnet mask:
11111111.11111111.11111111.11000000
o Converting to decimal:
Subnet Mask = 255.255.255.192
2. First IP Address:
o The first address in the range is the network address, where all host bits are 0.
o IP address in binary: 192.168.5.71 = 11000000.10101000.00000101.01000111
o Keeping only the network bits (26 bits) and setting host bits (last 6 bits) to 0:
11000000.10101000.00000101.01000000 = 192.168.5.64
o First IP Address = 192.168.5.64
3. Last IP Address:
o The last address in the range is the broadcast address, where all host bits are 1.
o Starting with the network bits (26 bits) and setting the host bits (last 6 bits) to 1:
11000000.10101000.00000101.01111111 = 192.168.5.127
o Last IP Address = 192.168.5.127
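The same calculation can be done with a few bitwise operations. The C sketch below
hard-codes 192.168.5.71 and the /26 prefix from the question, builds the mask, clears
the host bits for the network (first) address, and sets them for the broadcast (last)
address.

#include <stdio.h>
#include <stdint.h>

static void print_dotted(const char *label, uint32_t a) {
    unsigned b0 = (a >> 24) & 255, b1 = (a >> 16) & 255;
    unsigned b2 = (a >> 8) & 255, b3 = a & 255;
    printf("%s %u.%u.%u.%u\n", label, b0, b1, b2, b3);
}

int main(void) {
    uint32_t ip = (192u << 24) | (168u << 16) | (5u << 8) | 71u;  /* 192.168.5.71 */
    int prefix = 26;

    uint32_t mask      = 0xFFFFFFFFu << (32 - prefix); /* 26 ones, 6 zeros          */
    uint32_t network   = ip & mask;                    /* host bits cleared -> .64  */
    uint32_t broadcast = network | ~mask;              /* host bits set     -> .127 */

    print_dotted("Subnet mask:     ", mask);        /* 255.255.255.192 */
    print_dotted("First (network): ", network);     /* 192.168.5.64    */
    print_dotted("Last (broadcast):", broadcast);   /* 192.168.5.127   */
    return 0;
}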
11. Draw and explain the header format of IPv6.
1. Version (4 bits):
o Specifies the IP version number. For IPv6, this value is set to 6.
2. Traffic Class (8 bits):
o Used for packet classification and prioritization. It allows differentiated services and
quality of service (QoS).
3. Flow Label (20 bits):
o Used to identify and handle packets that belong to the same flow. A flow is a
sequence of packets with the same source and destination, requiring special handling.
4. Payload Length (16 bits):
o Specifies the length of the payload (data) in bytes. It does not include the length of
the IPv6 header.
5. Next Header (8 bits):
o Identifies the type of header immediately following the IPv6 header. It can indicate
protocols like TCP, UDP, or extension headers.
6. Hop Limit (8 bits):
o Replaces the Time to Live (TTL) field in IPv4. It specifies the maximum number of hops
a packet can traverse. It is decremented by one at each hop, and the packet is
discarded if the value reaches zero.
7. Source Address (128 bits):
o The IPv6 address of the originator of the packet.
8. Destination Address (128 bits):
o The IPv6 address of the intended recipient of the packet.
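For reference, the fixed 40-byte IPv6 header can also be sketched as a C struct. This is
an illustrative layout only (no compiler padding assumed); the 4-bit version, 8-bit traffic
class and 20-bit flow label are packed into one 32-bit word.

#include <stdint.h>

struct ipv6_header {
    uint32_t ver_tc_flow;     /* version (4) | traffic class (8) | flow label (20) */
    uint16_t payload_length;  /* length of the data after this header, in bytes    */
    uint8_t  next_header;     /* type of the following header (TCP, UDP, ext. hdr) */
    uint8_t  hop_limit;       /* replaces the IPv4 TTL                             */
    uint8_t  source_addr[16]; /* 128-bit source address                            */
    uint8_t  dest_addr[16];   /* 128-bit destination address                       */
};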
12. Explain Distance vector routing.
Principle: In this protocol each node maintains a vector (table) of minimum
distances (minimum cost/metric) to every other node. Here, distance means any chosen
metric (e.g., number of hops, delay, or throughput). It is based on the Bellman-Ford
algorithm, which finds the shortest path between routers in a graph, given the distances
between routers. It was the routing algorithm used in the first internet, ARPANET.
Each router keeps a table (called a vector) listing the distance (cost) to every other
router and the output port (interface or next node) used to reach it. The least
distances (costs of the chosen metric) are then computed using information from the
neighbors' distance vectors.
It has three stages 1) Initialization 2) Sharing 3) Updating
Initialization: Each router starts creating its own routing table when it is booted. After
booting, each node sends a Hello message to its immediate neighbors and finds the
distance between itself and these neighbors.
Distance Vector Table Initialization –
● Distance to itself = 0
● Distance to neighboring routers = distance ( cost of metric ) as seen from graph
● Distance to ALL other routers = infinity; for practical purposes it is set to a large number such as 99
Initial tables of routers
Sharing: After initialization, nodes share their tables with their neighbors, both
periodically and whenever there is a change in the network (such as a failure of a link or
a node), in order to improve their routing tables.
Updating: Whenever a node receives a two-column table from a neighbor, it needs to
update its routing table. Updating takes three steps:
1) The receiving node adds the cost between itself and the sending node to each value
in the second column. The logic is clear: if node C claims that its distance to a
destination is x, and the distance between A and C is y, then the distance between A and
that destination, via C, is x + y. This is the Bellman-Ford algorithm.
2) The receiving node adds the name of the sending node to each row as the third
column if it uses information from that row. The sending node is the next node in the
route.
3) The receiving node compares each row of its old table with the corresponding row of
the modified received table: whichever entry is smaller is copied into the new table,
together with the corresponding next hop.
Example: the new table of A after modifying its table on receiving the table from C.
Previously, node A did not know how to reach E (distance of infinity); now it knows that
the cost is 6 via C.
Similarly, each node can update its table by using the tables received from other nodes.
In a short time, each node reaches a stable condition in which the contents of its table
remain the same. Final tables of all routers
Sharing of tables is done both periodically and when there is a change in the table.
Problems with DVR: 1) Two-node instability 2) Three-node instability
An example of a protocol using DVR is RIP.
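The update step described above can be shown as a short C sketch. The 4-node topology,
the costs, and the names are illustrative (not taken from any figure in these notes); the
function keeps a received entry only when cost-to-neighbor plus the neighbor's distance
improves on the current distance, which is exactly the x + y comparison of the Bellman-
Ford step.

#include <stdio.h>

#define N   4      /* nodes numbered 0..3              */
#define INF 99     /* "infinity", as in the notes      */

void dv_update(int dist[N], int next_hop[N], int cost_to_neighbor,
               int neighbor, const int neighbor_dist[N]) {
    for (int d = 0; d < N; d++) {
        int via = cost_to_neighbor + neighbor_dist[d];   /* x + y from the notes */
        if (via < dist[d]) {
            dist[d] = via;              /* better path found through the neighbor */
            next_hop[d] = neighbor;
        }
    }
}

int main(void) {
    /* A's initial vector: itself 0, neighbor C (node 2) at cost 2, rest unknown */
    int dist[N]     = {0, INF, 2, INF};
    int next_hop[N] = {0, -1, 2, -1};
    /* Vector received from C: C reaches node 1 at cost 5 and node 3 at cost 4 */
    int from_c[N]   = {2, 5, 0, 4};

    dv_update(dist, next_hop, 2, 2, from_c);
    for (int d = 0; d < N; d++)
        printf("to %d: cost %d via %d\n", d, dist[d], next_hop[d]);
    return 0;
}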
13. Explain Link state routing.
Link State Routing
Link State Routing is a dynamic routing protocol used in computer networks to
determine the best path for data packets. Unlike Distance Vector Routing, which shares
the entire routing table with neighbors, Link State Routing shares information about the
state of its directly connected links (e.g., cost, status) with all routers in the network.
Key Concepts of Link State Routing:
1. Network Topology Awareness: Each router maintains a complete map (or
topology) of the entire network.
2. Link State Advertisements (LSAs): Routers periodically generate LSAs to share
information about their directly connected links.
3. Shortest Path Algorithm: Uses Dijkstra's algorithm to compute the shortest path
to every destination.
4. Routing Table Calculation: After building the network topology, each router
calculates the best paths independently.
How Link State Routing Works:
1. Neighbor Discovery: Routers identify their directly connected neighbors using a
process like Hello packets.
2. Exchange of Link State Information: Each router creates an LSA containing
information about its directly connected links (e.g., link cost, link status). LSAs are
flooded throughout the network to all routers.
3. Building the Link State Database (LSDB): Each router maintains an LSDB that
stores the topology information received from LSAs.
4. Shortest Path Tree Calculation: Routers use the Dijkstra algorithm to compute
the shortest path to each destination based on the LSDB.
5. Routing Table Creation: The shortest paths from the Dijkstra algorithm are used to
populate the router's forwarding table.
Advantages of Link State Routing:
1. Fast Convergence: Changes in the network are quickly propagated and processed.
2. Scalability: Suitable for large and complex networks.
3. Loop-Free Routing: Each router independently calculates the best path, avoiding
routing loops.
4. Efficient Updates: Only link state changes are propagated, reducing unnecessary
traffic.
Disadvantages of Link State Routing:
1. Complexity: More complex to configure and maintain compared to Distance Vector
protocols.
2. Resource Intensive: Requires more memory and CPU to store and process the
topology database.
3. Flooding Overhead: LSAs are flooded throughout the network, which can cause
overhead in very large networks.
Protocols Using Link State Routing:
1. OSPF (Open Shortest Path First):
o A widely used link state protocol in IP networks.
o Supports areas to divide large networks into manageable segments.
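The shortest-path computation each router performs over its LSDB can be sketched with
Dijkstra's algorithm in C. The 5-node adjacency matrix below is illustrative, not taken
from these notes.

#include <stdio.h>

#define N   5
#define INF 9999

void dijkstra(const int cost[N][N], int src, int dist[N], int prev[N]) {
    int done[N] = {0};
    for (int i = 0; i < N; i++) { dist[i] = INF; prev[i] = -1; }
    dist[src] = 0;

    for (int iter = 0; iter < N; iter++) {
        /* pick the closest node that has not been finalized yet */
        int u = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && (u == -1 || dist[i] < dist[u])) u = i;
        if (dist[u] == INF) break;      /* remaining nodes are unreachable */
        done[u] = 1;

        /* relax all links leaving u */
        for (int v = 0; v < N; v++)
            if (cost[u][v] != INF && dist[u] + cost[u][v] < dist[v]) {
                dist[v] = dist[u] + cost[u][v];
                prev[v] = u;
            }
    }
}

int main(void) {
    const int cost[N][N] = {
        {   0,   2,   5, INF, INF },
        {   2,   0,   3,   1, INF },
        {   5,   3,   0,   3,   1 },
        { INF,   1,   3,   0,   4 },
        { INF, INF,   1,   4,   0 },
    };
    int dist[N], prev[N];
    dijkstra(cost, 0, dist, prev);
    for (int i = 0; i < N; i++)
        printf("node %d: cost %d, previous hop %d\n", i, dist[i], prev[i]);
    return 0;
}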
The Netmask determines how many bits of the IP address must match the
destination network.
More specific entries (longer prefixes) take precedence.
i) 156.26.10.66
1. Convert Netmask and Destination:
o 156.26.10.0/26 (Netmask: 255.255.255.192 → 26 bits) → Range: 156.26.10.0 to
156.26.10.63.
o 156.26.10.128/25 (Netmask: 255.255.255.128 → 25 bits) → Range:
156.26.10.128 to 156.26.10.255.
o 156.26.0.0/16 → Range: 156.26.0.0 to 156.26.255.255.
2. 156.26.10.66 does not fall into:
o 156.26.10.0/26 (range ends at 156.26.10.63).
However, it does fall into:
o 156.26.0.0/16 (broader range).
3. Action: The packet will be forwarded to 156.26.10.1 based on the
156.26.0.0/16 route.
ii) 156.26.10.226
1. Convert Netmask and Destination:
o 156.26.10.128/25 → Range: 156.26.10.128 to 156.26.10.255.
2. 156.26.10.226 falls within 156.26.10.128/25.
3. Action: The packet will be delivered directly over Eth1.
iii) 168.130.12.27
1. This IP address does not match any of the specific networks in the table:
o 156.26.10.0/26 → Does not match.
o 156.26.10.128/25 → Does not match.
o 156.26.0.0/16 → Does not match.
2. Since no specific match is found, the router uses the default route (0.0.0.0/0).
3. Action: The packet will be forwarded to 156.10.1.30.
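The lookups above amount to a longest-prefix match. The C sketch below rebuilds a small
forwarding table from the entries referenced in the worked answer (the /26, /25, /16 and
default routes with the next hops quoted there; any detail not quoted is an assumption)
and prints the chosen action for the three destinations. A POSIX system is assumed for
inet_addr() and ntohl().

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* inet_addr, ntohl */

struct route { const char *net; int prefix; const char *action; };

int main(void) {
    struct route table[] = {
        { "156.26.10.0",   26, "deliver on the /26 subnet"        },
        { "156.26.10.128", 25, "deliver directly over Eth1"       },
        { "156.26.0.0",    16, "forward to 156.26.10.1"           },
        { "0.0.0.0",        0, "default: forward to 156.10.1.30"  },
    };
    const char *dests[] = { "156.26.10.66", "156.26.10.226", "168.130.12.27" };

    for (int d = 0; d < 3; d++) {
        uint32_t ip = ntohl(inet_addr(dests[d]));
        int best = -1, best_len = -1;
        for (int r = 0; r < 4; r++) {
            uint32_t mask = table[r].prefix ? 0xFFFFFFFFu << (32 - table[r].prefix) : 0;
            uint32_t net  = ntohl(inet_addr(table[r].net));
            /* keep the matching entry with the longest prefix */
            if ((ip & mask) == net && table[r].prefix > best_len) {
                best = r;
                best_len = table[r].prefix;
            }
        }
        printf("%s -> %s\n", dests[d], table[best].action);
    }
    return 0;
}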
Unit 4
1. Draw and explain TCP header format.
Source port: this is a 16 bit field that specifies the port number of the sender.
Destination port: this is a 16 bit field that specifies the port number of the receiver.
Sequence number: the sequence number is a 32 bit field that indicates how much data
is sent during the TCP session. When you establish a new TCP connection (3 way
handshake) then the initial sequence number is a random 32 bit value. The receiver will
use this sequence number and send back an acknowledgment. Protocol analyzers like
Wireshark will often display a relative sequence number starting at 0, since it is easier to
read than some high random number.
Acknowledgment number: this 32 bit field is used by the receiver to acknowledge
received data and request the next TCP segment. Its value is the sequence number of
the next byte the receiver expects to receive.
DO: this is the 4 bit data offset field, also known as the header length. It indicates the
length of the TCP header so that we know where the actual data begins.
RSV: these are 3 bits for the reserved field. They are unused and are always set to 0.
Flags: there are 9 bits for flags, we also call them control bits. We use them to establish
connections, send data and terminate connections:
URG: urgent pointer. When this bit is set, the data should be treated as priority over
other data.
ACK: used for the acknowledgment.
PSH: this is the push function. This tells an application that the data should be
transmitted immediately and that we don’t want to wait to fill the entire TCP segment.
RST: this resets the connection, when you receive this you have to terminate the
connection right away. This is only used when there are unrecoverable errors and it’s not
a normal way to finish the TCP connection.
SYN: we use this for the initial three way handshake and it’s used to set the initial
sequence number.
FIN: this finish bit is used to end the TCP connection. TCP is full duplex, so both parties
have to use the FIN bit to end the connection. This is the normal way to end a
connection.
Window: the 16 bit window field specifies how many bytes the receiver is willing to
receive. It lets the receiver tell the sender how much more data it can accept, by
specifying the number of bytes it can buffer beyond the sequence number in the
acknowledgment field.
Checksum: 16 bits are used for a checksum to verify that the TCP segment (header and data, together with a pseudo-header) arrived without errors.
Urgent pointer: these 16 bits are used when the URG bit has been set, the urgent
pointer is used to indicate where the urgent data ends.
Options: this field is optional and can be anywhere between 0 and 320 bits.
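The fields above can be summarized as a C struct for reference. This is a minimal sketch
assuming no compiler padding; the 4-bit data offset, 3 reserved bits and 9 flag bits are
packed into one 16-bit field rather than individual bit-fields.

#include <stdint.h>

struct tcp_header {
    uint16_t source_port;      /* sending application's port                 */
    uint16_t dest_port;        /* receiving application's port               */
    uint32_t sequence_number;  /* position of this data in the byte stream   */
    uint32_t ack_number;       /* next byte the receiver expects             */
    uint16_t offset_rsv_flags; /* data offset (4) | reserved (3) | flags (9) */
    uint16_t window;           /* receive window, in bytes                   */
    uint16_t checksum;         /* covers header, data and pseudo-header      */
    uint16_t urgent_pointer;   /* end of urgent data when URG is set         */
    /* Options (0..40 bytes) follow when the data offset > 5.                */
};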
1. Process-to-Process Communication
o The transport layer ensures communication between specific processes (applications)
on two devices.
It uses port numbers to identify the sending and receiving application processes.
For example, HTTP uses port 80, while FTP uses port 21.
5. Flow Control
o Prevents a fast sender from overwhelming a slow receiver.
Flow control mechanisms like the sliding window protocol are used to regulate
the amount of data sent.
Example: TCP implements flow control.
6. Error Control
o Ensures corrupted or lost data is detected and retransmitted.
Error detection techniques such as checksums are used.
When an error is detected, the sender retransmits the affected segment.
Example: TCP provides error control.
7. Congestion Control
o Controls the amount of data sent to avoid overwhelming the network.
Congestion control mechanisms reduce the rate of data transmission during
network congestion.
Example: TCP implements congestion control using algorithms like Slow Start and
Congestion Avoidance.
UDP Header
Source Port: Source Port is a 2 Byte long field used to identify the port number of
the source.
Destination Port: It is a 2 Byte long field, used to identify the port of the destined
packet.
Length: Length is the total length of the UDP segment, including the header and the
data. It is a 16-bit field.
Checksum: Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the
one’s complement sum of the UDP header, the pseudo-header of information from the
IP header, and the data, padded with zero octets at the end (if necessary) to make a
multiple of two octets.
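The one's complement arithmetic used by the checksum can be shown with a short C
sketch. For brevity it sums an arbitrary byte buffer; a real UDP checksum is computed
over the pseudo-header, the UDP header and the data, as described above, and the
sample bytes are illustrative.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

uint16_t ones_complement_sum(const uint8_t *data, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)(data[i] << 8 | data[i + 1]);   /* add 16-bit words        */
    if (len & 1)
        sum += (uint32_t)(data[len - 1] << 8);           /* pad an odd byte with 0  */
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);              /* fold carries back in    */
    return (uint16_t)~sum;                               /* one's complement        */
}

int main(void) {
    uint8_t segment[] = { 0x12, 0x34, 0x00, 0x35, 0x00, 0x0C, 0x00, 0x00 };
    printf("checksum: 0x%04x\n", ones_complement_sum(segment, sizeof segment));
    return 0;
}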
5. What is a socket? What are the different types of sockets? Explain the
socket functions used in connection-oriented services with a diagram.
A socket is an endpoint for sending or receiving data across a computer network. It
serves as a communication channel between two nodes, allowing them to exchange
data. Sockets provide a standardized way for programs to communicate over a network,
abstracting the complexities of underlying protocols.
Types of Sockets
There are several types of sockets, each suited for different communication needs:
1. Stream Sockets (TCP Sockets):
o Uses the Transmission Control Protocol (TCP).
o Provides reliable, connection-oriented communication.
o Guarantees data delivery and order.
2. Datagram Sockets (UDP Sockets):
o Uses the User Datagram Protocol (UDP).
o Provides connectionless communication.
o Does not guarantee delivery or order.
3. Raw Sockets:
o Provides access to lower-level network protocols.
o Allows sending and receiving of packets bypassing the transport layer.
o Typically used for network diagnostics and research.
4. Sequenced Packet Sockets (SCTP Sockets):
o Uses the Stream Control Transmission Protocol (SCTP).
o Supports multi-streaming and multi-homing.
o Ensures reliable, message-oriented communication.
Socket Functions in Connection-Oriented Services (TCP)
Here are the primary socket functions used in connection-oriented services (TCP) along
with a diagram:
1. socket():
o Creates a new socket.
o Syntax: int socket(int domain, int type, int protocol);
2. bind():
o Binds the socket to a specific local address and port.
o Syntax: int bind(int sockfd, const struct sockaddr *addr, socklen_t addrlen);
3. listen():
o Marks the socket as a passive socket, indicating it will be used to accept incoming
connection requests.
o Syntax: int listen(int sockfd, int backlog);
4. accept():
o Accepts an incoming connection request.
o Syntax: int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen);
5. connect():
o Initiates a connection to a remote socket.
o Syntax: int connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen);
6. send() and recv():
o Used to send and receive data over a connected socket.
o Syntax for send: ssize_t send(int sockfd, const void *buf, size_t len, int flags);
o Syntax for recv: ssize_t recv(int sockfd, void *buf, size_t len, int flags);
7. close():
o Closes the socket.
o Syntax: int close(int sockfd);
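The server-side call sequence (socket, bind, listen, accept, recv/send, close) can be put
together in a minimal C sketch. Error handling is kept to simple checks and the port
number 9000 is illustrative; a matching client would call socket() followed by connect()
(step 5) and then send()/recv().

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);           /* 1. socket()         */
    if (listener < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);                 /* any local interface */
    addr.sin_port = htons(9000);                              /* illustrative port   */
    if (bind(listener, (struct sockaddr *)&addr, sizeof addr) < 0) {  /* 2. bind()   */
        perror("bind"); exit(1);
    }
    if (listen(listener, 5) < 0) { perror("listen"); exit(1); }       /* 3. listen() */

    int conn = accept(listener, NULL, NULL);                  /* 4. accept()         */
    if (conn >= 0) {
        char buf[1024];
        ssize_t n = recv(conn, buf, sizeof buf, 0);           /* 6. recv()           */
        if (n > 0) send(conn, buf, (size_t)n, 0);             /* echo the data back  */
        close(conn);                                          /* 7. close()          */
    }
    close(listener);
    return 0;
}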
Benefits of SCTP:
1. Improved Reliability: Redundant paths and multi-streaming enhance reliability
and fault tolerance.
2. Preserved Message Boundaries: Ensures that messages are received exactly as
sent, which is crucial for certain applications like signaling.
3. Enhanced Performance: Reduces latency and improves throughput by allowing
multiple streams of data within a single association.
4. Security: SCTP’s design includes mechanisms to protect against common types of
network attacks like SYN flooding.
Use Cases:
1. Telecommunications: Transporting PSTN signaling messages over IP networks
(e.g., SS7 over IP).
2. Real-Time Applications: Suitable for VoIP, video conferencing, and online gaming
where message boundaries and reliability are critical.
3. Financial Transactions: Ensures secure and reliable transaction processing in
banking and finance.
7. Explain the socket functions used in connectionless services with a
diagram.
In connectionless communication (like UDP), no connection is established between
the sender and receiver before data transmission, and delivery is not guaranteed. This
makes it different from connection-oriented services like TCP, where data transmission
happens over a reliable, established connection.
In connectionless services (UDP), sockets are used to send and receive datagrams
(packets of data) without ensuring delivery, order, or flow control.
Socket Functions Used in Connectionless Services (UDP)
socket(): The client or server creates a socket using the socket() function with
SOCK_DGRAM to specify the use of UDP.
bind(): The socket is bound to a specific port (and optionally an IP address) using the
bind() function. This step is required on the server side but optional on the client side.
sendto(): The sender uses the sendto() function to send a datagram (UDP packet) to a
specified destination (IP address and port).
recvfrom(): The receiver uses the recvfrom() function to receive a datagram, which also
provides the sender's address.
close(): The socket is closed when the communication is done.
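These calls can be combined into a minimal C sketch of a UDP receiver that echoes one
datagram back to its sender. The port number 9001 is illustrative and error handling is
kept minimal; a sender would skip bind() and simply call sendto() toward this address.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);                 /* socket() with SOCK_DGRAM */
    if (sock < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9001);                               /* illustrative port        */
    if (bind(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {   /* bind(): server side  */
        perror("bind"); exit(1);
    }

    char buf[1024];
    struct sockaddr_in peer;
    socklen_t peer_len = sizeof peer;
    ssize_t n = recvfrom(sock, buf, sizeof buf, 0,             /* recvfrom() also returns  */
                         (struct sockaddr *)&peer, &peer_len); /* the sender's address     */
    if (n > 0)
        sendto(sock, buf, (size_t)n, 0,                        /* sendto() replies without */
               (struct sockaddr *)&peer, peer_len);            /* any prior connection     */

    close(sock);                                               /* close()                  */
    return 0;
}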
8. Explain TCP congestion control in transport layer?
TCP congestion control refers to the mechanism that prevents congestion from
happening or removes it after congestion takes place.
When congestion takes place in the network, TCP handles it by reducing the size of the
sender’s window. The window size of the sender is determined by the following two
factors:
Receiver window size
Congestion window size
Receiver Window Size
It indicates how much data (in bytes) a receiver can accept without giving any
acknowledgment.
Things to remember for receiver window size:
1. The sender should not send more data than the size of the receiver's window.
2. If more data is sent than the receiver's window can hold, TCP segments are dropped,
causing retransmission.
3. Hence the sender should always send an amount of data less than or equal to the size
of the receiver's window.
4. The TCP header is used to send the window size of the receiver to the sender.
Congestion Window
It is TCP state that limits the amount of data the sender can send into the network
before receiving an acknowledgment.
Following are the things to remember for the congestion window:
1. To calculate the size of the congestion window, different variants of TCP and methods
are used.
2. Only the sender knows the congestion window and its size and it is not sent over the
link or network.
The formula for determining the sender’s window size is:
Sender window size = Minimum (Receiver window size, Congestion window size)
Congestion in TCP is handled by using these three phases:
1. Slow Start
2. Congestion Avoidance
3. Congestion Detection
Slow Start Phase
In the slow start phase, the sender initially sets the congestion window size to one
maximum segment size (1 MSS). The sender then increases the size of the congestion
window by 1 MSS for each ACK (acknowledgment) received.
The size of the congestion window increases exponentially in this phase.
The formula for determining the size of the congestion window is
Congestion window size = Congestion window size + Maximum segment size
Round trip time: Congestion window size (result):
After round trip 1: 1 MSS × 2 = 2 MSS
After round trip 2: 2 MSS × 2 = 4 MSS
After round trip 3: 4 MSS × 2 = 8 MSS
The congestion window doubles every round trip, so it grows exponentially until the slow
start threshold is reached.
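The exponential growth of the congestion window during slow start can be illustrated
with a small C sketch. The slow start threshold of 16 MSS is an illustrative value; the loop
doubles the window once per round trip until the threshold is reached, after which TCP
switches to congestion avoidance.

#include <stdio.h>

int main(void) {
    int cwnd = 1;          /* congestion window, in MSS units      */
    int ssthresh = 16;     /* illustrative slow start threshold    */

    for (int rtt = 1; cwnd < ssthresh; rtt++) {
        printf("RTT %d: cwnd = %d MSS\n", rtt, cwnd);
        cwnd *= 2;         /* one ACK per segment in flight, so the window doubles */
    }
    printf("ssthresh reached: switch to congestion avoidance (+1 MSS per RTT)\n");
    return 0;
}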
Comparison of POP3 and IMAP:
Email Management: POP3 – limited to the local device; deleting emails removes them
permanently. IMAP – allows folders, flags, and labels to be managed on the server.
Offline Access: POP3 – emails are available offline after download. IMAP – emails can be
accessed offline only if downloaded first.
Default Port: POP3 – port 110 (unencrypted), port 995 (encrypted). IMAP – port 143
(unencrypted), port 993 (encrypted).
Security: POP3 – less secure because emails are downloaded and stored locally. IMAP –
more secure; data is stored remotely and can be encrypted.
Email Deletion: POP3 – deleting emails on the device usually removes them from the
server. IMAP – deleting emails on the client does not necessarily remove them from the
server unless specified.
Use Case: POP3 – best suited for users who access email from a single device and don't
need synchronization. IMAP – ideal for users who need to access emails from multiple
devices and require synchronization.
Comparison of symmetric and asymmetric key cryptography:
Key Usage: Symmetric – uses a single key for both encryption and decryption.
Asymmetric – uses a pair of keys: a public key and a private key.
Key Management: Symmetric – the key must be securely shared between parties.
Asymmetric – the public key can be shared openly; the private key must be kept secret.
Security: Symmetric – less secure if the key is intercepted during exchange. Asymmetric –
more secure, as the private key is never transmitted.
Complexity: Symmetric – simpler, requires only one key. Asymmetric – more complex,
involves key pair generation and management.
Key Length: Symmetric – typically shorter (128-256 bits). Asymmetric – typically longer
(1024-4096 bits or more).
1. Confidentiality:
o Definition: Ensuring that sensitive information is accessible only to those
authorized to have access.
o Mechanisms: Encryption, access controls, and authentication mechanisms such as
passwords, biometrics, and two-factor authentication.
2. Integrity:
o Definition: Ensuring the accuracy and completeness of data, and protecting it from
being altered or tampered with by unauthorized individuals.
o Mechanisms: Checksums, cryptographic hash functions, digital signatures, and
data validation processes.
3. Availability:
o Definition: Ensuring that information and systems are accessible to authorized
users when needed.
o Mechanisms: Redundancy, failover mechanisms, load balancing, regular
maintenance, and denial-of-service (DoS) attack prevention measures.
4. Authentication:
o Definition: Verifying the identity of users and systems to ensure that only
authorized entities can access resources.
o Mechanisms: Passwords, biometrics (fingerprint, retina scan), smart cards, and
multi-factor authentication (MFA).
5. Authorization:
o Definition: Granting or denying specific access rights to resources based on the
identity of the user or system.
o Mechanisms: Access control lists (ACLs), role-based access control (RBAC), and
user permissions.
6. Non-repudiation:
o Definition: Ensuring that a party in a transaction cannot deny the authenticity of
their signature or the sending of a message that they originated.
o Mechanisms: Digital signatures and audit logs.
7. Accountability:
o Definition: Ensuring that actions of an entity can be traced uniquely to that entity,
which helps in detecting and responding to security breaches.
o Mechanisms: Audit trails, logging, and monitoring.
8. Risk Management:
o Definition: Identifying, assessing, and mitigating risks to information and systems
to minimize the impact of security incidents.
o Mechanisms: Risk assessments, threat modeling, and implementing security
controls.
9. Security Policy:
o Definition: A set of rules and guidelines that define how an organization manages
and protects its information assets.
o Mechanisms: Policies on access control, incident response, data protection, and
employee training.
10. Physical Security:
o Definition: Protecting the physical infrastructure of information systems from
damage or unauthorized access.
o Mechanisms: Security guards, surveillance cameras, secure access controls, and
environmental controls like fire suppression systems.
11. Incident Response:
o Definition: Developing and implementing processes to detect, respond to, and
recover from security incidents.
o Mechanisms: Incident response plans, forensic analysis, and post-incident reviews.
12. Compliance:
o Definition: Adhering to laws, regulations, and standards related to information
security.
o Mechanisms: Regular audits, compliance checks, and adherence to standards
such as GDPR, HIPAA, and ISO/IEC 27001.