
Questions


1. List of servers monitored by a SOC?

 Firewall
 IDS/IPS
 Proxy
 DNS
 DHCP
 Active Directory (AD) server
 Database
 File server
 Windows

2. SIEM components?
https://www.manageengine.com/log-management/siem/siem-components.html

3. What is email gateway?


An email gateway is a type of email server that protects an organization's or user's
internal email servers. This server acts as a gateway through which every incoming
and outgoing email passes. A Secure Email Gateway (SEG) is a device or software
used to monitor the emails that are being sent and received.

4. What is Proofpoint?


Proofpoint is a suite of cybersecurity tools that protects email, mobile devices, social
media and the cloud from cyber threats and criminals. The platform focuses on
keeping brands and people safe from online attacks.

5. What is Web Application Firewall?


A WAF or web application firewall helps protect web applications by filtering and
monitoring HTTP traffic between a web application and the Internet. It typically
protects web applications from attacks such as cross-site request forgery, cross-site
scripting (XSS), file inclusion, and SQL injection, among others. A WAF is a protocol layer
7 defense (in the OSI model), and is not designed to defend against all types of
attacks. This method of attack mitigation is usually part of a suite of tools which
together create a holistic defense against a range of attack vectors.
By deploying a WAF in front of a web application, a shield is placed between the
web application and the Internet. While a proxy server protects a client machine’s
identity by using an intermediary, a WAF is a type of reverse-proxy, protecting the
server from exposure by having clients pass through the WAF before reaching the
server.
A WAF operates through a set of rules often called policies. These policies aim to
protect against vulnerabilities in the application by filtering out malicious traffic. The
value of a WAF comes in part from the speed and ease with which policy
modification can be implemented, allowing for faster response to varying attack
vectors; during a DDoS attack, rate limiting can be quickly implemented by
modifying WAF policies.
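As a rough illustration of the policy idea (not any particular product's rule language; the patterns and the query-string field below are invented for the example), a WAF-style filter could be sketched in a few lines of Python:

import re

# Hypothetical, highly simplified WAF-style policy: block requests whose query
# string matches known attack indicators. Real WAF rule sets are far richer.
BLOCK_PATTERNS = [
    re.compile(r"(?i)<script"),          # naive XSS indicator
    re.compile(r"(?i)union\s+select"),   # naive SQL injection indicator
    re.compile(r"\.\./"),                # naive path traversal / file inclusion indicator
]

def inspect_request(query_string):
    # Return 'block' if any policy pattern matches, otherwise 'allow'.
    for pattern in BLOCK_PATTERNS:
        if pattern.search(query_string):
            return "block"
    return "allow"

print(inspect_request("q=shoes&page=2"))                 # allow
print(inspect_request("q=1' UNION SELECT password--"))   # block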

6. What are network-based, host-based, and cloud-based WAFs?


A WAF can be implemented one of three different ways, each with its own benefits
and shortcomings:
 A network-based WAF is generally hardware-based. Since they are
installed locally they minimize latency, but network-based WAFs are the
most expensive option and also require the storage and maintenance of
physical equipment.
 A host-based WAF may be fully integrated into an application’s software.
This solution is less expensive than a network-based WAF and offers more
customizability. The downside of a host-based WAF is the consumption of
local server resources, implementation complexity, and maintenance costs.
These components typically require engineering time, and may be costly.
 Cloud-based WAFs offer an affordable option that is very easy to
implement; they usually offer a turnkey installation that is as simple as a
change in DNS to redirect traffic. Cloud-based WAFs also have a minimal
upfront cost, as users pay monthly or annually for security as a service.
Cloud-based WAFs can also offer a solution that is consistently updated to
protect against the newest threats without any additional work or cost on
the user’s end. The drawback of a cloud-based WAF is that users hand
over the responsibility to a third party, therefore some features of the
WAF may be a black box to them. (A cloud-based WAF is one type of cloud firewall.)

7. What is firewall?
URL: https://www.simplilearn.com/tutorials/cyber-security-tutorial/what-is-
firewall
A firewall is a network security device, either hardware- or software-based, that
monitors all incoming and outgoing traffic and, based on a defined set of security
rules, accepts, rejects, or drops specific traffic.
Accept: allow the traffic
Reject: block the traffic but reply with an “unreachable error”
Drop: block the traffic with no reply
A firewall establishes a barrier between secured internal networks and untrusted
outside networks, such as the Internet.

8. What do firewalls do?


A firewall is a necessary part of any security architecture; it takes the guesswork
out of host-level protections and entrusts them to your network security device.
Firewalls, and especially Next Generation Firewalls, focus on blocking malware and
application-layer attacks. Along with an integrated intrusion prevention system (IPS),
these Next Generation Firewalls can react quickly and seamlessly to detect and respond
to outside attacks across the whole network. They can set policies to better defend
your network and carry out quick assessments to detect invasive or suspicious
activity, such as malware, and shut it down.

9. Why Do We Need Firewalls?


Firewalls, especially Next Generation Firewalls, focus on blocking malware and
application-layer attacks. Along with an integrated intrusion prevention system
(IPS), these Next Generation Firewalls are able to react quickly and seamlessly to
detect and combat attacks across the whole network. Firewalls can act on previously
set policies to better protect your network and can carry out quick assessments to
detect invasive or suspicious activity, such as malware, and shut it down. By
leveraging a firewall for your security infrastructure, you’re setting up your network
with specific policies to allow or block incoming and outgoing traffic.

10. Generations of firewalls


A. First Generation- Packet Filtering Firewall:
A packet filtering firewall controls network access by monitoring outgoing and
incoming packets and allowing them to pass or be dropped based on source and
destination IP address, protocols, and ports. It analyses traffic at the transport
layer (but mainly uses the first three layers). Packet filtering firewalls treat
each packet in isolation; they have no ability to tell whether a packet is part of
an existing stream of traffic and can only allow or deny packets based on
individual packet headers.
A packet filtering firewall maintains a filtering table that decides whether a
packet will be forwarded or discarded. For example, a filtering table might apply
the following rules (a minimal sketch of the decision logic follows the list):

1. Incoming packets from network 192.168.21.0 are blocked.


2. Incoming packets destined for internal TELNET server (port 23) are
blocked.
3. Incoming packets destined for host 192.168.21.3 are blocked.
4. All well-known services to the network 192.168.21.0 are allowed.
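A minimal Python sketch of that decision logic, mirroring the four example rules above (the field names and rule format are invented; real packet filters work on parsed IP/TCP headers, not strings):

# Minimal packet-filtering sketch: rules are checked in order and the first
# match decides the action; anything not matched is allowed (rule 4).
RULES = [
    {"action": "deny", "src_net": "192.168.21."},   # rule 1: block packets from 192.168.21.0
    {"action": "deny", "dst_port": 23},             # rule 2: block packets to the TELNET server
    {"action": "deny", "dst_ip": "192.168.21.3"},   # rule 3: block packets to host 192.168.21.3
]

def filter_packet(src_ip, dst_ip, dst_port):
    for rule in RULES:
        if "src_net" in rule and src_ip.startswith(rule["src_net"]):
            return rule["action"]
        if "dst_port" in rule and dst_port == rule["dst_port"]:
            return rule["action"]
        if "dst_ip" in rule and dst_ip == rule["dst_ip"]:
            return rule["action"]
    return "allow"   # rule 4: all other well-known services are allowed

print(filter_packet("192.168.21.7", "10.0.0.5", 80))   # deny (rule 1)
print(filter_packet("10.0.0.9", "10.0.0.5", 23))       # deny (rule 2)
print(filter_packet("10.0.0.9", "10.0.0.5", 443))      # allow
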
B. Second Generation - Stateful inspection firewall
Now thought of as a “traditional” firewall, a stateful inspection firewall allows or
blocks traffic based on state, port, and protocol. It monitors all activity from the
opening of a connection until it is closed. Filtering decisions are made based on both
administrator-defined rules as well as context, which refers to using information
from previous connections and packets belonging to the same connection.
C. Third Generation- Application Layer Firewall:
An application layer firewall can inspect and filter packets at any OSI layer, up to
the application layer. It has the ability to block specific content and to recognize
when certain applications and protocols (like HTTP and FTP) are being misused. In
other words, application layer firewalls are hosts that run proxy servers. A proxy
firewall prevents a direct connection between either side of the firewall; every
packet has to pass through the proxy. It can allow or block traffic based on
predefined rules.
Note: Application layer firewalls can also be used as a Network Address Translator (NAT).
D. Next-generation firewall (NGFW)
Firewalls have evolved beyond simple packet filtering and stateful inspection. Most
companies are deploying next-generation firewalls to block modern threats such as
advanced malware and application-layer attacks.
According to Gartner, Inc.’s definition, a next-generation firewall must include:

 Standard firewall capabilities like stateful inspection

 Integrated intrusion prevention

 Application awareness and control to see and block risky apps

 Upgrade paths to include future information feeds

 Techniques to address evolving security threats


While these capabilities are increasingly becoming the standard for most companies,
NGFWs can do more.
A. Proxy firewall
An early type of firewall device, a proxy firewall serves as the gateway from one
network to another for a specific application. Proxy servers can provide additional
functionality such as content caching and security by preventing direct connections
from outside the network. However, this also may impact throughput capabilities
and the applications they can support.
B. Unified threat management (UTM) firewall
A UTM device typically combines, in a loosely coupled way, the functions of a
stateful inspection firewall with intrusion prevention and antivirus. It may also
include additional services and often cloud management. UTMs focus on simplicity
and ease of use.
C. Threat-focused NGFW
These firewalls include all the capabilities of a traditional NGFW and also provide
advanced threat detection and remediation. With a threat-focused NGFW you can:

 Know which assets are most at risk with complete context awareness
 Quickly react to attacks with intelligent security automation that sets policies
and hardens your defenses dynamically
 Better detect evasive or suspicious activity with network and endpoint event
correlation
 Greatly decrease the time from detection to cleanup with retrospective
security that continuously monitors for suspicious activity and behavior even
after initial inspection
 Ease administration and reduce complexity with unified policies that protect
across the entire attack continuum

11. Types of firewalls


Firewalls are generally of two types: Host-based and Network-based.
1. Host-based Firewalls: A host-based firewall is installed on each network node
and controls each incoming and outgoing packet. It is a software application, or
suite of applications, that often comes as part of the operating system. Host-based
firewalls are needed because network firewalls cannot provide protection inside a
trusted network. A host firewall protects each host from attacks and unauthorized
access.
2. Network-based Firewalls: Network firewalls function at the network level. In
other words, these firewalls filter all incoming and outgoing traffic across the
network. They protect the internal network by filtering traffic using rules defined
on the firewall. A network firewall might have two or more network interface cards
(NICs). A network-based firewall is usually a dedicated system with proprietary
software installed.

12. What is Kerberos authentication?


Kerberos is a protocol for authenticating service requests between trusted hosts
across an untrusted network, such as the internet. Kerberos support is built into all
major computer operating systems, including Microsoft Windows, Apple macOS,
FreeBSD and Linux.
Kerberos is designed to completely avoid storing any passwords locally or having
to send any passwords through the internet and provides mutual authentication,
meaning both the user and the server's authenticity are verified.
A simplified description of how Kerberos works follows; the actual process is more
complicated and may vary from one implementation to another:
1. Authentication server request. To start the Kerberos client authentication
process, the initiating client sends an authentication request to the Kerberos
KDC authentication server. The initial authentication request is sent as
plaintext because no sensitive information is included in the request. The
authentication server verifies that the client is in the KDC database and
retrieves the initiating client's private key.
2. Authentication server response. If the initiating client's username isn't found
in the KDC database, the client cannot be authenticated, and the authentication
process stops. Otherwise, the authentication server sends the client a TGT and
a session key.
3. Service ticket request. Once authenticated by the authentication server, the
client asks for a service ticket from the TGS. This request must be accompanied
by the TGT sent by the KDC authentication server.
4. Service ticket response. If the TGS can authenticate the client, it sends
credentials and a ticket to access the requested service. This transmission is
encrypted with a session key specific to the user and service being accessed.
This proof of identity is used to access the requested "Kerberized" service. That
service validates the original request and then confirms its identity to the
requesting system.
5. Application server request. The client sends a request to access the application
server. This request includes the service ticket received in step 4. If the
application server can authenticate this request, the client can access the server.
6. Application server response. In cases where the client requests the application
server to authenticate itself, this response is required. The client has already
authenticated itself, and the application server response includes Kerberos
authentication of the server.

13. What is Kerberos used for?


Kerberos is used to authenticate entities requesting access to network resources,
especially in large networks to support SSO. The protocol is used by default in many
widely used networking systems. Some systems in which Kerberos support is
incorporated or available include the following:
 Amazon Web Services
 Apple macOS
 Google Cloud
 Hewlett Packard Unix
 IBM Advanced Interactive eXecutive
 Microsoft Azure
 Microsoft Windows Server and AD
 Oracle Solaris
 Red Hat Linux
 FreeBSD
 OpenBSD

14. What is meant by SSL AND TLS


 SSL is an acronym for Secure Sockets Layer. A type of digital security that
allows encrypted communication between a website and a web browser. The
technology is currently deprecated and has been replaced entirely by TLS.
 TLS stands for Transport Layer Security and it ensures data privacy the
same way that SSL does. Since SSL is actually no longer used, this is the
correct term that people should start using.
 HTTPS is a secure extension of HTTP. Websites that install and configure an
SSL/TLS certificate can use the HTTPS protocol to establish a secure
connection with the server.
 The goal of SSL/TLS is to make it safe and secure to transmit sensitive
information including personal data, payment or login information.
 It’s an alternative to plain text data transfer in which your connection to a
server is unencrypted, and it makes it harder for crooks and hackers to
snoop on the connection and steal your data.
 Most people are familiar with SSL/TLS certificates, which are used by
webmasters to secure their websites and to provide a secure way for
people to carry out transactions.
 You can tell when a website is using one because you’ll see a little
padlock icon next to the URL in the address bar. (A short sketch of opening a
TLS connection from code follows this list.)
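As a quick illustration, Python's standard ssl module can open an encrypted connection and show the negotiated protocol version and the server certificate (example.com is just a placeholder host):

import socket
import ssl

hostname = "example.com"                  # placeholder host
context = ssl.create_default_context()    # verifies the server certificate by default

# Wrap a plain TCP socket in TLS; the handshake happens inside wrap_socket().
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls_sock:
        print(tls_sock.version())                  # e.g. TLSv1.3
        print(tls_sock.getpeercert()["subject"])   # certificate subject details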

15. What is OWASP?


OWASP stands for Open Web Application Security Project, a non-profit organization
that provides unbiased guides, security best practices, tools, and recommendations
for building secure web applications.

16. Parsing
Data parsing is converting data from one format to another. Widely used for data
structuring, it is generally done to make the existing, often unstructured, unreadable
data more comprehensible.
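For instance, a short Python sketch of parsing (the raw record layout and field names are invented for the example):

import csv
import io

# Hypothetical unstructured export: one user, department, and login count per line.
raw = "alice,finance,42\nbob,engineering,17\n"

# Parsing converts the raw text into structured records that are easy to query.
records = [
    {"user": row[0], "dept": row[1], "logins": int(row[2])}
    for row in csv.reader(io.StringIO(raw))
]
print(records[0])   # {'user': 'alice', 'dept': 'finance', 'logins': 42}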

17. NIC
A Network Interface Card (NIC) provides networking capabilities for a computer. It
may enable a wired connection (Ethernet) or a wireless connection (Wi-Fi) to a local
area network.

18. Splunk components


A. Splunk Forwarder: The Splunk Forwarder is an agent deployed on IT systems that
collects logs and sends them to the indexer. It is also used to gather real-time
data and help users analyze it. It is scalable and consumes less processing power
compared to other monitoring tools. There are two categories of Splunk forwarders:
a) Universal Forwarder b) Heavy Forwarder.
Splunk Universal Forwarder – Used to forward the raw data collected at the source.
The component performs minimal processing on incoming data streams before
forwarding them to an indexer; because a lot of unnecessary data is also forwarded,
this can cause performance overheads.
Splunk Heavy Forwarder – The heavy forwarder reduces this problem, as one level of
data processing takes place at the source itself before the data is forwarded to
the indexer. It routes the data to the indexer, saving bandwidth and storage.
Hence, the main role of the heavy forwarder is to parse the data at the source.
B. Splunk Indexer: The Splunk indexer is the component that indexes and stores data
coming from the forwarder. After transforming the data into events and storing it on
disk so that search operations can be performed efficiently, Splunk indexes the data
and creates files, separating them into directories called buckets. Those files
consist of:
 A compressed form of raw data
 Indexes that point to the raw data, i.e., TSIDX or index files.
 Some metadata files
Incoming data is processed to enable fast search and analysis. This process is also
known as event processing. Data replication is another advantage of the Splunk indexer.
C. Splunk Search Head: The search head is the component used for interaction with
Splunk. Users perform various operations through the user interface provided to
them; by entering search terms, you can search and query the data stored in the
indexer and get the expected results. A Splunk instance can function as both a
search head and a search peer. Search heads that perform searching but not indexing
are commonly known as dedicated search heads, whereas a search peer performs
indexing and responds to search requests from other search heads.

19. Splunk licences


 Splunk Enterprise license
 Splunk developer licenses
 The Splunk Enterprise trial license
 Sales Trial license
 Free license
 Forwarder license
 Beta license

20. License master


1. Licensing master - controls one or more license slaves. From the license master,
you can define stacks, pools, add licensing capacity, and manage license slaves.
The license master ensures that the right amount of data gets indexed. It ensures that
the environment remains within the limits of the purchased volume as Splunk
license depends on the data volume, which comes to the platform within a 24-hour
window.
2. Licensing Slave - A license slave is a Splunk indexer which is a member of one or
more license pools. The access a license slave has to license volume is controlled by
its license master.
a. standalone licensing master -
If you have a single Splunk indexer and want to manage its licenses, you can run
it as its own license master, install one or more Enterprise licenses on it and it
will manage itself as a license slave.
b. Central license master
If you have more than one indexer and want to manage their access to
purchased license capacity from a central location, configure a central license
master and add the indexers to it as license slaves.

21. Name some important configuration files of Splunk


Commonly used Splunk configuration files are:
 Inputs file
 Transforms file
 Server file
 Indexes file
 Props file

22. Explain license violation in Splunk.


It is a warning error that occurs when you exceed the data limit. The warning will
persist for 14 days. With a commercial license, you may have 5 warnings within a
1-month rolling window before your indexer search results and reports stop
triggering. In the free version, however, only 3 warnings are allowed.

23. What are the most important configuration files in Splunk?


Following is the list of most important configuration files in Splunk:
 props.conf
 indexes.conf
 inputs.conf
 transforms.conf
 server.conf

24. What are the common port numbers used by Splunk?


Following is the list of the common port numbers used by Splunk:
 Splunk Network port: 514 (Used to get data from the Network port, i.e., UDP
data)
 Splunk Web Port: 8000
 Splunk Index Replication Port: 8080
 Splunk Management Port: 8089
 KV store: 8191
 Splunk Indexing Port: 9997

25. What are the features not available in Splunk Free?


Following is a list of features that are not available in the Splunk Free version:
 Authentication and scheduled searches/alerting
 Deployment management
 Distributed search
 Forwarding in TCP/HTTP (to non-Splunk)

26. What are the different types of Splunk dashboards available in Splunk?
Following are the three different types of Splunk dashboards available in Splunk:
 Real-time dashboards
 Dynamic form-based dashboards
 Dashboards for scheduled reports

27. What will happen if the License Master is unreachable in Splunk?


In Splunk, if the license master is not available or unreachable, the license slave will
start a 24-hour timer, after which the search will be blocked on the license slave
(though indexing continues). After that, the users will not be able to search for data
in that slave until it can reach the license master again.

28. What are the different types of search modes supported in Splunk?
Splunk supports the following three search modes:
 Fast mode
 Smart mode
 Verbose mode
https://www.devopsschool.com/blog/top-50-splunk-interview-questions-and-
answers/

29. Event IDs


https://www.xplg.com/windows-server-security-events-list/

30. DDoS and DoS?


1. A DoS (Denial of Service) attack is an attack in which a computer sends a
massive amount of traffic to a victim's computer and shuts it down. When carried
out against a website, a DoS attack is used to make the website unavailable to its
users; it overwhelms the website's internet-facing server by sending a large volume
of traffic to it.
2. A DDoS (Distributed Denial of Service) attack is a DoS attack carried out from
many different locations using many systems.
Difference between DoS and DDoS attacks:
 DoS stands for Denial of Service attack; DDoS stands for Distributed Denial of Service attack.
 In a DoS attack, a single system targets the victim system; in a DDoS attack, multiple systems attack the victim's system.
 In DoS, the victim PC is loaded with packets sent from a single location; in DDoS, the packets are sent from multiple locations.
 A DoS attack is slower compared to DDoS; a DDoS attack is faster than a DoS attack.
 DoS can be blocked easily, as only one system is used; DDoS is difficult to block, as multiple devices send packets and attack from multiple locations.
 In a DoS attack, only a single device is used with DoS attack tools; in a DDoS attack, bots are used to attack at the same time.
 DoS attacks are easy to trace; DDoS attacks are difficult to trace.
 The volume of traffic in a DoS attack is less compared to DDoS; DDoS attacks allow the attacker to send massive volumes of traffic to the victim network.
 Types of DoS attacks: Buffer overflow attacks, Ping of Death or ICMP flood, Teardrop attack, Flooding attack. Types of DDoS attacks: Volumetric attacks, Fragmentation attacks, Application layer attacks, Protocol attacks.

31. What is an IP address and what are its types?


IP address stands for internet protocol address; it is an identifying number that is
associated with a specific computer or computer network. When connected to the
internet, the IP address allows the computers to send and receive information.
There are four types of IP addresses: public, private, static, and dynamic.
Public and private are indicative of the location of the network, while static and
dynamic indicate permanency:
 Private IP addresses are used inside a network.
 Public IP addresses are used outside of a network.
 A static IP address is one that was manually created, as opposed to having
been assigned.
 Dynamic IP addresses are only active for a certain amount of time, after
which they expire.

32. What is SQL Injection?


A Structured Query Language (SQL) injection attack occurs on a database-driven
website when the hacker manipulates a standard SQL query. It is carried out by
injecting malicious code into a vulnerable website search box, thereby making the
server reveal crucial information.
This results in the attacker being able to view, edit, and delete tables in the
database. Attackers can also obtain administrative rights this way.
To prevent a SQL injection attack:
 Use an intrusion detection system, as it is designed to detect unauthorized
access to a network.
 Carry out validation of the user-supplied data; a validation process keeps the
user input in check (see the sketch below).
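A minimal sketch of the code-level defense, using Python's built-in sqlite3 module (the table, column, and input values are invented for the example):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Vulnerable pattern: string concatenation lets the input rewrite the query.
# query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# Safe pattern: the placeholder keeps the input as data, never as SQL syntax.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)   # [] -- the injection string matches no user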

33. Zero-day exploit


If a hacker manages to exploit a vulnerability before software developers can find
a fix, that exploit becomes known as a zero-day attack.

34. Eavesdropping
Eavesdropping attacks happen when cyber criminals or attackers listen in to
network traffic traveling over computers, servers, mobile devices and Internet of
Things (IoT) devices.

35. Port number 3389 & 389 & 69


3389 – RDP (Remote Desktop Protocol)
389 – LDAP (Lightweight Directory Access Protocol)
69 – TFTP (Trivial File Transfer Protocol)

36. How do you check whether a log source is running or not?


index="firewall" OR index="cisco" OR index="main"

37. How to find the Source and Destination IP in Splunk


index=firewall (src_ip=10.1.1.1 OR src_ip=192.168.1.1) AND dest_ip=172.16.1.1

38. The following are the responsibilities of an L2 Security Analyst:


 They conduct in-depth analyses of escalated alerts
 They secure the privacy and security of sensitive information
 They verify the incidents that SOC operators have reported
 They help with incident remediation
 They help L1 Security Analysts in the analysis of alerts
 They train the L1 Security Analysts
 They handle primary SIEM challenges
 They keep SOPs and SOC processes up to date and improve them

39. What is the difference between firewall deny and drop?


Answer: DENY RULE: If the firewall is set to deny rule, it will block the connection
and send a reset packet back to the requester. The requester will know that the
firewall is deployed.
DROP RULE: If the firewall is set to drop rule, it will block the connection request
without notifying the requester.
It is best to set the firewall to deny outgoing traffic and drop incoming traffic
so that an attacker will not know whether a firewall is deployed or not.

40. Being a SOC Analyst, what would you do if you found 300 alerts triggered at
once?
Answer: If multiple alerts trigger at the same time, there could be the following three
possibilities:
A single alert may have triggered more than once: If a single alert triggers more than
once, I will distinguish the duplicate alerts.
If the alerts are different: I will prioritize them and choose the one having a higher
impact.
If the alerts are for a new correlation rule: Then alerts may be misconfigured. I will
inform the SIEM Engineer.
(These types of questions are asked by the interviewer to check the practical or
applied knowledge of the candidates)

41. What is log parsing?


Each log has a repeating data format that includes data fields and values.
However, the format varies between systems, and even between different logs on the
same system. A log parser is a software component that can take a specific log
format and convert it to structured data. Log aggregation software includes dozens
or hundreds of parsers written to process logs from common systems.
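As a sketch of what a parser does, the snippet below converts one made-up firewall-style log line into structured fields (the log format and field names are invented for the example):

import re

# Invented log format: timestamp, action, source IP, and destination IP.
line = "2023-04-01T12:30:45Z action=deny src=10.1.1.1 dst=172.16.1.1"

pattern = re.compile(r"(?P<ts>\S+) action=(?P<action>\w+) src=(?P<src>\S+) dst=(?P<dst>\S+)")

match = pattern.match(line)
event = match.groupdict() if match else {}
print(event)   # {'ts': '2023-04-01T12:30:45Z', 'action': 'deny', 'src': '10.1.1.1', 'dst': '172.16.1.1'}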

42. What are the disadvantages of using Splunk?


Some disadvantages of using Splunk tool are:
 Splunk can prove expensive for large data volumes.
 Dashboards are functional but not as effective as some other monitoring tools.
 Its learning curve is steep, and you need Splunk training as it’s a multi-tier
architecture, so you need to spend a lot of time learning this tool.
 Searches are difficult to understand, especially regular expressions and search
syntax.
43. Explain search factor and replication factor?
The search factor determines the number of searchable copies of data maintained by
the indexer cluster, i.e., the number of searchable copies available in each bucket.
The replication factor determines the number of copies of data maintained by the
cluster as well as the number of copies that each site maintains.

44. How can identity theft be prevented?


 Ensure strong password
 Avoid sharing confidential information online on social media
 Shop from known and trusted websites
 Use the latest version of browsers
 Install advanced malware and spyware protection tools
 Update your system and software

45. How does encryption work? Why is it important?


Encryption is a process that encodes a message or file so that it can only be read by
certain people. Encryption uses an algorithm to scramble or encrypt data and then
uses a key for the receiving party to unscramble or decrypt the information. Keys are
usually generated with random number generators or computer algorithms that
mimic random number generators.

Key: Random string of bits created specifically for scrambling and unscrambling
data. These are used to encrypt and/or decrypt data. Each key is unique and created
via an algorithm to make sure it is unpredictable. Longer keys are harder to crack.
Common key lengths are 128 bits for symmetric key algorithms and 2048 bits for
public-key algorithms.

Private Key (or Symmetric Key): This means that the encryption and decryption keys
are the same. The two parties must have the same key before they can achieve secure
communication.
Public Key: This means that the encryption key is published and available for
anyone to use. Only the receiving party has access to the decryption key that enables
them to read the message.
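A minimal symmetric-key sketch using the third-party cryptography package (an assumption: the package must be installed separately, e.g. pip install cryptography):

from cryptography.fernet import Fernet

# Symmetric (private key) encryption: the same key encrypts and decrypts.
key = Fernet.generate_key()          # random key, generated per use
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"card number 4111-1111-1111-1111")
print(ciphertext)                    # unreadable without the key
print(cipher.decrypt(ciphertext))    # b'card number 4111-1111-1111-1111'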

46. How does QRadar SIEM collect security data?


IBM QRadar collects log data from sources in an enterprise's information system,
including network devices, operating systems, applications and user activities. The
QRadar SIEM analyzes log data in real-time, enabling users to quickly identify and
stop attacks.

47. What is parsing in QRadar?


When you send your log file data to IBM Security QRadar, it first is parsed inside a
Device Support Module (DSM) so that QRadar can fully utilize the normalized data
for event and offense processing.

48. What is Port Scanning?


Port scanning is the technique used to identify open ports and services available
on a host. Hackers use port scanning to find information that can be helpful for
exploiting vulnerabilities. Administrators use port scanning to verify the security
policies of the network. Some of the common port scanning techniques (a minimal
connect-scan sketch follows the list) are:
 Ping Scan
 TCP Half-Open
 TCP Connect
 UDP
 Stealth Scanning
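A minimal TCP connect-scan sketch with Python's standard socket module (the target and port list are placeholders; only scan hosts you are authorized to test):

import socket

target = "127.0.0.1"                 # placeholder target
ports = [22, 80, 443, 3389]

# TCP connect scan: a completed three-way handshake means the port is open.
for port in ports:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        result = s.connect_ex((target, port))   # 0 means the connection succeeded
        print(port, "open" if result == 0 else "closed/filtered")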

49. What is an ARP and how does it work?


Address Resolution Protocol (ARP) is a protocol for mapping an Internet Protocol
address (IP address) to a physical machine address that is recognized in the local
network.
When an incoming packet destined for a host machine on a particular local area
network arrives at a gateway, the gateway asks the ARP program to find a physical
host or MAC address that matches the IP address.
The ARP program looks in the ARP cache and, if it finds the address, provides it so
that the packet can be converted to the right packet length and format and sent to the
machine.
If no entry is found for the IP address, ARP broadcasts a request packet in a special
format to all the machines on the LAN to see if one machine knows that it has that IP
address associated with it.

50. Horizontal scan and vertical scan


A horizontal scan is a scan against a group of IP addresses for a single port.
A vertical scan is a scan of a single IP address for multiple ports.

51. Whaling
A whaling attack, also known as whaling phishing or a whaling phishing attack, is a
specific type of phishing attack that targets high-profile employees, such as the chief
executive officer or chief financial officer, in order to steal sensitive information from
a company. In many whaling phishing attacks, the attacker's goal is to manipulate
the victim into authorizing high-value wire transfers to the attacker.
52. Steps in Incident response plan
 Preparation
 Identification
 Containment
 Eradication
 Recovery
 Lessons learned.
53. If your system is attacked with ransomware, how will you react?
 Isolate the Affected Systems
 Report the attack
 Shut down "Patient Zero"
 Secure your Backups
 Disable all Maintenance Tasks
 Backup the Infected Systems
 Identify the Strain
 Decide Whether to Pay the Ransom

54. Brute force attack


A brute force attack uses trial-and-error to guess login info, encryption keys, or find
a hidden web page. Hackers work through all possible combinations hoping to
guess correctly. The attacker systematically checks all possible passwords and
passphrases until the correct one is found.
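To illustrate the trial-and-error idea, this toy dictionary-style sketch tries a short candidate list against a known password hash (the wordlist and target are made up; real attacks iterate over millions of candidates):

import hashlib

# SHA-256 hash of the "unknown" password (here, the word 'sunshine').
target_hash = hashlib.sha256(b"sunshine").hexdigest()

candidates = ["password", "123456", "letmein", "sunshine", "qwerty"]

for guess in candidates:
    if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
        print("Match found:", guess)
        break
else:
    print("No match in wordlist")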

55. Types of hackers?


 White Hat / Ethical Hackers.
 Black Hat Hackers.
 Gray Hat Hackers.
 Script Kiddies.
 Green Hat Hackers.
 Blue Hat Hackers.
 Red Hat Hackers.
 State/Nation Sponsored Hackers.
 Hacktivist
 Malicious insider or Whistleblower

56. Why is cybersecurity important?


Cybersecurity is crucial because it safeguards all types of data against theft and loss.
Sensitive data, protected health information (PHI), personally identifiable
information (PII), intellectual property, personal information, data, and government
and business information systems are all included.

57. MAC address


A Media Access Control address (MAC address) is a hardware identifier that
uniquely identifies each device on a network. It is primarily assigned by the
manufacturer and is often found on a device's network interface controller (NIC) card. A MAC
address can also be referred to as a burned-in address, Ethernet hardware address,
hardware address, or physical address.
58. Hub and switch difference
 A hub operates at the Physical layer of the OSI model; a switch operates at the Data Link layer.
 A hub uses broadcast transmission; a switch supports unicast, multicast, and broadcast transmission.
 A hub has 4/12 ports; a switch has 24 to 48 ports.
 A hub has only one collision domain; on a switch, different ports have their own collision domain.
 A hub works in half-duplex transmission mode; a switch works in full-duplex transmission mode.
 A hub provides no packet filtering; a switch provides packet filtering.
 A hub cannot be used as a repeater; a switch can be used as a repeater.
 A hub is not an intelligent device: it sends messages to all ports and hence is comparatively inexpensive. A switch is an intelligent device that sends messages to the selected destination, so it is more expensive.
 A hub is simply an old type of device and is not generally used; a switch is a sophisticated and widely used device.
 Hacking of systems attached to a hub is complex; hacking of systems attached to a switch is a little easier.

59. Inbound and outbound traffic


Inbound traffic originates from outside the network, while outbound traffic
originates inside the network.

60. Difference between port numbers and protocols


A protocol number is a field in the IP header that identifies the protocol to which
the payload must be delivered. A port number identifies a specific application-layer
process or service to which an Internet or other network message is to be forwarded
when it arrives at a server. The snippet after the examples below shows both.
Examples:
Protocol Number---> ip=0 / icmp=1 / tcp=6 / udp=17
Port Number: ---> http=80 / https=443 / ftp=20,21 / telnet=23
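On most systems the difference can be seen directly with Python's socket module, which exposes both the IP protocol numbers and the well-known service ports:

import socket

# Protocol numbers: values carried in the IP header's protocol field.
print(socket.IPPROTO_ICMP, socket.IPPROTO_TCP, socket.IPPROTO_UDP)   # 1 6 17

# Port numbers: identify the application-layer service on top of TCP or UDP.
print(socket.getservbyname("http", "tcp"))     # 80
print(socket.getservbyname("https", "tcp"))    # 443
print(socket.getservbyname("telnet", "tcp"))   # 23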

61. AAA
AAA is a standard-based framework used to control who is permitted to use
network resources (through authentication), what they are authorized to do
(through authorization), and capture the actions performed while accessing the
network (through accounting).
Authentication –
The process of identifying whether the user who wants to access the network
resources is valid or not, by asking for credentials such as a username and
password. Common methods are to put authentication on the console port, AUX port,
or VTY lines.
As network administrators, we can control how a user is authenticated if someone
wants to access the network. Some of these methods include using the local database
of that device (router) or sending authentication requests to an external server like
the ACS server. To specify the method to be used for authentication, a default or
customized authentication method list is used.
Authorization –
It provides capabilities to enforce policies on network resources after the user has
gained access to the network resources through authentication. After the
authentication is successful, authorization can be used to determine what resources
the user is allowed to access and which operations can be performed.
For example, if a junior network engineer (who should not access all the resources)
wants to access the device then the administrator can create a view that will allow
particular commands only to be executed by the user (the commands that are
allowed in the method list). The administrator can use the authorization method list
to specify how the user is authorized to network resources i.e through a local
database or ACS server.
Accounting –
It provides means of monitoring and capturing the events done by the user while
accessing the network resources. It even monitors how long the user has access to
the network. The administrator can create an accounting method list to specify what
should be accounted for and to whom the accounting records should be sent.
AAA implementation: AAA can be implemented by using the local database of the
device or by using an external ACS server.
local database – If we want to use the local running configuration of the router or
switch to implement AAA, we should create users first for authentication and
provide privilege levels to users for Authorization.
ACS server – This is the common method used. An external ACS server is used (can
be ACS device or software installed on Vmware) for AAA on which configuration on
both router and ACS is required. The configuration includes creating a user, separate
customized method list for authentication, Authorization, and Accounting.
The client or Network Access Server (NAS) sends authentication requests to the
ACS server and the server takes the decision to allow the user to access the network
resource or not according to the credentials provided by the user.
Note – If the ACS server fails to authenticate, the administrator should mention
using the local database of the device as a backup, in the method list, to implement
AAA.

62. What is DNS


The Domain Name System (DNS) is the phonebook of the Internet: a naming database in
which internet domain names are located and translated into Internet Protocol (IP)
addresses. DNS maps the names people use to locate a website to the IP address that
a computer uses to locate that website (a short resolver example follows the list
of DNS server types below).
 DNS recursor - The recursor can be thought of as a librarian who is asked to
go find a particular book somewhere in a library. The DNS recursor is a
server designed to receive queries from client machines through applications
such as web browsers. Typically, the recursor is then responsible for making
additional requests in order to satisfy the client’s DNS query.
 Root nameserver - The root server is the first step in translating (resolving)
human readable host names into IP addresses. It can be thought of like an
index in a library that points to different racks of books - typically it serves as
a reference to other more specific locations.
 TLD nameserver - The top-level domain server (TLD) can be thought of as a
specific rack of books in a library. This nameserver is the next step in the
search for a specific IP address, and it hosts the last portion of a hostname (In
example.com, the TLD server is “com”).
 Authoritative nameserver - This final nameserver can be thought of as a
dictionary on a rack of books, in which a specific name can be translated into
its definition. The authoritative nameserver is the last stop in the nameserver
query. If the authoritative name server has access to the requested record, it
will return the IP address for the requested hostname back to the DNS
Recursor (the librarian) that made the initial request.
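The end result of that chain of lookups is what a simple resolver call returns; for example, with Python's standard library (example.com is a placeholder name):

import socket

hostname = "example.com"
print(socket.gethostbyname(hostname))     # one IPv4 address for the name

# getaddrinfo returns all records (IPv4 and IPv6) the resolver knows about.
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, None):
    print(family.name, sockaddr[0])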

63. 2 factor authentication


Two-factor authentication (also known as 2FA or dual authentication) is a type of
multi-factor authentication (MFA) that increases account security by using two
methods to verify your identity. Online, 2FA usually refers to a second layer of
security on top of a password.
Using a bank card at an ATM along with a PIN code to withdraw money is a
common example of 2FA. But security questions and CAPTCHAs are not.

64. Why do we need two-factor authentication (2FA)?


We need two-factor authentication because it’s a more effective way to control access
than keeping your personal data protected with only a password. If someone hacks
an account protected by 2FA, they’ll still need to know the second access factor, like
an SMS verification code or your fingerprint, to access your account.

2FA: the three factors


The three factors that can be used for two-factor authentication are something you
know (like a password), something you have (like a bank card), and something you
are (like face ID). 2FA requires two of these three factors.
Here are the three main 2FA authentication factors:
 Knowledge factor
This is something you know. It can’t be physically lost or found, but it can be copied
— like a password or PIN code.
 Possession factor
This is something you have that can’t be easily copied, but can be stolen — like a
bank card or physical key.
 Inherence (biometric) factor
This is something you are, which can’t be easily faked — like a fingerprint or face ID.

To qualify as two-factor authentication, the two access methods used must be two
different factor types. Using a username and password isn’t 2FA because both
factors are knowledge factors. Even an extra security question still doesn’t qualify as
2FA, because a security question is also a knowledge factor.
Now, think of your garage door code (knowledge factor) and your house key
(possession factor). If you want to enter your locked house through the garage, you
need both. This is an example of two-factor authentication, because it relies on
something you know (code) and something you have (key). Without one of them,
you’re not getting through that door easily.

Here are some other common examples of two-factor authentication:


 Withdrawing money from an ATM
o You know your PIN code
o You have your bank card
 Accessing online accounts with one-time SMS verification (OTP) codes
o You know your username and password
o You have your phone
 Traveling internationally
o You have your passport
o You are you, verified by facial recognition, fingerprints, or retina scans

65. Difference Between Threat, Vulnerability, and Risk


Threat:
 Takes advantage of vulnerabilities in the system and has the potential to steal and damage data.
 Generally cannot be controlled.
 May or may not be intentional.
 Can be blocked by managing the vulnerabilities.
 Can be detected by anti-virus software and threat detection logs.
Vulnerability:
 A weakness in hardware, software, or design that might allow cyber threats to happen.
 Can be controlled.
 Generally unintentional.
 Addressed by vulnerability management: a process of identifying the problems, then categorizing, prioritizing, and resolving the vulnerabilities in that order.
 Can be detected by penetration testing hardware and vulnerability scanners.
Risk:
 The potential for loss or destruction of data caused by cyber threats.
 Can be controlled.
 Always intentional.
 Reduced by limiting data transfers, downloading files from reliable sources, updating software regularly, hiring a professional cybersecurity team to monitor data, developing an incident management plan, etc.
 Can be detected by identifying mysterious emails, suspicious pop-ups, unusual password activity, a slower than normal network, etc.

66. OSI layer attacks


Typical devices/protocols, function, and cyber-attack/threat examples per OSI layer:
 Application layer – Protocols: FTP, HTTP, IMAP, SMTP, NTP, HTTPS, LDAP, RTP, Telnet, DNS, DHCP, POP3, RTSP, SSH, SIP, TFTP. Function: user interface. Examples: ransomware, viruses, worms, botnets, MITM, ARP spoofing, keyloggers, rootkits.
 Presentation layer – Formats: JPG, PNG, MPEG, MIDI, PICT, TIFF. Function: data format and encryption. Examples: malware, spyware.
 Session layer – Protocols: SQL, RPC, NFS, NetBIOS, SCP, ZIP, PAP. Function: process-to-process communication. Examples: cache poisoning, DNS redirecting.
 Transport layer – Protocols: TCP, UDP. Function: end-to-end communication maintenance. Examples: RIP attacks, SYN flooding.
 Network layer – Devices/protocols: ICMP, IGMP, IPSec, IPv4, IPv6, IPX, RIP, L3 switches, routers. Function: routing data, logical addressing, WAN delivery. Examples: IP smurfing, address spoofing, vulnerable old firmware.
 Data link layer – Devices/protocols: L2 switches, bridges, ARP, ATM, CDP, STP, PPP, HDLC, FDDI, MPLS, Token Ring, Frame Relay. Function: physical addressing, LAN delivery. Examples: misconfigured devices, default passwords.
 Physical layer – Media/devices: physical cabling, Bluetooth, Ethernet, DSL, ISDN, Wi-Fi. Function: transmitting bits. Examples: environmental and physical threats such as dust, water, and rodents.

67. HTTP response status codes


HTTP response status codes indicate whether a specific HTTP request has been
successfully completed. Responses are grouped in five classes:
 Informational responses (100 – 199)
 Successful responses (200 – 299)
 Redirection messages (300 – 399)
 Client error responses (400 – 499)
 Server error responses (500 – 599)
The status codes listed below are defined by RFC 9110
https://developer.mozilla.org/en-US/docs/Web/HTTP/Status
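For instance, the status code of a response can be read directly with Python's standard library (the URLs are placeholders; the exact codes depend on the server):

import urllib.error
import urllib.request

# Fetch a page and inspect the HTTP status code returned by the server.
with urllib.request.urlopen("https://example.com/") as response:
    print(response.status)    # e.g. 200 for a successful response

# An error response is raised as an HTTPError carrying the status code.
try:
    urllib.request.urlopen("https://example.com/no-such-page")
except urllib.error.HTTPError as err:
    print(err.code)           # e.g. 404 if the server returns Not Found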

68. Difference between TCP and UDP


 Type of service: TCP is a connection-oriented protocol, meaning the communicating devices should establish a connection before transmitting data and close it after transmitting the data. UDP is a datagram-oriented protocol with no overhead for opening, maintaining, and terminating a connection; UDP is efficient for broadcast and multicast types of network transmission.
 Reliability: TCP is reliable, as it guarantees the delivery of data to the destination router. In UDP, the delivery of data to the destination cannot be guaranteed.
 Error checking mechanism: TCP provides extensive error-checking mechanisms because it provides flow control and acknowledgment of data. UDP has only a basic error-checking mechanism using checksums.
 Acknowledgment: TCP uses acknowledgment segments; UDP has no acknowledgment segments.
 Sequence: Sequencing of data is a feature of TCP, meaning packets arrive in order at the receiver. There is no sequencing of data in UDP; if ordering is required, it has to be managed by the application layer.
 Speed: TCP is comparatively slower than UDP; UDP is faster, simpler, and more efficient than TCP.
 Retransmission: Retransmission of lost packets is possible in TCP, but not in UDP.
 Header length: TCP has a variable-length header of 20 to 60 bytes; UDP has a fixed 8-byte header.
 Weight: TCP is heavy-weight; UDP is lightweight.
 Handshaking techniques: TCP uses handshakes such as SYN, ACK, and SYN-ACK; UDP is a connectionless protocol with no handshake.
 Broadcasting: TCP does not support broadcasting; UDP supports broadcasting.
 Protocols: TCP is used by HTTP, HTTPS, FTP, SMTP, and Telnet; UDP is used by DNS, DHCP, TFTP, SNMP, RIP, and VoIP.
 Stream type: A TCP connection is a byte stream; a UDP connection is a message stream.
 Overhead: TCP overhead is low but higher than UDP; UDP overhead is very low.

A short example to understand the differences clearly:


Suppose there are two houses, H1 and H2, and a letter has to be sent from H1 to H2.
But there is a river in between those two houses. Now how can we send the letter?
Solution 1: Make a bridge over the river and then it can be delivered.
Solution 2: Get it delivered through a pigeon.
Consider the first solution as TCP. A connection has to be made (bridge) to get the
data (letter) delivered.
The data is reliable because it will reach the other end directly without data loss
or error.
And the second solution is UDP. No connection is required for sending the data.
The process is fast as compared to TCP, where we need to set up a
connection(bridge). But the data is not reliable: we don’t know whether the pigeon
will go in the right direction, or it will drop the letter on the way, or some issue is
encountered in mid-travel.
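The difference also shows up directly in socket code; a minimal sketch (the hosts, port, and payloads are placeholders):

import socket

# TCP: a connection (the "bridge") must be established before data is sent.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))                        # three-way handshake happens here
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(64))                                     # reliable, ordered byte stream
tcp.close()

# UDP: no connection; each datagram (the "pigeon") is sent on its own.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("192.0.2.1", 9999))               # fire-and-forget, delivery not guaranteed
udp.close()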

69. What is trace route


A traceroute is a function which traces the path from one network to another. It
allows us to diagnose the source of many problems.
It is a network diagnostic tool used to track in real time the pathway taken by packet
on an IP network from source to destination, reporting the IP addresses of all routers
pinged in between. Traceroute also records the time taken for each hop the packet
makes during its route to the destination.
Traceroute most commonly uses Internet Control Message Protocol (ICMP) echo
packets with variable time to live (TTL) values.
Traceroute is a useful tool for determining the response delays and routing loops
present in a network pathway across packet switched nodes. It also helps to locate
any points of failure encountered while en route to a certain destination.

70. Why Use an HTTPS Port


For website owners, utilizing a secure channel is essential. Here are four main
reasons you should switch to an HTTPS port:

 Sensitive information protection. One of the benefits of using SSL is that it


encrypts and authenticates data as it’s being transferred. It ensures data
security in transit and protects it from man-in-the-middle (MITM) attacks.
 Keeps online transactions secure. eCommerce site owners must have an SSL
certificate to encrypt financial data and adhere to the Payment Card Industry
Data Security Standards (PCI DSS) requirements.
 Increases website’s rank on Search Engine Result Pages (SERP). HTTPS is
an important metric for search engine optimization (SEO). Therefore, sites
with an SSL certificate will rank better on search results.
 Improves customers’ trust and conversion rate. An HTTPS site assures
visitors that their sensitive information is secure, making them more likely to
revisit your site.

71. What is 8443 and why it is used for?


Port number 8443 is an alternative HTTPS port that the Apache Tomcat web server commonly
uses for its SSL/TLS service.
In addition, this port is primarily used for HTTPS client authentication connections.
The HTTPS port provides encrypted traffic by generating an authentication key pair for the
user that is kept within the web browser. The server will then verify the authenticity of the
private key before establishing a secure connection.
Tomcat is a core project of the Apache Software Foundation's Jakarta project, developed by
Apache, Sun, and several other companies and individuals.

The default https port number is 443, so Tomcat uses 8443 to distinguish this port.

72. Difference Between HTTPS Port 443 and Port 8443


When Tomcat sets the HTTPS port, the differences between port 8443 and port 443 are:
Port 8443 requires the port number to be added to the URL when visiting the site (the HTTPS
equivalent of HTTP port 8080); it cannot be reached through the domain name alone. For
example: https://domainname.com:8443.
Port 443 can be accessed without a port number (the HTTPS equivalent of HTTP port 80); it
can be reached directly through the domain name. Example: https://domainname.com.
73. CIA triad
The CIA triad is one of the most important models designed to guide policies for
information security within an organization.
CIA stands for:
 Confidentiality
 Integrity
 Availability

These are the objectives that should be kept in mind while securing a network.
Confidentiality:
Confidentiality means that only authorized individuals/systems can view sensitive
or classified information. The data being sent over the network should not be
accessed by unauthorized individuals. The attacker may try to capture the data
using different tools available on the Internet and gain access to your information. A
primary way to avoid this is to use encryption techniques to safeguard your data so
that even if the attacker gains access to your data, he/she will not be able to decrypt
it. Encryption standards include AES (Advanced Encryption Standard) and DES
(Data Encryption Standard). Another way to protect your data is through a VPN
tunnel. VPN stands for Virtual Private Network and helps the data to move securely
over the network.

Integrity:
The next thing to talk about is integrity. Well, the idea here is to make sure
that data has not been modified. Corruption of data is a failure to maintain data
integrity. To check if our data has been modified or not, we make use of a hash
function.
We have two common types: SHA (Secure Hash Algorithm) and MD5 (Message Digest 5).
MD5 is a 128-bit hash and SHA-1 is a 160-bit hash. There are also other SHA methods
we could use, like SHA-0, SHA-2, and SHA-3.
Let’s assume Host ‘A’ wants to send data to Host ‘B’ maintaining integrity. A
hash function will run over the data and produce an arbitrary hash value H1 which
is then attached to the data. When Host ‘B’ receives the packet, it runs the same hash
function over the data which gives a hash value H2. Now, if H1 = H2, this means
that the data’s integrity has been maintained and the contents were not modified.
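The H1 = H2 comparison can be sketched in a few lines with Python's hashlib, using SHA-256 (one of the SHA-2 methods mentioned above):

import hashlib

data = b"transfer 100 to account 42"

# Host A computes H1 over the data and sends both to Host B.
h1 = hashlib.sha256(data).hexdigest()

# Host B recomputes the hash (H2) over what it received.
received = data                    # change this to b"transfer 900 to account 42" to see a mismatch
h2 = hashlib.sha256(received).hexdigest()

print("integrity maintained" if h1 == h2 else "data was modified")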

Availability:
This means that the network should be readily available to its users. This applies to
systems and to data. To ensure availability, the network administrator should
maintain hardware, make regular upgrades, have a plan for fail-over, and prevent
bottlenecks in a network. Attacks such as DoS or DDoS may render a network
unavailable as the resources of the network get exhausted. The impact may be
significant to the companies and users who rely on the network as a business tool.
Thus, proper measures should be taken to prevent such attacks.

74. Cyber kill chain process


The cyber kill chain is essentially a cybersecurity model created by Lockheed Martin that
traces the stages of a cyber-attack, identifies vulnerabilities, and helps security teams to stop
the attacks at every stage of the chain.
The term kill chain is adopted from the military, which uses this term related to the structure
of an attack. It consists of identifying a target, dispatch, decision, order, and finally,
destruction of the target.
The cyber kill chain consists of 7 distinct steps:
 Reconnaissance
The attacker collects data about the target and the tactics for the attack. This includes
harvesting email addresses and gathering other information.
Automated scanners are used by intruders to find points of vulnerability in the system. This
includes scanning firewalls, intrusion prevention systems, etc to get a point of entry for the
attack.
 Weaponization
Attackers develop malware by leveraging security vulnerabilities. Attackers engineer
malware based on their needs and the intention of the attack. This process also involves
attackers trying to reduce the chances of getting detected by the security solutions that the
organization has in place.
 Delivery
The attacker delivers the weaponized malware via a phishing email or some other medium.
The most common delivery vectors for weaponized payloads include websites, removable
disks, and emails. This is the most important stage where the attack can be stopped by the
security teams.
 Exploitation
The malicious code is delivered into the organization’s system. The perimeter is breached
here. And the attackers get the opportunity to exploit the organization’s systems by
installing tools, running scripts, and modifying security certificates.
Most often, an application or the operating system’s vulnerabilities are targeted. Examples
of exploitation attacks can be scripting, dynamic data exchange, and local job scheduling.

 Installation
A backdoor or remote access trojan is installed by the malware that provides access to the
intruder. This is also another important stage where the attack can be stopped using systems
such as HIPS (Host-based Intrusion Prevention System).
 Command and Control
The attacker gains control over the organization’s systems and network. Attackers gain
access to privileged accounts, attempt brute-force attacks, search for credentials, and
change permissions to take over control.
 Actions on Objective
The attacker finally extracts the data from the system. The objective involves gathering,
encrypting, and extracting confidential information from the organization’s environment.

75. Control of cyber kill chain


Based on these stages, the following layers of control implementation are provided:
 Detect – Determine attempts to penetrate the organization.
 Deny – Stop attacks as they are happening.
 Disrupt – Intervene in the data communication carried out by the attacker and stop it.
 Degrade – Limit the effectiveness of a cybersecurity attack to minimize its ill effects.
 Deceive – Mislead the attacker by providing them with misinformation or misdirecting
them.
 Contain – Contain and limit the scope of the attack so that it is restricted to only part of
the organization.

76. Encryption and hashing


Encryption vs. hashing:
 Definition – Encryption is a two-way function that takes in plaintext data and turns it
into undecipherable ciphertext. Hashing is a one-way method of hiding sensitive data:
using a hashing algorithm, hashing turns plaintext into a unique hash digest that cannot
be reverted to the original plaintext without considerable effort.
 Reversible or irreversible? – Encryption is reversible; hashing is irreversible.
 Variable or fixed-length output? – Encryption produces variable-length output; hashing
produces fixed-length output.
 Types – Encryption: symmetric and asymmetric. Hashing: hashing only.
 Common algorithms – Encryption: AES, RC4, DES, RSA, ECDSA. Hashing: SHA-1,
SHA-2, MD5, CRC32, WHIRLPOOL.
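A small sketch of the reversible vs. irreversible difference, assuming Python with the third-party cryptography package installed (pip install cryptography); the message is hypothetical:

import hashlib
from cryptography.fernet import Fernet

message = b"card number 4111-1111-1111-1111"

# Encryption: two-way - the ciphertext can be decrypted back with the key
key = Fernet.generate_key()
cipher = Fernet(key)
ciphertext = cipher.encrypt(message)
print(cipher.decrypt(ciphertext) == message)   # True, reversible

# Hashing: one-way - a fixed-length digest with no way back to the plaintext
digest = hashlib.sha256(message).hexdigest()
print(len(digest))                             # always 64 hex characters for SHA-256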

77. Security misconfiguration


Security misconfiguration occurs when security settings are not adequately defined in the
configuration process or maintained and deployed with default settings. This might impact
any layer of the application stack, cloud or network. Misconfigured clouds are a central
cause of data breaches, costing organizations millions of dollars.

Vulnerabilities are generally introduced during configuration. Typical misconfiguration
vulnerabilities occur with the use of the following:
 Defaults—including passwords, certificates and installation
 Deprecated protocols and encryption
 Open database instances
 Directory listing—this should not be enabled
 Error messages showing sensitive information
 Misconfigured cloud settings
 Unnecessary features enabled—including unused pages, ports, services and accounts

78. Common Types of Security Misconfiguration


The following are common occurrences in an IT environment that can lead to a security
misconfiguration:
 Default accounts / passwords are enabled—Using vendor-supplied defaults for
system accounts and passwords is a common security misconfiguration, and may
allow attackers to gain unauthorized access to the system.
 Secure password policy is not implemented—Failure to implement a password
policy may allow attackers to gain unauthorized access to the system by methods
such as using lists of common username and passwords to brute force a username
and/or password field until successful authentication.
 Software is out of date and flaws are unpatched—Failure to update software
patches as part of the software management process may allow attackers to use
techniques such as code injection to inject malicious code that the application then
executes.
 Files and directories are unprotected—Leaving files and directories unprotected
may allow attackers to use techniques such as forceful browsing to gain access to
restricted files or areas in the server directory.
 Unused features are enabled or installed—Failure to remove unnecessary features,
components, documentation, and samples makes the application susceptible to
misconfiguration vulnerabilities, and may allow attackers to use techniques such as
code injection to inject malicious code that the application then executes.
 Security features not maintained or configured properly—Failure to properly
configure and maintain security features makes the application vulnerable to
misconfiguration attacks.
 Unpublished URLs are not blocked from receiving traffic from ordinary users—
Unpublished URLs, accessed by those who maintain applications, are not intended
to receive traffic from ordinary users. Failure to block these URLs can pose a
significant risk when attackers scan for them.
 Improper / poor application coding practices—Improper coding practices can lead
to security misconfiguration attacks. For example, the lack of proper input/output
data validation may lead to code injection attacks which work by injecting code that
the application executes.
 Directory traversal—allows an attacker to access directories, files, and commands
that are outside the root directory. Armed with access to application source code or
configuration and critical system files, a cybercriminal can change a URL in such a
way that the application could execute or display the contents of arbitrary files on the
server. Any device or application that exposes an HTTP-based interface is potentially
vulnerable to a directory traversal attack (a minimal defensive check is sketched below).
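For the directory traversal item above, a minimal defensive sketch in Python; the web root and the example paths are hypothetical. The idea is to resolve the requested path and confirm it still sits under the intended root before serving it:

import os

WEB_ROOT = "/var/www/app/public"   # hypothetical document root

def safe_resolve(user_supplied_path: str) -> str:
    # Resolve ".." sequences and symlinks, then confirm the result stays under WEB_ROOT
    candidate = os.path.realpath(os.path.join(WEB_ROOT, user_supplied_path))
    if not candidate.startswith(WEB_ROOT + os.sep):
        raise ValueError("directory traversal attempt blocked")
    return candidate

# safe_resolve("reports/q1.pdf")      -> allowed
# safe_resolve("../../etc/passwd")    -> raises ValueError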

79. Penetration testing and Vulnerability Assessment


Generally, the two terms Penetration Testing and Vulnerability Assessment are used
interchangeably by many people, either because of misunderstanding or marketing
hype. However, they differ in their objectives and in how they are carried out. Before
describing the differences, let us first understand each term.
Penetration Testing
Penetration testing replicates the actions of an external and/or internal cyber attacker
intent on breaking information security, stealing valuable data, or disrupting the normal
functioning of the organization. With the help of advanced tools and techniques, a
penetration tester (also known as an ethical hacker) attempts to take control of critical
systems and acquire access to sensitive data.
Vulnerability Assessment
On the other hand, a vulnerability assessment is the technique of identifying (discovery)
and measuring security vulnerabilities (scanning) in a given environment. It is a
comprehensive assessment of the information security posture (result analysis).
Further, it identifies potential weaknesses and provides the proper mitigation
measures (remediation) to either remove those weaknesses or reduce them below the
accepted risk level.

The following table illustrates the fundamental differences between penetration testing
and vulnerability assessments –
 Frequency – Vulnerability scan: at least quarterly, especially after new equipment is
loaded or the network undergoes significant changes. Penetration test: once or twice a
year, as well as any time Internet-facing equipment undergoes significant changes.
 Reports – Vulnerability scan: provides a comprehensive baseline of what vulnerabilities
exist and what has changed since the last report. Penetration test: concisely identifies
what data was compromised.
 Focus – Vulnerability scan: lists known software vulnerabilities that could be exploited.
Penetration test: discovers unknown and exploitable weaknesses in normal business
processes.
 Performed by – Vulnerability scan: typically conducted by in-house staff using
authenticated credentials; does not require a high skill level. Penetration test: best
performed by an independent outside service, alternating between two or three
providers; requires a great deal of skill.
 Value – Vulnerability scan: detects when equipment could be compromised.
Penetration test: identifies and reduces weaknesses.

80. IPv4 Vs IPv6?


 Address length – IPv4 is a 32-bit address. IPv6 is a 128-bit address.
 Fields – IPv4 is a numeric address consisting of 4 fields separated by dots (.). IPv6 is an
alphanumeric address consisting of 8 fields separated by colons (:).
 Classes – IPv4 has 5 classes of IP address (Class A, Class B, Class C, Class D and
Class E). IPv6 does not use address classes.
 Number of IP addresses – IPv4 has a limited number of IP addresses. IPv6 has a very
large number of IP addresses.
 VLSM – IPv4 supports VLSM (Variable Length Subnet Mask), meaning IPv4 addresses
can be divided into subnets of different sizes. IPv6 does not support VLSM.
 Address configuration – IPv4 supports manual and DHCP configuration. IPv6 supports
manual, DHCP, auto-configuration, and renumbering.
 Address space – IPv4 provides about 4 billion unique addresses. IPv6 provides about
340 undecillion unique addresses.
 End-to-end connection integrity – Unachievable in IPv4; achievable in IPv6.
 Security features – In IPv4, security depends on the application, as the protocol was not
designed with security in mind. In IPv6, IPSec support is built in for security purposes.
 Address representation – IPv4 addresses are represented in decimal. IPv6 addresses
are represented in hexadecimal.
 Fragmentation – In IPv4, fragmentation is done by the senders and the forwarding
routers. In IPv6, fragmentation is done by the sender only.
 Packet flow identification – IPv4 provides no mechanism for packet flow identification.
IPv6 uses the flow label field in the header for packet flow identification.
 Checksum field – The checksum field is present in IPv4 but not in IPv6.
 Transmission scheme – IPv4 uses broadcasting. IPv6 uses multicasting, which provides
more efficient network operations.
 Encryption and authentication – IPv4 does not provide encryption and authentication.
IPv6 does.
 Number of octets – IPv4 consists of 4 octets. IPv6 consists of 8 fields of 2 octets each,
i.e. 16 octets in total.
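Several of these differences (address length, representation and notation) can be seen with Python's standard ipaddress module; the addresses below are documentation/example values:

import ipaddress

v4 = ipaddress.ip_address("192.0.2.10")    # IPv4: 32-bit, dotted decimal, 4 octets
v6 = ipaddress.ip_address("2001:db8::1")   # IPv6: 128-bit, hexadecimal, 8 colon-separated fields

print(v4.version, v4.max_prefixlen)        # 4 32
print(v6.version, v6.max_prefixlen)        # 6 128
print(v6.exploded)                         # 2001:0db8:0000:0000:0000:0000:0000:0001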

81. DMARC?
DMARC (Domain-based Message Authentication, Reporting and Conformance) is an email
validation system designed to protect your company’s email domain from being used for
email spoofing, phishing scams and other cybercrimes. DMARC leverages the existing email
authentication techniques SPF (Sender Policy Framework) and DKIM (DomainKeys
Identified Mail). DMARC adds an important function: reporting. When a domain owner
publishes a DMARC record in their DNS, they gain insight into who is sending email on
behalf of their domain. This information provides detailed visibility into the email channel,
and with it the domain owner can take control over the email sent on their behalf. You can
use DMARC to protect your domains against abuse in phishing or spoofing attacks.
Within DMARC it is possible to instruct email receivers what to do with an email which fails
the DMARC checks. In the DMARC record a DMARC policy can be defined that, depending
on the setting, instructs an ISP how to handle emails that fail the DMARC checks. Email
receivers check if incoming messages have valid SPF and DKIM records and if these align
with the sending domain. After these checks a message can be considered as DMARC
compliant or DMARC failed. After the email receiver verifies the authentication status of a
message, they will handle the message differently based on the DMARC policy that is set.

There are 3 possible DMARC policies available: None (monitoring only), Quarantine and
Reject.
 Monitor policy: p=none
The first policy is the none (monitor) policy: p=none. It instructs email receivers to send
DMARC reports to the address published in the RUA or RUF tag of the DMARC record.
This is the recommended starting policy because it gives you insight into your email
channel: you learn who is sending email on behalf of your domain, but receivers are not
instructed to handle emails that fail the DMARC checks any differently, so deliverability is
not affected. That is why it is also known as the monitoring-only policy.
 Quarantine policy: p=quarantine
The second policy is the quarantine policy: p=quarantine. Besides sending DMARC reports,
it instructs email receivers to put emails that fail the DMARC checks in the receiver’s spam
folder, while emails that pass the DMARC checks are delivered to the primary inbox. The
quarantine policy already mitigates the impact of spoofing, but spoofed emails are still
delivered to the receiver (in the spam folder).
 Reject policy: p=reject
The third policy is the reject policy: p=reject. Besides sending DMARC reports, it instructs
email receivers not to deliver emails that fail the DMARC checks at all, while emails that
pass the DMARC checks are delivered to the primary inbox. This policy fully mitigates the
impact of spoofing, since spoofed or incorrectly set up emails are rejected by the email
receiver and never land in the receiver’s inbox.
A DMARC policy is a request, not an obligation.
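For illustration, a DMARC record is published as a DNS TXT record at _dmarc.<yourdomain>; the domain, policy and report address below are hypothetical, not from the source:

_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"

Here v=DMARC1 identifies the record, p= sets the policy described above (none, quarantine or reject), rua= is the mailbox that receives aggregate reports, and pct= controls the percentage of failing mail the policy is applied to.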

82. Difference between Private IP and Public IP


Private IP address of a system is the IP address that is used to communicate within
the same network. Using private IP data or information can be sent or received
within the same network.
Public IP address of a system is the IP address that is used to communicate outside
the network. A public IP address is basically assigned by the ISP (Internet Service
Provider).
1. Scope – The scope of a private IP is local; the scope of a public IP is global.
2. Use – A private IP is used to communicate within the network; a public IP is used to
communicate outside the network.
3. Assignment – Private IP addresses of the systems connected in a network differ in a
uniform manner; public IP addresses may differ in a uniform or non-uniform manner.
4. Reach – A private IP works only on the LAN; a public IP is used to get Internet service.
5. Control – A private IP is used to load the network operating system; a public IP is
controlled by the ISP.
6. Cost – A private IP is available free of cost; a public IP is not free of cost.
7. Lookup – A private IP can be found by entering “ipconfig” at the command prompt; a
public IP can be found by searching “what is my ip” on Google.
8. Range – Private ranges: A = 10.0.0.0 – 10.255.255.255, B = 172.16.0.0 – 172.31.255.255,
C = 192.168.0.0 – 192.168.255.255. Besides the private addresses, the rest are public:
A = 1.0.0.0 to 9.255.255.255 and 11.0.0.0 to 126.255.255.255;
B = 128.0.0.0 to 172.15.255.255 and 172.32.0.0 to 191.255.255.255;
C = 192.0.0.0 to 192.167.255.255 and 192.169.0.0 to 223.255.255.255;
D = 224.0.0.0 to 239.255.255.255 (multicast addresses);
E = 240.0.0.0 to 254.255.255.255 (experimental/research use).
9. Uniqueness – A private IP uses a numeric code that is not unique and can be reused; a
public IP uses a numeric code that is unique and cannot be used by others.
10. Security – Private IP addresses are more secure; public IP addresses have no inherent
security and are subject to attack.
11. NAT – Private IP addresses require NAT to communicate with devices outside the
network; a public IP does not require network address translation.
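The ranges in row 8 can be checked programmatically with Python's standard ipaddress module; the sample addresses are examples only:

import ipaddress

for ip in ["10.20.30.40", "172.16.5.9", "192.168.1.1", "8.8.8.8", "203.0.113.7"]:
    addr = ipaddress.ip_address(ip)
    # is_private covers the RFC 1918 ranges (10/8, 172.16/12, 192.168/16), among others
    print(ip, "private" if addr.is_private else "public")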

83. Splunk Architecture

Data Input tier
 Monitor files / detect file changes – You can monitor incoming files and detect changes
to them in real time.
 Listen on network ports / run scripts – You can receive data from various network ports
and run scripts to automate data forwarding.
Forwarder tier
 Data routing, cloning and load balancing – The forwarder can intelligently route the
data, clone it, and load-balance it before it reaches the indexer. Cloning creates multiple
copies of an event right at the data source, while load balancing ensures that if one
instance fails, the data can be forwarded to another instance hosting the indexer.
 Deployment server – Used for managing the entire deployment, its configurations and
policies.
Indexer tier
 User and access controls – When data is received, it is stored in an indexer. The indexer
is broken down into different logical data stores, and at each data store you can set
permissions that control what each user views, accesses and uses.
Search head tier
 Distributed search – Once the data is in, you can search the indexed data and also
distribute searches to other search peers; the results are merged and sent back to the
search head.
 Scheduling / alerting – You can run scheduled searches and create alerts, which are
triggered when certain conditions match saved searches.
 Reporting / visualization – You can use saved searches to create reports and perform
analysis using visualization dashboards; search results can be visualized and analyzed
through the graphical user interface.
 Knowledge – You can use knowledge objects to enrich the existing unstructured data.
Search heads and knowledge objects can be accessed from the Splunk CLI or the Splunk
Web interface; this communication happens over a REST API connection.

84. QRadar Architecture

QRadar Console
a. The QRadar Console offers the user interface, real-time event data, administrative
functions, offense management, and asset information.
b. In a distributed QRadar deployment, the Console is used to manage the networked hosts
and the functionality of the components.
Data Collection – Event Collector
a. The QRadar Event Collector collects events from remote and local log sources and then
normalizes the raw log source events.
b. The Event Collector bundles and coalesces (groups together) identical events before
transferring the data to the Event Processor.
c. The Event Collector does not store events locally; it parses the events for storage
elsewhere.
d. The Event Collector is assigned an EPS (events per second) license that matches the
QRadar Event Processor.
e. Event data represents events that occur at a point in time in the environment, such as
firewall denies, VPN connections, user logins, emails, proxy connections, and other events
that should be logged.
Data Processing – Event Processor
a. The QRadar Event Processor processes the events collected from one or more Event
Collectors.
b. It processes events with the help of the Custom Rules Engine (CRE), which generates
offenses and alerts, after which the data is written to storage. The rules are predefined and
execute the action specified for each rule.
c. Each Event Processor has local storage, and the event data is stored on the processor.
d. You can also add an Event Processor component to an all-in-one appliance, in which case
the event processing function is moved from the all-in-one appliance to the dedicated
QRadar Event Processor.
Data Collection – Flow Collector
a. The QRadar QFlow Collector collects flow data by connecting to a SPAN port or a
network TAP.
b. QFlow Collectors are not designed as full packet capture systems; for full packet capture
you need to review the incident forensics options.
c. You can install a QFlow Collector on your own hardware or use QFlow Collector
appliances.
Data Processing – Flow Processor
a. The QRadar Flow Processor processes flow data from one or more QFlow Collector
appliances. The Flow Processor appliance can also collect external network flow data such
as NetFlow, sFlow, and J-Flow.
b. You can use the Flow Processor appliance to scale the QRadar deployment to sustain a
higher number of flows per minute.
c. The Flow Processor contains an on-board flow processor and internal storage.
Data Searches – QRadar Data Nodes
a. QRadar Data Nodes allow new and existing QRadar deployments to add storage and
processing capacity as required.
b. Data Nodes also increase data search speed and offer more hardware resources to the
deployment.
QRadar App Host
a. The QRadar App Host is a managed host dedicated to running applications. The App
Host offers extra data storage, CPU resources, and memory for your applications without
affecting the processing capacity of the QRadar Console.
b. Applications such as User Behavior Analytics and machine learning analytics need more
resources than the QRadar Console can provide.

85. What is data exfiltration?


Data exfiltration is sometimes referred to as data extrusion, data exportation, or data
theft. It is a form of security breach that occurs when an individual’s or company’s
data is copied, transferred, or retrieved from a computer or server without
authorization. It can be conducted manually by an individual with physical access to
a computer, but it can also be an automated process carried out through malicious
programming over a network.
Data exfiltration can be difficult to detect. Because it involves moving data within and
outside a company’s network, it often closely resembles or mimics typical network
traffic, allowing substantial data loss incidents to fly under the radar until the
exfiltration has already been achieved. And once your company’s most valuable data
is in the hands of hackers, the damage can be immeasurable.

86. Different types of virus tools?


 Avira Free Antivirus – Offers a larger package of free security tools than most
competitors, including real-time AV, malware removal, and a VPN.
 Bitdefender Antivirus Free Edition: Award-winning free version of
Bitdefender’s popular tool
 TotalAV: Small and efficient tool that provides a smart scan that can remove
a wide range of malware and viruses.
 Adaware Antivirus Free: Offers a well-rated AV scanning engine and real-
time protection
 Comodo Free Anti-Malware BO Clean: Surprisingly easy to use malware
removal tool
 Norton Power Eraser: Designed specifically to identify and delete deep-
rooted malware
 RegRun Reanimator: A human-powered service that offers a personalized
malware removal approach
 eScan Anti-Virus Toolkit: Requires no installation; can be run directly from a
USB drive to help clean infected computers.
 FreeFixer: Lightweight tool with few features, but effective for its purpose
 SUPERAntiSpyware: Impressively loaded free malware removal tool

87. Where you will add licenses in Splunk ES?


 Log into Splunk Web.
 Navigate to Settings > Licensing.
 Click Add license.
 Either click Choose file and navigate to your license file and select it, or click
copy & paste the license XML directly... and paste the text of your license file into
the provided field.
 Click Install. Splunk Enterprise installs the license.
 If this is the first Enterprise license that you are installing, you will be
prompted to restart Splunk Enterprise services.

88. Difference between search peer and search head?


A Splunk instance can function both as a search head and as a search peer. A search
head that performs only searching, and not indexing, is referred to as a dedicated
search head, whereas a search peer performs indexing and responds to search
requests from other search heads.
In a Splunk deployment, a search head sends search requests to a group of indexers,
or search peers, which perform the actual searches on their indexes and return the
results, as sketched below.
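As a rough sketch (the host names and management port are assumptions, not from the source), a dedicated search head is typically pointed at its search peers either through Splunk Web (Settings > Distributed search) or via distsearch.conf on the search head, along these lines; authentication between the instances is configured separately when the peers are added:

# distsearch.conf on the search head - hypothetical peer names
[distributedSearch]
servers = https://indexer01.example.local:8089, https://indexer02.example.local:8089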

89. Do you know use cases creating?


All use cases have three major components:
 Rules, which detect and trigger alerts based on targeted events
 Logic, which defines how events or rules will be considered
 Action, which determines what action is required if the logic or conditions are
met (a minimal sketch follows this list).
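As a minimal, tool-agnostic sketch of how the three components fit together, the following Python fragment implements a hypothetical brute-force use case; the threshold, window and event field names (action, src_ip, timestamp) are assumptions, and in a real SIEM the rule, logic and action would be expressed in that product's own rule language:

from collections import defaultdict
from datetime import timedelta

THRESHOLD = 10                 # logic: how many failures are suspicious
WINDOW = timedelta(minutes=5)  # logic: within what time window

def brute_force_use_case(events):
    # Rule: trigger on repeated failed logons from the same source address
    failures = defaultdict(list)
    for event in events:                      # events are parsed log records (dicts)
        if event["action"] == "logon_failure":
            bucket = failures[event["src_ip"]]
            bucket.append(event["timestamp"])
            # keep only failures inside the sliding window
            bucket[:] = [t for t in bucket if event["timestamp"] - t <= WINDOW]
            if len(bucket) >= THRESHOLD:
                raise_alert(event["src_ip"], len(bucket))   # action

def raise_alert(src_ip, count):
    # Action: in a real SOC this would create a notable event, offense or ticket
    print(f"ALERT: {count} failed logons from {src_ip} within {WINDOW}")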

90. Where will you get phishing emails?

91. If you get an alert on the proxy server, what will you check?
Proxy server logs should track the below information for being useful during an
investigation:

 Date and time


 HTTP protocol version
 HTTP request method
 Content type
 User agent
 HTTP referer
 Length of the content response
 Authenticated username of the client
 Client IP and source port
 Target host IP and destination port
 Target hostname (DNS)
 The requested resource
 HTTP status code of reply
 Time needed to provide the reply back to the client
 Proxy action (from cache, not from cache, …)
Alerts on proxy server entries

Besides being useful during an incident, you can also raise alerts based on the content
of the proxy server logs.

Unusual protocol version


Most modern clients use HTTP/1.1, so requests made with HTTP/1.0 deserve deeper
inspection. Don’t be alarmed immediately: some older applications might simply not
support HTTP/1.1. Keep a list of those applications so you can exclude them from
raising an alert.

User agents
You should not blindly trust user agent information, it’s something that can easily be
crafted. But making statistics on the user agents can prove useful. Look out for user
agents that indicate the use of a scripting language (Python for example) or user
agents that don’t make sense. You can use User Agent String.com as a reference.
If you control your environment then you can develop a list of “known” and
“accepted” user agents. Everything that’s out of the ordinary should then trigger an
alarm.
If your proxy server logs the computer name you can add this as an extra rule to
validate the trustworthiness of the user agent field.
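A small sketch of the “known and accepted user agents” idea in Python, assuming the proxy log has already been parsed into dictionaries; the allow-list entries, marker strings and field name are hypothetical:

KNOWN_AGENT_PREFIXES = (
    "Mozilla/5.0",         # mainstream browsers
    "CorporateUpdater/",   # hypothetical sanctioned in-house tool
)
SUSPICIOUS_MARKERS = ("python-requests", "curl/", "wget/")

def review_user_agent(entry):
    ua = entry.get("user_agent", "")
    if any(marker in ua.lower() for marker in SUSPICIOUS_MARKERS):
        return "alert: scripting-tool user agent"
    if not ua.startswith(KNOWN_AGENT_PREFIXES):
        return "alert: unknown user agent, review"
    return "ok"

print(review_user_agent({"user_agent": "python-requests/2.31.0"}))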

HTTP request methods


Log the HTTP request method (for example GET, POST) and graph or alert on (an
increase of) unusual methods (for example CONNECT, PUT).
Focus on POSTs with content types other than text/html. Especially POSTs with
application/octet-stream or any of the MS Office document content types should raise
suspicion; repeated requests can indicate that something or someone is uploading a
lot of (corporate?) documents.
GET requests carry the query string in the URL, which can easily be logged. POST
requests, however, carry the payload in the HTTP message body, which is not always
straightforward to log, yet without this information it is sometimes very difficult to
know what payload was actually exchanged. You will have to look into something
like ModSecurity for logging HTTP POST bodies. Also, don’t forget that logging the
entire query string, regardless of GET or POST, can raise privacy concerns; consult
the HR and Legal departments for advice.

Length of the content response


Track the length of the content response. A host that repeatedly sends or receives the
same length of content responses might indicate a host that requires further
inspection. It can mean an application update but also malware beaconing out to
control servers.
Also, excessive content lengths should raise an alarm.
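A rough sketch of that idea in Python, assuming the proxy log has already been parsed into dictionaries with hypothetical client_ip, dest_host and response_length fields; clients that keep receiving responses of exactly the same size from the same destination are worth a closer look:

from collections import Counter

def find_possible_beacons(entries, min_repeats=50):
    # Count identical (client, destination, response length) combinations
    counts = Counter(
        (e["client_ip"], e["dest_host"], e["response_length"]) for e in entries
    )
    # Many identical sizes to one destination may be an update check or beaconing
    return [(key, n) for key, n in counts.items() if n >= min_repeats]

# Example: for key, n in find_possible_beacons(parsed_entries): print(key, n)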
Target host IP, destination port, hostname and requested resource
Requests that go to nonstandard HTTP or HTTPS ports should always raise an alert.
Last but not least, you should use the information provided by threat intelligence
platforms such as MISP to track requests for hosts or resources that are known to be
bad.
As a bonus, you can also use passive DNS information in addition to inspecting the
requested resources. This becomes especially useful if your proxy server logs both
the target IP and the hostname: if a domain was hosting something malicious on a
specific IP during a limited timeframe, you can use both sets of data to check whether
you were affected.

92. Tell me some use cases and analysis?

93. What is DIP?
