CPPPMUSAB
EXPERIMENT NO. 1
Problem Identification / Project Title
1.1 Introduction
1.2 Background
1.3 Motivation
1.4 Problem Statement
1.5 Objective and Scope
Our Network Monitoring System is a system that allows users to keep track of their
network's activity and ensure its security. It provides features such as website
blocking, session time monitoring, and even key logging. These features enable
individuals and businesses to have better control over their networks, limit the access
of unwanted sites or users, and prevent unauthorized access to sensitive data.
Freemium Version
Monitor your devices and interfaces using our free network monitoring
software.
Full-stack Monitoring
Full-fledged Server Monitoring with more than 60 performance metrics for
your physical, virtual, and cloud servers
Abuse of account privileges.
From honest mistakes and misuse of account privileges to intentional leaks,
identity theft, or other social engineering attacks that compromise user account
data, individuals inside your premises are among your major security problems.
Insufficient IT security management
Even with the most reliable cyber-security solutions, most organizations may
still face threats since they lack enough skilled workforce to manage the
resources well. As a result, you may miss crucial security alerts, and any
successful attack may not be countered early enough to minimize the damage.
1.3 MOTIVATION
The seamless operation of the Internet requires being able to monitor and to visualize
the actual behaviour of the network. Today, IP network operators usually collect
network flow statistics from critical points of their network infrastructure. Whereas
network problems or attacks that significantly change traffic patterns are relatively
easy to identify, it tends to be much more challenging to identify creeping changes or
attacks and faults that manifest themselves only by very careful analysis of initially
seemingly unrelated traffic patterns and their changes. There are currently no
deployable good network visualization solutions supporting this kind of network
analysis, and research in this area is just starting. In addition, the large volume of flow
data on high-capacity networks and exchange points requires moving to probabilistic
sampling techniques, which require new analysis techniques to calculate and also to
visualize the uncertainty attached to data sets.
o Threat Detection: Some exploits may not be preventable and some threats may not
be anticipated, and in this sense, monitoring is the last line of defence. But there is a
difference between detecting a security situation and doing something about it.
o A Legal Record of Activity: Security event data can form a legal record of actions
that users or processes performed. To be used in a legal proceeding, this data must
have verifiable integrity (records have not been altered and they comprise a complete
record) and the organization must be able to demonstrate chain of custody over the
data.
EXPERIMENT NO. 2
2.1 Introduction
Network monitoring is very crucial for any business. Today, networks span globally,
having multiple links established between geographically separated data centers,
public and private clouds. This creates multifold challenges in network management.
Network admins need to be more proactive and agile in monitoring network
performance.
Our Network Monitoring System is a system that allows users to keep track of their
network's activity and ensure its security. It provides features such as website
blocking, session time monitoring, and even key logging. These features enable
individuals and businesses to have better control over their networks, limit the access
of unwanted sites or users, and prevent unauthorized access to sensitive data.
2.2 Objectives

Project Proposal
Below are the steps involved in the System Development Life Cycle. Each phase
within the overall cycle may be made up of several steps.
Step 1: Preliminary Investigation:
The first step is to identify a need for the new system. This will include
determining whether a business problem or opportunity exists, conducting a
feasibility study to determine if the proposed solution is cost effective, and
developing a project plan.
This process may involve end users who come up with an idea for improving their
work. Ideally, the process occurs in tandem with a review of the organization's
strategic plan to ensure that IT is being used to help the organization achieve its
strategic objectives. Management may need to approve concept ideas before any
money is budgeted for development.
Step 2: Requirements Analysis:
Requirements analysis is the process of analyzing the information needs of the end
users, the organizational environment, and any system presently being used,
developing the functional requirements of a system that can meet the needs of the
users. The requirements documentation should be referred to throughout the rest of
the system development process to ensure the developing project aligns with user
needs and requirements.
Professionals must involve end users in this process to ensure that the new system
will function adequately and meets their needs and expectations.
Step 3: System Design:
The design will serve as a blueprint for the system and helps detect problems
before these errors or problems are built into the final system. Professionals create
the system design, but must review their work with the users to ensure the design
meets users’ needs.
Step 4: Coding and Debugging:
Coding and debugging are the act of creating the final system. This step is done
by software developers.
Step 6: Maintenance
Inevitably the system will need maintenance. Software will definitely undergo
change once it is delivered to the customer. There are many reasons for the change.
Change could happen because of some unexpected input values into the system. In
addition, the changes in the system could directly affect the software operations.
The software should be developed to accommodate changes that could happen
during the post implementation period.
o Prototyping Model
o RAD Model
o The Spiral Model
o The Waterfall Model
o The Iterative Model
Of all these process models, we’ve used the Waterfall model (the Linear Sequential
Model) for the development of our project.
1. The problem is specified along with the desired service objectives (goals).
3. In the implementation and testing phase, the designs are translated into the
software domain. Detailed documentation from the design phase can
significantly reduce the coding effort. Testing at this stage focuses on making
sure that any errors are identified and that the software meets its required
specification.
4. In the integration and system testing phase all the program units are integrated
and tested to ensure that the complete system meets the software requirements.
After this stage the software is delivered to the customer [Deliverable – The
software product is delivered to the client for acceptance testing.]
5. The maintenance phase is usually the longest stage of the software life cycle.
In this phase the software is updated to meet changing customer needs, adapt to
changes in the external environment, and correct errors and oversights
previously undetected in the testing phases, enhancing the efficiency of the
software.
Observe that feedback loops allow for corrections to be incorporated into the model.
For example, a problem /update in the design phase requires a ‘revisit’ to the
specifications phase. When changes are made at any phase, the relevant
documentation should be updated to reflect that change.
There are two models to collect data: push and pull. For a monitoring system, I
would prefer the pull model, for the reasons below:
1. Scalability Concern — Our infrastructure will keep growing, and we may have
hundreds or thousands of services in the coming years. Our service usage and
user base will grow too. If we go with the push model, all these services will
keep hitting our monitor service. If we have a service that processes 1M
requests per second, and this service pushes metrics to our monitoring service
upon every request, then we will suffer from scalability issues as we grow. So
instead of getting called to receive metrics, I would prefer to actively pull the
data from the services.
4. Easier for Testing — We can simply spin up a testing environment, copy the
configuration from production, and then pull the same metrics as production for
testing.
5. Simpler High Availability — just spin up two servers with the same
configuration to pull the same data to achieve HA.
Based on the analysis above, my design for the pull model is below:
1. Our service will pull the data from the services regularly (for example every
second). We need a real time monitoring system, but a lag of a couple of seconds
is totally fine.
2. Exporters — The services should not call our monitor service to send data.
Instead, they can save their metrics to an exporter, where the data is stored
until it is pulled. That way, our monitor service will not be exhausted from
being called, and it will be more scalable. Also, our monitoring system may
need the data in a specific format, while the services may be built with
different technologies and produce data in different formats. So, we attach an
exporter to each service, which reformats the data into the correct format for
our monitor service, and our monitor pulls the data from the exporters.
3. Push Gateway — For cron jobs, they are not service based, but we may need to
monitor the metrics from them too. So, we can have a push gateway, which lives
behind all the cron jobs, and the monitor can just pull the data from the gateway
directly.
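The exporter-and-puller split above can be sketched as follows. This is a minimal
illustration, not the project's actual code: the `/metrics` path, the JSON payload,
and the port numbers are assumptions (real systems such as Prometheus use a text
exposition format instead).

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Exporter(BaseHTTPRequestHandler):
    """Sits next to one service; the service records metrics, the monitor pulls."""
    metrics = {}  # metric name -> latest value, written by the service

    def do_GET(self):
        if self.path == "/metrics":
            body = json.dumps(self.metrics).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):  # keep the demo output quiet
        pass

def start_exporter(port):
    """Run the exporter in a background thread; returns the server handle."""
    server = HTTPServer(("127.0.0.1", port), Exporter)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def scrape(port, timeout=2.0):
    """The monitor side: pull the current metrics from one exporter."""
    url = f"http://127.0.0.1:{port}/metrics"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    start_exporter(9100)                       # exporter attached to a service
    Exporter.metrics["requests_total"] = 42    # the service records a metric
    print(scrape(9100))                        # the monitor pulls on its schedule
```

Note that the service only ever writes to its local exporter; the monitor decides
when and how often to scrape, which is exactly the scalability property argued for
above.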
Exporter Design
We have discussed the components for the pull model, namely the Exporter and the
Push Gateway.
One may ask why we do not hook multiple services to one exporter. I would always
prefer one exporter per service, for the following reasons:
1. A shared exporter is a single point of failure, and one service pushing too
much data will block the others.
2. If I am only interested in the metrics of one service, I cannot fetch those
alone; I have to read all of them.
3. It is hard to attach service metadata; with one exporter per service, we can
store the service's metadata in its exporter.
Clustering?
Our monitoring system has to be very stable, so I would not go with a network
clustering approach for the monitoring service. The reason is that clustering is
very complicated and easier to break, so it is better to have one single solid
node that does not depend on the network.
Also, for the monitoring data, we usually care more about recent data. We usually do
not care about metrics days or weeks ago. So we only need to store recent data instead
of all historical data. Then there is no reason for us to go with the clustering approach.
And we can simply run two servers in parallel, which will be sufficient for HA.
Design
Since we mostly care about recent data in monitoring, the monitor's data usage
pattern is heavily skewed toward recent reads and writes.
So, we can store the recent data in memory for faster reads, and older data on
disk. If we have 1M metrics to monitor, and each metric produces one 16-byte data
point (a key-value pair) every second, then a server with 128GB of memory can hold
around 2 hours of data, which is good enough.
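The memory estimate above can be checked with a quick calculation (a sketch; the
16 bytes per sample and 128GB of RAM are the figures assumed in the text):

```python
BYTES_PER_SAMPLE = 16            # one key-value data point, as assumed above
METRICS = 1_000_000              # number of metrics being monitored
MEMORY_BYTES = 128 * 1024**3     # 128 GB of RAM

# One sample per metric per second gives the ingest rate in bytes/second.
ingest_per_sec = METRICS * BYTES_PER_SAMPLE
hours_retained = MEMORY_BYTES / ingest_per_sec / 3600

print(f"ingest: {ingest_per_sec / 1e6:.0f} MB/s, retention: {hours_retained:.1f} h")
```

At roughly 16 MB/s of ingest, 128GB holds about 2.4 hours of samples, which is
consistent with the "around 2 hours" estimate above.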
For the data in memory, we can save it in chunks; once an older chunk fills up, we
simply compress it and save it to disk. Querying this data will be slower, as we
need to read it from disk and decompress it, but I think slowness when querying
old data is acceptable.
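A minimal sketch of that chunking scheme is below. The chunk size and the zlib
codec are illustrative choices, not part of the design above; each sample is
packed as two 8-byte doubles, matching the 16-byte figure used earlier.

```python
import struct
import zlib

CHUNK_SAMPLES = 4096  # samples per chunk before it is sealed (illustrative)

class ChunkedStore:
    """Keep the open chunk in memory; compress sealed chunks to disk."""

    def __init__(self, path):
        self.path = path
        self.open_chunk = []          # recent samples, fast to read
        self.sealed = 0               # number of chunks flushed to disk

    def append(self, timestamp, value):
        self.open_chunk.append((timestamp, value))
        if len(self.open_chunk) >= CHUNK_SAMPLES:
            self._seal()

    def _seal(self):
        # 16 bytes per sample: two little-endian doubles (timestamp, value).
        raw = b"".join(struct.pack("<dd", t, v) for t, v in self.open_chunk)
        with open(f"{self.path}.{self.sealed}", "wb") as f:
            f.write(zlib.compress(raw))
        self.sealed += 1
        self.open_chunk = []          # free the memory

    def read_sealed(self, chunk_no):
        # Older data: slower, since we read from disk and decompress.
        with open(f"{self.path}.{chunk_no}", "rb") as f:
            raw = zlib.decompress(f.read())
        return [struct.unpack_from("<dd", raw, i) for i in range(0, len(raw), 16)]
```

Reads of the open chunk stay in memory; only queries that reach back past the
current chunk pay the disk-and-decompress cost.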
For much older data, such as data from months ago, we can store the compressed
data in cheaper offsite storage.
Since the recently monitored data lives in memory, we need a recovery system for
it. So that a server crash does not lose all the data, we should create snapshots
of the memory, perhaps every few minutes.
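A periodic snapshot for crash recovery could look like the sketch below. The
pickle format and the interval are assumptions for illustration; after a crash we
lose at most one interval's worth of data.

```python
import pickle
import threading
import time

class SnapshottingStore:
    """In-memory metrics with periodic snapshots for crash recovery."""

    def __init__(self, snapshot_path, interval_sec=300):
        self.data = {}                  # metric name -> list of (ts, value)
        self.path = snapshot_path
        self.interval = interval_sec
        self.lock = threading.Lock()

    def record(self, metric, ts, value):
        with self.lock:
            self.data.setdefault(metric, []).append((ts, value))

    def snapshot(self):
        # Serialize under the lock, write outside it to keep writes short.
        with self.lock:
            blob = pickle.dumps(self.data)
        with open(self.path, "wb") as f:
            f.write(blob)

    def recover(self):
        # After a crash, reload the last snapshot from disk.
        with open(self.path, "rb") as f:
            self.data = pickle.loads(f.read())

    def run_snapshotter(self):
        def loop():
            while True:
                time.sleep(self.interval)
                self.snapshot()
        threading.Thread(target=loop, daemon=True).start()
```

A production system would also want atomic writes (write to a temp file, then
rename) so a crash mid-snapshot cannot corrupt the previous snapshot.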
Also, we need to monitor the memory usage of the monitor service itself, in case
the server runs out of memory during peak usage. When memory usage is high, we may
need to speed up the compress-and-flush-to-disk process.
The database best suited to a monitoring service is a time series DB.
Based on the discussion above, this is a high-level design for a monitor service.
Exporter — pulls metrics from targets and converts them to the correct format.
Push Gateway — cron jobs push metrics to it at exit, and we then pull the metrics
from it.
Hardware Requirement
RAM: At least 128 MB
Processor: 300 MHz or higher processor (Pentium processor recommended)
HDD: 20 GB or more
Software Requirement
Docker
MySQL Server
Languages used
HTML
CSS
JavaScript
Python
https://gongybable.medium.com/system-design-design-a-monitoring-system-f0f0cbafc895
i) Google for problem-solving
ii) http://www.javaworld.com/javaworld/jw-01-1998/jw-01-Credentialreview.html
iii) Database Programming with JDBC and Java, O’Reilly
iv) Head First Java, 2nd Edition
v) http://www.jdbc-tutorial.com/
vi) https://www.javapoint.com/java-tutorial
vii) Software Design Concept, Apress
viii) https://www.tutorialpoint.com/java/
ix) https://docs.oracle.com/javase/tutorial/
x) https://www.wampserver.com/en/
xi) https://www.JSP.net/
xii) https://www.tutorialspoint.com/mysql/
xiii) httpd.apache.org/docs/2.0/misc/tutorials.ht
EXPERIMENT NO. 4
Anjuman-i-Islam’s
M.H. SABOO SIDDIK POLYTECHNIC
8, Saboo Siddik Polytechnic Road, Byculla
Mumbai- 400008
******
INFORMATION TECHNOLOGY
PROJECT DIARY
Academic Session 2022-23
Sr. No  Criteria        Max Marks   Marks Obtained
4       Project Diary
6       Presentation    05
        TOTAL           25
Week No: 1/2
Activities Planned:
Decided the number of members in the group and finalised the group members and submitted
it to our teacher
Activities Executed:
Discussions were held to decide the group members and who would perform what kind
of role in the complete process of the CPP project development.
Activities Planned:
Discussed and finalised the topic for our final year project with which all the members as
well our teacher agreed and as well were satisfied
Activities Executed:
Our teacher guided us on what kinds of topics we could take up and which were
eligible for our final year project. Then, considering the difficulty, time
constraints, and coordination levels in our group, we came to a conclusion and
made a clear decision on which topic to opt for.
Activities Planned:
Started identifying all the resources we will require for our project and its
successful completion.
Activities Executed:
We searched the internet and also talked with our seniors and our teacher to seek
help regarding our project topic. After a lot of research, and after listening to
the experiences of our seniors, we listed a number of resources which may help us
in our project development.
Activities Planned:
Divided our complete project in parts and started off with the first part of our project
development
Activities Executed:
After making sure what resources we needed for our project development, we divided
our project into parts for easy and effective development, and also divided the
tasks to be performed among the members of our group.
Activities Planned:
We started the development of the remaining parts of our project after the
successful completion of the previous parts.
Activities Executed:
After completing and successfully implementing the previous parts of our project,
we moved further with development, started on the next part, and managed to
complete more than half of our project.
Activities Planned:
We almost completed our project; only a few minor touches to the UI and some other
parts remained, along with testing our project under various conditions.
Activities Executed:
By now we had successfully completed our project and were only left with some
minor changes in parts like presentation and colour scheme, which may enhance the
look and feel of our project for the user or anyone to whom it is presented.
Project Report
1. Certificate
2. Acknowledgement
3. Abstract
4. Content Page
Chapter 1:
5. Introduction and Background of the Industry or User Based
Problem
Chapter 2:
6. Literature Survey for Problem Identification and Specification
Chapter 3:
7. Proposed Detail Methodology for Solving the identified
Problem with Action Plan
ACKNOWLEDGMENT
The project titled Network Security Monitoring System is a system that provides
security to the network. The system provides threat detection, verification of
security controls, a legal record of activity, etc.
The success of any project needs hard work and dedication from every member of the
group, but it also depends greatly on the support and encouragement given to the
team members. We take this opportunity to express our gratitude to the people who
have been leading and guiding us in the completion of this project.
We are greatly thankful to our project guide, Lecturer Ms. Sameera Khan, for her
kind support and guidance in the successful completion of this project. We have
benefited greatly from her guidance and have found her suggestions helpful in
various phases of this project.
We would also like to thank the entire Teaching and Non-teaching staff of IT
Department for their constant assistance and cooperation.
ABSTRACT
The main aim in developing this system is to monitor the network of other devices.
The system can monitor the user's screen and can track some of the actions
performed by the user or client. Our system can be used in many places, e.g. bank
security systems, computer lab systems, hospital security systems, etc.
Businesses rely on networks for all operations. Hence, network monitoring is very
crucial for any business. Today, networks span globally, having multiple links
established between geographically separated data centres, public and private clouds.
This creates multifold challenges in network management. Network admins need to
be more proactive and agile in monitoring network performance. However, this is
easier said than done.
Content Page
1.1 Introduction
1.2 Background
1.3 Motivation
1.4 Problem Statement
1.5 Objective and Scope
Freemium Version
Monitor your devices and interfaces using our free network monitoring
software.
Full-stack Monitoring
Full-fledged Server Monitoring with more than 60 performance metrics for
your physical, virtual, and cloud servers
Abuse of account privileges.
From honest mistakes and misuse of account privileges to intentional leaks,
identity theft, or other social engineering attacks that compromise user account
data, individuals inside your premises are among your major security problems.
Insufficient IT security management
Even with the most reliable cyber-security solutions, most organizations may
still face threats since they lack enough skilled workforce to manage the
resources well. As a result, you may miss crucial security alerts, and any
successful attack may not be countered early enough to minimize the damage.
1.3 MOTIVATION
The seamless operation of the Internet requires being able to monitor and to visualize
the actual behaviour of the network. Today, IP network operators usually collect
network flow statistics from critical points of their network infrastructure. Whereas
network problems or attacks that significantly change traffic patterns are relatively
easy to identify, it tends to be much more challenging to identify creeping changes or
attacks and faults that manifest themselves only by very careful analysis of initially
seemingly unrelated traffic patterns and their changes. There are currently no
deployable good network visualization solutions supporting this kind of network
analysis, and research in this area is just starting. In addition, the large volume of flow
data on high-capacity networks and exchange points requires moving to probabilistic
sampling techniques, which require new analysis techniques to calculate and also to
visualize the uncertainty attached to data sets.
o Threat Detection: Some exploits may not be preventable and some threats may not
be anticipated, and in this sense, monitoring is the last line of defence. But there is a
difference between detecting a security situation and doing something about it.
o A Legal Record of Activity: Security event data can form a legal record of actions
that users or processes performed. To be used in a legal proceeding, this data must
have verifiable integrity (records have not been altered and they comprise a complete
record) and the organization must be able to demonstrate chain of custody over the
data.
CHAPTER 2
2.1 Introduction
2.2 Objectives
To give security personnel the means to investigate and prosecute an unfolding
incident, or simply to review logs to improve alerting mechanisms or to manually
identify security incidents.
Below are the steps involved in the System Development Life Cycle. Each phase
within the overall cycle may be made up of several steps.
This process may involve end users who come up with an idea for improving their
work. Ideally, the process occurs in tandem with a review of the organization's
strategic plan to ensure that IT is being used to help the organization achieve its
strategic objectives. Management may need to approve concept ideas before any
money is budgeted for its development
Step 2: Requirements Analysis:
Requirements analysis is the process of analyzing the information needs of the end
users, the organizational environment, and any system presently being used,
developing the functional requirements of a system that can meet the needs of the
users. The requirements documentation should be referred to throughout the rest of
the system development process to ensure the developing project aligns with user
needs and requirements.
Professionals must involve end users in this process to ensure that the new system
will function adequately and meets their needs and expectations.
The design will serve as a blueprint for the system and helps detect problems
before these errors or problems are built into the final system. Professionals create
the system design, but must review their work with the users to ensure the design
meets users’ needs.
Coding and debugging are the act of creating the final system. This step is done
by software developers.
Inevitably the system will need maintenance. Software will definitely undergo
change once it is delivered to the customer. There are many reasons for the change.
Change could happen because of some unexpected input values into the system. In
addition, the changes in the system could directly affect the software operations.
The software should be developed to accommodate changes that could happen
during the post implementation period.
o Prototyping Model
o RAD Model
o The Spiral Model
o The Waterfall Model
o The Iterative Model
Of all these process models, we’ve used the Waterfall model (the Linear Sequential
Model) for the development of our project.
3. In the implementation and testing phase, the designs are translated into
the software domain. Detailed documentation from the design phase can
significantly reduce the coding effort. Testing at this stage focuses on making
sure that any errors are identified and that the software meets its required
specification.
4. In the integration and system testing phase all the program units are integrated
and tested to ensure that the complete system meets the software requirements.
After this stage the software is delivered to the customer [Deliverable – The
software product is delivered to the client for acceptance testing.]
5. The maintenance phase is usually the longest stage of the software life
cycle. In this phase the software is updated to meet changing customer needs,
adapt to changes in the external environment, and correct errors and
oversights previously undetected in the testing phases, enhancing the
efficiency of the software.
6. Observe that feedback loops allow for corrections to be incorporated into the
model. For example, a problem /update in the design phase requires a ‘revisit’
to the specifications phase. When changes are made at any phase, the relevant
documentation should be updated to reflect that change.
There are two models to collect data: push and pull. For a monitoring system, I
would prefer the pull model, for the reasons below:
1. Scalability Concern — Our infrastructure will keep growing, and we may have
hundreds or thousands of services in the coming years. Our service usage and
user base will grow too. If we go with the push model, all these services
will keep hitting our monitor service. If we have a service that processes 1M
requests per second, and this service pushes metrics to our monitoring service
upon every request, then we will suffer from scalability issues as we grow. So
instead of getting called to receive metrics, I would prefer to actively pull
the data from the services.
2. Automatic Upness Monitoring — By pulling the data proactively, we can
directly know if the service is alive or not. For example, if one service is not
reachable, we can be aware of it immediately.
3. Easier Horizontal Monitoring — If we have two independent systems A and
B, but one day we need to monitor some service in system B from system A.
We can pull metrics from system B directly, no need to configure system B to
push to system A.
4. Easier for Testing — We can simply spin up a testing environment, copy the
configuration from production, and then pull the same metrics as production
for testing.
5. Simpler High Availability — just spin up two servers with the same
configuration to pull the same data to achieve HA.
6. Less configuration, no need to configure every service.
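Point 2 above (automatic upness monitoring) falls out of the pull model almost
for free: a failed scrape is itself the "down" signal. A hedged sketch follows;
the target list, URL path, and timeout are illustrative assumptions.

```python
import urllib.error
import urllib.request

def check_up(url, timeout=2.0):
    """Scrape a target once; an unreachable target is immediately marked down."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False  # connection refused / timed out -> service is down

# Illustrative target list; in practice this comes from the monitor's config.
targets = ["http://127.0.0.1:9100/metrics"]
status = {url: check_up(url) for url in targets}
```

With a push model, distinguishing "service is down" from "service simply has
nothing to report" needs extra heartbeat machinery; with pull, the scrape failure
carries that information directly.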
Based on the analysis above, my design for the pull model is below:
1. Our service will pull the data from the services regularly (for example every
second). We need a real time monitoring system, but a lag of a couple of
seconds is totally fine.
2. Exporters — The services should not call our monitor service to send data.
Instead, they can save their metrics to an exporter, where the data is stored
until it is pulled. That way, our monitor service will not be exhausted from
being called, and it will be more scalable. Also, our monitoring system may
need the data in a specific format, while the services may be built with
different technologies and produce data in different formats. So, we attach an
exporter to each service, which reformats the data into the correct format for
our monitor service, and our monitor pulls the data from the exporters.
3. Push Gateway — For cron jobs, they are not service based, but we may need to
monitor the metrics from them too. So, we can have a push gateway, which lives
behind all the cron jobs, and the monitor can just pull the data from the gateway
directly.
Exporter Design
We have discussed the components for the pull model, namely the Exporter and the
Push Gateway.
One may ask why we do not hook multiple services to one exporter. I would always
prefer one exporter per service, for the following reasons:
1. A shared exporter is a single point of failure, and one service pushing too
much data will block the others.
2. If I am only interested in the metrics of one service, I cannot fetch those
alone; I have to read all of them.
3. It is hard to attach service metadata; with one exporter per service, we can
store the service's metadata in its exporter.
Clustering?
Our monitoring system has to be very stable, so I would not go with a network
clustering approach for the monitoring service. The reason is that clustering is
very complicated and easier to break, so it is better to have one single solid
node that does not depend on the network.
Also, for the monitoring data, we usually care more about recent data. We usually do
not care about metrics days or weeks ago. So we only need to store recent data instead
of all historical data. Then there is no reason for us to go with the clustering approach.
And we can simply run two servers in parallel, which will be sufficient for HA.
Design
Since we mostly care about recent data in monitoring, the monitor's data usage
pattern is heavily skewed toward recent reads and writes.
So, we can store the recent data in memory for faster reads, and older data on
disk. If we have 1M metrics to monitor, and each metric produces one 16-byte data
point (a key-value pair) every second, then a server with 128GB of memory can hold
around 2 hours of data, which is good enough.
For the data in memory, we can save it in chunks; once an older chunk fills up, we
simply compress it and save it to disk. Querying this data will be slower, as we
need to read it from disk and decompress it, but I think slowness when querying
old data is acceptable.
For much older data, such as data from months ago, we can store the compressed
data in cheaper offsite storage.
Since the recently monitored data lives in memory, we need a recovery system for
it. So that a server crash does not lose all the data, we should create snapshots
of the memory, perhaps every few minutes.
Also, we need to monitor the memory usage of the monitor service itself, in case
the server runs out of memory during peak usage. When memory usage is high, we may
need to speed up the compress-and-flush-to-disk process.
The database best suited to a monitoring service is a time series DB.
HIGH LEVEL DESIGN
Based on the discussion above, this is a high-level design for a monitor service.
Exporter — pulls metrics from targets and converts them to the correct format.
Push Gateway — cron jobs push metrics to it at exit, and we then pull the metrics
from it.
Hardware Requirement
RAM: At least 128 MB
Processor: 300 MHz or higher processor (Pentium processor recommended)
HDD: 20 GB or more
Software Requirement
Docker
MySQL Server
Languages used
HTML
CSS
JavaScript
Python
https://gongybable.medium.com/system-design-design-a-monitoring-system-f0f0cbafc895
i. Google for problem-solving
ii. http://www.javaworld.com/javaworld/jw-01-1998/jw-01-Credentialreview.html
iii. Database Programming with JDBC and Java, O’Reilly
iv. Head First Java, 2nd Edition
v. http://www.jdbc-tutorial.com/
vi. https://www.javapoint.com/java-tutorial
vii. Software Design Concept, Apress
viii. https://www.tutorialpoint.com/java/
ix. https://docs.oracle.com/javase/tutorial/
x. https://www.wampserver.com/en/
xi. https://www.JSP.net/
xii. https://www.tutorialspoint.com/mysql/
xiii. httpd.apache.org/docs/2.0/misc/tutorials.ht