DEEP LEARNING BASED ABNORMAL EVENT DETECTION IN PEDESTRIAN PATHWAYS
Submitted for partial fulfilment of the requirements for the award of the degree of
BACHELOR OF TECHNOLOGY
in
INFORMATION TECHNOLOGY
Submitted by
A SRIRAM 21K81A1202
CH MANIKANTA 21K81A1208
PULIPATI SRESTHA 21K81A1247
NOVEMBER – 2024
St. MARTIN'S ENGINEERING COLLEGE
UGC Autonomous
Affiliated to JNTUH, Approved by AICTE
NBA & NAAC A+ Accredited
Dhulapally, Secunderabad - 500 100
www.smec.ac.in
Certificate
Place:
Date:
St. MARTIN'S ENGINEERING COLLEGE
UGC Autonomous
Affiliated to JNTUH, Approved by AICTE
NBA & NAAC A+ Accredited
Dhulapally, Secunderabad - 500 100
www.smec.ac.in
Declaration
We, the students of ‘Bachelor of Technology in Department of Information
Technology’, session: 2021 - 2025, St. Martin’s Engineering College, Dhulapally,
Kompally, Secunderabad, hereby declare that the work presented in this Project
Work entitled “DEEP LEARNING BASED ABNORMAL EVENT
DETECTION IN PEDESTRIAN PATHWAYS” is the outcome of our own bonafide
work, is correct to the best of our knowledge, and has been undertaken taking
care of Engineering Ethics. The results embodied in this project report have not been
submitted to any other university for the award of any degree.
A SRIRAM 21K81A1202
CH MANIKANTA 21K81A1208
PULIPATI SRESTHA 21K81A1247
ACKNOWLEDGEMENT
A SRIRAM 21K81A1202
CH MANIKANTA 21K81A1208
PULIPATI SRESTHA 21K81A1247
ABSTRACT
A video surveillance system capable of detecting unusual motion is a major
contributor to the surveillance sector. The unusual or strange movements recorded
by surveillance cameras are collected and their patterns are studied. Patterns
that do not match those of normal behavior are then captured. In video sequences
these anomalies can appear in numerous ways, including bikers, skaters, and small
carts in pedestrian pathways. The main aim of the study is to develop an efficiently
working Video Anomaly Detection (VAD) system that is capable of identifying and
analyzing strange movements in videos using the techniques of deep learning and
image processing.
In this project, we propose a deep learning-based approach for abnormal event
detection in pedestrian pathways, aiming to enhance public safety and automate
surveillance systems. Traditional methods for detecting abnormal behavior in crowded
areas often rely on manual monitoring or simplistic threshold-based algorithms, which
are prone to inaccuracies. In contrast, our method leverages the power of
convolutional neural networks (CNNs) and recurrent neural networks (RNNs),
particularly Long Short-Term Memory (LSTM) units, to analyze video footage and
identify irregularities in pedestrian movement patterns.
The model is trained on a dataset of normal and abnormal pedestrian behaviors,
learning to distinguish between routine activities and unusual events such as sudden
stops, erratic movements, and potential safety hazards. Our approach integrates
spatiotemporal features from video frames to capture both the appearance and motion
dynamics of pedestrians. By utilizing real-time processing, the system can alert
authorities to potential incidents like accidents, crowd formation, or suspicious
behavior.
LIST OF FIGURES
LIST OF TABLES
LIST OF ACRONYMS AND DEFINITIONS
S.No: Acronym Definition
CONTENTS
ACKNOWLEDGMENT i
ABSTRACT ii
LIST OF FIGURES iii
LIST OF TABLES iv
CHAPTER 1: INTRODUCTION
1.1 Objective 1
1.2 Overview 1
4.3 Design 23
4.4 Modules 30
7.1 Conclusion 46
REFERENCES 48
CHAPTER 1
INTRODUCTION
1.1 Objective
Deep learning-based abnormal event detection in pedestrian pathways focuses on
enhancing safety and security through real-time monitoring of unusual or dangerous
activities. The goal is to automate surveillance, reducing the need for constant human
oversight, while achieving high accuracy and precision in identifying anomalies such
as accidents, assaults, or suspicious behaviors. By analyzing pedestrian movements
and behaviors, the system can detect abnormal patterns, even in complex and dynamic
environments with varying conditions like lighting or weather. It is designed to be
scalable, allowing deployment across different areas, from small pathways to large
public spaces, without significant manual adjustments. Additionally, deep learning
models aim to overcome challenges like occlusions and ensure robust detection even
when visibility is compromised. Ultimately, these systems can integrate with broader
smart city infrastructure, supporting real-time responses and improving the overall
safety and security of pedestrian pathways.
1.2 Overview
Their capacity to recognize anomalous occurrences in real time is compromised when
they are exposed to extended periods of time without visual stimulation. As a result,
the existing system is reduced to a recording system with little utility beyond forensics.
Therefore, real-time automated anomaly detection is necessary to identify anomalies and
take quick action. The goal of our study is thus to create an anomaly detection system
that can identify odd movement in video footage. Here, image processing methods have
been combined with deep learning approaches such as convolutional and recurrent neural
networks to analyze the input image dataset. Dealing with massive volumes of data and
identifying any indications of network penetration are also crucial. When it comes to big
data, data security and privacy are perhaps the most pressing concerns, especially in
the context of network assaults. Distributed denial-of-service (DDoS) attacks are one
of the most common types of cyberattacks. They target servers or networks with the
intent of interfering with their normal operation. Although real-time detection and
mitigation of DDoS attacks is difficult to achieve, a solution would be extremely
valuable, since attacks can cause significant damage.
CHAPTER 2
LITERATURE SURVEY
Anomalies are abnormal events that differ from the pattern of normal behavior.
Anomaly identification is the key factor for surveillance, but it is often found to be a
challenging task because an anomaly may be falsely detected in certain areas. For
example, firing a gun is an abnormal event on a regular street but a normal event in
a gun shooting club. Hence certain events are termed anomalous despite being
normal behavior for the place. [1][2]
The WEKA toolkit (Waikato Environment for Knowledge Analysis) was used for the
investigation of violent crimes, and this analysis shows a trend between real criminal
data and community data. Additive regression, linear regression, and decision stump
techniques were used in this study; among the three, linear regression was able to
capture the unpredictability of occurrence of the event, showing the ability
of learning techniques in anomaly detection. [3][4] In a study by Kim S et al., the
anomaly detection in Philadelphia is analyzed and a trend has been identified. The
machine learning model is trained for anomaly detection from massive data sources
using techniques such as logistic regression, ordinal regression, decision
trees, and k-nearest neighbor. The models achieved 69% accuracy. [5][6]
In another study, Elhurrouss et al. used a dataset of old crime locations to predict the
places where crimes are likely to happen. The Levenberg-Marquardt technique was used
to analyze and understand the data. In addition, a scaled technique was also used in
the study for data examination and understanding. The scaled technique was
found to be the best performer and showed an accuracy of 78%; it also resulted in crime
reduction of up to 78%. [7][8]
In a study by Rummens et al., anomalous activities were thoroughly observed
and analyzed with the help of anomaly data from the 15 years right before
2017. Techniques such as k-nearest neighbor and decision tree were used for the
detection. An accuracy level of 39-44% was achieved against a
dataset consisting of 5,60,000 anomalous activities. [10][11]
A growing number of cities are using surveillance cameras to reduce crime, but little
research exists to determine whether they're worth the cost. With jurisdictions across
the country tightening their belts, public safety resources are scarce and policymakers
need to know which potential investments are likely to bear fruit. This research brief
summarizes the Urban Institute's series documenting three cities' use of public
surveillance cameras and how they impacted crime in their neighborhoods.
In this paper, we propose a novel abnormal event detection method with spatio-
temporal adversarial networks (STAN). We devise a spatio-temporal generator which
synthesizes an inter-frame by considering spatio-temporal characteristics with
bidirectional ConvLSTM. A proposed spatio-temporal discriminator determines
whether an input sequence is real-normal or not with 3D convolutional layers. These
two networks are trained in an adversarial way to effectively encode spatio-temporal
features of normal patterns. After the learning, the generator and the discriminator can
be independently used as detectors, and deviations from the learned normal patterns
are detected as abnormalities. Experimental results show that the proposed method
achieved competitive performance compared to the state-of-the-art methods. Further,
for the interpretation, we visualize the location of abnormal events detected by the
proposed networks using a generator loss and discriminator gradients.
In a space-time Markov random field (MRF) model, each local node captures the
distribution of its typical optical flow patterns with a mixture of probabilistic principal
component analyzers. For any new optical flow patterns detected in incoming video
clips, we use the learned model and MRF graph to compute a maximum a posteriori
estimate of the degree of normality at each local node. Further, we show how to
incrementally update the current model parameters as new video observations stream
in, so that the model can efficiently adapt to visual context changes over a long period
of time. Experimental results on surveillance videos show that our space-time MRF
model robustly detects abnormal activities both in a local and global sense: not only
does it accurately localize the atomic abnormal activities in a crowded video, but at the
same time it captures the global-level abnormalities caused by irregular interactions
between local activities.
Another study combines a bidirectional gated recurrent unit (BiGRU) and a
convolutional neural network (CNN) to detect and
prevent criminal activities. The CNN extracts the spatial features from video frames
whereas temporal and local motion features are extracted by the BiGRU from the
CNN-extracted features of multiple frames. The focused bag is created to select those video
frames which indicate certain actions. Moreover, ranked-based loss is used to
effectively detect and classify the suspicious activities. For classification of activities,
various machine learning classifiers are used. The proposed deep learning video
surveillance technique is able to track human trails and detect criminal events. The
CAVIAR dataset is used to examine the proposed technique for video surveillance-
based crime detection with a performance accuracy of almost 98.86%. The alerts
received from the proposed technique can also be examined, demonstrating that the
deployed video surveillance camera systems can effectively detect unusual and
criminal activities. In addition, the proposed technique showed considerable
performance accuracy and outscored the related state-of-the-art (SOTA) DL models
including CNN-LSTM, CNN, HMM, and DBN, achieving a 21.88% absolute
improvement in crime detection accuracy.
CHAPTER 3
3.1 Existing System: K-Nearest Neighbors (K-NN)
Suppose there are two categories, i.e., Category A and Category B, and we have a
new data point x1, so this data point will lie in which of these categories. To solve this
type of problem, we need a K-NN algorithm. With the help of K-NN, we can easily
identify the category or class of a particular dataset. Consider the below
diagram:
Fig 3.1: KNN on dataset.
How does K-NN work?
Step-1: Select the number K of neighbors.
Step-2: Calculate the Euclidean distance from the new data point to the existing data
points.
Step-3: Take the K nearest neighbors as per the calculated Euclidean distance.
Step-4: Among these k neighbors, count the number of the data points in each
category.
Step-5: Assign the new data points to that category for which the number of the
neighbor is maximum.
Suppose we have a new data point, and we need to put it in the required category.
Consider the below image:
Firstly, we will choose the number of neighbors: here we choose k = 5.
Next, we will calculate the Euclidean distance between the data points. The Euclidean
distance is the distance between two points, which we have already studied in
geometry. It can be calculated as:
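d(A, B) = √((x₂ − x₁)² + (y₂ − y₁)²), i.e., the straight-line distance between two
points A(x₁, y₁) and B(x₂, y₂).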
Fig. 3.3: Measuring the Euclidean distance.
By calculating the Euclidean distance, we get the nearest neighbors: three nearest
neighbors in category A and two nearest neighbors in category B. Consider the below
image:
As we can see, the 3 nearest neighbors are from category A; hence this new data point
must belong to category A.
Below are some points to remember while selecting the value of K in the K-NN
algorithm:
There is no particular way to determine the best value for "K", so we need to
try some values to find the best out of them. The most preferred value for K is
5.
A very low value for K, such as K=1 or K=2, can be noisy and expose the model
to the effects of outliers.
Large values for K reduce the effect of noise, but can make the boundaries
between categories less distinct.
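As a minimal sketch of the procedure above (assuming scikit-learn and a small
illustrative two-category dataset, not the project's data), the five-neighbor vote can
be reproduced as follows:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two categories, A (0) and B (1), in a two-dimensional feature space
X = np.array([[1, 2], [2, 3], [3, 3], [2, 1], [6, 5], [7, 7], [8, 6]])
y = np.array([0, 0, 0, 0, 1, 1, 1])

knn = KNeighborsClassifier(n_neighbors=5, metric='euclidean')
knn.fit(X, y)

x1 = np.array([[3, 4]])            # the new data point
print(knn.predict(x1))             # category chosen by the majority of the 5 neighbors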
Limitations
3.2 Proposed System
Step 1. Input Layer
Purpose: The input layer receives the image that the CNN will process.
Process: An image is represented as a 3D matrix (or tensor), where:
o Width and Height represent the spatial dimensions (number of pixels along the
horizontal and vertical axes).
o Depth (or channels) represents the number of color channels (typically 3 for
RGB images—Red, Green, and Blue).
o Example: A colored image of size 224x224 pixels will be represented as a tensor
of shape 224 × 224 × 3.
Importance: The input layer ensures the image data is properly formatted to pass
through the CNN architecture.
Step 2. Convolutional Layer
Purpose: This is the core component of the CNN, responsible for extracting features
like edges, textures, and patterns from the input image.
Process:
o A set of filters (or kernels) are applied to the input image. These filters are small
matrices (e.g., 3x3 or 5x5) that slide over the input image, performing element-
wise multiplication with portions of the image to produce feature maps.
o Filters are designed to detect various features, such as vertical and horizontal
edges, textures, corners, etc.
o Each filter focuses on a specific aspect of the image, and several feature maps
are generated, capturing different image characteristics.
Example: A 3x3 filter detects edge patterns in a small 3x3 region of the image.
Importance: The convolution operation helps the model detect low-level features in the
early layers, which become more abstract and complex in deeper layers (e.g., shapes,
objects).
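A minimal NumPy sketch of this sliding-filter operation (the image and filter values
below are assumed toy examples, used for illustration only):

import numpy as np

def conv2d_valid(image, kernel):
    # Slide the kernel over the image (stride 1, no padding), summing the
    # element-wise products at each position
    h, w = kernel.shape
    rows, cols = image.shape[0] - h + 1, image.shape[1] - w + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)     # toy 5x5 "image"
vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]], dtype=float)  # 3x3 edge-detecting filter
feature_map = conv2d_valid(image, vertical_edge)
print(feature_map.shape)                             # (3, 3) feature map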
Step 4. Pooling Layer
Purpose: Reduces the size of the feature maps, making the network more efficient and
less computationally expensive while retaining important information.
Process:
o Max Pooling: The most commonly used pooling method. It selects the
maximum value from each sub-region of the feature map, reducing the spatial
size (width and height) while keeping the depth (number of feature maps) intact.
o Example: In a 2x2 max-pooling operation, a 2x2 region of the feature map is
replaced by the single maximum value from that region.
o Average Pooling: Alternatively, average pooling takes the average of the values
in the sub-region, but max pooling is more commonly used due to its superior
performance in preserving important features.
Importance: Pooling reduces the computational complexity, prevents overfitting, and
ensures that the most significant features are passed on to deeper layers.
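A small NumPy illustration of 2x2 max pooling (the 4x4 feature map below is an
assumed toy example):

import numpy as np

fmap = np.array([[1, 3, 2, 4],
                 [5, 6, 1, 2],
                 [7, 2, 9, 1],
                 [3, 4, 5, 6]], dtype=float)

# Group the 4x4 map into 2x2 blocks, then take each block's maximum
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)        # [[6. 4.]
                     #  [7. 9.]]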
Step 5. Flattening
Purpose: Converts the 2D feature maps into a 1D vector that can be fed into fully
connected layers.
Process:
o After multiple convolution and pooling operations, the resulting feature maps
(which are 2D arrays) need to be flattened into a single-dimensional vector to
be compatible with the fully connected layers.
o Example: If the output feature map is 7 × 7 × 128, the flattening step will
convert it into a 1D vector of size 6,272 (i.e., 7 × 7 × 128 = 6,272).
Importance: This step bridges the convolutional part of the network with the fully
connected layers, which are responsible for the final decision-making process.
Step 6. Fully Connected Layer
Purpose: Performs the final classification based on the features extracted by previous
layers.
Process:
o In the fully connected layer (FC), each neuron is connected to every neuron in
the previous layer, similar to neurons in a traditional neural network.
o The output of the flattening layer is fed into the fully connected layers, which
use the extracted features to classify the image or predict the output.
o Each neuron computes a weighted sum of inputs from the previous layer, and an
activation function (usually ReLU) is applied to produce the output.
Example: If the task is to classify images into 10 different categories, the final fully
connected layer will have 10 neurons, each corresponding to one class.
Importance: The fully connected layer integrates all the learned features and makes the
final decision on what the image represents.
Step 7. Output Layer (Softmax)
o Formula: P(y = j | x) = e^{z_j} / Σ_k e^{z_k}, where z_j is the output score for
class j, and k ranges over all possible classes.
o Example: For a digit recognition task, the output layer could have 10 neurons
(representing digits 0-9), and the softmax function will output the probability of
the image belonging to each class.
Importance: The output layer produces the final classification, determining the most
likely class for the input image.
Step 8. Backpropagation
Purpose: Updates the weights of the filters and neurons to minimize prediction errors
during training.
Process:
o Error Calculation: The model’s output is compared to the actual label (ground
truth) to calculate the loss (error), using a loss function like cross-entropy (for
classification tasks).
o Backpropagation: The error is propagated backward through the network, and
the filters’ weights and biases are updated using gradient descent to minimize
the loss. The gradients of the loss function with respect to each weight are
calculated.
o Learning Rate: The learning rate controls the step size for updating the weights.
A lower learning rate leads to slower but more precise convergence.
Importance: Backpropagation enables the CNN to learn from the training data by
iteratively adjusting the filters and weights, ensuring that the model improves its
predictions over time.
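Putting the steps together, a minimal Keras sketch of this layer sequence is shown
below. The filter counts, dense width, and 10-class output are illustrative placeholders
rather than the project's final architecture; compile() and fit() drive the
backpropagation described in Step 8.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),              # Step 1: 224 x 224 RGB input
    layers.Conv2D(32, (3, 3), activation='relu'),   # Step 2: 3x3 filters
    layers.MaxPooling2D((2, 2)),                    # Step 4: 2x2 max pooling
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                               # Step 5: 2D maps -> 1D vector
    layers.Dense(128, activation='relu'),           # Step 6: fully connected
    layers.Dense(10, activation='softmax'),         # Step 7: 10 class probabilities
])

# Step 8: cross-entropy loss plus gradient descent (Adam) performs backpropagation
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()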
The project involved analyzing the design of a few applications so as to make the
application more user friendly. To do so, it was really important to keep the
navigation from one screen to the other well ordered and at the same time reduce
the amount of typing the user needs to do. In order to make the application more
accessible, the browser version had to be chosen so that it is compatible with most of
the browsers.
REQUIREMENT SPECIFICATION
Functional Requirements
Software Requirements
For developing the application the following are the Software Requirements:
1. Python
2. Django
3. Windows 10 64-bit OS
Hardware Requirements
For developing the application the following are the Hardware Requirements:
1. Processor: Intel i9
2. RAM: 32 GB
3. Space on Hard Disk: minimum 1 TB
CHAPTER 4
SYSTEM REQUIREMENTS AND SPECIFICATION
4.1 Database
… with both normal and panic-induced behaviors.

Dataset | Description | Anomaly Types | Application
Street Scene Dataset | Outdoor street scene videos with pedestrians and vehicles, showing anomalies. | Multi-class anomalies like jaywalking, sudden changes in motion. | Detection of abnormal pedestrian and vehicle behavior in streets.
UCF-Crime Dataset | Surveillance footage showing criminal and abnormal activities in public spaces. | Includes various criminal events like assaults, robbery, and vandalism. | Anomaly detection for crime prevention in public pedestrian areas.
CUHK Avenue Dataset | University avenue footage focusing on unusual events like running or loitering. | Outdoor scene with labeled normal/abnormal behaviors. | Behavioral anomaly detection, sudden movements.
MIT Traffic Dataset | Pedestrian and traffic anomalies in a busy intersection captured via camera. | Mixture of pedestrian and vehicle activity. | Detect abnormal pedestrian or vehicle events in urban intersections.
Step 1. Input Layer
Objective: Provide the raw pixel values of the image to the CNN for processing.
Step 2. Convolutional Layer
Process: In the convolutional layer, a filter (also called a kernel) is applied to the input
image. The filter is a small matrix (e.g., 3x3 or 5x5) that "slides" (or convolves) across
the entire input image to extract features. Each filter detects specific patterns such as
edges, textures, or colors.
Steps:
1. The filter slides over the input image, performing element-wise multiplication
between the filter and the corresponding portion of the input image.
2. The results are summed to produce a single value, which is then stored in the
output matrix (called the feature map).
Example: A 3x3 filter sliding over a 5x5 image will produce a 3x3 feature map.
Objective: Extract important local features from the image such as edges, corners, or
textures.
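As a quick size check: a valid (no-padding) convolution of an N × N input with an
F × F filter and stride S produces an output of (N − F)/S + 1 per side; here
(5 − 3)/1 + 1 = 3, hence the 3 × 3 feature map.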
Step 3. Activation Function
Process: After convolution, an activation function is applied to introduce non-
linearity into the model. The most commonly used activation function in CNNs is
ReLU (Rectified Linear Unit).
o ReLU Formula: ReLU(x) = max(0, x)
o Steps:
Step 4. Pooling Layer
Process: The pooling layer is used to downsample the feature maps, reducing
their spatial dimensions while retaining the most important information. The most
common pooling method is Max Pooling.
o Steps:
o Example: A 2x2 Max Pooling applied to a 4x4 feature map will reduce it to a
2x2 feature map by taking the maximum value in each 2x2 block.
Objective: Reduce the size of the feature maps, decreasing the computational
complexity and helping prevent overfitting. Pooling also makes the network more
robust to small spatial translations in the input.
Step 5. Stacking Convolutional and Pooling Layers
Process: In deep CNNs, multiple convolutional and pooling layers are stacked to
extract progressively more abstract and complex features from the image.
o Example: The first convolutional layer might detect edges, the second layer
might detect textures, and later layers might recognize more complex patterns like
shapes, objects, or even specific features of objects (e.g., eyes, wheels).
Objective: Build a deep hierarchy of features, where each subsequent layer
learns increasingly complex representations of the input image.
Step 6. Flattening
Process: After passing through the convolutional and pooling layers, the
resulting feature maps (which are still in the form of multi-dimensional arrays) are
flattened into a 1D vector.
o Steps:
Take all the elements from the feature maps and arrange them into a
single linear vector. This vector will serve as the input for the fully connected layers.
Objective: Prepare the data for the fully connected (dense) layers by converting
multi-dimensional features into a single feature vector.
Step 7. Fully Connected Layer
Process: The fully connected layer connects every neuron in one layer to every
neuron in the next layer. The flattened feature vector from the previous step is passed
into this layer.
o Steps:
Step 8. Output Layer
Process: The output layer is typically a fully connected layer with a number of
neurons equal to the number of classes in the classification task. The final output is
generated using an activation function like Softmax for multi-class classification or
Sigmoid for binary classification.
o Softmax Function: Softmax(x_i) = e^{x_i} / Σ_j e^{x_j}
o This ensures that the output values represent probabilities and sum to 1.
Objective: Produce a final classification by assigning a probability to each class.
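A short numeric check of this property (the three scores below are arbitrary
illustrative logits):

import numpy as np

def softmax(scores):
    e = np.exp(scores - np.max(scores))   # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())                 # ~[0.659 0.242 0.099], sums to 1.0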
Step 9. Loss Function and Backpropagation
Process: During training, the CNN uses a loss function to measure how far the
predicted output is from the true label. For classification tasks, categorical cross-
entropy is commonly used. The loss function computes the error, and the
backpropagation algorithm updates the network's weights to minimize this error.
o Steps:
1. The loss is computed based on the difference between predicted and actual
labels.
2. Gradients are calculated for each layer using backpropagation.
3. The weights are updated using an optimization algorithm like Stochastic
Gradient Descent (SGD) or Adam.
Objective: Optimize the weights of the CNN to minimize the classification error
during training.
Step 10. Training and Evaluation
Process: The model is trained using a training dataset, and its performance is
evaluated on a validation or test dataset.
Steps:
1. The model iteratively updates weights by minimizing the loss function using
training data.
2. After training, the model is evaluated using unseen test data to measure its
performance.
Objective: Ensure the model has learned to generalize well to new data and is
capable of making accurate predictions.
4.3 Design
4.3.1 System Architecture
The input data comes in a variety of forms, including picture, audio, and video. Here,
we used preprocessed pedestrian picture data to get rid of any duplicates. After that,
the feature extraction approach is applied: the necessary characteristics are retrieved
from the picture at this step. Feature selection is then put into practice: using this
method, we pick the most important characteristics from the features that were
retrieved in the previous stage. The dataset is then separated into a test set and a
train set. Finally, the deep learning model is fitted to forecast whether or not the
input picture contains an anomalous item. [15][16]
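A minimal sketch of this pipeline, assuming a tabular feature file and a stand-in
scikit-learn classifier in place of the deep learning model (the file and column names
are hypothetical):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

# Hypothetical per-image feature table; 'label' marks normal vs. anomalous
df = pd.read_csv('pedestrian_features.csv').drop_duplicates()   # preprocessing
X, y = df.drop(columns=['label']), df['label']

X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)        # feature selection
X_train, X_test, y_train, y_test = train_test_split(
    X_sel, y, test_size=0.3, random_state=1)                    # train/test split

clf = RandomForestClassifier(random_state=1).fit(X_train, y_train)
print('Anomalous item predicted:', clf.predict(X_test[:1]))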
4.3.2 Data flow diagram
A Data Flow Diagram (DFD) is a visual representation of the flow of data within a
system or process. It is a structured technique that focuses on how data moves through
different processes and data stores within an organization or a system. DFDs are
commonly used in system analysis and design to understand, document, and
communicate data flow and processing.
4.3.3 UML Diagram
GOALS: The Primary goals in the design of the UML are as follows:
4.3.4 Use Case Diagram
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a
graphical overview of the functionality provided by a system in terms of actors, their
goals (represented as use cases), and any dependencies between those use cases. The
main purpose of a use case diagram is to show what system functions are performed
for which actor. Roles of the actors in the system can be depicted.
4.3.5 Class Diagram
The class diagram is used to refine the use case diagram and define a detailed design
of the system. The class diagram classifies the actors defined in the use case diagram
into a set of interrelated classes. The relationship or association between the classes
can be either an “is-a” or “has-a” relationship. Each class in the class diagram may be
capable of providing certain functionalities. These functionalities provided by the
class are termed “methods” of the class. Apart from this, each class may have certain
“attributes” that uniquely identify the class.
4.3.6 Sequence Diagram
4.3.7 Activity diagram
4.4 Modules
1. User
2. Admin
3. Data Collection
4. CNN
User:
The User can register first. While registering, a valid email and mobile number are
required for further communications. Once the user registers, the admin can activate the
user; only then can the user log in to our system. The user can upload a dataset whose
columns match our dataset. For algorithm execution the data must be
in float format. Here we took a heavy-vehicle fuel consumption dataset. The user can
also add new data to the existing dataset through our Django application. The user can
click Training in the web page so that the model results
(mean_absolute_error, mean_squared_error, r2_error) are calculated based on the
algorithms. The user can display the prediction results. After that the user can log out.
Admin:
The Admin can log in with his login details. The Admin can activate the registered
users; only after activation can a user log in to our system. The Admin can view the
overall data in the browser, and can click Model Results in the web page so that the
(mean_absolute_error, mean_squared_error, r2_error) values are calculated based on
the algorithms. After that the admin can log out.
Data collection:
The model is developed by using duty cycles collected from a single truck, with an
approximate mass of 8,700 kg, exposed to a variety of transients including both urban
and highway traffic in the Indianapolis area. Data was collected using the SAE J1939
standard for serial control and communications in heavy duty vehicle networks [24].
Twelve drivers were asked to exhibit good or bad behavior over two different routes.
Drivers exhibiting good behavior anticipated braking and allowed the vehicle to coast
when possible. Some drivers participated more than others, and as a result the distribution
of drivers and routes is not uniform across the data set. This field test generated
3,302,890 data points sampled at 50 Hz from the vehicle CAN bus and a total distance of
778.89 km over 56 trips with varying distances. Most of the trips covered a distance of
10 km to 15 km. In order to increase the number of data points, synthetic duty cycles
over an extended distance were obtained by assembling segments from the field duty
cycles selected at random. Moreover, a set of drivers is assigned to the training
segments and a different set of drivers is assigned to the testing segments, thereby
ensuring that the training (Ftr) and testing (Fts) data sets derived from the respective
segments are completely separate.
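A minimal sketch of this driver-separated split using scikit-learn's GroupShuffleSplit
(the file and column names are assumed for illustration):

import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.read_csv('duty_cycles.csv')               # hypothetical file name
gss = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=1)
train_idx, test_idx = next(gss.split(df, groups=df['driver_id']))
Ftr, Fts = df.iloc[train_idx], df.iloc[test_idx]

# No driver appears in both sets
assert set(Ftr['driver_id']).isdisjoint(Fts['driver_id'])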
4.5 System Requirements
4.5.1 Hardware Requirements
Minimum hardware requirements are very dependent on the particular software being
developed by a given Enthought Python / Canopy / VS Code user. Applications that
need to store large arrays/objects in memory will require more RAM, whereas
applications that need to perform numerous calculations or tasks more quickly will
require a faster processor.
System : Intel i3
Hard Disk : 1 TB
Ram : 4GB
The functional requirements or the overall description documents include the product
perspective and features, operating system and operating environment, graphics
requirements, design constraints and user documentation.
Front-End : HTML, CSS
Designing : HTML, CSS, JavaScript
4.6 Testing
The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way
to check the functionality of components, subassemblies, assemblies and/or a
finished product. It is the process of exercising software with the intent of ensuring
that the software system meets its requirements and user expectations and does not
fail in an unacceptable manner. There are various types of tests; each test type
addresses a specific testing requirement.
4.6.1 Unit Testing
Unit testing involves the design of test cases that validate that the internal program
logic is functioning properly, and that program inputs produce valid outputs. All
decision branches and internal code flow should be validated. It is the testing of
individual software units of the application; it is done after the completion of an
individual unit before integration. This is structural testing that relies on
knowledge of the unit's construction and is invasive. Unit tests perform basic tests at
component level and test a specific business process, application, and/or system
configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined
inputs and expected results.
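As a minimal illustration, a Django unit test for one unique path of the registration
flow might look like this (it assumes the UserRegister/ route from the URL
configuration in Chapter 5):

from django.test import TestCase

class RegistrationViewTest(TestCase):
    def test_registration_page_loads(self):
        # One unique path: the registration form page must render successfully
        response = self.client.get('/UserRegister/')
        self.assertEqual(response.status_code, 200)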
4.6.3 Functional Testing
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.
4.6.5 White Box Testing
White Box Testing is a testing in which the software tester has knowledge
of the inner workings, structure and language of the software, or at least its purpose.
It is used to test areas that cannot be reached from a black box level.
4.6.6 Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests, like most
other kinds of tests, must be written from a definitive source document, such as a
specification or requirements document. It is testing in which the software under
test is treated as a black box: you cannot "see" into it. The test provides inputs and
responds to outputs without considering how the software works.
Unit testing is usually conducted as part of a combined code and unit test phase of
the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional tests will be
written in detail.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
Features to be tested
Verify that the entries are of the correct format
No duplicate entries should be allowed
All links should take the user to the correct page.
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by
interface defects.
The task of the integration test is to check that components or software applications,
e.g. components in a software system or – one step up – software applications at the
company level – interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
CHAPTER 5
SOURCE CODE
# views.py (indentation reconstructed; the imports below are assumed from the
# app layout, since the original listing omits them)
import fuelconsumption
from django.shortcuts import render
from django.contrib import messages

from users.forms import UserRegistrationForm            # assumed module path
from users.models import UserRegistrationModel          # assumed module path
from fuelconsumption.algorithms import Algorithms       # assumed module path

algo = Algorithms()

def UserRegisterActions(request):
    if request.method == 'POST':
        form = UserRegistrationForm(request.POST)
        if form.is_valid():
            print('Data is Valid')
            form.save()
            messages.success(request, 'You have been successfully registered')
            form = UserRegistrationForm()
            return render(request, 'UserRegistrations.html', {'form': form})
        else:
            messages.success(request, 'Email or Mobile Already Existed')
            print("Invalid form")
    else:
        form = UserRegistrationForm()
    return render(request, 'UserRegistrations.html', {'form': form})

def UserLoginCheck(request):
    if request.method == "POST":
        loginid = request.POST.get('loginid')
        pswd = request.POST.get('pswd')
        print("Login ID = ", loginid, ' Password = ', pswd)
        try:
            check = UserRegistrationModel.objects.get(
                loginid=loginid, password=pswd)
            status = check.status
            print('Status is = ', status)
            if status == "activated":
                request.session['id'] = check.id
                request.session['loggeduser'] = check.name
                request.session['loginid'] = loginid
                request.session['email'] = check.email
                print("User id At", check.id, status)
                return render(request, 'users/UserHomePage.html', {})
            else:
                messages.success(request, 'Your Account Not at activated')
                return render(request, 'UserLogin.html')
        except Exception as e:
            print('Exception is ', str(e))
        messages.success(request, 'Invalid Login id and password')
    return render(request, 'UserLogin.html', {})

def UserHome(request):
    return render(request, 'users/UserHomePage.html', {})

def prediction(request):
    if request.method == "POST":
        import pandas as pd
        from django.conf import settings
        distance = request.POST.get("distance")
        speed = request.POST.get("speed")
        change_in_kineticenergy = request.POST.get("change_in_kineticenergy")
        change_in_potentialenergy = request.POST.get(
            "change_in_potentialenergy")
        weight = request.POST.get("weight")
        # 'data' (the fuel-consumption DataFrame) is not defined in the original
        # listing; it is assumed to be loaded from the uploaded dataset, e.g.:
        data = pd.read_csv(settings.MEDIA_ROOT + "/fuel_data.csv")  # assumed path
        x = data.iloc[:, 1:]
        y = data.iloc[:, 0]
        x = pd.get_dummies(x)
        x = x.fillna(x.mean())
        from sklearn.model_selection import train_test_split
        x_train, x_test, y_train, y_test = train_test_split(
            x, y, test_size=0.3, random_state=1)
        # from sklearn.preprocessing import StandardScaler
        # sc = StandardScaler()
        # x_train = sc.fit_transform(x_train)
        # x_test = sc.fit_transform(x_test)
        x_train = pd.DataFrame(x_train)
        from sklearn.tree import DecisionTreeRegressor
        dt = DecisionTreeRegressor()
        test_set = [distance, speed, change_in_kineticenergy,
                    change_in_potentialenergy, weight]
        print(test_set)
        dt.fit(x_train, y_train)
        print('x train:', x_train)
        print('y train:', y_train)
        test_set = [float(v) for v in test_set]  # POST values arrive as strings
        y_pred = dt.predict([test_set])
        print('y pred:', y_pred)
        return render(request, 'users/prediction.html', {'y_pred': y_pred})
    return render(request, 'users/prediction.html', {})

def random_forest(request):
    mae, mse, r2 = algo.random_forest()
    print("mae:", mae)
    print("mse:", mse)
    print("r2:", r2)
    return render(request, "users/RF.html", {'mae': mae, 'mse': mse, 'r2': r2})

def Gradiant_Boosting(request):
    mae, mse, r2 = algo.Gradiant_Boosting()
    print("mae:", mae)
    print("mse:", mse)
    print("r2:", r2)
    return render(request, "users/GF.html", {'mae': mae, 'mse': mse, 'r2': r2})

def svm(request):
    mae, mse, r2 = algo.svm()
    return render(request, "users/svm.html", {'mae': mae, 'mse': mse, 'r2': r2})

def cnn(request):
    history = algo.DeepLearning()
    print('-' * 100)
    mae = history.history['mae']
    mse = history.history['mse']
    print(history.history['mae'])
    return render(request, "users/cnn.html", {'mae': mae[-1], 'mse': mse[-1]})
Base.html:
{% load static %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Machine Learning Model for Average Fuel Consumption in Heavy
Vehicles</title>
<meta content="width=device-width, initial-scale=1.0" name="viewport">
<meta content="Free Website Template" name="keywords">
<meta content="Free Website Template" name="description">
<link href="{% static 'img/favicon.ico' %}" rel="icon">
<link href="https://fonts.googleapis.com/css2?family=Open+Sans:wght@300;400;600;700;800&display=swap" rel="stylesheet">
<link href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" rel="stylesheet">
<link href="https://cdnjs.cloudflare.com/ajax/libs/font-
awesome/5.10.0/css/all.min.css" rel="stylesheet">
<link href="{% static 'lib/animate/animate.min.css' %}" rel="stylesheet">
<link href="{% static 'lib/owlcarousel/owl.carousel.min.css' %}"
rel="stylesheet">
<link href="{% static 'lib/lightbox/css/lightbox.min.css' %}" rel="stylesheet">
<link href="{% static 'css/style.css' %}" rel="stylesheet">
</head>
<body>
<div class="top-bar d-none d-md-block">
<div class="container-fluid">
<div class="row">
<div class="col-md-6">
<div class="top-bar-left">
</div>
</div>
<div class="col-md-6">
<div class="top-bar-right">
</div>
</div>
</div>
</div>
</div>
<div class="navbar navbar-expand-lg bg-dark navbar-dark">
<div class="container-fluid">
<a href="index.html" class="navbar-brand"> <span
style="color:Black;">Fuel Consumption </span></a>
<button type="button" class="navbar-toggler" data-toggle="collapse" data-
target="#navbarCollapse">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse justify-content-between"
id="navbarCollapse">
<div class="navbar-nav ml-auto">
<a href="{% url 'index' %}" class="nav-item nav-link"
style="color:Black;">Home</a>
<a href="{% url 'UserLogin' %}" class="nav-item nav-link"
style="color:Black;">User</a>
<a href="{% url 'AdminLogin' %}" class="nav-item nav-link"
style="color:Black;">Admin</a>
<a href="{% url 'UserRegister' %}" class="nav-item nav-link"
style="color:Black;">Registration</a>
</div>
</div>
</div>
</div>
{% block contents %}
{% endblock %}
<div class="container copyright">
<div class="row">
<div class="col-md-6">
<p></p>
</div>
<div class="col-md-6">
</div>
<div class="col-xl-7 col-lg-7 col-md-7 co-sm-l2">
</div>
</div>
<!-- Footer End -->
<a href="#" class="back-to-top"><i class="fa fa-chevron-up"></i></a>
<!-- Template Javascript -->
<script src="{% static 'js/main.js' %}"></script>
</body>
</html>
Urls.py:
"""project8 URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/2.2/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path
from fuelconsumption import views as mainView
from admins import views as admins
from users import views as usr
urlpatterns = [
path('admin/', admin.site.urls),
path("", mainView.index, name="index"),
path("index/", mainView.index, name="index"),
path("AdminLogin/", mainView.AdminLogin, name="AdminLogin"),
path("UserLogin/", mainView.UserLogin, name="UserLogin"),
path("UserRegister/", mainView.UserRegister, name="UserRegister"),
# Admin views
path("AdminHome/", admins.AdminHome, name="AdminHome"),
path("AdminLoginCheck/", admins.AdminLoginCheck,
name="AdminLoginCheck"),
path('RegisterUsersView/', admins.RegisterUsersView, name='RegisterUsersView'),
path('ActivaUsers/', admins.ActivaUsers, name='ActivaUsers'),
# User Views
path("prediction/", usr.prediction, name="prediction"),
path("random_forest/", usr.random_forest, name="random_forest"),
path("Gradiant_Boosting/",usr.Gradiant_Boosting,name="Gradiant_Boosting"),
path("svm/",usr.svm,name="svm"),
path("cnn", usr.cnn, name="cnn")
43
CHAPTER 6
EXPERIMENTAL RESULTS
Fig 6.3: Prediction Page
CHAPTER 7
CONCLUSION AND FUTURE ENHANCEMENTS
7.1 Conclusion
The safety of the people is very important; hence anything that could cause
discomfort to the people needs to be checked, and so anomalies are detected.
These anomalies can be anything abnormal in the pedestrian pathway; usually
they are bikes, cycles, trucks, skaters, etc. Because they can harm
pedestrians in the pathway, this system detects the anomalies with the help of
deep learning and image processing techniques: the video is converted into frames,
the frames are preprocessed to extract efficient features and remove noise, and
using motion detection and object detection the system classifies objects, other
than pedestrians, based on their motion and size.
The base paper "Deep Learning based Abnormal Event Detection in Pedestrian
Pathways" outlines several potential future enhancements to improve the system
further. Here are some proposed enhancements:
Real-Time Processing: Improving the system to handle real-time video feeds more
efficiently, ensuring faster detection and response to anomalies.
Adaptive Learning: Enabling the model to adapt to new types of anomalies without
requiring complete retraining.
Hybrid Models: Combining deep learning with other machine learning techniques,
such as reinforcement learning, to improve the detection performance and adaptability
of the system.
Scalability: Enhancing the system's scalability to handle larger datasets and more
extensive networks of cameras, ensuring consistent performance across different
environments.
These enhancements aim to make the anomaly detection system more accurate,
efficient, and adaptable to various real-world conditions and requirements.
REFERENCES
[5]. “A world with a billion cameras watching you is just around the corner.” by Lin,
L., Purnell, N. in Wall Str. J. (2019)
[7]. “STAN: spatio-temporal adversarial networks for abnormal event detection.” by
Lee, S., Kim, H. G., Ro, Y. M. in IEEE International Conference on Acoustics, Speech
and Signal Processing (ICASSP) (2018)
[9]. “Crime analysis through machine learning” by S. Kim, P. Joshi, P.S. Kalsi, P.
Taheri in IEEE 9th Annual Information Technology, Electronics and Mobile
Communication Conference (IEMCON), IEEE (2018)
[11]. “Video anomaly detection with compact feature sets for online performance.” by
Leyva, R., Sanchez, V., Li, C.-T. in IEEE Trans. Image Process.(2017)
[12]. “Abnormal event detection in videos using generative adversarial nets.” by
Ravanbakhsh, M., Nabi, M., Sangineto, E., Marcenaro, L., Regazzoni, C., Sebe, N. in
2017 IEEE International Conference on Image Processing (ICIP), pp. 1577–1581 (2017)
[13]. “Hierarchical context modeling for video event recognition.” by Wang, X., Ji, Q.
in IEEE Trans. Pattern Anal. Mach. Intell.(2017)
[14]. “The use of predictive analysis in spatiotemporal crime forecasting: building and
testing a model in an urban context” by A. Rummens, W. Hardyns, L. Pauwels in Appl.
Geogr. (2017)
[15]. “Abnormal event detection at 150 FPS in MATLAB.” by Lu C., Shi, J., Jia, J. in
IEEE International Conference on Computer Vision (2013)
[16]. “Evaluating the use of public surveillance cameras for crime control and
prevention.” by Vigne, N.G.L., Lowry, S.S., Markman, J.A., Dwyer, A.M. in US
Department of Justice, Office of Community Oriented Policing Services. Urban
Institute, Justice Policy Center, Washington, DC (2011)
[17]. “Survey on contemporary remote surveillance systems for public safety” by T.D.
Raty in IEEE Transactions on Systems, Man and Cybernetics. (2010)
[18]. “Observe locally, infer globally: a space-time MRF for detecting abnormal
activities with incremental updates.” by Kim, J., Grauman, K. in IEEE Conference on
Computer Vision and Pattern Recognition (2009)