
A

Mini Project Report


on
DEEP LEARNING BASED ABNORMAL EVENT DETECTION IN
PEDESTRIAN PATHWAY

Submitted in partial fulfilment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY
in
INFORMATION TECHNOLOGY
Submitted by

A SRIRAM 21K81A1202
CH MANIKANTA 21K81A1208
PULIPATI SRESTHA 21K81A1247

Under the Guidance of


Dr. N. KRISHNAIAH
PROFESSOR AND HEAD OF DEPARTMENT

DEPARTMENT OF INFORMATION TECHNOLOGY

St. MARTIN'S ENGINEERING COLLEGE


UGC Autonomous
Affiliated to JNTUH, Approved by AICTE,
Accredited by NBA & NAAC A+, ISO 9001:2008 Certified
Dhulapally, Secunderabad - 500 100
www.smec.ac.in

NOVEMBER – 2024

St. MARTIN'S ENGINEERING COLLEGE
UGC Autonomous
Affiliated to JNTUH, Approved by AICTE
NBA & NAAC A+ Accredited
Dhulapally, Secunderabad - 500 100
www.smec.ac.in

Certificate

This is to certify that the project entitled “DEEP LEARNING BASED ABNORMAL EVENT DETECTION IN PEDESTRIAN PATHWAY” is being submitted by A. Sriram (21K81A1202), CH. Manikanta (21K81A1208) and Pulipati Srestha (21K81A1247) in partial fulfilment of the requirements for the award of the degree of BACHELOR OF TECHNOLOGY IN INFORMATION TECHNOLOGY, and is a record of bonafide work carried out by them. The results embodied in this report have been verified and found satisfactory.

Signature of Guide Signature of HOD


Dr. N. KRISHNAIAH Dr. N. KRISHNAIAH
Professor and Head of Department Professor and Head of Department
Department of Information Technology Department of Information Technology

Internal Examiner External Examiner

Place:

Date:

St. MARTIN'S ENGINEERING COLLEGE
UGC Autonomous
Affiliated to JNTUH, Approved by AICTE
NBA & NAAC A+ Accredited
Dhulapally, Secunderabad - 500 100
www.smec.ac.in

DEPARTMENT OF INFORMATION TECHNOLOGY

Declaration
We, the students of ‘Bachelor of Technology in the Department of Information Technology’, session 2021 - 2025, St. Martin’s Engineering College, Dhulapally, Kompally, Secunderabad, hereby declare that the work presented in this Project Work entitled “DEEP LEARNING BASED ABNORMAL EVENT DETECTION IN PEDESTRIAN PATHWAY” is the outcome of our own bonafide work, is correct to the best of our knowledge, and that this work has been undertaken with due regard for Engineering Ethics. The results embodied in this project report have not been submitted to any other university for the award of any degree.

A SRIRAM 21K81A1202
CH MANIKANTA 21K81A1208
PULIPATI SRESTHA 21K81A1247

ACKNOWLEDGEMENT

The satisfaction and euphoria that accompany the successful completion of any task would be incomplete without the mention of the people who made it possible and whose encouragement and guidance have crowned our efforts with success.

First and foremost, we would like to express our deep sense of gratitude and indebtedness to our College Management for their kind support and permission to use the facilities available in the Institute.

We especially would like to express our deep sense of gratitude and indebtedness to Dr. P. SANTOSH KUMAR PATRA, Professor and Group Director, St. Martin’s Engineering College, Dhulapally, for permitting us to undertake this project.

We wish to record our profound gratitude to Dr. M. SREENIVAS RAO, Principal, St. Martin’s Engineering College, for his motivation and encouragement.

We are also thankful to Dr. N. KRISHNAIAH, Head of the Department, Information Technology, St. Martin’s Engineering College, Dhulapally, Secunderabad, for his support and guidance throughout our project, as well as to Project Coordinator Mrs. T. BHARGAVI, Assistant Professor, Information Technology department, for her valuable support.

We would like to express our sincere gratitude and indebtedness to our project supervisor Dr. N. KRISHNAIAH, Professor, Information Technology, St. Martin's Engineering College, Dhulapally, for his support and guidance throughout our project.

Finally, we express our thanks to all those who have helped us in successfully completing this project. Furthermore, we would like to thank our family and friends for their moral support and encouragement.

A SRIRAM 21K81A1202
CH MANIKANTA 21K81A1208
PULIPATI SRESTHA 21K81A1247

ABSTRACT

A video surveillance system is capable of detecting unusual motions and hence proves to be a major contributor to the surveillance sector. The unusual or strange movements recorded by the surveillance cameras have been used and their patterns studied. Patterns that do not match the patterns of normal behavior are then captured. In video sequences they can be seen in numerous forms, including bikers, skaters, and small carts in pedestrian pathways. The main aim of the study is to develop an efficient Video Anomaly Detection (VAD) system that is capable of identifying and analyzing strange movements in videos using the techniques of deep learning and image processing.
In this project, we propose a deep learning-based approach for abnormal event
detection in pedestrian pathways, aiming to enhance public safety and automate
surveillance systems. Traditional methods for detecting abnormal behavior in crowded
areas often rely on manual monitoring or simplistic threshold-based algorithms, which
are prone to inaccuracies. In contrast, our method leverages the power of
convolutional neural networks (CNNs) and recurrent neural networks (RNNs),
particularly Long Short-Term Memory (LSTM) units, to analyze video footage and
identify irregularities in pedestrian movement patterns.
The model is trained on a dataset of normal and abnormal pedestrian behaviors,
learning to distinguish between routine activities and unusual events such as sudden
stops, erratic movements, and potential safety hazards. Our approach integrates
spatiotemporal features from video frames to capture both the appearance and motion
dynamics of pedestrians. By utilizing real-time processing, the system can alert
authorities to potential incidents like accidents, crowd formation, or suspicious
behavior.

LIST OF FIGURES

Figure No. Figure Title

3.1 KNN on dataset
3.2 Considering new data point
3.3 Measuring of Euclidean distance
3.4 Assigning data point to category A
LIST OF TABLES

Table No. Table Name

4.1 Dataset View
LIST OF ACRONYMS AND DEFINITIONS
S.No: Acronym Definition

01. LSTM Long Short-term Memory Networks

02. CNN Convolutional Neural Network

03. SVM Support Vector Machine

04. UML Unified Modelling Language

05. RF Random Forest

06. RNN Recurrent Neural Networks

07. NB Naïve Bayes

08. MDP Markov Decision Process

09. PTSD Post-traumatic Stress Disorder

CONTENTS

ACKNOWLEDGEMENT
ABSTRACT
LIST OF FIGURES
LIST OF TABLES
LIST OF ACRONYMS AND DEFINITIONS
CHAPTER 1: INTRODUCTION
1.1 Objective
1.2 Overview
CHAPTER 2: LITERATURE SURVEY
CHAPTER 3: SYSTEM ANALYSIS AND DESIGN
3.1 Existing System
3.2 Proposed System
3.3 System Configuration
CHAPTER 4: SYSTEM REQUIREMENTS & SPECIFICATIONS
4.1 Database
4.2 CNN Algorithm
4.3 Design
4.3.1 System Architecture
4.3.2 Data Flow Diagram
4.3.3 UML Diagram
4.3.4 Use Case Diagram
4.3.5 Class Diagram
4.3.6 Sequence Diagram
4.3.7 Activity Diagram
4.4 Modules
4.4.1 Modules Description
4.5 System Requirements
4.5.1 Hardware Requirements
4.5.2 Software Requirements
4.6 Testing
4.6.1 Unit Testing
4.6.2 Integration Testing
4.6.3 Functional Testing
4.6.4 System Testing
4.6.5 White Box Testing
4.6.6 Black Box Testing
4.6.7 Unit Testing
4.6.8 Integration Testing
4.6.9 Acceptance Testing
CHAPTER 5: SOURCE CODE
CHAPTER 6: EXPERIMENTAL RESULTS
CHAPTER 7: CONCLUSION AND FUTURE ENHANCEMENT
7.1 Conclusion
7.2 Future Enhancement
REFERENCES
CHAPTER 1
INTRODUCTION

1.1 Objective
Deep learning-based abnormal event detection in pedestrian pathways focuses on
enhancing safety and security through real-time monitoring of unusual or dangerous
activities. The goal is to automate surveillance, reducing the need for constant human
oversight, while achieving high accuracy and precision in identifying anomalies such
as accidents, assaults, or suspicious behaviors. By analyzing pedestrian movements
and behaviors, the system can detect abnormal patterns, even in complex and dynamic
environments with varying conditions like lighting or weather. It is designed to be
scalable, allowing deployment across different areas, from small pathways to large
public spaces, without significant manual adjustments. Additionally, deep learning
models aim to overcome challenges like occlusions and ensure robust detection even
when visibility is compromised. Ultimately, these systems can integrate with broader
smart city infrastructure, supporting real-time responses and improving the overall
safety and security of pedestrian pathways.

1.2 Overview

Intelligent video surveillance, a complicated field of study in computer vision and machine learning systems, has drawn more attention recently as a result of worries about international security. It is critical to keep an eye out for odd activity in public areas like bus stops, train stations, and retail centers. Due to technological advancements and decreased costs, the usage of surveillance cameras has expanded in both public and private settings. The surveillance of undesirable occurrences is often carried out by human operators, who examine simultaneous video feeds taken by several cameras.

Their capacity to recognize anomalous occurrences in real time is compromised when they are exposed to extended periods of time without visual stimulation. As a result, the existing system is reduced to a recording system with little utility beyond forensics. Therefore, real-time automated anomaly detection is necessary to identify incidents and take quick action. The goal of our study is therefore to create an anomaly detection system that can identify odd movement in video footage. Here, image processing methods have been combined with deep learning approaches such as convolutional and recurrent neural networks to analyze the input image dataset. Dealing with massive volumes of data and identifying any indications of network penetration are also crucial. When it comes to big data, data security and privacy are perhaps the most pressing concerns, especially in the context of network assaults. Distributed denial-of-service (DDoS) attacks are one of the most common types of cyberattacks. They target servers or networks with the intent of interfering with their normal operation. Although real-time detection and mitigation of DDoS attacks is difficult to achieve, a solution would be extremely valuable, since attacks can cause significant damage.

CHAPTER 2
LITERATURE SURVEY

Anomalies are abnormal events that differ from the pattern of normal behavior, and anomaly identification is the key factor for surveillance. This is often a challenging task because an anomaly may be falsely detected in certain contexts. For example, firing a gun is an abnormal event on a regular street but a normal event in a gun shooting club. Hence certain events are termed as anomalous despite being normal behavior for the place. [1][2]

The WEKA (Waikato Environment for Knowledge Analysis) dataset was used for the investigation of violent crimes, and this analysis shows a trend between real criminal data and community data. Additive regression, linear regression and decision stump techniques were used in this study; among the three, linear regression was able to capture the unpredictability of the event's occurrence and shows the ability of deep learning techniques in anomaly detection. [3][4] In a study by Kim S et al., anomaly detection in Philadelphia is analyzed and a trend has been identified. The machine learning model is trained for anomaly detection from massive data sources using techniques such as logistic regression, ordinal regression, decision trees, and k-nearest neighbor. The models achieved a 69% accuracy. [5][6]

In another study, Elhurrouss et al. used an old crime locations dataset to predict the places where crimes are likely to happen. The Levenberg-Marquardt technique was used to analyze and understand the data. In addition, a scaled technique was also used in the study for data examination and understanding. The scaled technique was found to be the best performer and showed an accuracy of 78%. It also results in crime reduction of up to 78%. [7][8]

Sultani et al. conducted a thorough investigation on anomaly detection in metropolitan areas where data was integrated into a 200x250 m grid and clearly analyzed with hindsight. They proposed a model using techniques such as ensemble logistic regression and neural networks for anomaly detection. The results conclude that prediction of anomalies is more accurate when performed once every 14 days compared to performance on a monthly basis. [9]

In a study by Rummens et al., anomaly activities were thoroughly observed and analyzed with the help of anomaly data from the 15 years preceding 2017. Techniques such as k-nearest neighbor and decision tree were used for the detection. The approach achieved an accuracy level of 39-44% when used against a dataset consisting of 560,000 anomaly activities. [10][11]

A growing number of cities are using surveillance cameras to reduce crime, but little research exists to determine whether they are worth the cost. With jurisdictions across the country tightening their belts, public safety resources are scarce and policymakers need to know which potential investments are likely to bear fruit. This research brief summarizes the Urban Institute's series documenting three cities' use of public surveillance cameras and how they impacted crime in their neighborhoods.

In this paper, we propose a novel abnormal event detection method with spatio-
temporal adversarial networks (STAN). We devise a spatio-temporal generator which
synthesizes an inter-frame by considering spatio-temporal characteristics with
bidirectional ConvLSTM. A proposed spatio-temporal discriminator determines
whether an input sequence is real-normal or not with 3D convolutional layers. These
two networks are trained in an adversarial way to effectively encode spatio-temporal
features of normal patterns. After the learning, the generator and the discriminator can
be independently used as detectors, and deviations from the learned normal patterns
are detected as abnormalities. Experimental results show that the proposed method
achieved competitive performance compared to the state-of-the-art methods. Further,
for the interpretation, we visualize the location of abnormal events detected by the
proposed networks using a generator loss and discriminator gradients.

We propose a space-time Markov random field (MRF) model to detect abnormal


activities in video. The nodes in the MRF graph correspond to a grid of local regions
in the video frames, and neighboring nodes in both space and time are associated with
links. To learn normal patterns of activity at each local node, we capture the
distribution of its typical optical flow with a mixture of probabilistic principal
component analyzers. For any new optical flow patterns detected in incoming video
clips, we use the learned model and MRF graph to compute a maximum a posteriori
estimate of the degree of normality at each local node. Further, we show how to
incrementally update the current model parameters as new video observations stream
in, so that the model can efficiently adapt to visual context changes over a long period
of time. Experimental results on surveillance videos show that our space-time MRF
model robustly detects abnormal activities both in a local and global sense: not only
does it accurately localize the atomic abnormal activities in a crowded video, but at the
same time it captures the global-level abnormalities caused by irregular interactions
between local activities.

Automated surveillance systems observe the environment utilizing cameras. The observed scenario is then analysed using motion detection, crowd behaviour, individual behaviour, and interactions between individuals, crowds and their surrounding environment. These automatic systems accomplish a multitude of tasks which include detection, interpretation, understanding, recording and creating alarms based on the analysis. Until recently, studies have achieved enhanced monitoring performance along with avoiding possible human failures by manipulating different features of these systems. This paper presents a comprehensive review of such video surveillance systems as well as the components used with them. The description of the architectures used is presented, followed by the most required analyses in these systems. For the bigger picture and a wholesome view of the system, existing surveillance systems were compared in terms of characteristics, advantages, and difficulties, which are tabulated in this paper. In addition, future trends are discussed, charting a path into upcoming research directions.
Crime detection and prediction is a fundamental process for reducing criminal activities before they actually happen. Moreover, the detection method is vital since it can potentially save the victim's life and avoid lasting strain on, and harm to, public/private property. In addition, it can be useful in predicting possible terrorist activities. Crime detection using deep learning models is an attention-grabbing research area, and detecting and reducing criminal activities is imperative to develop a peaceful society. Video surveillance automates the monitoring of hazardous situations and enables a law enforcement system to take effective steps towards public safety. In this paper, an end-to-end deep learning model is proposed which is based on a Bi-directional gated recurrent unit (BiGRU) and a Convolutional neural network (CNN) to detect and prevent criminal activities. The CNN extracts the spatial features from video frames, whereas temporal and local motion features are extracted by the BiGRU from the CNN-extracted features of multiple frames. A focused bag is created to select those video frames which indicate certain actions. Moreover, a rank-based loss is used to effectively detect and classify the suspicious activities. For classification of activities, various machine learning classifiers are used. The proposed deep learning video surveillance technique is able to track human trails and detect criminal events. The CAVIAR dataset is used to examine the proposed technique for video surveillance-based crime detection, with a performance accuracy of almost 98.86%. The alerts received from the proposed technique can also be examined, demonstrating that the deployed video surveillance camera systems can effectively detect unusual and criminal activities. In addition, the proposed technique showed considerable performance accuracy and outscored related state-of-the-art (SOTA) DL models including CNN-LSTM, CNN, HMM, and DBN, achieving a 21.88% absolute improvement in crime detection accuracy.

CHAPTER 3

SYSTEM ANALYSIS AND DESIGN

3.1 Existing System

K-Nearest Neighbour (KNN) is one of the simplest machine learning algorithms, based on the supervised learning technique. The KNN algorithm assumes similarity between the new case/data and the available cases and puts the new case into the category that is most similar to the available categories. It stores all the available data and classifies a new data point based on similarity. This means that when new data appears, it can be easily classified into a well-suited category by using the KNN algorithm. It can be used for regression as well as for classification, but mostly it is used for classification problems. It is a non-parametric algorithm, which means it does not make any assumption about the underlying data. It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead, it stores the dataset and performs an action on it at the time of classification. The KNN algorithm at the training phase just stores the dataset, and when it gets new data, it classifies that data into a category that is most similar to the new data.

Why do we need a K-NN Algorithm?

Suppose there are two categories, Category A and Category B, and we have a new data point x1. In which of these categories will this data point lie? To solve this type of problem, we need a K-NN algorithm. With the help of K-NN, we can easily identify the category or class of a particular data point. Consider the below diagram:

Fig 3.1: KNN on dataset.
How does K-NN work?

The K-NN working can be explained based on the below algorithm:

Step-1: Select the number K of the neighbors.

Step-2: Calculate the Euclidean distance of K number of neighbors.

Step-3: Take the K nearest neighbors as per the calculated Euclidean distance.

Step-4: Among these k neighbors, count the number of the data points in each
category.

Step-5: Assign the new data points to that category for which the number of the
neighbor is maximum.

Step-6: Model is ready.

Suppose we have a new data point, and we need to put it in the required category.
Consider the below image:

Fig. 3.2: Considering new data point.

Firstly, we will choose the number of neighbors, so we will choose k = 5.

Next, we will calculate the Euclidean distance between the data points. The Euclidean distance is the distance between two points, which we have already studied in geometry. For two points (x1, y1) and (x2, y2) it can be calculated as d = √((x2 − x1)² + (y2 − y1)²).

Fig. 3.3: Measuring of Euclidean distance.

By calculating the Euclidean distance, we got the nearest neighbors, as three nearest
neighbors in category A and two nearest neighbors in category B. Consider the below
image:

Fig. 3.4: Assigning data point to category A.

As we can see the 3 nearest neighbors are from category A, hence this new data point
must belong to category A.

How to select the value of K in the K-NN Algorithm?

Below are some points to remember while selecting the value of K in the K-NN
algorithm:

 There is no particular way to determine the best value for "K", so we need to
try some values to find the best out of them. The most preferred value for K is
5.
 A very low value for K, such as K=1 or K=2, can be noisy and lead to the effects of outliers in the model.
 Large values for K are good for smoothing out noise, but too large a value can blur the distinction between categories and increase computation.

Limitations

 Computational Complexity: Making predictions with KNN involves calculating distances between the new data point and all training data points. This can be computationally expensive, especially for large datasets.
 Sensitivity to Noise: This model is sensitive to noisy data and outliers. Noisy
data can significantly impact the classification results.
 Need for Optimal K: Selecting the appropriate value of K (the number of
nearest neighbors) is crucial. Choosing an inappropriate K value can lead to
suboptimal results.
 Imbalanced Data: It struggles with imbalanced datasets where one class
significantly outnumbers the others. The majority class can dominate
predictions.
 Curse of Dimensionality: In high-dimensional feature spaces, the notion of
distance becomes less meaningful, and KNN suffers from the curse of
dimensionality, leading to reduced performance.
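To make the algorithm concrete, the following is a small illustrative sketch (not part of this project's submitted code, which appears in Chapter 5) of KNN classification using scikit-learn; the synthetic two-category dataset and k = 5 are assumptions for demonstration.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical two-category dataset standing in for Category A / Category B
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=1)

# KNN is a lazy learner: fit() only stores the training set, and Euclidean
# distances are computed at prediction time (Steps 1-6 above)
knn = KNeighborsClassifier(n_neighbors=5, metric='euclidean')
knn.fit(X_train, y_train)

# Classify a new data point by majority vote among its 5 nearest neighbors
print(knn.predict(X_test[:1]))
print("accuracy:", knn.score(X_test, y_test))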

3.2 Proposed System

The Convolutional Neural Network (CNN) algorithm is a powerful deep learning architecture primarily used for tasks like image recognition, classification, and object detection. It mimics the human visual system's way of processing images, breaking them down into simpler patterns and gradually abstracting complex features. Let's elaborate on each step of the CNN process:

Step 1. Input Layer

 Purpose: The input layer receives the image that the CNN will process.
 Process: An image is represented as a 3D matrix (or tensor), where:
o Width and Height represent the spatial dimensions (number of pixels along the
horizontal and vertical axes).
o Depth (or channels) represents the number of color channels (typically 3 for
RGB images—Red, Green, and Blue).
o Example: A colored image of size 224x224 pixels will be represented as a tensor
of shape 224 × 224 × 3.
 Importance: The input layer ensures the image data is properly formatted to pass
through the CNN architecture.

Step 2. Convolution Layer

 Purpose: This is the core component of the CNN, responsible for extracting features
like edges, textures, and patterns from the input image.
 Process:
o A set of filters (or kernels) are applied to the input image. These filters are small
matrices (e.g., 3x3 or 5x5) that slide over the input image, performing element-
wise multiplication with portions of the image to produce feature maps.
o Filters are designed to detect various features, such as vertical and horizontal
edges, textures, corners, etc.
o Each filter focuses on a specific aspect of the image, and several feature maps
are generated, capturing different image characteristics.
 Example: A 3x3 filter detects edge patterns in a small 3x3 region of the image.

 Importance: The convolution operation helps the model detect low-level features in the
early layers, which become more abstract and complex in deeper layers (e.g., shapes,
objects).

Step 3. Activation Layer (ReLU)

 Purpose: Introduces non-linearity to the model. Without an activation function, the CNN would behave like a linear model, unable to learn complex patterns.
 Process: The most commonly used activation function in CNNs is ReLU (Rectified
Linear Unit). It replaces all negative pixel values in the feature map with zero, while
keeping positive values unchanged.
o Formula: f(x) = max(0, x), where x is the input to the neuron.
 Importance: ReLU adds the ability for the network to model complex relationships,
making the model more powerful and capable of handling real-world data with non-
linear properties.

Step 4. Pooling Layer

 Purpose: Reduces the size of the feature maps, making the network more efficient and
less computationally expensive while retaining important information.
 Process:
o Max Pooling: The most commonly used pooling method. It selects the
maximum value from each sub-region of the feature map, reducing the spatial
size (width and height) while keeping the depth (number of feature maps) intact.
o Example: In a 2x2 max-pooling operation, a 2x2 region of the feature map is
replaced by the single maximum value from that region.
o Average Pooling: Alternatively, average pooling takes the average of the values
in the sub-region, but max pooling is more commonly used due to its superior
performance in preserving important features.
 Importance: Pooling reduces the computational complexity, prevents overfitting, and
ensures that the most significant features are passed on to deeper layers.

Step 5. Flattening

 Purpose: Converts the 2D feature maps into a 1D vector that can be fed into fully
connected layers.
 Process:
o After multiple convolution and pooling operations, the resulting feature maps
(which are 2D arrays) need to be flattened into a single-dimensional vector to
be compatible with the fully connected layers.
o Example: If the output feature map is 7 × 7 × 128, the flattening step will
convert it into a 1D vector of size 6,272 (i.e., 7 × 7 × 128 = 6272).
 Importance: This step bridges the convolutional part of the network with the fully
connected layers, which are responsible for the final decision-making process.

Step 6. Fully Connected Layer

 Purpose: Performs the final classification based on the features extracted by previous
layers.
 Process:
o In the fully connected layer (FC), each neuron is connected to every neuron in
the previous layer, similar to neurons in a traditional neural network.
o The output of the flattening layer is fed into the fully connected layers, which
use the extracted features to classify the image or predict the output.
o Each neuron computes a weighted sum of inputs from the previous layer, and an
activation function (usually ReLU) is applied to produce the output.
 Example: If the task is to classify images into 10 different categories, the final fully
connected layer will have 10 neurons, each corresponding to one class.
 Importance: The fully connected layer integrates all the learned features and makes the
final decision on what the image represents.

Step 7. Output Layer

 Purpose: Provides the final class probabilities or predictions.


 Process:
o The output layer usually applies the softmax activation function to produce a
probability distribution over the possible classes. Softmax ensures that the sum
of the probabilities for all classes equals 1.

o Formula: P(y = j | x) = e^{z_j} / Σ_k e^{z_k}, where z_j is the output score for
class j, and k ranges over all possible classes.
o Example: For a digit recognition task, the output layer could have 10 neurons
(representing digits 0-9), and the softmax function will output the probability of
the image belonging to each class.
 Importance: The output layer produces the final classification, determining the most
likely class for the input image.

Step 8. Backpropagation

 Purpose: Updates the weights of the filters and neurons to minimize prediction errors
during training.
 Process:
o Error Calculation: The model’s output is compared to the actual label (ground
truth) to calculate the loss (error), using a loss function like cross-entropy (for
classification tasks).
o Backpropagation: The error is propagated backward through the network, and
the filters’ weights and biases are updated using gradient descent to minimize
the loss. The gradients of the loss function with respect to each weight are
calculated.
o Learning Rate: The learning rate controls the step size for updating the weights.
A lower learning rate leads to slower but more precise convergence.
 Importance: Backpropagation enables the CNN to learn from the training data by
iteratively adjusting the filters and weights, ensuring that the model improves its
predictions over time.
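For illustration only (the project's full source code appears in Chapter 5), the layer sequence described in Steps 1-8 could be sketched in Keras roughly as follows; the 224x224x3 input, the filter counts and the two-class output are assumptions, not the project's exact configuration.

from tensorflow.keras import layers, models

# A hypothetical CNN mirroring the steps above: convolution + ReLU, pooling,
# repeated, then flattening and fully connected layers with a softmax output.
model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),             # Step 1: RGB input tensor
    layers.Conv2D(32, (3, 3), activation='relu'),  # Steps 2-3: 3x3 filters + ReLU
    layers.MaxPooling2D((2, 2)),                   # Step 4: 2x2 max pooling
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                              # Step 5: feature maps to 1D vector
    layers.Dense(128, activation='relu'),          # Step 6: fully connected layer
    layers.Dense(2, activation='softmax'),         # Step 7: class probabilities
])

# Step 8: backpropagation is configured through the loss and optimizer
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])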

3.3 System Configuration


REQUIREMENT ANALYSIS

The project involved analyzing the design of a few applications so as to make the application more user friendly. To do so, it was really important to keep the navigation from one screen to the other well ordered and at the same time reduce the amount of typing the user needs to do. In order to make the application more accessible, the browser version had to be chosen so that it is compatible with most of the browsers.

REQUIREMENT SPECIFICATION

Functional Requirements

 Graphical User interface with the User.

Software Requirements

For developing the application the following are the Software Requirements:

1. Python

2. Django

Operating Systems supported

1. Windows 10 64 bit OS

Technologies and Languages used to Develop

1. Python

Debugger and Emulator

 Any Browser (Particularly Chrome)


Hardware Requirements

For developing the application the following are the Hardware Requirements:

 Processor: Intel i9
 RAM: 32 GB
 Space on Hard Disk: minimum 1 TB

CHAPTER 4
SYSTEM REQUIREMENTS AND SPECIFICATION
4.1 Database

Table 4.1: Dataset View

Dataset: UCSD Anomaly Detection
Description: Video clips from a walkway at UC San Diego showing pedestrian activity.
Key Features: Dense pedestrian traffic with anomalies like cyclists, skaters, carts.
Application: Detect non-pedestrian entities (e.g., bikes, vehicles) in pathways.

Dataset: Avenue Dataset
Description: University avenue videos with normal and abnormal behaviors (running, etc.).
Key Features: Varied pedestrian behaviors in an outdoor environment.
Application: Anomaly detection based on unusual behaviors and movement patterns.

Dataset: ShanghaiTech Campus
Description: Large-scale dataset from a campus, featuring diverse anomalies (loitering, etc.).
Key Features: Over 13,000 video frames with various abnormal events.
Application: Detection of loitering, sudden movement, and crowd congestion.

Dataset: UMN Unusual Crowd Activity
Description: Videos of crowds with both normal and panic-induced behaviors.
Key Features: Clearly labeled normal and abnormal crowd behaviors.
Application: Used for detecting panic and unusual crowd movements.

Dataset: Street Scene Dataset
Description: Outdoor street scene videos with pedestrians and vehicles, showing anomalies.
Key Features: Multi-class anomalies like jaywalking, sudden changes in motion.
Application: Detection of abnormal pedestrian and vehicle behavior in streets.

Dataset: UCF-Crime Dataset
Description: Surveillance footage showing criminal and abnormal activities in public spaces.
Key Features: Includes various criminal events like assaults, robbery, and vandalism.
Application: Anomaly detection for crime prevention in public pedestrian areas.

Dataset: CUHK Avenue Dataset
Description: University avenue footage focusing on unusual events like running or loitering.
Key Features: Outdoor scene with labeled normal/abnormal behaviors.
Application: Behavioral anomaly detection, sudden movements.

Dataset: MIT Traffic Dataset
Description: Pedestrian and traffic anomalies in a busy intersection captured via camera.
Key Features: Mixture of pedestrian and vehicle activity.
Application: Detect abnormal pedestrian or vehicle events in urban intersections.

4.2 CNN Algorithm


The Convolutional Neural Network (CNN) is a deep learning algorithm widely used
for analyzing visual data such as images and videos. CNNs are particularly effective in
tasks like image classification, object detection, and anomaly detection due to their
ability to automatically learn spatial hierarchies of features from input images. Below
is a step-by-step explanation of how a CNN works:

Step 1. Input Layer

Process: The input to a CNN is typically an image represented as a matrix of pixel values. For a color image, the input will have three channels (RGB), so the input matrix will have three dimensions: width, height, and depth (number of channels).

Objective: Provide the raw pixel values of the image to the CNN for processing.

Step 2. Convolutional Layer

Process: In the convolutional layer, a filter (also called a kernel) is applied to the input
image. The filter is a small matrix (e.g., 3x3 or 5x5) that "slides" (or convolves) across
the entire input image to extract features. Each filter detects specific patterns such as
edges, textures, or colors.

Steps:

1. The filter slides over the input image, performing element-wise multiplication
between the filter and the corresponding portion of the input image.
2. The results are summed to produce a single value, which is then stored in the
output matrix (called the feature map).

Example: A 3x3 filter sliding over a 5x5 image will produce a 3x3 feature map.

Objective: Extract important local features from the image such as edges, corners, or
textures.

Step 3. Activation Function (ReLU)

 Process: After convolution, an activation function is applied to introduce non-
linearity into the model. The most commonly used activation function in CNNs is
ReLU (Rectified Linear Unit).
o ReLU Formula: ReLU(x) = max(0, x)
o Steps:

 Replace all negative values in the feature map with zero.


 Retain all positive values as they are.
 Objective: Introduce non-linearity into the network, helping it learn complex
patterns and features in the data.

Step 4. Pooling Layer (Downsampling)

 Process: The pooling layer is used to downsample the feature maps, reducing
their spatial dimensions while retaining the most important information. The most
common pooling method is Max Pooling.
o Steps:

1. A window (e.g., 2x2 or 3x3) slides over the feature map.


2. For Max Pooling, the maximum value in each window is selected and
stored in the downsampled feature map.

o Example: A 2x2 Max Pooling applied to a 4x4 feature map will reduce it to a
2x2 feature map by taking the maximum value in each 2x2 block.
 Objective: Reduce the size of the feature maps, decreasing the computational
complexity and helping prevent overfitting. Pooling also makes the network more
robust to small spatial translations in the input.
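As a small numerical sketch of Steps 3 and 4 (an illustration with made-up values, not project code), ReLU and 2x2 max pooling can be demonstrated with NumPy:

import numpy as np

# A hypothetical 4x4 feature map produced by a convolution
fm = np.array([[ 1., -2.,  3.,  0.],
               [ 4.,  5., -6.,  7.],
               [-1.,  0.,  2.,  8.],
               [ 3., -4.,  1., -2.]])

relu = np.maximum(fm, 0)  # Step 3: negative values become zero

# Step 4: 2x2 max pooling, so each 2x2 block is replaced by its maximum value
pooled = relu.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[5. 7.]
               #  [3. 8.]]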

Step 5. Additional Convolutional and Pooling Layers (Deep Architecture)

 Process: In deep CNNs, multiple convolutional and pooling layers are stacked to
extract progressively more abstract and complex features from the image.
o Example: The first convolutional layer might detect edges, the second layer
might detect textures, and later layers might recognize more complex patterns like
shapes, objects, or even specific features of objects (e.g., eyes, wheels).

 Objective: Build a deep hierarchy of features, where each subsequent layer
learns increasingly complex representations of the input image.

Step 6. Flattening

 Process: After passing through the convolutional and pooling layers, the
resulting feature maps (which are still in the form of multi-dimensional arrays) are
flattened into a 1D vector.
o Steps:

 Take all the elements from the feature maps and arrange them into a
single linear vector. This vector will serve as the input for the fully connected layers.

 Objective: Prepare the data for the fully connected (dense) layers by converting
multi-dimensional features into a single feature vector.

Step 7. Fully Connected (Dense) Layer

 Process: The fully connected layer connects every neuron in one layer to every
neuron in the next layer. The flattened feature vector from the previous step is passed
into this layer.
o Steps:

1. The feature vector is multiplied by a weight matrix.


2. A bias is added.
3. An activation function (typically ReLU) is applied.

 Objective: Combine the features extracted by the convolutional layers to predict


the final output.

Step 8. Output Layer (Classification)

Process: The output layer is typically a fully connected layer with a number of
neurons equal to the number of classes in the classification task. The final output is
generated using an activation function like Softmax for multi-class classification or
Sigmoid for binary classification.

o Softmax Function: Softmax(x_i) = e^{x_i} / Σ_j e^{x_j}
o This ensures that the output values represent probabilities and sum to 1.
 Objective: Produce a final classification by assigning a probability to each class.
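A minimal numerical sketch of the softmax computation (illustrative only; the scores are made up):

import numpy as np

def softmax(z):
    # Subtracting the maximum improves numerical stability without
    # changing the result
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical output-layer scores z
print(softmax(scores))              # probabilities that sum to 1; class 0 is largest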

Step 9. Loss Function and Backpropagation

 Process: During training, the CNN uses a loss function to measure how far the
predicted output is from the true label. For classification tasks, categorical cross-
entropy is commonly used. The loss function computes the error, and the
backpropagation algorithm updates the network's weights to minimize this error.
o Steps:

1. The loss is computed based on the difference between predicted and actual
labels.
2. Gradients are calculated for each layer using backpropagation.
3. The weights are updated using an optimization algorithm like Stochastic
Gradient Descent (SGD) or Adam.

 Objective: Optimize the weights of the CNN to minimize the classification error
during training.

Step 10. Training and Model Evaluation

 Process: The model is trained using a training dataset, and its performance is
evaluated on a validation or test dataset.
 Steps:

1. The model iteratively updates weights by minimizing the loss function using
training data.
2. After training, the model is evaluated using unseen test data to measure its
performance.

 Objective: Ensure the model has learned to generalize well to new data and is
capable of making accurate predictions.
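Connecting Steps 9 and 10, the following is a hedged Keras-style sketch of training and evaluation; it assumes model is the compiled CNN from the earlier sketch and that x_train, y_train, x_test, y_test are preprocessed frame arrays and labels.

# Step 9: each batch computes the loss and updates weights via backpropagation
history = model.fit(x_train, y_train,
                    epochs=10, batch_size=32,
                    validation_split=0.2)

# Step 10: evaluate generalization on unseen test data
test_loss, test_acc = model.evaluate(x_test, y_test)
print("test accuracy:", test_acc)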

4.3 Design
4.3.1 System Architecture

The input data come in a variety of forms, including picture, audio, and video. Here, we used preprocessed pedestrian picture data to get rid of any duplicates. After that, the feature extraction approach is used: the necessary characteristics are retrieved from the picture at this step. Feature selection is then put into practice: using this method, we pick the most important characteristics from the features that were retrieved in the previous stage. The dataset may now be separated into a test set and a train set. The deep learning model is then fitted to forecast whether or not the input picture contains an anomalous item. [15][16]
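A compact, hedged sketch of this pipeline follows; the file names, the flattened-pixel features and the SelectKBest feature selection are illustrative assumptions, not the project's exact implementation.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

# Assumed inputs: deduplicated, preprocessed pedestrian images and labels
images = np.load("frames.npy")   # hypothetical (N, H, W, 3) array
labels = np.load("labels.npy")   # 1 = anomalous item present, 0 = normal

# Feature extraction: here simply flattened pixels; CNN feature maps
# could be used instead
features = images.reshape(len(images), -1)

# Feature selection: keep the most informative characteristics
features = SelectKBest(f_classif, k=500).fit_transform(features, labels)

# Split into train and test sets; a deep learning model is then fitted on
# the training portion to forecast whether a picture contains an anomaly
x_train, x_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=1)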

4.3.2 Data flow diagram

A Data Flow Diagram (DFD) is a visual representation of the flow of data within a
system or process. It is a structured technique that focuses on how data moves through
different processes and data stores within an organization or a system. DFDs are
commonly used in system analysis and design to understand, document, and
communicate data flow and processing.

4.3.3 UML Diagram

UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created by, the Object Management Group. The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form UML comprises two major components: a Meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.

The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of software systems, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.

GOALS: The Primary goals in the design of the UML are as follows:

 Provide users with a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
 Provide extendibility and specialization mechanisms to extend the core
concepts.
 Be independent of particular programming languages and development processes.
 Provide a formal basis for understanding the modeling language.

 Encourage the growth of OO tools market.

 Support higher level development concepts such as collaborations,


frameworks, patterns and components.
 Integrate best practices.

4.3.4 Use Case Diagram

A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a
graphical overview of the functionality provided by a system in terms of actors, their
goals (represented as use cases), and any dependencies between those use cases. The
main purpose of a use case diagram is to show what system functions are performed
for which actor. Roles of the actors in the system can be depicted.

4.3.5 Class Diagram

The class diagram is used to refine the use case diagram and define a detailed design
of the system. The class diagram classifies the actors defined in the use case diagram
into a set of interrelated classes. The relationship or association between the classes
can be either an “is-a” or “has-a” relationship. Each class in the class diagram may be
capable of providing certain functionalities. These functionalities provided by the
class are termed “methods” of the class. Apart from this, each class may have certain
“attributes” that uniquely identify the class.

4.3.6 Sequence Diagram

A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. A sequence diagram shows, as parallel vertical lines (“lifelines”), different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them in the order in which they occur. This allows the specification of simple runtime scenarios in a graphical manner.

4.3.7 Activity diagram

Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.

4.4 Modules

1. User
2. Admin
3. Data Collection
4. CNN

4.4.1 Modules Description

User:
The user registers first. While registering, a valid email address and mobile number are required for further communications. Once the user registers, the admin can activate the account; only then can the user log into the system. The user can upload a dataset whose columns match the expected dataset columns. For algorithm execution the data must be in float format. Here we took a heavy-vehicle fuel consumption dataset. The user can also add new data to the existing dataset through our Django application. The user can click Training in the web page so that the model results (mean absolute error, mean squared error, R² error) are calculated for each algorithm. The user can display the prediction results and afterwards log out.

Admin:
The admin can log in with his login details and activate the registered users; only after activation can a user log into the system. The admin can view the overall data in the browser and can click Model Results in the web page to see the calculated metrics (mean absolute error, mean squared error, R² error) for each algorithm. After that the admin can log out.

Data collection:
The model is developed using duty cycles collected from a single truck, with an approximate mass of 8,700 kg, exposed to a variety of transients including both urban and highway traffic in the Indianapolis area. Data was collected using the SAE J1939 standard for serial control and communications in heavy duty vehicle networks [24]. Twelve drivers were asked to exhibit good or bad behavior over two different routes. Drivers exhibiting good behavior anticipated braking and allowed the vehicle to coast when possible. Some drivers participated more than others, and as a result the distribution of drivers and routes is not uniform across the data set. This field test generated 3,302,890 data points sampled at 50 Hz from the vehicle CAN bus and a total distance of 778.89 km over 56 trips with varying distances. Most of the trips covered a distance of 10 km to 15 km. In order to increase the number of data points, synthetic duty cycles over an extended distance were obtained by assembling segments from the field duty cycles selected at random. Moreover, one set of drivers is assigned to the training segments and a different set of drivers to the testing segments, thereby ensuring that the training (Ftr) and testing (Fts) data sets derived from the respective segments are completely separate.

CNN (Convolutional Neural Network):
The proposed model can easily be developed and deployed for each individual vehicle in a fleet in order to optimize fuel consumption over the entire fleet. The predictors of the model are aggregated over fixed window sizes of distance traveled. Different window sizes are evaluated, and the results show that a 1 km window is able to predict fuel consumption with a 0.91 coefficient of determination and a mean absolute peak-to-peak percent error of less than 4% for routes that include both city and highway duty cycle segments.
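As a hedged sketch of how such regression metrics could be computed with scikit-learn (y_test and y_pred are assumed arrays of measured and predicted fuel consumption):

from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Compare the model's predictions against the held-out measurements
mae = mean_absolute_error(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)  # the report cites ~0.91 for a 1 km window
print(mae, mse, r2)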

4.5 System Requirements
4.5.1 Hardware Requirements

Minimum hardware requirements are very dependent on the particular software being
developed by a given Enthought Python / Canopy / VS Code user. Applications that
need to store large arrays/objects in memory will require more RAM, whereas
applications that need to perform numerous calculations or tasks more quickly will
require a faster processor.

 System : Intel i3

 Hard Disk : 1 TB

 Monitor : 14" Colour Monitor

 Mouse : Optical Mouse

 Ram : 4GB

4.5.2 Software Requirements

The functional requirements or the overall description documents include the product
perspective and features, operating system and operating environment, graphics
requirements, design constraints and user documentation.

The appropriation of requirements and implementation constraints gives the general overview of the project in regard to what the areas of strength and deficit are and how to tackle them.

 Operating System : Windows 10

 Coding Language : Python

 Front-End : HTML, CSS

 Designing : HTML, CSS, JavaScript

 Data Base : SQLite

4.6 Testing
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.

4.6.1 Unit Testing

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application. It is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

4.6.2 Integration Testing

Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

4.6.3 Functional Testing

Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

4.6.4 System Testing

System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.

4.6.5 White Box Testing

White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

4.6.6 Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.

4.6.7 Unit Testing

Unit testing is usually conducted as part of a combined code and unit test phase of
the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional tests will be
written in detail.
Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.

Features to be tested
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.

4.6.8 Integration Testing

Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by
interface defects.
The task of the integration test is to check that components or software applications,
e.g. components in a software system or – one step up – software applications at the
company level – interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

4.6.9 Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

CHAPTER 5

SOURCE CODE

# Deep Learning Based Abnormal Event Detection in Pedestrian Pathway


Userside views:

from django.shortcuts import render, HttpResponse
from django.contrib import messages

from .forms import UserRegistrationForm
from .models import UserRegistrationModel
from users.utility.process_ml import Algorithms

algo = Algorithms()

# Create your views here.

def UserRegisterActions(request):
    # Handle the registration form: save valid submissions, re-render on errors
    if request.method == 'POST':
        form = UserRegistrationForm(request.POST)
        if form.is_valid():
            print('Data is Valid')
            form.save()
            messages.success(request, 'You have been successfully registered')
            form = UserRegistrationForm()
            return render(request, 'UserRegistrations.html', {'form': form})
        else:
            messages.success(request, 'Email or Mobile Already Existed')
            print("Invalid form")
    else:
        form = UserRegistrationForm()
    return render(request, 'UserRegistrations.html', {'form': form})

def UserLoginCheck(request):
    # Validate the login credentials and start a session for activated users.
    if request.method == "POST":
        loginid = request.POST.get('loginid')
        pswd = request.POST.get('pswd')
        print("Login ID = ", loginid, ' Password = ', pswd)
        try:
            check = UserRegistrationModel.objects.get(
                loginid=loginid, password=pswd)
            status = check.status
            print('Status is = ', status)
            if status == "activated":
                request.session['id'] = check.id
                request.session['loggeduser'] = check.name
                request.session['loginid'] = loginid
                request.session['email'] = check.email
                print("User id At", check.id, status)
                return render(request, 'users/UserHomePage.html', {})
            else:
                messages.error(request, 'Your account is not yet activated')
                return render(request, 'UserLogin.html')
        except Exception as e:
            print('Exception is ', str(e))
        messages.error(request, 'Invalid login id or password')
    return render(request, 'UserLogin.html', {})


def UserHome(request):
    # Render the landing page for a logged-in user.
    return render(request, 'users/UserHomePage.html', {})

def prediction(request):
    # Train a decision-tree regressor on the stored CSV and predict the
    # target value for the feature values submitted by the user.
    if request.method == "POST":
        import os
        import pandas as pd
        from django.conf import settings
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeRegressor

        distance = request.POST.get("distance")
        speed = request.POST.get("speed")
        change_in_kineticenergy = request.POST.get("change_in_kineticenergy")
        change_in_potentialenergy = request.POST.get(
            "change_in_potentialenergy")
        weight = request.POST.get("weight")

        path = os.path.join(settings.MEDIA_ROOT, "fuel_supervised_csv.csv")
        data = pd.read_csv(path)

        # First column is the target; the remaining columns are features.
        x = data.iloc[:, 1:]
        y = data.iloc[:, 0]
        x = pd.get_dummies(x)
        x = x.fillna(x.mean())

        x_train, x_test, y_train, y_test = train_test_split(
            x, y, test_size=0.3, random_state=1)
        x_train = pd.DataFrame(x_train)

        dt = DecisionTreeRegressor()
        # Form values arrive as strings; cast them to floats before predicting.
        test_set = [float(v) for v in (distance, speed, change_in_kineticenergy,
                                       change_in_potentialenergy, weight)]
        print(test_set)
        dt.fit(x_train, y_train)
        print('x train:', x_train)
        print('y train:', y_train)
        y_pred = dt.predict([test_set])
        print('y pred:', y_pred)
        return render(request, 'users/prediction.html', {'y_pred': y_pred})

    return render(request, 'users/prediction.html')

def random_forest(request):
    # Train/evaluate the random forest model and report its error metrics.
    mae, mse, r2 = algo.random_forest()
    print("mae:", mae)
    print("mse:", mse)
    print("r2:", r2)
    return render(request, "users/RF.html", {'mae': mae, 'mse': mse, 'r2': r2})


def Gradiant_Boosting(request):
    # Train/evaluate the gradient boosting model and report its error metrics.
    mae, mse, r2 = algo.Gradiant_Boosting()
    print("mae:", mae)
    print("mse:", mse)
    print("r2:", r2)
    return render(request, "users/GF.html", {'mae': mae, 'mse': mse, 'r2': r2})


def svm(request):
    # Train/evaluate the SVM model and report its error metrics.
    mae, mse, r2 = algo.svm()
    return render(request, "users/svm.html", {'mae': mae, 'mse': mse, 'r2': r2})


def cnn(request):
    # Train the deep learning model and report the final-epoch metrics.
    history = algo.DeepLearning()
    print('-' * 100)
    mae = history.history['mae']
    mse = history.history['mse']
    print(history.history['mae'])
    return render(request, "users/cnn.html", {'mae': mae[-1], 'mse': mse[-1]})

Base.html:
{% load static %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>A Machine Learning Model for Average Fuel Consumption in Heavy
Vehicles</title>
<meta content="width=device-width, initial-scale=1.0" name="viewport">
<meta content="Free Website Template" name="keywords">
<meta content="Free Website Template" name="description">
<link href="{% static 'img/favicon.ico' %}" rel="icon">
<link href="https://fonts.googleapis.com/css2?family=Open+Sans:wght@300;400;600;700;800&display=swap" rel="stylesheet">
<link href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" rel="stylesheet">
<link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.10.0/css/all.min.css" rel="stylesheet">
<link href="{% static 'lib/animate/animate.min.css' %}" rel="stylesheet">
<link href="{% static 'lib/owlcarousel/owl.carousel.min.css' %}" rel="stylesheet">
<link href="{% static 'lib/lightbox/css/lightbox.min.css' %}" rel="stylesheet">
<link href="{% static 'css/style.css' %}" rel="stylesheet">
</head>
<body>
<div class="top-bar d-none d-md-block">
<div class="container-fluid">
<div class="row">
<div class="col-md-6">
<div class="top-bar-left">
</div>
</div>
<div class="col-md-6">
<div class="top-bar-right">
</div>
</div>
</div>
</div>
</div>
<div class="navbar navbar-expand-lg bg-dark navbar-dark">
<div class="container-fluid">
<a href="index.html" class="navbar-brand"> <span
style="color:Black;">Fuel Consumption </span></a>

<button type="button" class="navbar-toggler" data-toggle="collapse" data-target="#navbarCollapse">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse justify-content-between"
id="navbarCollapse">
<div class="navbar-nav ml-auto">
<a href="{% url 'index' %}" class="nav-item nav-link"
style="color:Black;">Home</a>
<a href="{% url 'UserLogin' %}" class="nav-item nav-link"
style="color:Black;">User</a>
<a href="{% url 'AdminLogin' %}" class="nav-item nav-link"
style="color:Black;">Admin</a>
<a href="{% url 'UserRegister' %}" class="nav-item nav-link"
style="color:Black;">Registration</a>
</div>
</div>
</div>
</div>

{% block contents %}

{% endblock %}
<div class="container copyright">
<div class="row">
<div class="col-md-6">
<p></p>
</div>
<div class="col-md-6">

</div>
<div class="col-xl-7 col-lg-7 col-md-7 col-sm-12">

</div>
</div>
<!-- Footer End -->
<a href="#" class="back-to-top"><i class="fa fa-chevron-up"></i></a>

<!-- JavaScript Libraries -->


<script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.bundle.min.js"></script>
<script src="{% static 'lib/easing/easing.min.js' %}"></script>
<script src="{% static 'lib/owlcarousel/owl.carousel.min.js' %}"></script>
<script src="{% static 'lib/isotope/isotope.pkgd.min.js' %}"></script>
<script src="{% static 'lib/lightbox/js/lightbox.min.js' %}"></script>

<!-- Contact Javascript File -->


<script src="{% static 'mail/jqBootstrapValidation.min.js' %}"></script>
<script src="{% static 'mail/contact.js' %}"></script>

<!-- Template Javascript -->
<script src="{% static 'js/main.js' %}"></script>
</body>
</html>

Urls.py:
"""project8 URL Configuration

The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/2.2/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path
from fuelconsumption import views as mainView
from admins import views as admins
from users import views as usr

urlpatterns = [
    path('admin/', admin.site.urls),
    path("", mainView.index, name="index"),
    path("index/", mainView.index, name="index"),
    path("AdminLogin/", mainView.AdminLogin, name="AdminLogin"),
    path("UserLogin/", mainView.UserLogin, name="UserLogin"),
    path("UserRegister/", mainView.UserRegister, name="UserRegister"),

    # Admin views
    path("AdminHome/", admins.AdminHome, name="AdminHome"),
    path("AdminLoginCheck/", admins.AdminLoginCheck, name="AdminLoginCheck"),
    path('RegisterUsersView/', admins.RegisterUsersView, name='RegisterUsersView'),
    path('ActivaUsers/', admins.ActivaUsers, name='ActivaUsers'),

    # User views
    path("UserRegisterActions/", usr.UserRegisterActions, name="UserRegisterActions"),
    path("UserLoginCheck/", usr.UserLoginCheck, name="UserLoginCheck"),
    path("UserHome/", usr.UserHome, name="UserHome"),
    path("prediction/", usr.prediction, name="prediction"),
    path("random_forest/", usr.random_forest, name="random_forest"),
    path("Gradiant_Boosting/", usr.Gradiant_Boosting, name="Gradiant_Boosting"),
    path("svm/", usr.svm, name="svm"),
    path("cnn/", usr.cnn, name="cnn"),
]

CHAPTER 6
EXPERIMENTAL RESULTS

Fig 6.1: Home Page

Fig 6.2: Model Training Result

Fig 6.3: Prediction Page

Fig 6.4: Prediction Result

CHAPTER 7

CONCLUSION AND FUTURE SCOPE

7.1 Conclusion

The safety of the public is of primary importance, so anything that could endanger
or cause discomfort to people on a pedestrian pathway needs to be checked; this is
the purpose of anomaly detection. These anomalies can be anything abnormal on the
pathway, typically bikes, cycles, trucks, or skaters, all of which can harm
pedestrians. This project therefore detects such anomalies using deep learning and
image processing techniques: the input video is converted into frames, the frames
are preprocessed to remove noise and extract useful features, and motion detection
and object detection are then used to classify objects other than pedestrians based
on their motion and size.
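
A minimal sketch of this pipeline is given below. It is illustrative only, assuming
OpenCV for frame extraction, preprocessing, and background-subtraction-based motion
detection; the video file name and the size threshold are assumptions, and the
actual system classifies objects with a trained deep learning model rather than a
bare area test.

# Minimal sketch of the frame-extraction and motion-detection steps
# described above, using OpenCV background subtraction. The video file
# name and the contour-area threshold are illustrative assumptions.
import cv2

cap = cv2.VideoCapture("pedestrian_walkway.avi")  # assumed input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess each frame: grayscale plus blur to suppress noise.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    # The foreground mask highlights objects moving against the background.
    mask = subtractor.apply(gray)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        # Blobs much larger than a typical pedestrian (bikes, trucks) are
        # flagged by size; a trained classifier would refine this decision.
        if cv2.contourArea(contour) > 5000:  # assumed size threshold
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

cap.release()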

7.2 Future Enhancement

The base paper "Deep Learning based Abnormal Event Detection in Pedestrian
Pathways" outlines several potential future enhancements to improve the system
further:

Integration of Additional Sensors: Incorporating other types of sensors, such as
infrared cameras, depth sensors, or thermal cameras, could enhance detection
accuracy, especially in low-light or adverse weather conditions.

Real-Time Processing: Improving the system to handle real-time video feeds more
efficiently, ensuring faster detection of and response to anomalies.

Enhanced Feature Extraction: Utilizing more advanced feature extraction
techniques, or combining multiple feature extraction methods, to improve the
robustness of anomaly detection.

Incremental Learning: Implementing incremental learning to continuously update the
model with new data, allowing the system to adapt to new types of anomalies without
requiring complete retraining (see the sketch after this list).

Hybrid Models: Combining deep learning with other machine learning techniques,
such as reinforcement learning, to improve the detection performance and
adaptability of the system.

Scalability: Enhancing the system's scalability to handle larger datasets and more
extensive networks of cameras, ensuring consistent performance across different
environments.

User Interface Improvements: Developing a more intuitive and interactive user
interface for easier monitoring and management of the surveillance system.

Robustness to Environmental Changes: Improving the system's robustness to
environmental changes, such as varying lighting conditions, weather, or camera
angles, to maintain high detection accuracy.

Privacy Preservation: Incorporating privacy-preserving techniques to ensure that
the surveillance system adheres to privacy regulations and protects individuals'
identities.

Collaborative Surveillance: Enabling collaborative surveillance, where multiple
systems share information and work together to detect anomalies more effectively.

These enhancements aim to make the anomaly detection system more accurate,
efficient, and adaptable to various real-world conditions and requirements.
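
As a hedged illustration of the incremental learning idea above (not part of the
base paper or the current implementation), scikit-learn's partial_fit interface
lets a detector be updated batch by batch as new labelled data arrives:

# Sketch of incremental (online) learning with scikit-learn's partial_fit
# API. The feature dimensionality, labels, and simulated batches are
# illustrative assumptions, not the project's actual pipeline.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1])  # assumed labels: 0 = normal, 1 = anomalous
model = SGDClassifier()

def update_model(feature_batch, label_batch):
    # Update the detector with one new batch of labelled frame features,
    # without retraining on all previously seen data.
    model.partial_fit(feature_batch, label_batch, classes=classes)

# Example: three simulated batches of 64 frames with 128-d feature vectors.
rng = np.random.default_rng(0)
for _ in range(3):
    features = rng.normal(size=(64, 128))
    labels = rng.integers(0, 2, size=64)
    update_model(features, labels)

print(model.predict(rng.normal(size=(1, 128))))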

REFERENCES

[1]. M.Q. Gandapur, "E2E-VSDL: end-to-end video surveillance-based deep learning
model to detect and prevent criminal activities," Image Vis. Comput. (2022).

[2]. O. Elharrouss, N. Almaadeed, S. Al-Maadeed, "A review of video surveillance
systems," J. Vis. Commun. Image Represent. (2021).

[3]. W. Kristjanpoller, K. Michell, M.C. Minutolo, "A causal framework to determine
the effectiveness of dynamic quarantine policy to mitigate COVID-19," Appl. Soft
Comput. (2021).

[4]. Z. Ghomi, R. Mirshahi, A.K. Bagheri, A. Fattahpour, S. Mohammadiun, A.A.
Gharahbagh, A. Djavadifar, H. Arabalibeik, R. Sadiq, K. Hewage, "Segmentation of
COVID-19 pneumonia lesions: a deep learning approach," Med. J. Islam. Repub. Iran
(2020).

[5]. L. Lin, N. Purnell, "A world with a billion cameras watching you is just around
the corner," Wall Str. J. (2019).

[6]. A. Pasini, E. Baralis, "Detecting anomalies in image classification by means of
semantic relationships," IEEE Second International Conference on Artificial
Intelligence and Knowledge Engineering (AIKE), Sardinia, Italy (2019).

[7]. S. Lee, H.G. Kim, Y.M. Ro, "STAN: spatio-temporal adversarial networks for
abnormal event detection," IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP) (2018).

[8]. W. Sultani, C. Chen, M. Shah, "Real-world anomaly detection in surveillance
videos," Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition (2018).

[9]. S. Kim, P. Joshi, P.S. Kalsi, P. Taheri, "Crime analysis through machine
learning," IEEE 9th Annual Information Technology, Electronics and Mobile
Communication Conference (IEMCON), IEEE (2018).

[10]. S.M. Lundberg, S. Lee, "A unified approach to interpreting model predictions,"
Proceedings of the 31st International Conference on Neural Information Processing
Systems (2017).

[11]. R. Leyva, V. Sanchez, C.-T. Li, "Video anomaly detection with compact feature
sets for online performance," IEEE Trans. Image Process. (2017).

[12]. M. Ravanbakhsh, M. Nabi, E. Sangineto, L. Marcenaro, C. Regazzoni, N. Sebe,
"Abnormal event detection in videos using generative adversarial nets," 2017 IEEE
International Conference on Image Processing (ICIP), pp. 1577-1581 (2017).

[13]. X. Wang, Q. Ji, "Hierarchical context modeling for video event recognition,"
IEEE Trans. Pattern Anal. Mach. Intell. (2017).

[14]. A. Rummens, W. Hardyns, L. Pauwels, "The use of predictive analysis in
spatiotemporal crime forecasting: building and testing a model in an urban context,"
Appl. Geogr. (2017).

[15]. C. Lu, J. Shi, J. Jia, "Abnormal event detection at 150 FPS in MATLAB," IEEE
International Conference on Computer Vision (2013).

[16]. N.G.L. Vigne, S.S. Lowry, J.A. Markman, A.M. Dwyer, "Evaluating the use of
public surveillance cameras for crime control and prevention," US Department of
Justice, Office of Community Oriented Policing Services, Urban Institute, Justice
Policy Center, Washington, DC (2011).

[17]. T.D. Raty, "Survey on contemporary remote surveillance systems for public
safety," IEEE Transactions on Systems, Man and Cybernetics (2010).

[18]. J. Kim, K. Grauman, "Observe locally, infer globally: a space-time MRF for
detecting abnormal activities with incremental updates," IEEE Conference on
Computer Vision and Pattern Recognition (2009).

[19]. D. Gao, N. Vasconcelos, "Decision-theoretic saliency: computational
principles, biological plausibility, and implications for neurophysiology and
psychophysics," Neural Comput. (2009).

[20]. A. Adam, E. Rivlin, I. Shimshoni, D. Reinitz, "Robust real-time unusual event
detection using multiple fixed-location monitors," IEEE Trans. Pattern Anal. Mach.
Intell. (2008).

