


SACS MAVMM ENGINEERING COLLEGE

Alagar koil, Madurai – 625 307

PROJECT REPORT

SUBMITTED BY

Team ID - NM2023TMID12150

UNDER THE GUIDANCE OF

Industry Mentors : Baradwaj, Dinesh, Hemnath

Faculty Mentor : S. Terasa

INFO
Technology:
Internet of Things (IoT)

Project Title:
INTELLIGENT PEOPLE AND VEHICLE
COUNTING SYSTEM FOR SECRETARIAT

Team ID:
NM2023TMID12150

Team Members:
M MOHAMMED ABUBACKER
mohammedabubacker4@gmail.com
M MUTHU KUMAR
Muthukumar2004@gmail.com
B NITHISH KUMAR
Nithishkumar.2020.nk07@gmail.com
M AJAY KUMAR
ajaykumar@gmail.com
M SANTHOSH KUMAR
Santhoshkumarmarimuthu08@gmail.com

Date of Submission: 20.05.2023


CONTENT

1. INTRODUCTION
1.1 Project Overview
1.2 Purpose
2. IDEATION & PROPOSED SOLUTION
2.1 Problem Statement Definition
2.2 Empathy Map Canvas
2.3 Ideation & Brainstorming
2.4 Proposed Solution
3. REQUIREMENT ANALYSIS
3.1 Functional requirement
3.2 Non-Functional requirements
4. PROJECT DESIGN
4.1 Data Flow Diagrams
4.2 Solution & Technical Architecture
4.3 User Stories
5. CODING & SOLUTIONING (Explain the features added in the
project along with code)
5.1 Feature 1
5.2 Feature 2
5.3 Database Schema (if Applicable)
6. RESULTS
6.1 Performance Metrics
7. ADVANTAGES & DISADVANTAGES
8. CONCLUSION
9. FUTURE SCOPE
10. APPENDIX
Source Code
GitHub & Project Video Demo Link

1 INTRODUCTION

The intelligent people and vehicle counting system is an innovative solution
designed to tackle the challenges of maintaining security in various establishments,
such as government offices, large private offices, and IT companies.

1.1 Project Overview:


The project involves the implementation of IoT devices, such as
cameras or sensors, strategically positioned at entry and exit points of
establishments. These devices capture real-time data on the number of
people entering and exiting the premises. Advanced algorithms and data
analytics techniques are then utilized to process and analyze this data,
allowing for accurate estimation of the current crowd level.
Businesses can access the crowd data and insights through a
user-friendly mobile application or web-based dashboard. This empowers
them to monitor crowd patterns, identify peak hours, and make data-driven
decisions to optimize their operations. By efficiently allocating
resources, such as staffing and inventory, based on real-time crowd data,
businesses can enhance operational efficiency and customer satisfaction.
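As a rough illustration of the estimation described above, the current crowd level can be derived from cumulative entry and exit counts. This is a minimal sketch; the helper names and the level thresholds are hypothetical, not taken from the project code.

def estimate_occupancy(entries, exits):
    """Current crowd level from cumulative counts; clamp miscounts to 0."""
    return max(entries - exits, 0)

def crowd_level(occupancy, capacity):
    """Classify occupancy as a fraction of capacity (thresholds illustrative)."""
    ratio = occupancy / capacity
    if ratio < 0.5:
        return "low"
    if ratio < 0.8:
        return "moderate"
    return "high"

print(crowd_level(estimate_occupancy(120, 40), capacity=100))  # -> high

A dashboard could poll these two counters and show the derived level rather than raw counts.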

1.2 Purpose:
The purpose of the people counting system is to provide an accurate
estimate of visitors in various establishments, such as government
offices, large private offices, and IT companies. It addresses the
challenges of security and queues by leveraging IoT technology and
real-time data analysis.
Visitors benefit from shorter waiting times, a more pleasant and
efficient experience, and an overall improved service. They no longer
have to endure long queues and can enjoy a hassle-free visit to the
establishment. The project aims to create a healthy, protective
environment.
2 IDEATION AND PROPOSED SOLUTION

2.1 Problem Statement Definition:


Developing a system that constantly monitors the people and vehicles
entering and leaving a particular location can be a challenging task, but
it is certainly possible with the help of advanced technologies. The
system can be designed using a combination of hardware and software
components.

The hardware components of the system can include high-resolution


cameras and sensors that are strategically placed at the entry and exit
points of the location. These cameras can be connected to a central
control room, where the footage can be monitored by security
personnel in real-time. Additionally, the cameras can be equipped with
facial recognition technology to identify individuals entering and
leaving the location.

The software components of the system can include algorithms that


analyze the footage captured by the cameras and sensors. These
algorithms can be designed to count the number of people and cars
entering and leaving the location, track their movements within the
location, and flag any suspicious activities.

To ensure the security and privacy of the data collected by the system,
appropriate security measures such as encryption and access control
can be implemented. The data collected can be stored in a secure
location and can be accessed only by authorized personnel.

In the case of emergencies, the data collected by the system can be


extremely useful in identifying potential threats and taking appropriate
actions. For example, if an unauthorized person tries to enter the
location, the system can immediately alert the security personnel and
take necessary actions to prevent the person from entering.

Overall, developing a system that constantly monitors people and
vehicles entering and leaving a particular location can greatly enhance
the security of the location and provide valuable insights in case of
emergencies.

Problem Statement (PS) | I am (Customer) | I'm trying to | But | Because | Which makes me feel

PS-1 | Secretariat security system | improve the security | it is unclear how the system detects entries | of how it works in the secretariat office | enhanced fearlessness

PS-2 | Gov employee | estimate the crowd | it is unclear how it counts the crowd | we need to avoid rush | accuracy, faithfulness
2.2 Empathy Map Canvas:


This empathy map canvas captures the perspective of busy office
visitors who are frustrated by long wait times and unpredictable queues
during peak hours.
The "Says" quadrant includes quotes and statements that capture
the customer's frustrations and desires, such as "The queue is too long"
and "I wish I could skip the line".
The "Thinks" quadrant includes the customer's thoughts and
beliefs, such as "Maybe I should go somewhere else".
The "Does" quadrant includes the customer's actions and
behaviours, such as checking the time and scanning the menu.
Finally, the "Feels" quadrant includes the customer's emotional
responses, such as feeling anxious, irritated, and rushed.
By using this empathy map canvas, people and vehicle counting
system can gain a deeper understanding of their customers' needs and
pain points, and develop a more effective solution that addresses these
concerns.

2.3 Ideation & Brainstorming:


Office visitors often experience long wait times and unpredictable
queues during peak hours, leading to wasted time and increased stress.
There is a need for a more efficient and streamlined experience that
keeps security readily available in the office.
2.4 Proposed Solution:

S. No. | Parameter | Description

1. | Problem Statement (Problem to be solved) | In many states, many people visit the secretariat every day. Sometimes the rush in the secretariat office is high, and there may be chances of an unhygienic environment and reduced security.

2. | Idea / Solution description | The solution involves using computer vision techniques to detect people entering and exiting the secretariat office, and using this data to estimate the number of people inside the office at any given time. The data is stored in the cloud, and the office authorities can access it through a mobile application, which allows them to monitor the crowd estimate and adjust meetings accordingly.

3. | Novelty / Uniqueness | The novelty of this solution lies in the use of computer vision techniques to estimate the crowd in real time. This approach is unique because it provides a non-intrusive and accurate way of estimating the crowd without the need for manual counting or physical barriers.

4. | Social Impact / Customer Satisfaction | By accurately estimating the crowd and adjusting meetings accordingly, the secretariat security authorities can ensure that there is enough time to attend to every visitor. This can result in shorter waiting times and improved visitor satisfaction. The solution can also have a positive social impact on the secretariat security officers by reducing their workload and stress levels.

5. | Business Model (Revenue Model) | The business model could be subscription-based: the secretariat pays a monthly or yearly fee to use the people and vehicle counting service, which provides access to the crowd estimation data through the mobile application.

6. | Scalability of the Solution | The people and vehicle counting service has the potential to be highly scalable due to its use of easily replicable devices and techniques, cloud-based data storage, and mobile application deployment.

3 REQUIREMENT ANALYSIS

3.1 Functional Requirements:

FR No. | Functional Requirement (Epic) | Sub-Requirement (Task/Story)

FR1 | Entry and exit tracking | The system should be able to track the number of people entering and exiting the secretariat office.

FR2 | Crowd estimation | Based on the number of entries and exits, the system should update the number of people inside the secretariat office.

FR3 | Cloud storage | All data related to entry and exit tracking should be stored in the IBM IoT cloud.

FR4 | User-friendly interface | The Node-RED UI used by security officers should have a simple interface, allowing them to quickly and easily access the information they need.

3.2 Non-Functional Requirements:

NFR No. | Non-Functional Requirement | Description

NFR1 | Usability | The system must be user-friendly, with a simple and attractive interface that makes it easy for users to access and interpret data.

NFR2 | Security | The system must ensure the confidentiality and integrity of the data collected, processed, and stored, and must prevent unauthorized access or data breaches.

NFR3 | Reliability | The system should be able to handle and recover from errors and failures in a timely manner to minimize disruption to secretariat office operations.

NFR4 | Performance | The system must be able to process data in real time and provide accurate crowd estimates, including who has entered and who has exited the secretariat office.

NFR5 | Availability | The system should have redundancy mechanisms in place to minimize downtime and ensure high availability.

NFR6 | Scalability | The system should be able to extend its security coverage to every important person, so that anyone who needs this level of security can adopt the system.

4 PROJECT DESIGN

4.1 Data Flow Diagram:

1. The IoT camera captures images of people entering and exiting the secretariat office.

2. The images are sent to the IoT gateway device, which processes them using computer vision techniques to count the number of people entering and exiting the secretariat office.

3. The IoT gateway device sends this data to the IBM Cloud platform for storage and processing.

4. The data is processed and analyzed using Node-RED on the IBM Cloud platform.

5. The processed data is made available to the mobile app and web UI for the secretariat office security authorities to view, so they can estimate the crowd and enhance security.

6. The security authorities can use this data to avoid difficult situations and enhance the security of the secretariat.

7. The data is continuously updated and stored in the cloud for future reference and analysis.
Canva Link For DFD:
https://www.canva.com/design/DAFiurLyLGw/9J8W0AckR4gtYa_0EWA9vQ/edit
?utm_content=DAFiurLyLGw&utm_campaign=designshare&utm_medium=link2
&utm_source=sharebutton
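Step 3 of the data flow above amounts to the gateway assembling a small JSON event for the cloud. The sketch below is illustrative: the 'UP' and 'down' field names match the payload the project's Python code publishes, while the 'ts' and 'inside' fields are hypothetical additions (and which direction counts as an entry depends on camera placement).

import json
import time

def build_event(cnt_up, cnt_down):
    """Assemble a JSON event with the cumulative counts; here 'UP' is
    treated as entries and 'down' as exits."""
    return json.dumps({
        "ts": int(time.time()),          # illustrative timestamp field
        "UP": cnt_up,
        "down": cnt_down,
        "inside": max(cnt_up - cnt_down, 0),  # derived occupancy, clamped
    })

print(build_event(12, 5))

The gateway would hand this string to the IoT client library for publishing each time the counts change.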

User Stories:

User Type | Functional Requirement (Epic) | User Story Number | User Story / Task | Acceptance Criteria | Priority | Team Member

Secretariat | Mobile Application | USN1 | As a secretariat, I want to be able to access the people and vehicle counting data from the mobile application, so that I can make informed decisions about security and reassign meetings. | The mobile application should display the number of people estimated to be inside the secretariat office. | High | Team Lead

Official gov officer or politician | User Experience | USN2 | As a visitor, I want to be able to check the estimated crowd and security in the secretariat office, so that I can decide whether to visit or not. | The entry and exit data should be displayed in a prominent location near the office entrance. | Medium | Team Member 1

Security police officer | Crowd Estimation and Security | USN3 | As a security police officer, I want to be able to access real-time entry and exit data through a mobile application, so that I can enhance security and prepare the emergency exit. | The mobile application should display real-time entry and exit data, including the number of people currently inside the secretariat office. | High | Team Member 2

Cabinet ministers | Contactless Service | USN4 | As a visitor, I want to ask the receptionist for the meeting schedule through a mobile application, so that I can decide whether this is the right time to visit or not. | The system should provide a mobile application that shows the secretariat meeting schedule to the gov officials. | High | Team Member 3

Higher gov officials | Analytics | USN5 | As a gov official, I want to be able to access historical visitor data and emergency-situation data, so that I can perform trend analysis and derive insights. | The entry and exit data for the secretariat office should be stored in a cloud database. | Medium | Team Member 4

Link for Solution Architecture:


https://lucid.app/lucidchart/a5b4b7fb-9670-4ba5-a198-777763de1514/edit?viewport_loc=388%2C-
174%2C2220%2C1114%2C0_0&invitationId=inv_0dadbec7-c22f-4904-ad3ae8cb02c25c84

4.2.2 Technical Architecture:

S. No. | Component | Description | Technology

1 | User Interface | Mobile app or a web-based dashboard | Node-RED

2 | Application Logic-1 | Collects the images and videos of people entering and leaving the office | IoT Camera

3 | Application Logic-2 | Processes the data collected from the IoT camera | IoT Device (Python code)

4 | Application Logic-3 | Stores the processed data in a cloud-based database | IBM Watson IoT Platform

5 | Cloud Database | Database services on cloud | IBM Cloud Services

6 | File Storage | File storage requirements | IBM Watson IoT Platform, Node-RED

5.1.1 Code: personCount.py:


## People Counter
import sys
import time

import numpy as np
import cv2
import pyttsx3
import ibmiotf.application
import ibmiotf.device

import Person

# IBM Watson IoT platform credentials
organization = "l8xr1u"
deviceType = "PeopleCounter"
deviceId = "1234"
authMethod = "token"
authToken = "12345678"

engine = pyttsx3.init()
engine.say('Hello')
engine.runAndWait()

cnt_up = 0
cnt_down = 0

# TAKING THE VIDEO INPUT
#cap = cv2.VideoCapture(0)
cap = cv2.VideoCapture('video.mp4')

# PRINT THE CAPTURE PROPERTIES TO CONSOLE
for i in range(19):
    print(i, cap.get(i))

w = cap.get(3)
h = cap.get(4)
frameArea = h * w
areaTH = frameArea / 250
print('Area Threshold', areaTH)

# LINE COORDINATES FOR COUNTING
line_up = int(2 * (h / 5))
line_down = int(3 * (h / 5))
up_limit = int(1 * (h / 5))
down_limit = int(4 * (h / 5))

print("Red line y:", str(line_down))
print("Blue line y:", str(line_up))
line_down_color = (255, 0, 0)
line_up_color = (0, 0, 255)

pt1 = [0, line_down]
pt2 = [w, line_down]
pts_L1 = np.array([pt1, pt2], np.int32).reshape((-1, 1, 2))
pt3 = [0, line_up]
pt4 = [w, line_up]
pts_L2 = np.array([pt3, pt4], np.int32).reshape((-1, 1, 2))
pt5 = [0, up_limit]
pt6 = [w, up_limit]
pts_L3 = np.array([pt5, pt6], np.int32).reshape((-1, 1, 2))
pt7 = [0, down_limit]
pt8 = [w, down_limit]
pts_L4 = np.array([pt7, pt8], np.int32).reshape((-1, 1, 2))

# BACKGROUND SUBTRACTOR
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

# STRUCTURING ELEMENTS FOR MORPHOLOGICAL FILTERS
kernelOp = np.ones((3, 3), np.uint8)
kernelOp2 = np.ones((5, 5), np.uint8)
kernelCl = np.ones((11, 11), np.uint8)

# Variables
font = cv2.FONT_HERSHEY_SIMPLEX
persons = []
max_p_age = 5
pid = 1


def ibmwork(cnt_up, cnt_down, deviceCli):
    data = {'UP': cnt_up, 'down': cnt_down}

    def myOnPublishCallback():
        print("Published Up People Count = %s" % str(cnt_up),
              "Down People Count = %s " % str(cnt_down), "to IBM Watson")

    success = deviceCli.publishEvent("PeopleCounter", "json", data, qos=0,
                                     on_publish=myOnPublishCallback)
    if not success:
        print("Not connected to IoTF")
    deviceCli.disconnect()


def ibmstart(cnt_up, cnt_down):
    try:
        deviceOptions = {"org": organization, "type": deviceType,
                         "id": deviceId, "auth-method": authMethod,
                         "auth-token": authToken}
        deviceCli = ibmiotf.device.Client(deviceOptions)
    except Exception as e:
        print("Caught exception connecting device: %s" % str(e))
        sys.exit()
    deviceCli.connect()
    ibmwork(cnt_up, cnt_down, deviceCli)


while cap.isOpened():
    ret, frame = cap.read()

    for i in persons:
        i.age_one()  # age every person one frame

    #########################
    #    PRE-PROCESSING     #
    #########################

    # APPLY BACKGROUND SUBTRACTION
    fgmask = fgbg.apply(frame)
    fgmask2 = fgbg.apply(frame)

    # BINARIZATION TO ELIMINATE SHADOWS
    try:
        ret, imBin = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)
        ret, imBin2 = cv2.threshold(fgmask2, 200, 255, cv2.THRESH_BINARY)
        # Opening (erode -> dilate) to remove noise
        mask = cv2.morphologyEx(imBin, cv2.MORPH_OPEN, kernelOp)
        mask2 = cv2.morphologyEx(imBin2, cv2.MORPH_OPEN, kernelOp)
        # Closing (dilate -> erode) to join white regions
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernelCl)
        mask2 = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, kernelCl)
    except:
        print('EOF')
        print('UP:', cnt_up)
        print('DOWN:', cnt_down)
        break

    #################
    #   CONTOURS    #
    #################

    # RETR_EXTERNAL returns only extreme outer contours; all child
    # contours are left behind.
    contours0, hierarchy = cv2.findContours(mask2, cv2.RETR_EXTERNAL,
                                            cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours0:
        area = cv2.contourArea(cnt)
        if area > areaTH:
            #################
            #   TRACKING    #
            #################
            # Missing conditions for multiple persons, occlusions and
            # screen entries
            M = cv2.moments(cnt)
            cx = int(M['m10'] / M['m00'])
            cy = int(M['m01'] / M['m00'])
            x, y, w, h = cv2.boundingRect(cnt)

            new = True
            if cy in range(up_limit, down_limit):
                for i in persons:
                    if abs(cx - i.getX()) <= w and abs(cy - i.getY()) <= h:
                        # the object is close to one already detected
                        new = False
                        i.updateCoords(cx, cy)  # update coordinates and reset age
                        if i.going_UP(line_down, line_up):
                            cnt_up += 1
                            print("ID:", i.getId(), 'crossed going up at',
                                  time.strftime("%c"))
                            engine.say('A Person is Going UP')
                            engine.runAndWait()
                        elif i.going_DOWN(line_down, line_up):
                            cnt_down += 1
                            print("ID:", i.getId(), 'crossed going down at',
                                  time.strftime("%c"))
                            engine.say('A Person is Going Down')
                            engine.runAndWait()
                        break
                    if i.getState() == '1':
                        if i.getDir() == 'down' and i.getY() > down_limit:
                            i.setDone()
                        elif i.getDir() == 'up' and i.getY() < up_limit:
                            i.setDone()
                    if i.timedOut():
                        # remove from the people list
                        index = persons.index(i)
                        persons.pop(index)
                        del i  # free the memory of i
                if new:
                    p = Person.MyPerson(pid, cx, cy, max_p_age)
                    persons.append(p)
                    pid += 1

            #################
            #   DRAWINGS    #
            #################
            cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)
            img = cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # END for cnt in contours0

    #########################
    # DRAWING TRAJECTORIES  #
    #########################
    for i in persons:
        cv2.putText(frame, str(i.getId()), (i.getX(), i.getY()), font, 0.3,
                    i.getRGB(), 1, cv2.LINE_AA)

    #######################
    #  DISPLAY ON FRAME   #
    #######################
    str_up = 'UP: ' + str(cnt_up)
    str_down = 'DOWN: ' + str(cnt_down)
    print('-----------------------------------------')
    print('UP:', cnt_up)
    print('DOWN:', cnt_down)

    frame = cv2.polylines(frame, [pts_L1], False, line_down_color, thickness=2)
    frame = cv2.polylines(frame, [pts_L2], False, line_up_color, thickness=2)
    frame = cv2.polylines(frame, [pts_L3], False, (255, 255, 255), thickness=1)
    frame = cv2.polylines(frame, [pts_L4], False, (255, 255, 255), thickness=1)
    cv2.putText(frame, str_up, (10, 40), font, 0.5, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.putText(frame, str_up, (10, 40), font, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
    cv2.putText(frame, str_down, (10, 90), font, 0.5, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.putText(frame, str_down, (10, 90), font, 0.5, (255, 0, 0), 1, cv2.LINE_AA)

    cv2.imshow('Frame', frame)

    # Publish the counts to the cloud, then disconnect the device
    ibmstart(cnt_up, cnt_down)

    k = cv2.waitKey(30) & 0xff
    if k == 27:  # ESC key exits
        break
# END while cap.isOpened()

#################
#   CLEANING    #
#################
cap.release()
cv2.destroyAllWindows()

5.1.2 Solution:

• Import the required libraries, including numpy, cv2


(OpenCV), Person (custom class for person tracking), time,
pyttsx3 (text-to-speech library), requests, sys, and ibmiotf
(IBM Watson IoT SDK).
• Set up the credentials and connection parameters for the IBM
Watson IoT platform, including organization, device type,
device ID, authentication method, and authentication token.
• Initialize the text-to-speech engine and play a notification
message to indicate that the Rush Estimator is running.
• Define variables for counting people entering and exiting
(cnt_up, cnt_down), as well as the video source (cap) and its
properties.
• Define the position and color of the lines used for counting
people entering and exiting.
• Create a background subtractor and define the morphological
kernel for image processing.
• Define variables for font style, person objects, and maximum
person age.
• Define a function (ibmwork) to publish the count data to the
IBM Watson IoT platform.
• Define a function (ibmstart) to connect to the IBM Watson
IoT platform and call the ibmwork function to publish the
data.
• Start a loop to process each frame of the video stream.
• Preprocess the frame by applying background subtraction and
morphological operations to remove noise.
• Find contours in the binary image and iterate over them.
• Perform tracking and counting operations on each detected
person.
• Draw the detected persons and their IDs on the frame.
• Draw the counting lines and display the count data on the
frame.
• Publish the count data to the IBM Watson IoT platform using
the ibmstart function.
• Display the frame with the overlaid information.
• Wait for the ESC key to be pressed to exit the loop.
• Clean up by releasing the video capture and closing all
windows.
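The counting rule summarized in the steps above reduces to a line-crossing test: in image coordinates y grows downward, and a person is counted when their tracked centroid moves from one side of a counting line to the other between consecutive positions. A hedged, standalone sketch of that test (function names are illustrative, mirroring the going_UP/going_DOWN checks in Person.py):

def crossed_up(prev_y, new_y, line_up):
    """True when a track moves from at/below the up-line to above it
    (smaller y is higher in the image)."""
    return prev_y >= line_up and new_y < line_up

def crossed_down(prev_y, new_y, line_down):
    """True when a track moves from at/above the down-line to below it."""
    return prev_y <= line_down and new_y > line_down

Each person is counted at most once per line because the tracker flips the person's state after the first crossing.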
5.2.1 Code: Person.py:
from random import randint
import time


class MyPerson:
    tracks = []

    def __init__(self, i, xi, yi, max_age):
        self.i = i
        self.x = xi
        self.y = yi
        self.tracks = []
        self.R = randint(0, 255)
        self.G = randint(0, 255)
        self.B = randint(0, 255)
        self.done = False
        self.state = '0'
        self.age = 0
        self.max_age = max_age
        self.dir = None

    def getRGB(self):
        return (self.R, self.G, self.B)

    def getTracks(self):
        return self.tracks

    def getId(self):
        return self.i

    def getState(self):
        return self.state

    def getDir(self):
        return self.dir

    def getX(self):
        return self.x

    def getY(self):
        return self.y

    def updateCoords(self, xn, yn):
        self.age = 0
        self.tracks.append([self.x, self.y])
        self.x = xn
        self.y = yn

    def setDone(self):
        self.done = True

    def timedOut(self):
        return self.done

    def going_UP(self, mid_start, mid_end):
        if len(self.tracks) >= 2:
            if self.state == '0':
                if self.tracks[-1][1] < mid_end and self.tracks[-2][1] >= mid_end:
                    # crossed the line
                    self.state = '1'
                    self.dir = 'up'
                    return True
        return False

    def going_DOWN(self, mid_start, mid_end):
        if len(self.tracks) >= 2:
            if self.state == '0':
                if self.tracks[-1][1] > mid_start and self.tracks[-2][1] <= mid_start:
                    # crossed the line
                    self.state = '1'
                    self.dir = 'down'
                    return True
        return False

    def age_one(self):
        self.age += 1
        if self.age > self.max_age:
            self.done = True
        return True


class MultiPerson:
    def __init__(self, persons, xi, yi):
        self.persons = persons
        self.x = xi
        self.y = yi
        self.tracks = []
        self.R = randint(0, 255)
        self.G = randint(0, 255)
        self.B = randint(0, 255)
        self.done = False
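The aging/timeout behaviour of MyPerson (age_one, timedOut) can be illustrated with a minimal self-contained sketch: every frame each track ages by one, and once a track outlives max_age without an update it is marked done and removed from the active list. The Track class below is a simplified stand-in, not project code.

class Track:
    def __init__(self, tid, max_age):
        self.tid = tid
        self.age = 0
        self.max_age = max_age
        self.done = False

    def age_one(self):
        """Age the track one frame; past max_age, mark it done."""
        self.age += 1
        if self.age > self.max_age:
            self.done = True

tracks = [Track(i, max_age=2) for i in range(3)]
for _ in range(3):              # three frames with no coordinate updates
    for t in tracks:
        t.age_one()
tracks = [t for t in tracks if not t.done]
print(len(tracks))              # 0: all tracks timed out

In the real tracker, updateCoords resets age to 0, so only people who leave the frame (and stop matching contours) time out.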

6 RESULTS
6.1 Performance Metrics:

Parameter | Values | Screenshot

Metrics | Python accuracy of prediction | Output screenshot

6.2 Output:

6.2.1 Python Code Output:

6.2.2 Given Video Output:

6.2.3 Cloud Data Output:

6.2.4 Node-RED Debug Messages Output:

6.2.5 Node-RED Dashboard Output:
7 ADVANTAGES & DISADVANTAGES

7.1 Advantages:
7.1.1 Security:
By providing estimated rush or waiting times, the system
allows users to plan their visits more safely.
7.1.2 Improved customer experience:
By providing real-time estimates, the system helps users
make informed decisions and manage their expectations. This
leads to a more positive customer experience, as they can plan
their visit accordingly and avoid unnecessary waiting.
7.1.3 Optimal resource allocation:
The system provides insights into rush patterns and
waiting times, helping the authorities allocate their resources
more effectively. They can plan their meeting schedule accordingly.
7.1.4 Data-driven decision making:
The system collects data on rush patterns, waiting times, and
user behavior. This data can be analyzed to gain valuable
insights into visitor preferences, peak hours, and other relevant
factors.
7.1.5 Enhanced safety and crowd management:
Knowing when a location or event is overcrowded can help with
crowd management and safety measures. By controlling the flow
of visitors and avoiding overcrowding, the system can contribute
to a safer and more organized environment.

7.2 Disadvantages:
7.2.1 Accuracy limitations:
Rush estimators rely on various data sources and algorithms
to provide estimates. However, the accuracy of these estimates can
be affected by factors such as unexpected events, sudden changes
in crowd behavior, or technical issues.
7.2.2 Dependency on data availability:
The accuracy and reliability of rush estimators heavily
depend on the availability of real-time data. If there are gaps or
delays in data collection, the estimates may not reflect the current
rush situation accurately. This can lead to users making decisions
based on outdated information.
7.2.3 User behavior impact:
Rush estimators can influence user behavior and potentially
create self-fulfilling prophecies. For example, if users consistently
avoid certain times or locations based on rush estimates, it can lead
to imbalanced crowd distribution and increased rush during other
periods. This can undermine the accuracy of the estimates and
potentially create new congestion patterns.
7.2.4 Psychological impact:
Relying heavily on rush estimates can create a sense of time
pressure and urgency, impacting the overall user experience.
7.2.5 Bias and inequality:
Rush estimators can inadvertently introduce biases or
inequalities. For example, if the system favors specific user groups,
it can lead to unequal access to accurate rush information. This can
further exacerbate existing disparities in terms of wait times and
access to services.

8 CONCLUSION

In conclusion, the IoT project has demonstrated the potential to address the
challenges of managing people and optimizing resource allocation in various
scenarios. The project leverages real-time data collection and analysis to provide
visitors with estimated wait times and crowd density information, enabling them to
make informed decisions and navigate crowded areas more efficiently.
Throughout the project, several key achievements have been realized. The
development of the Rush Estimator IoT device, incorporating sensors for data
collection and a robust data processing algorithm, has enabled accurate and timely
estimation of rush hour conditions. The integration of cloud computing and IoT
technologies has facilitated seamless data transmission, storage, and analysis,
ensuring the availability of real-time information for users.

9 APPENDIX

9.1 Source Code:


9.1.1 Drive link for people Code:
