Project Report
SUBMITTED BY
Team ID - NM2023TMID12150
INFO
Technology:
Internet of Things (IoT)
Project Title:
INTELLIGENT PEOPLE AND VEHICLE
COUNTING SYSTEM FOR SECRETARIAT
Team ID:
NM2023TMID12150
Team Members:
M MOHAMMED ABUBACKER
mohammedabubacker4@gmail.com
M MUTHU KUMAR
Muthukumar2004@gmail.com
B NITHISH KUMAR
Nithishkumar.2020.nk07@gmail.com
M AJAY KUMAR
ajaykumar@gmail.com
M SANTHOSH KUMAR
Santhoshkumarmarimuthu08@gmail.com
1. INTRODUCTION
1.1 Project Overview
1.2 Purpose
2. IDEATION & PROPOSED SOLUTION
2.1 Problem Statement Definition
2.2 Empathy Map Canvas
2.3 Ideation & Brainstorming
2.4 Proposed Solution
3. REQUIREMENT ANALYSIS
3.1 Functional requirement
3.2 Non-Functional requirements
4. PROJECT DESIGN
4.1 Data Flow Diagrams
4.2 Solution & Technical Architecture
4.3 User Stories
5. CODING & SOLUTIONING
5.1 Feature 1
5.2 Feature 2
5.3 Database Schema (if Applicable)
6. RESULTS
6.1 Performance Metrics
7. ADVANTAGES & DISADVANTAGES
8. CONCLUSION
9. FUTURE SCOPE
10. APPENDIX
Source Code
GitHub & Project Video Demo Link
1 INTRODUCTION
1.2 Purpose:
The purpose of the people counting system is to provide an accurate
estimate of the number of visitors in various establishments, such as
government offices, large private offices, and IT companies. It addresses
the challenges of security and queue management by leveraging IoT
technology and real-time data analysis.
Visitors benefit from shorter waiting times, a more pleasant and
efficient experience, and an overall improved service. They no longer
have to endure long queues and can enjoy a hassle-free visit to the
establishment. The project also aims to create a healthy, protected
environment.
2 IDEATION AND PROPOSED SOLUTION
To ensure the security and privacy of the data collected by the system,
appropriate security measures such as encryption and access control
can be implemented. The data collected can be stored in a secure
location and can be accessed only by authorized personnel.
S. No. 1 - Problem Statement (Problem to be solved):
In many states, large numbers of people visit the secretariat every day.
At times the rush in the secretariat office can be high, raising the
chances of an unhygienic environment and weakening security.
S. No. 2 - Idea / Solution description:
The solution uses computer vision techniques to detect people entering
and exiting the secretariat office, and uses this data to estimate the
number of people inside the office at any given time. The data is stored
in the cloud, and the office authorities can access it through a mobile
application, which allows them to monitor the crowd estimate and
adjust meetings accordingly.
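The core of the proposed solution, estimating occupancy from entry and exit events, can be sketched in a few lines of Python. This is a minimal illustration only; the class name and the "in"/"out" event format are assumptions for the sketch, not part of the actual system:

```python
class OccupancyEstimator:
    """Tracks how many people are inside from entry/exit events."""

    def __init__(self):
        self.entries = 0
        self.exits = 0

    def record(self, event):
        # event is "in" when a person crosses the entry line,
        # "out" when a person crosses the exit line
        if event == "in":
            self.entries += 1
        elif event == "out":
            self.exits += 1

    def inside(self):
        # Occupancy is clamped at zero even if an exit is detected
        # before the matching entry was ever seen
        return max(0, self.entries - self.exits)

est = OccupancyEstimator()
for e in ["in", "in", "in", "out"]:
    est.record(e)
print(est.inside())  # 2 people estimated inside
```

The clamp in `inside()` matters in practice because missed detections can otherwise drive the estimate negative.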
3 REQUIREMENT ANALYSIS
3.1 Functional Requirements:
FR1 - Entry and exit tracking: The system should be able to track the
number of people entering and exiting the secretariat office.
FR2 - Crowd estimation: Based on the number of entries and exits, the
system should update the count of people inside the secretariat office.
FR3 - Cloud storage: All data related to entry and exit tracking should
be stored in the IBM IoT cloud.
FR4 - User-friendly interface: The Node-RED UI used by the security
officer should have a simple interface, allowing them to quickly and
easily access the information they need.
3.2 Non-Functional Requirements:
NFR1 - Usability: The system must be user-friendly, with a simple and
attractive interface that makes it easy for users to access and interpret
data.
4 PROJECT DESIGN
1. The IoT camera captures images of people entering and exiting the secretariat office.
2. The images are sent to the IoT gateway device, which processes them using computer
vision techniques to count the number of people entering and exiting the secretariat
office.
3. The IoT gateway device sends this data to the IBM Cloud platform for storage and
processing.
4. The data is processed and analyzed using Node-RED on the IBM Cloud platform.
5. The processed data is made available to the mobile app and web UI, so the secretariat
office security authorities can view it, estimate the crowd, and enhance security.
6. The security authorities can use this data to avoid difficult situations and enhance the
security of the secretariat.
7. The data is continuously updated and stored in the cloud for future reference and
analysis.
Canva Link For DFD:
https://www.canva.com/design/DAFiurLyLGw/9J8W0AckR4gtYa_0EWA9vQ/edit
?utm_content=DAFiurLyLGw&utm_campaign=designshare&utm_medium=link2
&utm_source=sharebutton
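In step 3 of the flow above, the gateway forwards its counts to the cloud, which amounts to publishing a small JSON event. A minimal sketch of building that payload follows; the field names and nesting under "d" are illustrative assumptions modeled on common Watson IoT conventions, not the project's exact event schema:

```python
import json
import time

def build_count_event(entered, exited):
    """Package the gateway's current counts as a JSON event payload."""
    payload = {
        "d": {                       # event data nested under "d"
            "in": entered,           # people who entered so far
            "out": exited,           # people who exited so far
            "inside": max(0, entered - exited),  # current occupancy estimate
            "ts": int(time.time()),  # epoch timestamp of the reading
        }
    }
    return json.dumps(payload)

# e.g. 12 entries and 5 exits seen so far
print(build_count_event(12, 5))
```

A payload like this is what a Node-RED flow on the cloud side would parse before updating the dashboard.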
User Stories:
USN3 - Security police officer (Crowd Estimation and security), Priority: High, Team Member 2
User Story: As a security police officer, I want to be able to access the people entry and
exit data through a mobile application, so that I can enhance security and prepare
emergency exits.
Acceptance Criteria: The mobile application should display real-time entry and exit data,
including the number of people currently inside the secretariat office.
USN4 - Cabinet ministers (Contactless Service), Priority: High, Team Member 3
User Story: As a visitor, I want to call the receptionist and ask for the meeting schedule
through a mobile application, so that I can tell whether it is the right time to visit or not.
Acceptance Criteria: The system should provide a mobile application for requesting the
secretariat's meeting schedule from the gov officials.
USN5 - Higher gov officials (Analytics), Priority: Medium, Team Member 4
User Story: As a gov official, I want to be able to access historical visitor data and
emergency-situation people data, so that I can perform trend analysis and derive insights.
Acceptance Criteria: The people entry and exit data for the secretariat office should be
stored in a database in the cloud.
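The trend analysis in USN5 can be illustrated with a short sketch over stored daily counts. The function and the sample data below are hypothetical examples for illustration, not the project's actual analytics:

```python
def daily_trend(counts):
    """Day-over-day change in visitor counts pulled from cloud storage."""
    # Pair each day with the next and take the difference
    return [later - earlier for earlier, later in zip(counts, counts[1:])]

# hypothetical visitor totals for five consecutive days
visits = [230, 245, 240, 260, 300]
print(daily_trend(visits))  # [15, -5, 20, 40]
```

A rising tail in the trend (here +20, +40) is the kind of signal an official could use to schedule extra security staff.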
5 CODING & SOLUTIONING
5.1.1 Main counting script:
import cv2
import numpy as np
import pyttsx3
import requests
import time
import sys
import random
import ibmiotf.application
import ibmiotf.device
import Person  # tracking classes from section 5.1.2 (saved as Person.py)

# IBM Watson IoT platform credentials
organization = "l8xr1u"
deviceType = "PeopleCounter"
deviceId = "1234"
authMethod = "token"
authToken = "12345678"

# Spoken greeting on startup
engine = pyttsx3.init()
engine.say('Hello')
engine.runAndWait()

cnt_up = 0    # people who crossed the line going up (exits)
cnt_down = 0  # people who crossed the line going down (entries)

#cap = cv2.VideoCapture(0)            # live camera
cap = cv2.VideoCapture('video.mp4')   # recorded video

# Discard the first frames so the background model can settle
for i in range(19):
    cap.read()

w = cap.get(3)   # frame width
h = cap.get(4)   # frame height
frameArea = h * w
areaTH = frameArea / 250  # minimum contour area accepted as a person
print('Area Threshold', areaTH)

# LINES COORDINATE FOR COUNTING
line_up = int(2 * (h / 5))
line_down = int(3 * (h / 5))
up_limit = int(1 * (h / 5))
down_limit = int(4 * (h / 5))
line_down_color = (255, 0, 0)
line_up_color = (0, 0, 255)
pts_L1 = np.array([[0, line_down], [w, line_down]], np.int32).reshape((-1, 1, 2))
pts_L2 = np.array([[0, line_up], [w, line_up]], np.int32).reshape((-1, 1, 2))
pts_L3 = np.array([[0, up_limit], [w, up_limit]], np.int32).reshape((-1, 1, 2))
pts_L4 = np.array([[0, down_limit], [w, down_limit]], np.int32).reshape((-1, 1, 2))

# BACKGROUND SUBTRACTOR and morphology kernels
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernelOp2 = np.ones((5, 5), np.uint8)
kernelCl = np.ones((11, 11), np.uint8)

# Variables
font = cv2.FONT_HERSHEY_SIMPLEX
persons = []
max_p_age = 5
pid = 1

def ibmwork(cnt_up, cnt_down, deviceCli):
    # Publish the current counts as a device event, then disconnect
    data = {'OUT': cnt_up, 'IN': cnt_down}
    success = deviceCli.publishEvent('counts', 'json', data, qos=0)
    if not success:
        print('Event not published')
    deviceCli.disconnect()

def ibmstart(cnt_up, cnt_down):
    try:
        deviceOptions = {"org": organization, "type": deviceType, "id": deviceId,
                         "auth-method": authMethod, "auth-token": authToken}
        deviceCli = ibmiotf.device.Client(deviceOptions)
        deviceCli.connect()
        ibmwork(cnt_up, cnt_down, deviceCli)
    except Exception as e:
        print('Could not connect to IBM IoT platform:', str(e))

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    for i in persons:
        i.age_one()  # age every person one frame
    #########################
    #   PRE-PROCESSING      #
    #########################
    fgmask = fgbg.apply(frame)
    fgmask2 = fgbg.apply(frame)
    try:
        ret, imBin = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)
        ret, imBin2 = cv2.threshold(fgmask2, 200, 255, cv2.THRESH_BINARY)
        # Opening (erode -> dilate) removes noise
        mask = cv2.morphologyEx(imBin, cv2.MORPH_OPEN, kernelOp2)
        mask2 = cv2.morphologyEx(imBin2, cv2.MORPH_OPEN, kernelOp2)
        # Closing (dilate -> erode) joins nearby white regions
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernelCl)
        mask2 = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, kernelCl)
    except Exception:
        print('UP:', cnt_up)
        print('DOWN:', cnt_down)
        break
    #################
    #   CONTOURS    #
    #################
    contours0, hierarchy = cv2.findContours(mask2, cv2.RETR_EXTERNAL,
                                            cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours0:
        if cv2.contourArea(cnt) > areaTH:
            #################
            #   TRACKING    #
            #################
            M = cv2.moments(cnt)
            cx = int(M['m10'] / M['m00'])
            cy = int(M['m01'] / M['m00'])
            x, y, cw, ch = cv2.boundingRect(cnt)
            new = True
            if cy in range(up_limit, down_limit):
                for i in persons:
                    if abs(cx - i.getX()) <= cw and abs(cy - i.getY()) <= ch:
                        # Same person as an existing track
                        new = False
                        i.updateCoords(cx, cy)  # update coordinates and reset age
                        if i.going_UP(line_down, line_up):
                            cnt_up += 1
                            print("ID:", i.getId(), 'crossed going up at',
                                  time.strftime("%c"))
                        elif i.going_DOWN(line_down, line_up):
                            cnt_down += 1
                            print("ID:", i.getId(), 'crossed going down at',
                                  time.strftime("%c"))
                        break
                    if i.getState() == '1':
                        # Already counted; mark done once it leaves the band
                        if i.getDir() == 'down' and i.getY() > down_limit:
                            i.setDone()
                        elif i.getDir() == 'up' and i.getY() < up_limit:
                            i.setDone()
                    if i.timedOut():
                        index = persons.index(i)
                        persons.pop(index)
                if new:
                    p = Person.MyPerson(pid, cx, cy, max_p_age)
                    persons.append(p)
                    pid += 1
            #################
            #   DRAWINGS    #
            #################
            cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)
            frame = cv2.rectangle(frame, (x, y), (x + cw, y + ch), (0, 255, 0), 2)
    #########################
    # DRAWING TRAJECTORIES  #
    #########################
    for i in persons:
        cv2.putText(frame, str(i.getId()), (i.getX(), i.getY()),
                    font, 0.3, i.getRGB(), 1, cv2.LINE_AA)
    #######################
    #  DISPLAY ON FRAME   #
    #######################
    str_up = 'UP: ' + str(cnt_up)
    str_down = 'DOWN: ' + str(cnt_down)
    print('-----------------------------------------')
    print('UP:', cnt_up)
    print('DOWN:', cnt_down)
    #r1 = requests.get('https://api.thingspeak.com/update?api_key=4BGMGGBRLQM3VRHO&field1=' + str(cnt_up))
    #r2 = requests.get('https://api.thingspeak.com/update?api_key=4BGMGGBRLQM3VRHO&field2=' + str(cnt_down))
    #print(r1.status_code)
    #print(r2.status_code)
    frame = cv2.polylines(frame, [pts_L1], False, line_down_color, thickness=2)
    frame = cv2.polylines(frame, [pts_L2], False, line_up_color, thickness=2)
    frame = cv2.polylines(frame, [pts_L3], False, (255, 255, 255), thickness=1)
    frame = cv2.polylines(frame, [pts_L4], False, (255, 255, 255), thickness=1)
    cv2.putText(frame, str_up, (10, 40), font, 0.5, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.putText(frame, str_up, (10, 40), font, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
    cv2.putText(frame, str_down, (10, 90), font, 0.5, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.putText(frame, str_down, (10, 90), font, 0.5, (255, 0, 0), 1, cv2.LINE_AA)
    cv2.imshow('Frame', frame)
    ibmstart(cnt_up, cnt_down)
    if cv2.waitKey(30) & 0xff == 27:  # ESC key stops the loop
        break
#END while(cap.isOpened())
#################
#   CLEANING    #
#################
cap.release()
cv2.destroyAllWindows()
5.1.2 Solution:
from random import randint

class MyPerson:
    def __init__(self, i, xi, yi, max_age):
        self.i = i
        self.x = xi
        self.y = yi
        self.tracks = []
        self.R = randint(0, 255)
        self.G = randint(0, 255)
        self.B = randint(0, 255)
        self.done = False
        self.state = '0'
        self.age = 0
        self.max_age = max_age
        self.dir = None
    def getRGB(self):
        return (self.R, self.G, self.B)
    def getTracks(self):
        return self.tracks
    def getId(self):
        return self.i
    def getState(self):
        return self.state
    def getDir(self):
        return self.dir
    def getX(self):
        return self.x
    def getY(self):
        return self.y
    def updateCoords(self, xn, yn):
        self.age = 0  # reset age whenever the person is seen again
        self.tracks.append([self.x, self.y])
        self.x = xn
        self.y = yn
    def setDone(self):
        self.done = True
    def timedOut(self):
        return self.done
    def going_UP(self, mid_start, mid_end):
        if len(self.tracks) >= 2:
            if self.state == '0':
                if self.tracks[-1][1] < mid_end and self.tracks[-2][1] >= mid_end:
                    # crossed the line
                    self.state = '1'
                    self.dir = 'up'
                    return True
        return False
    def going_DOWN(self, mid_start, mid_end):
        if len(self.tracks) >= 2:
            if self.state == '0':
                if self.tracks[-1][1] > mid_start and self.tracks[-2][1] <= mid_start:
                    # crossed the line
                    self.state = '1'
                    self.dir = 'down'
                    return True
        return False
    def age_one(self):
        self.age += 1
        if self.age > self.max_age:
            self.done = True  # too many frames without an update: drop the track
        return True

class MultiPerson:
    def __init__(self, persons, xi, yi):
        self.persons = persons
        self.x = xi
        self.y = yi
        self.tracks = []
        self.R = randint(0, 255)
        self.G = randint(0, 255)
        self.B = randint(0, 255)
        self.done = False
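The line-crossing test in MyPerson can be exercised with synthetic track points. The class below is a stripped-down re-implementation of the same two-point condition for illustration (image y coordinates grow downward, so moving "up" means decreasing y):

```python
class TrackedPerson:
    """Minimal re-implementation of MyPerson's upward-crossing test."""

    def __init__(self):
        self.tracks = []   # history of (x, y) centroid positions
        self.state = '0'   # '0' = not yet counted, '1' = counted
        self.dir = None

    def update(self, x, y):
        self.tracks.append((x, y))

    def going_up(self, line_up):
        # Counted once, when the last step moves the centroid from
        # below the line (larger y) to above it (smaller y)
        if len(self.tracks) >= 2 and self.state == '0':
            if self.tracks[-1][1] < line_up <= self.tracks[-2][1]:
                self.state = '1'
                self.dir = 'up'
                return True
        return False

p = TrackedPerson()
for y in (120, 110, 95):       # centroid moving upwards across y = 100
    p.update(50, y)
print(p.going_up(100))  # True: crossed from y=110 to y=95
print(p.going_up(100))  # False: already counted (state is '1')
```

The one-shot state flag is what prevents a person lingering on the line from being counted on every frame.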
6 RESULTS
6.1 Performance Metrics:
Metrics: accuracy of the Python prediction, with output screenshots.
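One way to quantify counting accuracy is against ground-truth counts obtained by manually reviewing the video. The formula and function below are our own illustrative sketch, not the metric reported by the project:

```python
def counting_accuracy(predicted, actual):
    """Accuracy of a people count versus a manually verified ground truth."""
    if actual == 0:
        # No one actually passed: perfect only if nothing was counted
        return 1.0 if predicted == 0 else 0.0
    error = abs(predicted - actual) / actual   # relative counting error
    return max(0.0, 1.0 - error)               # clamp accuracy into [0, 1]

# e.g. the system counted 18 entries while 20 people actually entered
print(round(counting_accuracy(18, 20), 2))  # 0.9
```

Reporting accuracy this way makes results from videos of different lengths directly comparable.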
6.2 Output:
6.2.1 Python Code Output:
6.2.2 Given Video Output:
6.2.3 Cloud Data Output:
6.2.4 Node-RED Debug Messages Output:
6.2.5 Node-RED Dashboard Output:
7 ADVANTAGES & DISADVANTAGES
7.1 Advantages:
7.1.1 Security:
By providing estimated rush or waiting times, the system
allows users to plan their visits more safely.
7.1.2 Improved customer experience:
By providing real-time estimates, the system helps users
make informed decisions and manage their expectations. This
leads to a more positive customer experience, as they can plan
their visit accordingly and avoid unnecessary waiting.
7.1.3 Optimal resource allocation:
The system provides insights into rush patterns and
waiting times, allowing the authorities to allocate their resources
more effectively and plan their meeting schedules accordingly.
7.1.4 Data-driven decision making:
The system collects data on rush patterns, waiting times, and
user behavior. This data can be analyzed to gain valuable
insights into visitor preferences, peak hours, and other relevant
factors.
7.1.5 Enhanced safety and crowd management:
Knowing when a location or event is overcrowded can help with
crowd management and safety measures. By controlling the flow
of visitors and avoiding overcrowding, the system can contribute
to a safer and more organized environment.
7.2 Disadvantages:
7.2.1 Accuracy limitations:
Rush estimators rely on various data sources and algorithms
to provide estimates. However, the accuracy of these estimates can
be affected by factors such as unexpected events, sudden changes
in crowd behavior, or technical issues.
7.2.2 Dependency on data availability:
The accuracy and reliability of rush estimators heavily
depend on the availability of real-time data. If there are gaps or
delays in data collection, the estimates may not reflect the current
rush situation accurately. This can lead to users making decisions
based on outdated information.
7.2.3 User behavior impact:
Rush estimators can influence user behavior and potentially
create self-fulfilling prophecies. For example, if users consistently
avoid certain times or locations based on rush estimates, it can lead
to imbalanced crowd distribution and increased rush during other
periods. This can undermine the accuracy of the estimates and
potentially create new congestion patterns.
7.2.4 Psychological impact:
Relying heavily on rush estimates can create a sense of time
pressure and urgency, impacting the overall user experience.
7.2.5 Bias and inequality:
Rush estimators can inadvertently introduce biases or
inequalities. For example, if the system favors specific user groups,
it can lead to unequal access to accurate rush information. This can
further exacerbate existing disparities in terms of wait times and
access to services.
8 CONCLUSION
In conclusion, the IoT project has demonstrated the potential to address the
challenges of managing people and optimizing resource allocation in various
scenarios. The project leverages real-time data collection and analysis to provide
visitors with estimated wait times and crowd density information, enabling them to
make informed decisions and navigate crowded areas more efficiently.
Throughout the project, several key achievements have been realized. The
development of the Rush Estimator IoT device, incorporating sensors for data
collection and a robust data processing algorithm, has enabled accurate and timely
estimation of rush hour conditions. The integration of cloud computing and IoT
technologies has facilitated seamless data transmission, storage, and analysis,
ensuring the availability of real-time information for users.
9 APPENDIX