Compare People Counting Accuracy with OpenCV on Raspberry Pi and Infrared Sensor

Khanista Namee*
Faculty of Industrial Technology and Management, King Mongkut's University of Technology North Bangkok, Bangkok, Thailand
Khanista.N@fitm.kmutnb.ac.th

Rudsada Kaewsaeng-On
Faculty of Humanities and Social Sciences, Prince of Songkla University, Songkla, Thailand
rudsada.k@psu.ac.th

Jantima Polpinij
Intellect Laboratory, Department of Computer Science, Faculty of Informatics, Mahasarakham University, Mahasarakham, Thailand
Jantima.P@msu.ac.th

Kavin Rueagraklikhit
Faculty of Industrial Technology and Management, King Mongkut's University of Technology North Bangkok, Bangkok, Thailand
esred@gmail.com

Areej Meny
King Saud bin Abdulaziz University for Health Sciences, Jeddah, Saudi Arabia
Menya@ksau-hs.edu.sa

Sanya Namee
Department of Disaster Prevention and Mitigation, Royal Thai Government, Bangkok, Thailand
Sanya_ard@hotmail.com
Abstract—This paper presents the results of comparing two techniques for counting the number of people in a room in real time, for use in smart office systems. The first technique counts people through a webcam mounted in a top view; the images are then processed with OpenCV on a Raspberry Pi. The second technique counts people with an infrared sensor attached to the door. The tests revealed that the top-view camera method had the highest accuracy, 84.28%. However, this value was obtained on a PC, and it varies with the performance of the device used: on the Raspberry Pi, the accuracy drops to 75.63%. Hence, in actual use the infrared sensor is more appropriate, since it counts people at the door with an accuracy of 79.81%.
Keywords—OpenCV, internet of things, smart office, infrared
sensor, edge computing, people counting
I. INTRODUCTION
At present, there are attempts to invent and develop systems with ever more capabilities and intelligence [1]. One such system is the smart office, which has many components, such as temperature control, light and room-environment control, people counting, voice commands, and checking who is in the room [2]. This paper focuses on developing the part of the system that counts the number of people in the room in real time, because the head count is the first input on which the other parts of the smart office system depend [3, 4].
In this article, we discuss an OpenCV implementation that processes images received through a webcam and is connected to Netpie. The image-processing steps and the libraries called in the system are described. In this test, we assume that the smart office has only one door for entering and exiting the room; therefore, both the camera and the infrared sensor are installed only in the door area where the test is performed [5-8]. Walking through the door is counted as either walking into or out of the room, and the infrared-sensor test likewise counts the activity that occurs at the door [9]. The results obtained from the test were satisfactory.
The rest of this paper is organized as follows. Section II provides a brief overview of the theory applied in this research. Section III presents the system and network designs in detail. Section IV presents the experimental deployment and the measurement results. Finally, conclusions are drawn in Section V, followed by the acknowledgment section.
II. BACKGROUND
A. Image Processing
Image processing is the processing of images by means of computer calculations. There are many ways of calculating, and each method has different benefits: examining the color of individual pixels, calculating over an area of many points together, viewing patterns and texture, analyzing shape, and other analyses [10, 11].
The source of the image may be a digital camera, a scanner, or other digital media. The image then goes through some process to create a new image, such as blurring, embossing, or edge detection, and this science can be used for many benefits
[12]. It is applied in many areas, such as the medical field, security screening, counting the number of people, or checking the movement of various objects within an image.

2023 International Conference on Innovation, Knowledge, and Management (ICIKM), 979-8-3503-0331-5/23/$31.00 ©2023 IEEE, DOI 10.1109/ICIKM59709.2023.00022
Digital image processing is a field that deals with techniques and algorithms for processing digital images. "Image" here also covers other two-dimensional digital signals, including video or moving images [13], which are series of still images, called frames, sequenced over time; counting time as the third dimension, these are 3D signals. The field may also cover other 3D signals, such as 3D medical images, or even multimodal images. The concept of image processing is simple and not much hassle; what causes problems for beginners is the mathematics [14].
B. OpenCV
OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. It was originally developed by Intel and later supported by Willow Garage, followed by Itseez. OpenCV is a cross-platform library and is free to use under the open-source BSD License [15].
OpenCV also supports deep learning frameworks, including TensorFlow, Torch/PyTorch, and Caffe. Examples of OpenCV applications include 2D and 3D feature toolkits, egomotion estimation, facial recognition systems, gesture recognition, and human-computer interaction (HCI) [16].
OpenCV is written in C++ and has APIs for Python, Java, and MATLAB/Octave [17]; these interfaces are described in the online documentation. Wrappers for several other languages, such as C#, Perl, Ch, Haskell, and Ruby, have been developed to promote adoption by increasing the user base.
C. Edge Computing
Edge computing is part of a distributed computing infrastructure in which information is processed close to the "edge", that is, close to the data source. It complements cloud computing: cloud computing takes all the received data to be processed in one place [18], the cloud. As a concrete example, data is stored in a data center, and whenever any data must be processed, it is sent from the data center to the cloud.
Edge computing, by contrast, processes data close to the data source, at the edge of the network (the IoT devices, to put it simply) rather than at the center [19]. Processing close to the source reduces the time needed to send data: instead of sending all the data to the distant cloud and wasting time, the data is first processed at the nearby edge, and only the processed results are sent to the cloud, which also reduces the burden on the cloud.
D. Raspberry Pi
The Raspberry Pi is a small computer developed by the Raspberry Pi Foundation, a UK charity that works to bring digital skills to users around the world, so that users can more easily understand and build for the digital world, solve important problems, and be ready for future work [8, 20]. The Raspberry Pi is an affordable, high-performance computer that people use to learn, solve problems, and have fun. There is also an online community that develops free resources, such as articles and sample projects, to help people learn about computers and how to do things with them, whether for general use or for programming skills, which can reduce the cost of learning, especially learning to program.
The Raspberry Pi can be connected to a wired or wireless network, turning it into a complete Internet of Things device and allowing researchers and other interested parties to connect it to sensors to collect data as needed; a keyboard and mouse can also be connected easily [21]. The operating system used is Raspbian, a Linux-based operating system specifically adapted for the Raspberry Pi. It can be installed via a micro SD card, and the device can be set up as a server running services such as a web server or FTP server [22].
E. Infrared Sensor
An IR (infrared object detection) sensor is a proximity sensor module. Its working principle is that an infrared LED emits a signal [10, 23]; the infrared light strikes objects within range and is reflected back to a photodiode that receives the infrared light. The module outputs a digital signal, though some modules also support analog output. A potentiometer (R) is used to adjust the sensitivity of the infrared detection, which affects the detection distance of the sensor. The module is cheap, small, and convenient to install in applications such as robots, smart cars, and obstacle-avoidance robots [24].
Fig. 1. Infrared sensor used in people counting experiments in this research.
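The counting logic on top of such a module can be sketched as follows. This is an illustrative sketch, not the authors' code: hardware access is abstracted into a stream of digital readings, where 0 is assumed to mean "object detected in front of the sensor", and each 1→0 falling edge is counted as one passing event at the door.

```python
def count_events(readings):
    """Count falling edges (1 -> 0) in a stream of digital IR readings.

    Each falling edge is treated as one person passing the sensor at the
    door. On a real Raspberry Pi the readings would come from a GPIO pin;
    here they are simply an iterable of 0/1 samples.
    """
    count = 0
    previous = 1  # assume the idle level (no object) at start
    for level in readings:
        if previous == 1 and level == 0:
            count += 1
        previous = level
    return count

# A sample trace: the beam is interrupted twice.
samples = [1, 1, 0, 0, 0, 1, 1, 0, 1, 1]
print(count_events(samples))  # 2
```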
III. SYSTEM DESIGN AND ARCHITECTURE
A. Counting People Entering/Exiting the Door with
Raspberry Pi
This method counts the total number of people inside the room from room entries and room exits using OpenCV. Start by installing the following packages: imutils and OpenCV.
Then install the camera. We mount it as a top view, parallel to the ground. Mounting the camera on top like this is better than other angles, because the camera sees everyone passing through and there is almost no opportunity for people to overshadow each other. After that, draw a reference line to separate the area inside the room from the area outside, as shown in Figure 2.
• Below the reference line = outside the room
• Above the reference line = inside the room
B. Conditions for Entry/Exit
• Enter: the frame first appears outside the room and later moves up across the reference line; this is counted as entering the room.
• Exit: the frame first appears inside the room and later moves down across the reference line; this is counted as leaving the room, as shown in Figure 2.
Fig. 2. Draw a reference line on the door.
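The entry/exit rule can be expressed as a small function. This is an illustrative sketch (the function name, signature, and margin value are our own), using the usual image convention that y grows downward, so above the reference line means a smaller y:

```python
def classify_crossing(first_y, last_y, line_y, margin=10):
    """Classify a tracked path relative to the reference line.

    Image y-coordinates grow downward, so y > line_y is below the line
    (outside the room) and y < line_y is above it (inside). `margin`
    keeps a small dead zone around the line to avoid double counting,
    mirroring the extra-space check in step 7 of the pipeline.
    """
    if first_y > line_y + margin and last_y < line_y - margin:
        return "enter"   # appeared outside, ended inside
    if first_y < line_y - margin and last_y > line_y + margin:
        return "exit"    # appeared inside, ended outside
    return None          # did not cleanly cross the line

print(classify_crossing(300, 100, 200))  # enter
print(classify_crossing(100, 300, 200))  # exit
print(classify_crossing(205, 195, 200))  # None (too close to the line)
```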
C. Preparing Image Processing
Start developing the program by creating a file called Peoplecount_netpie.py and importing the necessary packages. After that, install the imutils package together with OpenCV for video manipulation. Set the global variables; the important ones are:
• frameSize: the size of the frame.
• pointMove[]: saves the position of the selected tracking point so that it can continue to be followed, since a path is chosen by calculating the nearest route.
• firstTrack[]: records the first point at which the path appears, to know whether the object started outside or inside the room.
• statusMove[]: a timer for each route, used to delete routes that have not been updated for a long time.
• countPeople: counts the number of people.
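A sketch of this state, together with the nearest-route matching used later in the tracking step. The variable names follow the paper; the helper nearest_track, the frame size, and the distance threshold are our own illustrative assumptions:

```python
# Global state, following the variable names described in the paper.
frameSize = (640, 480)   # size of the processed frame (assumed value)
pointMove = []           # latest position of each tracked route
firstTrack = []          # first point where each route appeared
statusMove = []          # frames since each route was last updated
countPeople = 0          # running head count inside the room

MAX_DIST = 50  # assumed threshold: farther matches mean "path not found"

def nearest_track(point):
    """Return the index of the existing route closest to `point`,
    or None if every route is farther than MAX_DIST (step 6)."""
    best_i, best_d = None, MAX_DIST
    for i, (x, y) in enumerate(pointMove):
        d = ((x - point[0]) ** 2 + (y - point[1]) ** 2) ** 0.5
        if d < best_d:
            best_i, best_d = i, d
    return best_i

pointMove.extend([(100, 120), (400, 300)])
print(nearest_track((110, 125)))  # 0
print(nearest_track((600, 50)))   # None (too far from any route)
```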
D. Image Processing for Counting the number of people in
Top View
The program reads the image frame by frame and then detects motion. The motion detection steps are as follows:
1. cv2.cvtColor converts the RGB color image to gray-scale, as shown in Figures 3 and 4.
2. cv2.threshold then changes it to black and white, keeping dark-to-black areas such as hair and cutting out doors or other objects that are white to light, as shown in Figure 5.
3. A BackgroundSubtractor algorithm is then applied. It treats objects that have not moved or changed color for a while as background and removes them, leaving only the points where there is movement, as shown in Figure 6.
4. The resulting output contains only moving objects.
5. The tracking process then starts when the latest point appears: find the existing path that the point belongs to and follow it.
6. The match is made to the existing path closest to the object, compared from the latest position of each route. However, if even the closest path is too far from the object's location, the path is considered not found.
7. Verify that neither the starting point nor the latest point is too close to the reference line; to prevent double addition and subtraction, there must be an extra margin, and the starting point must not be too close to the line.
8. Check the entry/exit conditions:
• Exit: the initial point is inside the room and the last point is outside the room; decrease the number of people.
• Enter: the initial point is outside the room and the last point is inside the room; increase the number of people.
Fig. 3. Detect motion.
Fig. 4. Gray-scale.
Fig. 5. Black and white.
Fig. 6. BackgroundSubtractor.
IV. RESULTS
A. Final Result Image from the Top View
The image obtained after processing, and what appears on the screen while the program is running, is shown below in Figure 7.
Fig. 7. Image obtained after processing.
B. Results of Testing the Invention of a Device for Counting
the Number of People in the Room
This section compares the results of testing the equipment for counting the number of people in the room. Two counting techniques were compared:
1. Counting the number of people using the camera, as described above (Table II).
2. Counting the number of people using infrared sensors (Table III).
The people counting function using a camera was tested by recording videos at different angles in real-life situations, processing the videos with the counting program on the Raspberry Pi, comparing the program's output with the actual events in the video, and saving the result. The results are categorized as shown in Table I:
TABLE I. DIFFERENT TYPES OF RESULTS FROM THE PROGRAM AS COMPARED TO REALITY

Label   Activity   Result
A       IN         IN
B       EXIT       IN
C       IN         EXIT
D       EXIT       EXIT
E       -          IN
F       -          EXIT
G       IN         -
H       EXIT       -
TABLE II. THE EXAMPLE RESULTS OF PEOPLE COUNTING PROGRAM IN TOP VIEW WITH TESTING VIDEO

Video File   Length   Reality           Run Script        Label
                      Event   Time      Event   Time
File1.mp4    8:02     OUT     1:02      OUT     1:02      D
                      IN      4:53      IN      4:53      A
                      OUT     4:55      OUT     4:55      D
                      OUT     5:03      OUT     5:04      D
                      IN      6:02      -       -         G
File2.mp4    50:01    OUT     2:29      OUT     2:29      D
                      OUT     3:44      OUT     3:44      D
                      IN      6:32      IN      6:35      A
                      -       -         OUT     29:26     F
                      IN      37:00     -       -         G
File3.mp4    53:35    IN      1:37      IN      1:37      A
                      OUT     28:14     OUT     28:14     D
File4.mp4    3:55     OUT     2:44      OUT     2:45      D
TABLE III. THE EXAMPLE RESULTS OF PEOPLE COUNTING PROGRAM USING INFRARED SENSORS

Video File   Length     Reality            Run Script          Result   Label
                        Event   Time       Event   Time
File5.mp4    9:30:10    OUT     8:03:49    OUT     8:03:51     TRUE     D
                        IN      8:05:09    IN      8:05:10     TRUE     A
                        IN      8:46:26    -       -           FALSE    G
File6.mp4    14:28:54   OUT     10:34:12   OUT     10:34:12    TRUE     D
                        IN      10:37:07   IN      10:37:06    TRUE     A
                        IN      12:01:42   -       -           FALSE    G
                        OUT     12:16:50   -       -           FALSE    H
                        OUT     12:21:43   OUT     12:21:44    TRUE     D
                        IN      12:21:57   IN      12:21:58    TRUE     A
                        IN      13:33:08   OUT     13:33:09    FALSE    C
File7.mp4    15:49:41   OUT     14:11:02   OUT     14:11:03    TRUE     D
                        IN      14:12:11   IN      14:12:13    TRUE     A
C. Summary of the Test Results of the Program to Count the
Number of People in the Form of a Top View
From the test with 22 top-view video clips, with a total length of 17 hours 14 minutes, all results are summarized in Table IV.
TABLE IV. SUMMARY OF THE TEST OF THE PROGRAM TO COUNT THE NUMBER OF PEOPLE IN TOP VIEW

Label   Activity   Result   Number of Times
A       IN         IN       137
B       OUT        IN       7
C       IN         OUT      3
D       OUT        OUT      130
E       -          IN       11
F       -          OUT      18
G       IN         -        16
H       OUT        -        24
In all, the results obtained from the test were separated into two cases: traffic entering the room and traffic leaving the room. In each case, the test results can be summarized as follows:
• Inbound traffic: of a total of 156 walks into the room, 137 were counted correctly, so the accuracy for traffic entering the room was 87.82%.
• Outbound traffic: of a total of 161 walks out of the room, 130 were counted correctly, so the accuracy for traffic leaving the room was 80.75%.
• Averaging the accuracy of the two cases gives a total accuracy of 84.28%.
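The accuracy figures follow directly from the Table IV counts; a quick check (illustrative script, not part of the paper):

```python
# Counts from Table IV (camera, top view).
counts = {"A": 137, "B": 7, "C": 3, "D": 130,
          "E": 11, "F": 18, "G": 16, "H": 24}

# Reality "IN" events are labels A, C, G; reality "OUT" events are B, D, H.
inbound_total = counts["A"] + counts["C"] + counts["G"]    # 156
outbound_total = counts["B"] + counts["D"] + counts["H"]   # 161

in_acc = 100 * counts["A"] / inbound_total    # correctly counted INs
out_acc = 100 * counts["D"] / outbound_total  # correctly counted OUTs
avg_acc = (in_acc + out_acc) / 2

print(round(in_acc, 2), round(out_acc, 2), round(avg_acc, 2))
# 87.82 80.75 84.28
```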
D. Summary of the Results of Using Infrared Sensors to
Count the Number of People
From the test with the device shown in Figure 1, comparing the video clips with the results obtained from the counting device, using 4 videos with a total length of 23 hours 10 minutes, all results are summarized in Table V.
TABLE V. SUMMARY OF PEOPLE COUNTING PROGRAM TESTING USING INFRARED SENSORS

Label   Activity   Result   Number of Times
A       IN         IN       101
B       OUT        IN       12
C       IN         OUT      8
D       OUT        OUT      116
E       -          IN       6
F       -          OUT      4
G       IN         -        17
H       OUT        -        18
In summary, the results obtained from the test were separated into two cases: traffic entering the room and traffic leaving the room. In each case, the test results can be summarized as follows.
• Inbound traffic: of a total of 126 walks into the room, 101 were counted correctly, so the accuracy for traffic entering the room was 80.16%.
• Outbound traffic: of a total of 146 walks out of the room, 116 were counted correctly, so the accuracy for traffic leaving the room was 79.45%.
• Averaging the accuracy of the two cases gives a total accuracy of 79.81%.
Comparing the accuracy of the two methods, the top-view camera counting method was the most accurate at 84.28%, but this value was obtained from the test on a PC and varies with the performance of the device used. When run on the Raspberry Pi, the accuracy is lower. Therefore, it is preferable to switch to the infrared-sensor method, which gives better results in actual use.
V. CONCLUSION
This paper presents the results of an innovative way to count the number of people in a room. In this experiment, people are counted under the assumption that the room has only one door, by counting the people who walk in and out of that door. This makes the number of people in the room known in real time, since the count changes whenever someone goes through the door. It is hoped that this research will be applied as part of counting the number of people in a room in a smart office system; knowing the head count will provide information for controlling other parts of the room in the future. In this research, two people counting techniques were compared for efficiency. The top-view camera method was the most accurate at 84.28%; however, this was measured on a PC, and the value varies with the performance of the device used. When run on the Raspberry Pi, the accuracy is lower. Therefore, it is preferable to switch to the infrared-sensor method, which gives better results in actual use, with an accuracy of 79.81%.
ACKNOWLEDGMENT
This research was funded by King Mongkut's University of Technology North Bangkok, contract no. KMUTNB-64-DRIVE-34. We would like to express our greatest appreciation for their support.
REFERENCES
[1] M. Piechocki, M. Kraft, T. Pajchrowski, P. Aszkowski and D. Pieczynski,
"Efficient People Counting in Thermal Images: The Benchmark of
Resource-Constrained Hardware," in IEEE Access, vol. 10, pp. 124835-
124847, 2022, doi: 10.1109/ACCESS.2022.3225233.
[2] K. Namee, S. Karnbunjong and J. Polpinij, "The Integration of File Server
Function and Task Management Function to Replace Web Application on
Cloud Platform for Cost Reduction," 2019 IEEE Asia Pacific Conference
on Circuits and Systems (APCCAS), Bangkok, Thailand, 2019, pp. 405 -
408, doi: 10.1109/APCCAS47518.2019.8953164.
[3] E. P. Myint and M. M. Sein, "People Detecting and Counting System,"
2021 IEEE 3rd Global Conference on Life Sciences and Technologies
(LifeTech), Nara, Japan, 2021, pp. 289-290, doi:
10.1109/LifeTech52111.2021.9391951.
[4] S. Saxena and D. Songara, "Design of people counting system using
MATLAB," 2017 Tenth International Conference on Contemporary
Computing (IC3), Noida, India, 2017, pp. 1-3, doi:
10.1109/IC3.2017.8284344.
[5] S. T. Kouyoumdjieva, P. Danielis and G. Karlsson, "Survey of Non-
Image-Based Approaches for Counting People," in IEEE
Communications Surveys & Tutorials, vol. 22, no. 2, pp. 1305-1336,
Secondquarter 2020, doi: 10.1109/COMST.2019.2902824.
[6] P. Ren, W. Fang and S. Djahel, "A novel YOLO-Based real-time people
counting approach," 2017 International Smart Cities Conference (ISC2),
Wuxi, China, 2017, pp. 1-2, doi: 10.1109/ISC2.2017.8090864.
[7] M. Chen, Z. Tian, Y. Jin and M. Zhou, "A State Recognition Approach
Based on Distribution Difference for Passive People Counting," 2021
IEEE Asia-Pacific Microwave Conference (APMC), Brisbane, Australia,
2021, pp. 124-126, doi: 10.1109/APMC52720.2021.9661718.
[8] M. C. Le, M.-H. Le and M.-T. Duong, "Vision-based People Counting
for Attendance Monitoring System," 2020 5th International Conference
on Green Technology and Sustainable Development (GTSD), Ho Chi
Minh City, Vietnam, 2020, pp. 349-352, doi:
10.1109/GTSD50082.2020.9303117.
[9] J. W. Choi, X. Quan and S. H. Cho, "Bi-Directional Passing People
Counting System Based on IR-UWB Radar Sensors," in IEEE Internet of
Things Journal, vol. 5, no. 2, pp. 512-522, April 2018, doi:
10.1109/JIOT.2017.2714181.
[10] E. S. Wahyuni, R. R. Alinra and H. Setiawan, "People counting for indoor
monitoring," 2017 International Conference on Computing, Engineering,
and Design (ICCED), Kuala Lumpur, Malaysia, 2017, pp. 1-5, doi:
10.1109/CED.2017.8308112.
[11] A. Kumar Singh, D. Singh and M. Goyal, "People Counting
System Using Python," 2021 5th International Conference on Computing
Methodologies and Communication (ICCMC), Erode, India, 2021, pp.
1750-1754, doi: 10.1109/ICCMC51019.2021.9418290.
[12] K. Namee, C. Kamjumpol and W. Pimsiri, “Development of Smart
Vegetable Growing Cabinet with IoT, Edge Computing and Cloud
Computing,” 2020 2nd International Conference on Image Processing and
Machine Vision (IPMV), Bangkok, Thailand, 2020, pp. 47–52,
https://doi.org/10.1145/3421558.3421588.
[13] K. Rantelobo, M. A. Indraswara, N. P. Sastra, D. M. Wiharta, H. F. J.
Lami and H. Z. Kotta, "Monitoring Systems for Counting People using
Raspberry Pi 3," 2018 International Conference on Smart Green
Technology in Electrical and Information Systems (ICSGTEIS), Bali,
Indonesia, 2018, pp. 57-60, doi: 10.1109/ICSGTEIS.2018.8709141.
[14] P. Khoenkaw and P. Pramokchon, "Bi-directional Portable Automatic
People Counting Using A Single Ultrasonic Range Finder," 2020 Joint
International Conference on Digital Arts, Media and Technology with
ECTI Northern Section Conference on Electrical, Electronics, Computer
and Telecommunications Engineering (ECTI DAMT & NCON), Pattaya,
Thailand, 2020, pp. 34-37, doi:
10.1109/ECTIDAMTNCON48261.2020.9090689.
[15] W. Liu, M. Salzmann and P. Fua, "Counting People by Estimating People
Flows," in IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 44, no. 11, pp. 8151-8166, 1 Nov. 2022, doi:
10.1109/TPAMI.2021.3102690.
[16] A. Lalchandani and S. Patel, "Smart IoT Based People Counting System,"
2021 International Conference on Artificial Intelligence and Machine
Vision (AIMV), Gandhinagar, India, 2021, pp. 1-6, doi:
10.1109/AIMV53313.2021.9670970.
[17] S. Muthukumar, W. S. Mary, S. K. Abijith, P. Haribalan and S. Sakthivel,
"Sensor Based Automated People Counting System," 2018 Second
International Conference on Intelligent Computing and Control Systems
(ICICCS), Madurai, India, 2018, pp. 1796-1798, doi:
10.1109/ICCONS.2018.8662887.
[18] I. Ahmad, Z. Ul Islam, F. Ullah, M. Abbas Hussain and S. Nabi, "An
FPGA Based Approach For People Counting Using Image Processing
Techniques," 2019 11th International Conference on Knowledge and
Smart Technology (KST), Phuket, Thailand, 2019, pp. 148-152, doi:
10.1109/KST.2019.8687568.
[19] M. Cruz, J. J. Keh, R. Deticio, C. V. Tan, J. A. Jose and E. Dadios, "A
People Counting System for Use in CCTV Cameras in Retail," 2020 IEEE
12th International Conference on Humanoid, Nanotechnology,
Information Technology, Communication and Control, Environment, and
Management (HNICEM), Manila, Philippines, 2020, pp. 1-6, doi:
10.1109/HNICEM51456.2020.9400048.
[20] S. Sun, N. Akhtar, H. Song, C. Zhang, J. Li and A. Mian, "Benchmark
Data and Method for Real-Time People Counting in Cluttered Scenes
Using Depth Sensors," in IEEE Transactions on Intelligent Transportation
Systems, vol. 20, no. 10, pp. 3599-3612, Oct. 2019, doi:
10.1109/TITS.2019.2911128.
[21] F. Wang, F. Zhang, C. Wu, B. Wang and K. J. Ray Liu, "Passive People
Counting using Commodity WiFi," 2020 IEEE 6th World Forum on
Internet of Things (WF-IoT), New Orleans, LA, USA, 2020, pp. 1-6, doi:
10.1109/WF-IoT48130.2020.9221456.
[22] J. Kalikova and J. Krcal, "People counting in smart buildings," 2018 3rd
International Conference on Intelligent Green Building and Smart Grid
(IGBSG), Yilan, Taiwan, 2018, pp. 1-3, doi:
10.1109/IGBSG.2018.8393517.
[23] E. Hagenaars, A. Pandharipande, A. Murthy and G. Leus, "Single-Pixel
Thermopile Infrared Sensing for People Counting," in IEEE Sensors
Journal, vol. 21, no. 4, pp. 4866-4873, 15 Feb, 2021, doi:
10.1109/JSEN.2020.3029739.
[24] J. -H. Choi, J. -E. Kim and K. -T. Kim, "People Counting Using IR-UWB
Radar Sensor in a Wide Area," in IEEE Internet of Things Journal, vol. 8,
no. 7, pp. 5806-5821, 1 April, 2021, doi: 10.1109/JIOT.2020.3032710.