

POLITECNICO DI TORINO

Master degree course in Electronic Engineering

Master Degree Thesis

Indoor human localization: a sensor fusion approach
using long distance capacitive and infrared sensors

Supervisors:
prof. Mihai Teodor Lazarescu
prof. Luciano Lavagno

Candidate:
Irene Castro, ID: 242266

Academic Year 2019-2020


This work is subject to the Creative Commons Licence
"Gutta cavat lapidem, non vi sed sæpe cadendo."
Dripping water hollows out stone, not through force but through persistence
Lucretio, De rerum natura

Summary
The possibility of having a reliable system that detects the exact position of
a human in a room arouses a lot of interest, since it could lead to numerous
improvements in everyday life. Fields of application range from smart homes,
where indoor human localization can be used to adjust the lights or the
heating in the user's proximity, to hospitals and hospices, where it can help to
monitor patients remotely. For these and many other applications, a growing
interest in this field has developed in recent years.
In this context, a research team at the Department of Electronics and
Telecommunications (DET) of Politecnico di Torino is working to design a
low-cost, easy to use, unobtrusive, passive, tag-less and privacy-aware indoor
localization system that can be safely and easily installed in smart homes and
assisted living environments by using long-range capacitive sensors, digital
filters and neural networks. Within this project, this thesis work has the
purpose of improving the performance, reliability and accuracy of the overall
system by implementing a complementary sensor fusion technique between
the existing capacitive sensor network localization system and an infrared
thermal sensor localization system designed for this purpose.
At the beginning of this thesis work, a study of the state of the art on indoor
human localization has been carried out. The advantages and disadvantages
of the analyzed systems have been identified and a comparison among them
has been made. From this analysis, it emerged that, even though infrared
and capacitive sensors do not have the best accuracy among all the analyzed
systems, their accuracy can be sufficient for indoor localization purposes. More-
over, both of them are cheap, safe, tagless, do not consume much power and
are privacy-aware. Furthermore, most of the weak points of these two techniques
are complementary.
The main problem of capacitive sensors lies in the fact that their sensitivity
steeply decreases as the distance from them increases.
Furthermore, they are affected by different sources of noise that cannot be
easily controlled; above all, drift is the issue that affects this kind of sensor the most. On the
other hand, the infrared thermal sensor is affected by kinds of noise that are
often complementary to the ones affecting capacitive sensors. For this reason,
a sensor fusion between the two has been implemented.
During the experimental part of this thesis, the infrared sensor system has
been designed. In detail, a MEMS thermal sensor has been chosen for the
complementary acquisition system. It uses the thermopile technique to give
information about the surface temperature of an object in an array of 16
pixels (4x4). By attaching it to the ceiling of a room, a human passing
underneath the sensor is detected as pixels with a temperature higher than
that of the floor, allowing the user's position to be detected while respecting
privacy.
Data from the infrared sensor have been collected using an Arduino Uno
board as microcontroller, communicating over the I2C protocol. The reliability
of the data has been improved by checking the integrity of the received data
using CRC-8 (Cyclic Redundancy Check) and by requesting a retransmission
in case of error. The samples collected on the board have been sent via radio
to a second Arduino Uno board connected to a computer. For this purpose,
two Xbee modules have been programmed, one used as transmitter and the
other as receiver.
The output wave signals from the sensor have been analyzed with an
oscilloscope to choose the maximum reliable sampling rate for the infrared
sensor system, which turned out to be 8 Hz. Data processing and packing have
been done using MATLAB. For each set of data, a heat map of the room has
been plotted to give a graphical representation of the data.
The system has been tested and a stability characterization has been carried
out by comparing the output temperature of the infrared sensor with a
reference DHT11 Humidity Temperature Sensor. It has been observed that
the values read from the infrared sensor changed in synchrony with those of
the DHT11 sensor.
The operation of each of the four capacitive sensor nodes has been tested
separately. In particular, a sensitivity test has been made by moving in front
of each sensor in a straight line, starting from a distance of 1.8 meters and
approaching in small steps of 30 centimetres, recording the acquired data.
The test has been repeated under different conditions: changing the location
of the sensors in the room, changing some electronic components of the
circuit and re-soldering some contacts. In the end, an increase in sensitivity
and a reduction of high-frequency noise have been obtained for most sensor
nodes.
After testing all system components separately, a complete experiment has
been set up. In addition to the two systems described, an ultrasound sensor
network has been installed to provide an accurate reference for the position of
the person in the room. This system is composed of a network of 4 stationary
ultrasonic beacons and a mobile beacon, called hedgehog, worn by the
person to be tracked. The room used for the experiment is a 3 m x 3 m room
with a ceiling at a height of 3.05 m. In this structure, the capacitive sensor plates
are placed at the centre of the walls at a height of 120 cm from the floor, the
infrared sensor is placed at the central point of the ceiling and the ultrasound
sensors are placed at the four corners of the ceiling, communicating with a
tag on the person's head.
Data from all sensors have been acquired simultaneously and the obtained
results have been analyzed. Considering the factors that could positively or
negatively influence the readings of the capacitive and infrared sensors,
different experiments have been carried out under different conditions.
In particular, experimental data were collected in four experiments
executed by two different people, each lasting half an hour. During these
experiments, the person was slowly and continuously walking inside the room
while all three localization systems were active. The first two experiments
(one for each person) were carried out in the evening after sunset, while the
other two during the afternoon with the sun's rays penetrating the room
through the blinds. It was, therefore, possible to test the system under different
temperature conditions and with and without the interference of sunlight in
the room. In the third experiment, an element of disturbance for the infrared
system has been introduced to better test the sensor fusion technique.
From the experimental results it has been observed that, by merging the data
from the two sensing systems, a much more accurate, sensitive and robust
system can be created. Capacitive drift can be corrected by using the infrared
sensor, since it is not affected by the same problem. The overall sensitivity
range can be extended for both systems: the lack of sensitivity of the
infrared sensor at the borders of the room is balanced by the high sensitivity
of the capacitive sensors in the proximity of the walls. In the same way, the
lack of sensitivity of the capacitive sensors in the central area of the room is
compensated by the field of view of the infrared one. In the parts where
both systems have good sensitivity, the non-homogeneous sensitivity of the
infrared sensor when crossing pixels can be corrected by the capacitive ones.
To check the improvement obtained, it will be necessary to analyze the
data through neural networks, comparing the localization performance
using first the data set composed of data from only the capacitive sensors, then
only data from the infrared sensor, and finally the merged data from the two systems. In this
way, the actual improvement that sensor fusion can give to both systems will
be clear in terms of accuracy and error.
During this work, a lot of effort has been put into making everything work
as well as possible and into approaching the problems encountered in a
scientific way. The expectation is that this system will be further enhanced to
improve the lives of users, especially those who need care and assistance.

Contents

List of Tables 10

List of Figures 11

1 Introduction to Indoor Human Localization 13


1.1 Importance of indoor human localization and its fields of use . 13
1.2 Characteristics of an ideal indoor human localization system . 14
1.3 Indoor localization techniques . . . . . . . . . . . . . . . . . . 16
1.3.1 Vision-based method . . . . . . . . . . . . . . . . . . . 16
1.3.2 Infrared-based method . . . . . . . . . . . . . . . . . . 16
1.3.3 Motion-based method . . . . . . . . . . . . . . . . . . 17
1.3.4 Sound-based method . . . . . . . . . . . . . . . . . . . 17
1.3.5 Ultrasound-based method . . . . . . . . . . . . . . . . 18
1.3.6 Radio Frequency-based methods . . . . . . . . . . . . . 19
1.3.7 Visible Light-based method . . . . . . . . . . . . . . . 20
1.3.8 Pressure-based method . . . . . . . . . . . . . . . . . . 21
1.3.9 Capacitive-based method . . . . . . . . . . . . . . . . . 22
1.3.10 Summary of the State of Art . . . . . . . . . . . . . . . 25
1.4 Related works . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.5 Sensor fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.6 Main contribution of this thesis . . . . . . . . . . . . . . . . . 30

2 Thermal Infrared Sensor acquisition System 32


2.1 Infrared sensor selection . . . . . . . . . . . . . . . . . . . . . 32
2.2 Omron D6T MEMS Thermal Sensor . . . . . . . . . . . . . . 34
2.2.1 Operating principle . . . . . . . . . . . . . . . . . . . 35
2.2.2 Field of view . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.3 Transmission of data through I2C . . . . . . . . . . . 38
2.3 System implementation using microcontroller . . . . . . . . . . 41
2.3.1 Connection with the Thermal Infrared Sensor . . . . . 42
2.3.2 Wireless communication using Xbee module . . . . . . 42
2.4 Programming the Microcontroller using Arduino . . . . . . . . 43
2.4.1 Error detection using CRC and retransmission . . . . . 44
2.4.2 Radio transmission of data through Xbee module . . . 46
2.4.3 Sensor data acquisition period . . . . . . . . . . . . . . 46
2.4.4 Data acquisition with MATLAB . . . . . . . . . . . . . 47

3 Experiment setup 49
3.1 Capacitive Sensors System . . . . . . . . . . . . . . . . . . . 50
3.1.1 Capacitive sensor module . . . . . . . . . . . . . . . . 50
3.1.2 Testing the module and implemented optimization . . . 51
3.1.3 Filtering the noise . . . . . . . . . . . . . . . . . . . . 56
3.2 Ultrasound Sensors System . . . . . . . . . . . . . . . . . . . . 56
3.3 Infrared Sensor System . . . . . . . . . . . . . . . . . . . . . . 57
3.3.1 Infrared sensor: area of sensing evaluation . . . . . . . 59

4 Data acquisition and Sensor Fusion Results 62


4.1 Experiments one and two - Evening . . . . . . . . . . . . . . . 63
4.2 Experiments three and four - Afternoon . . . . . . . . . . . . 69
4.3 Experimental data merging . . . . . . . . . . . . . . . . . . . 74

5 Conclusion and future work 76

A Microcontroller code 79
A.1 Transmitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
A.2 Receiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

B MATLAB code 85
B.1 Acquisition of the data through serial port . . . . . . . . . . . 85
B.2 Data plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

Bibliography 88

List of Tables
1.1 Comparison between the most used Indoor positioning systems 25
2.1 Overview of the characteristics of Omron D6T MEMS Ther-
mal Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1 Main characteristics of DHT11 Humidity Temperature Sensor 58

List of Figures
1.1 Localization with ultrasound sensors system . . . . . . . . . . 18
1.2 Overview of the main capacitive sensing techniques . . . . . . 24
1.3 Main building blocks of capacitive sensor Node and Base Station 28
2.1 Structure of a thermopile . . . . . . . . . . . . . . . . . . . . . 34
2.2 Inside detail of D6T MEMS Thermal Sensor . . . . . . . . . . 36
2.3 Angle of view of D6T-44L-06 MEMS Thermal Sensor by Omron 37
2.4 Field of view area positioning D6T sensor at 3 m and at 1 m
from the floor . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.5 Outer view and connections of the Omron D6T-44L-06 MEMS
Thermal Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.6 I2C data line flow and Output data composition of D6T-44L-
06 MEMS Thermal Sensor . . . . . . . . . . . . . . . . . . . . 39
2.7 Start and Stop of a transmission from D6T sensor to the mas-
ter using I2C protocol . . . . . . . . . . . . . . . . . . . . . . 40
2.8 Schematic of the overall infrared thermal system . . . . . . . . 41
2.9 Electrical connection between D6T sensor and MCU . . . . . . 42
2.10 Arduino connection to Xbee module through a shield . . . . . 43
2.11 Graphical representation of CRC-8 . . . . . . . . . . . . . . . . 44
2.12 Function implementing CRC-8 algorithm . . . . . . . . . . . 45
2.13 Call of the calc_crc function implementing CRC-8 algorithm
inside the main code . . . . . . . . . . . . . . . . . . . . . . . 45
2.14 Initialization of the packet of data to be sent via radio using
Xbee. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.15 Schematic of how the sampling period is organized. . . . . . . 47
2.16 Output scheme of the infrared sensor acquisition system. . . . 48
3.1 Representation of the overall experiment setup . . . . . . . . 49
3.2 555-based capacitance-frequency converter . . . . . . . . . . . 50
3.3 Sensitivity test results made for all the four capacitive sensor
nodes and repeated by changing the timer IC . . . . . . . . . 53
3.4 Drift acquisition for all four capacitive sensor nodes for several
555 ICs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.5 Plots of the sensitivity test for sensor nodes 1 and 3 before
and after the hardware debugging operations. . . . . . . . . . 55
3.6 Ultrasound Localization System By Marvelmind . . . . . . . . 57
3.7 D6T thermal sensor stability characterization . . . . . . . . . 58
3.8 Infrared sensor field of view evaluation . . . . . . . . . . . . . 59
3.9 Values used for the field of view computation referred to the
room. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.1 Plot of the raw data acquired in Experiment 1 from the ultra-
sound sensors system . . . . . . . . . . . . . . . . . . . . . . . 63
4.2 Plot of the data acquired in Experiment 1 from the ultrasound
sensors system after filtering them with Hampel filter . . . . . 64
4.3 Plot of the data acquired in Experiment 1 from the capacitive
sensors system . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.4 Plot of the data acquired in Experiment 2 from the ultrasound
sensors system after filtering them with Hampel filter . . . . . 65
4.5 Plot of the data acquired in Experiment 2 from the capacitive
sensors system . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.6 Images from the infrared sensor during Experiment 1 and com-
parison with a schematic showing the ground truth. . . . . . . 68
4.7 Plot of the data acquired in Experiment three from the Ultra-
sound sensors system after filtering them with Hampel filter
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.8 Plot of the data acquired in Experiment 3 from the capacitive
sensors system . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.9 Plot of the data acquired in Experiment 4 from the infrared
sensors and comparison with ground truth . . . . . . . . . . . 71
4.10 Two samples acquired in Experiment 4 from the infrared sen-
sors system . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.11 Plot of the data acquired in Experiment four from the ultra-
sound sensors system after filtering them with Hampel filter
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.12 Plot of the data acquired in Experiment 4 from the capacitive
sensors system . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

Chapter 1

Introduction to Indoor
Human Localization

1.1 Importance of indoor human localization and its fields of use
Nowadays technology is increasingly present in our lives: the furniture in our
homes becomes smarter and interacts with us, and robots are replacing us
in carrying out many tasks, even in the assistance and monitoring of people
in need of care. In this context, localizing people inside a closed environment
arouses a lot of interest.
The possibility to have a reliable system that detects the exact position of
a human in a room could lead to numerous improvements in everyday life.
Starting from trivial applications, indoor human localization can be used
in smart homes for predicting the needs of the inhabitants based on their loca-
tion and movements. For example, an automatic system can be programmed
for adjusting the lights or the heating in the proximity of the person; a dis-
tributed sound system could use this information to follow the user as he
moves among the various rooms. This can be useful to reduce the waste
of energy, switching off some electronic devices when the user is not near
them. Moreover, an indoor localization system makes it possible to detect
the presence of intruders in the house and to activate an alarm system.
Inside a museum, the position of the user could be used to give him, through
audio guides, contextualized content about the artwork he is looking at.

In the entertainment field, video games could take advantage of the position
of the player, for example to move the player's avatar in a virtual world.
Considering also other application domains, indoor human localization
can play an essential role in improving the quality of life of people with
health problems. It would be, for example, a huge help for people with visual
impairments, who could benefit from this technology to find their way even
in unfamiliar places.
In hospitals and hospices or even in the houses of sick or elderly people, a
localization system can help monitor patients remotely.
In the industrial field, a reliable indoor localization system could be used for
the full automation of employees' work and to prevent accidents, increasing
both productivity and safety.
Summing up, indoor human localization can be useful for making life easier,
more comfortable, smarter and even safer. Considering all these advantages,
the increasing interest in this field is understandable.

1.2 Characteristics of an ideal indoor human localization system
In the last years, various technologies have been developed to detect the
presence of people in closed environments.
Global Navigation Satellite Systems (GNSS, such as GPS), the most widely
used technology for localization in outdoor environments, are instead useless
inside buildings, both for the lack of precision and for the architectural barriers
that prevent the signal from passing. In fact, GPS relies on radio transmissions
in the microwave spectrum (at frequencies close to 1500 MHz), which suffer
heavy absorption when they have to pass through roofs, walls and other
conductive objects.
New techniques have therefore been studied by the scientific community,
in order to go beyond those limits and allow their usability in a domestic
environment.
In the following, the main characteristics an ideal indoor human localization
system should have are analyzed.
• Reliable, Precise and Accurate: Considering the rooms of a house
as the target environment of use, the positioning error must be much
smaller than the size of the room itself. Moreover, the system must be of
good quality and performance, always giving precise information about
the position inside the room.

• Safe and Secure: The system must not affect the health of people
who are localized (safe) and the information about their position or
the presence or not of people inside the rooms must be protected and
encrypted such that it cannot be used for malicious activities (secure).

• Easy to use, Passive and Device free: Considering that the end-
user could be an elderly person or someone without any knowledge of
technology, in the ideal case the system should be as easy to use as
possible, so that, once installed, it operates automatically without
requiring any specific activity to be performed in order to be localized.
Many of the best indoor localization systems make use of tags that the
user must carry around in order to be located. Although these methods
may be simple and inexpensive to implement, especially by exploiting
technologies already widespread such as smartphones and smartwatches,
they may be, at the end of the day, unreliable, because the user could
forget to carry the tag with him while moving around the rooms of the
house. Moreover, the user could be uncomfortable and reluctant to wear
a device at every moment of his day, especially in moments of relaxation.
For this reason, passive and tag-less localization systems are considered
the most suitable for the purposes of this work.

• Privacy aware: The system must be usable and accepted by the user
even in environments where, for reasons of privacy, the user does not
want to be filmed. For this reason, the acquisition of high-resolution im-
ages for human localization should be excluded. In fact, even by ensuring
that the captured images would be encrypted, obscured and not used for
other purposes, the user would not trust this technology.

• Unobtrusive: the system should not interfere with the user's daily
activities and movements. Furthermore, considering also surveillance
applications, it should be invisible and not easy to disable by thieves
and intruders.

• Cheap and easy to install: Cost plays an essential role in the spread
of a product to as many users as possible. It is also important that the
system can be easily installed in existing buildings without difficult and
expensive masonry work.

• Low maintenance and exploitation cost: A localization system must
not require the user to spend a lot of money on maintenance or to keep

the system on. A low power system for long battery life or a wireless
power supply system is preferable.

In real life, it is difficult to find a system that is a good compromise among
all these specifications. In the next section, some indoor localization systems
from the state of the art will be analyzed, observing whether or not they
present the characteristics mentioned above.

1.3 Indoor localization techniques


In this section a brief review of the state-of-the-art methods for locating
people in closed environments will be made, analyzing, for each approach,
the main features, the advantages and disadvantages [1] [2].

1.3.1 Vision-based method


The use of video cameras for monitoring people inside a room is one of
the most traditional methods. Video camera-based monitoring systems are
among the most reliable and flexible, since they suit a lot of different tasks [3].
In fact, they can be used as an accurate surveillance system, allowing one to
know exactly and with high resolution what happens in a house while the
owners are away and to keep records of the images. They can be used not
only to sense the position of the user in a room, but also his movements and
gestures, whether he is standing or lying on the floor, the activity he is
performing and his interactions with the environment [4]. Moreover, they can
be used to distinguish between different users and perform personalized
actions according to the case.
In spite of all these advantages, this method also has several drawbacks that
involve energy consumption, cost inefficiency, a high computational cost and,
above all, lack of privacy. In fact, for applications like elderly care and assisted
living, monitoring human activities in places such as the lavatory and bathroom
is necessary, since in those places the probability of slipping and getting hurt
is high. In these cases the use of video cameras is inappropriate and the user
could object to it. Moreover, video cameras are ineffective in the dark.

1.3.2 Infrared-based method


Human localization systems based on infrared sensors exploit the human
skin's ability to emit infrared radiation. In fact, any object with a temperature
between 0°C and 70°C emits radiation in the long-wavelength subsection of
the infrared spectrum (from 8 µm to 15 µm). With this method, these particular
radiations are sensed and the processed data are collected in arrays,
obtaining a low-resolution image of the indoor environment [5]. Since in indoor
environments the human skin temperature is generally higher than the
ambient temperature, from these images it is possible to detect and localize
a person in a room. The image collected is actually a matrix that reports in
each cell the temperature of a precise spot of the room, so no privacy problems
arise with this solution, since nothing more than shapes can be distinguished
from the acquired data. By using low-resolution infrared sensors the respect
for privacy is even higher. The main problem of this kind of solution is the
fact that in the home environment there can be heat sources other than
human bodies that radiate at similar wavelengths, influencing the measurement
signal. An example can be a mug of hot chocolate on a table or a radiator. A
solution to this problem could be to study the heating process of different
objects and to adapt the localization algorithm in order to distinguish between
a human being and an inanimate object.
More details about the infrared sensor method will be presented in Chapter
2.

1.3.3 Motion-based method


Nowadays every smartphone and smartwatch is equipped with gyroscopes
and accelerometers. These sensors can be used to detect, starting from a
reference point, how far and in which direction the user is going. In fact, the
data from the accelerometer can be used to detect each step the user takes
and the corresponding step length, while gyroscope data can help determine
the direction change for each step [6]. This method is very easy and cheap to
implement, since the user does not need to buy additional hardware: he
already has it in his pocket. However, a physical discomfort issue could arise
for the user, because he must always carry his smart device with him in order
to be localized. Moreover, the intense use of the accelerometer and gyroscope
leads to a power consumption problem that would drain the phone battery
very quickly. Another problem is a linearly increasing error that is directly
proportional to the movement time [7].
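As an illustration of the dead-reckoning principle just described (step detection is omitted and the fixed step length is a simplifying assumption; this is a sketch, not code from the cited works), each detected step advances a 2D position along the current heading:

#include <cmath>
#include <cstdio>

// Minimal pedestrian dead-reckoning sketch: each detected step advances the
// 2D position by an estimated step length along the current heading.
struct Position { double x = 0.0, y = 0.0; };

void applyStep(Position &p, double stepLength_m, double heading_rad) {
    // Heading measured from the y axis, increasing clockwise.
    p.x += stepLength_m * std::sin(heading_rad);
    p.y += stepLength_m * std::cos(heading_rad);
}

int main() {
    const double kPi = 3.14159265358979;
    Position p;                     // start at the known reference point
    const double stepLength = 0.7;  // assumed average step length, in metres
    double heading = 0.0;           // heading integrated from gyroscope data, in radians

    // Example: four steps forward, a 90-degree turn, then two more steps.
    for (int i = 0; i < 4; ++i) applyStep(p, stepLength, heading);
    heading += kPi / 2.0;
    for (int i = 0; i < 2; ++i) applyStep(p, stepLength, heading);

    std::printf("estimated position: x = %.2f m, y = %.2f m\n", p.x, p.y);
    return 0;
}

Since every step adds its own length and heading error, the position error accumulates over time, which is the linearly increasing error mentioned above.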

1.3.4 Sound-based method


Making use of microphones only, this method localizes a person as a sound
source. It is, in fact, possible to deduce the direction of a sound by simply
analyzing the difference in the arrival time of the sound at three or more
microphones placed in different spots of the room and interpolating the data
using triangulation [8] [9]. The accuracy and reliability of the system can be
improved by increasing the number of microphones.
This technique obtains the position and distance of a person with centimetre-
scale accuracy in a quite inexpensive way and without any annoyance to the
user. However, the sensing can be easily influenced by other audio signals or
noise, so it is prone to false detections.
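As a sketch of the arrival-time-difference idea just described (the notation is introduced here for illustration and is not taken from [8] [9]), each microphone pair constrains the source position as follows:

% TDOA constraint for a microphone pair at positions m_i and m_j: the measured
% arrival-time difference fixes the difference of the source-to-microphone distances.
\Delta t_{ij} = t_i - t_j
  = \frac{\lVert \mathbf{p} - \mathbf{m}_i \rVert - \lVert \mathbf{p} - \mathbf{m}_j \rVert}{v_{\mathrm{sound}}},
\qquad v_{\mathrm{sound}} \approx 343\ \mathrm{m/s}

Each pair constrains the source to a hyperbola; intersecting the constraints from three or more microphones yields the position estimate.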

1.3.5 Ultrasound-based method


Ultrasonic waves belong to the spectrum of sound waves with frequencies so
high that they are above the human ear's audibility limit of about 20 kilohertz.
Using them for localization purposes is not a human invention: many animals,
such as bats and cicadas, use ultrasound to locate their prey while hunting. Using
ultrasound for indoor positioning systems makes it possible to achieve a good
accuracy of a few centimetres, thanks to the slow propagation speed and the
low penetration through walls of ultrasound signals [10] [11].

Figure 1.1. Localization with an ultrasound sensor system [10]. A set of fixed
ultrasonic transmitters is placed at known positions, while the mobile node
operates as an ultrasonic receiver. A coordinator node is connected to a PC,
coordinating and managing all the other nodes of the network.


The operating principle is based on calculating the distance between receivers
and transmitters, knowing the propagation speed of sound in the transmission
medium and the times of departure and arrival of the signal. A correct
temporal synchronization of the network nodes is required for a precise
calculation. The transmitter nodes are usually placed on the ceiling, while
individuals need to carry a mobile receiver with them, keeping the line of
sight between transmitters and receivers.
An example of how the system is organized is presented in Figure 1.1.
Summing up, despite the high accuracy, reliability and low infrastructure
cost, the problem with this method is the fact that it is not tagless.
Thanks to its advantages, this method can be used to obtain reference
measurements in the testing phase of other localization systems. Other details
about an application of this method will be given in Section 3.2.
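As a sketch of this principle (the beacon layout, timings and the simple trilateration below are illustrative assumptions, not the algorithm of the commercial system used later in this work), the position can be estimated from time-of-flight distances to fixed beacons:

#include <cmath>
#include <cstdio>

// Time-of-flight ranging and 2D trilateration sketch with three fixed beacons.
struct Point { double x, y; };

// Distance from one beacon, given departure/arrival times and the speed of sound.
double tofDistance(double tDeparture_s, double tArrival_s, double vSound = 343.0) {
    return vSound * (tArrival_s - tDeparture_s);
}

// Solve for the receiver position by subtracting the first range equation from
// the other two, which yields a 2x2 linear system solved by Cramer's rule.
Point trilaterate(const Point b[3], const double d[3]) {
    double A11 = 2.0 * (b[1].x - b[0].x), A12 = 2.0 * (b[1].y - b[0].y);
    double A21 = 2.0 * (b[2].x - b[0].x), A22 = 2.0 * (b[2].y - b[0].y);
    double c1 = d[0]*d[0] - d[1]*d[1] - b[0].x*b[0].x + b[1].x*b[1].x
                                      - b[0].y*b[0].y + b[1].y*b[1].y;
    double c2 = d[0]*d[0] - d[2]*d[2] - b[0].x*b[0].x + b[2].x*b[2].x
                                      - b[0].y*b[0].y + b[2].y*b[2].y;
    double det = A11 * A22 - A12 * A21;
    return { (c1 * A22 - c2 * A12) / det, (A11 * c2 - A21 * c1) / det };
}

int main() {
    Point beacons[3] = { {0.0, 0.0}, {3.0, 0.0}, {0.0, 3.0} };   // example layout [m]
    double d[3] = { tofDistance(0.0, 0.00412),                   // example timings [s]
                    tofDistance(0.0, 0.00655),
                    tofDistance(0.0, 0.00655) };
    Point p = trilaterate(beacons, d);
    std::printf("estimated position: x = %.2f m, y = %.2f m\n", p.x, p.y);
    return 0;
}

With the example timings above, the estimate falls near (1 m, 1 m), and a synchronization error between the nodes translates directly into a range error, which is why precise timing is required.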

1.3.6 Radio Frequency-based methods


Several localization methods use radio frequencies to localize people in a
room. Most of them exploit technologies already widespread in PCs and
smartphones for communication purposes such as IEEE 802.11, Bluetooth,
Zigbee, RFID and Ultra-Wideband (UWB).
In general, the primary methods used for positioning are [12]:

• Time of Flight (TOF): the position is estimated by calculating the time it


takes for electromagnetic waves to travel the distance from a transmitter
to a receiver.

• Angle of Arrival (AOA): the angle between an anchor point and the
sensor with respect to a coordinate system is measured and from this
information, the position is obtained.

• Signal Strength: exploits the fact that the signal from a radio transmitter
gradually falls off in strength as the receiver moves further away from
the transmitter.

• Phase: the phase difference between the transmitted and the received
signal is used for measuring the distance.

The first wireless network localization systems to be developed required
users to carry tags with them. In recent years, device-free RF systems have
also been realized. In fact, many studies about the effects of human presence
on the RF signal strength [13] claim that the human body produces both
constructive and destructive interference in wireless radio network environments,
changing the RF communication pattern between the wireless transceivers.
This radio irregularity, always considered a drawback, has been exploited to
locate the human presence in indoor environments and even to discriminate
human activities or gestures [1].
Among all the techniques developed, the RSSI method is quite widespread
[14]. First, the system must perform an offline measurement in order to learn
the signal strengths at all locations in the area of interest when there is no
human presence. Then, the online real-time measurements are compared with
the offline ones stored in a database to estimate the user location by analyzing
the differences in the signals.
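A minimal sketch of the fingerprinting scheme described above (the fingerprint values and the nearest-neighbour matching rule are illustrative assumptions, not a specific published implementation):

#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

// One offline fingerprint: a known location and the RSSI seen from each AP there.
struct Fingerprint {
    std::string location;
    std::vector<double> rssi_dBm;
};

// Return the offline location whose RSSI vector is closest (Euclidean distance)
// to the online measurement.
std::string matchLocation(const std::vector<Fingerprint> &db,
                          const std::vector<double> &online_dBm) {
    std::string best;
    double bestDist = 1e30;
    for (const auto &fp : db) {
        double dist = 0.0;
        for (size_t i = 0; i < online_dBm.size(); ++i) {
            double diff = fp.rssi_dBm[i] - online_dBm[i];
            dist += diff * diff;
        }
        if (dist < bestDist) { bestDist = dist; best = fp.location; }
    }
    return best;
}

int main() {
    // Example offline database for three spots and two access points.
    std::vector<Fingerprint> db = {
        {"near door",   {-45.0, -70.0}},
        {"room centre", {-58.0, -60.0}},
        {"near window", {-72.0, -48.0}},
    };
    std::vector<double> online = {-60.0, -59.0};   // online real-time measurement
    std::printf("estimated location: %s\n", matchLocation(db, online).c_str());
    return 0;
}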
The advantages of this method lie in the fact that the infrastructure used
is already present in many homes, therefore there are no costs to be added
for the purchase and placement of sensors; just an adequate algorithm to
analyze the characteristics of the RF signal is needed. Another positive point
is the fact that the fingerprints collected during the offline phase are, on
average, stable over time, unless the Access Point is relocated or new bulky
objects are introduced within the area, which normally does not happen
often.
However, it requires additional time and work for the user in order to
collect the data in the "offline phase". Moreover, other radio devices that
transmit at the same frequency as Wi-Fi could interfere with the system
and give some faults. Another fact to consider is safety: the authors in [15]
claim that repeated Wi-Fi studies show that Wi-Fi causes oxidative stress,
sperm/testicular damage, neuropsychiatric effects including EEG changes,
apoptosis, cellular DNA damage, endocrine changes, and calcium overload.

1.3.7 Visible Light-based method


In recent years, visible light communication (VLC) [16] using light devices
such as light emitting diodes (LEDs) has been developed. In [17] it is
explained how it is possible to use LEDs both for illumination and positioning
purposes. With this technique, the intensity of the light of the LEDs present
on the ceiling is modulated through on-off switching, also known as "on-off
keying" (OOK), in order to send a packet of bits. This modulation method
is common in VLC, is usually called "intensity modulation" (IM) and is
performed at a frequency such that the human eye cannot see any modification
in the intensity of illumination. Signals from the transmitters (LEDs) are then

received by an optical sensor (a photodiode or a camera) that the human to
be localized in the room must wear. The communication takes place through
a channel that can be any line-of-sight (LOS) path from the lights to the
receiver. The characteristics of the received signals are then elaborated through
some positioning algorithm to obtain information about the position. This
architecture is similar to WiFi-based and Bluetooth-based location systems,
but using visible light as the carrier does not contribute to the increasing
crowding of the electromagnetic spectrum band allocated to Wi-Fi [18].
The main advantage of this technology lies in the fact that it can be easily
integrated into a domestic environment by just replacing normal bulbs with
special ones suitable for VLC. This would involve a negligible deployment
cost. Furthermore, it is a green technology, safe for health and with a long
lifetime. The disadvantage lies in the fact that the human to localize must
carry a receiver, which must lie in the LOS of the LEDs. The receiver can be
a small and cheap light sensor board connected to the smartphone through
the audio port, using the Analog-to-Digital Converter (ADC) of the audio
interface to sample the signals. If VLC technology becomes successful in the
next years, this receiver will probably be integrated into modern smartphones,
but at the moment this technology is not so widespread, so it would be
difficult for the user to find all the necessary equipment.

1.3.8 Pressure-based method

One of the most intuitive and traditional positioning technologies consists
in installing a grid of pressure sensors under the floor and using the change
in pressure resulting from a person passing over it in order to localize him [19].
This method has many positive aspects, because once installed under the floor
it is invisible, unobtrusive and privacy-aware. Moreover, taking advantage of
the differences in weight that may exist between the inhabitants of a house, it
can be used for user identification, making it possible to distinguish between
different users, between an adult and a child, or between a human and a
pet [20], [21].
The main disadvantage is the installation. In fact, it is laborious and expensive
and requires sufficient space beneath the floor surface and a flexible flooring
on top of it. Even maintenance work is not easy to carry out, because the
floor would need to be dismantled.

1.3.9 Capacitive-based method


Between people, devices and conductive objects, a natural capacitive coupling
is always present. Electrical capacitance is defined as the electrical charge
stored on a conductive object divided by the resulting change of its potential.
This quantity depends on the size of the conductive object, the distance to
other conductive objects, the dielectric properties of the objects and of the
dielectric between them (e.g., air).
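In formula form (the parallel-plate expression below is a textbook idealization added here only for illustration; it is not a model of the actual electrode-body coupling used later in this work):

% Definition of capacitance and the idealized parallel-plate case.
C = \frac{Q}{\Delta V},
\qquad
C_{\text{parallel plate}} \approx \frac{\varepsilon_0 \varepsilon_r A}{d}

where A is the plate area, d the separation and \varepsilon_r the relative permittivity of the dielectric; the rapid growth of the coupling as d shrinks is one intuitive reason why capacitive sensing is far more sensitive at short range.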
This physical property is the basis of capacitive sensing, a method that
can be used to detect touch, proximity, or deformation. This method has
been widely used in the last decade for realizing touchscreens and touchpads
on phones, tablets, and laptops but can be also applied in the human indoor
localization field. In fact, since the human body is made of conductive ma-
terial, its distance from a capacitive sensor can be measured in an indirect
way by measuring the capacitive coupling between them.
Capacitive sensors consist of sensing electrodes that basically are metallic
plates placed on the walls of the room. In general, the sensing can be [22]:
• Active: with this technique a transmitter electrode generates a known
signal that is received by a receiver electrode. Both electrodes are
capacitively coupled to each other and to the human body to be
localized. The difference in the strength of the signal is an indication of
the presence and movement of the body.
• Passive: with this technique, no signals are sent, but the existing electric
fields are passively sensed by the electrodes. In this way, it is possible
to sense human activities with minimal power and infrastructure.
However, because of their principle of operation, passive systems tend
to be less precise and more susceptible to changes in their environment.
Another distinction can be made considering the operating mode of the ca-
pacitive sensing system that relies on the relative position between the trans-
mitter and receiver electrodes and the human body:
• Loading mode: this technique is an active, self-capacitance sensing mode,
since only one electrode is used both for transmitting and receiving. If
a body gets close to the electrode, it loads the electrode, causing a
displacement current to flow through the body to ground. This current
increases as the distance between the body and the electrode decreases.
• Shunt mode: with this technique there are two electrodes, a transmitter
and a receiver. Since there is a capacitive coupling between the two
electrodes, a displacement current flows from the transmitter to the
receiver electrode. As the human body approaches the electrodes, it
couples capacitively with both of them, reducing the displacement
current flowing from the transmitter to the receiver electrode. By
measuring that current it is possible to obtain the position of the body.
• Transmit mode and Receive mode: the working principle of these
techniques is similar to the one used in shunt mode; what changes is the
position of the body with respect to the receiver/transmitter electrodes.
If the body is much closer to the transmitter, it essentially becomes an
extension of the transmitter electrode, since the coupling between the
body and the transmitter is much greater than the other couplings. In
this case we are in transmit mode, and to obtain the position of the
body it is sufficient to measure the increase of the displacement current
into the receiver electrode as the body gets closer to it. Receive mode
is the inverse of transmit mode.

A drawing of the described modes is in Figure 1.2.


In all these techniques the role of the ground is very important: without a
common potential, capacitive sensing systems do not have a shared reference
and the measurements would be incorrect.
The main advantage of the capacitive sensing method is the fact that it is
a tag-less, unobtrusive and privacy-aware technique that, once installed, does
not need any action from the user in order to do its work. Moreover, since
electric fields can propagate through insulators, sensors can be hidden behind
covers and become totally invisible and unobtrusive for the users. These
systems are also not expensive, because electrodes are usually manufactured
from relatively cheap conductive materials (e.g., copper).
A disadvantage can be the fact that the sensitivity of the plate changes
with the distance of the body from it, being more accurate at short distances
than at long ones. Moreover, the measurement can be influenced by big
conductive objects placed in the room, such as a fridge or metal shelving.


Figure 1.2. Overview of the main capacitive sensing techniques [22].


1.3.10 Summary of the State of the Art

Table 1.1 summarizes the methods described in this section and their main
characteristics.

Table 1.1. Comparison between the most used indoor positioning systems.
Data inside the table have been taken from different articles cited in this
chapter, in particular from [1], [2], [14], [17].

Methods          | Accuracy      | Price | Safety | Tag    | Privacy | Power  | Notes
Vision-based     | high          | high  | yes    | no     | no      | high   | High resolution but lack of privacy and high cost
Infrared sensors | 20 cm         | low   | yes    | no     | yes     | low    | Privacy aware, cheap, but hot sources can give faults
Motion           | 1 m           | low   | yes    | yes    | yes     | high   | Easy to implement, costless if you have a smartphone, but not tag-less
Sound            | 33 cm         | low   | yes    | no     | yes     | low    | Cheap, but sound sources other than the person can interfere
Ultrasound       | 5 cm          | high  | yes    | yes    | yes     | medium | High accuracy but not tag-less
RSSI             | 0.5 m - 3 m   | low   | no     | yes/no | yes     | high   | Cheap if a Wi-Fi system is already present, but not safe and prone to error due to Wi-Fi band congestion
VLC (LED)        | 22 cm         | low   | yes    | yes    | yes     | low    | Cheap, safe, green, but the technology is not widespread
Pressure sensor  | 5 cm          | high  | yes    | no     | yes     | low    | Accurate, unobtrusive, but difficult and expensive to install
Capacitive       | 15 cm - 30 cm | low   | yes    | no     | yes     | low    | Tagless, unobtrusive, privacy aware, but noisy and sensitivity depends on distance

Among the systems analyzed, the vision-based system could reach the highest
accuracy and would give much more information about the person to localize
with respect to the other techniques. However, its lack of privacy makes it not
suitable for indoor localization purposes. The system that best combines high
accuracy and privacy awareness is the ultrasound sensor system, which, however,
could be difficult to use in a smart home or assisted living environment because
it is not tagless. Regarding costs, the cheapest systems are the ones that use
existing infrastructure or equipment that is already in the pockets of many
users. Among these are the motion-based technique, which uses the smartphone
to localize; RSSI, which uses the Wi-Fi connection

present in most of our homes; and the sound-based system, which can exploit
any kind of microphone. However, the first is not tagless and it could be
uncomfortable for the user to always carry the smartphone with him. The
second can be tagless, but it could suffer from interference from other radio
devices transmitting at the same frequency, and safety concerns due to
constant exposure to radio frequencies could also arise. The third method,
based on the sounds the human produces, could be full of errors, since sound
sources other than the user could interfere. Another cheap localization
system is represented by the VLC-based method, which is a novel and
interesting approach to the indoor localization problem, but the technology is
not widespread and users could have difficulty finding the needed
instrumentation.
The other systems analyzed need an initial installation phase, which, in the
worst-case scenario, can also lead to masonry work. This is the case of
pressure sensors, since installing them requires removing the floor.
Despite the advantages that such a system can have (accurate, unobtrusive),
the initial installation cost makes it inaccessible from an economic point of
view. The infrared and capacitive sensors, instead, even if they need an
installation phase, require no large masonry work: the capacitive sensors can
be attached to the existing walls and covered, to make them invisible, with
non-conductive materials, while the infrared sensors simply have to be glued
to the ceiling or anywhere in the room. These two systems do not have the
best accuracy among all the systems analyzed, but their accuracy is sufficient
for the indoor localization purpose. Moreover, both of them are cheap, safe,
tagless, do not consume much power and are privacy-aware.

1.4 Related works


This thesis is part of a larger project that a research team at the Department
of Electronics and Telecommunications (DET) of Politecnico di Torino is
working on.
The project has the aim of inferring the indoor position and the trajectory
of a person using long-range capacitive sensors, digital filters, and neural
networks.
The front end of the system is made of a capacitive plate (that acts as
transducer) operating in load mode, a relaxation oscillator that converts the
measured capacitance into a frequency, and a micro-controller used for
collecting the data and sending them over radio using an XBee module. Once
received, the data of each sensor are processed using digital filters; then the
data, labelled with the person's position within the room, have been used to
train and test some machine learning classifiers to infer the location of the
person in the room.
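As an illustration of the capacitance-to-frequency conversion step (the standard 555 astable relation is shown as a sketch; the actual circuit variant and component values used in the project are those described in [27], [28] and Section 3.1):

% Output frequency of a 555 timer in astable configuration, where the sensed
% capacitance C_sense acts as the timing capacitor: a larger coupling capacitance
% (person closer to the plate) lowers the output frequency read by the MCU.
f_{\text{out}} \approx \frac{1.44}{\left(R_A + 2R_B\right) C_{\text{sense}}}

Measuring a frequency rather than a tiny capacitance directly is what lets a simple micro-controller digitize the sensor output by counting pulses.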
In order to achieve an increase in the sensing range, several experiments
on the capacitive plates and data processing techniques have been carried out.
In particular, in [24] the design, implementation and experimental results
of the capacitive sensor node have been presented. Four capacitive sensors,
each attached to a wall of a 3 m x 3 m room, have been used for the
localization of a single person inside the room. Different plate sizes (4 cm x 4 cm,
8 cm x 8 cm and 16 cm x 16 cm) and several localization algorithms have
been tested in terms of precision, recall, average distance error, and detected
walking path. It has been observed that all these parameters improve
significantly as the plate area increases. The 16 cm square sensor plate has
been chosen and an ad hoc conditioning circuit has been designed. The details
about the capacitive-sensor front-end interface design have been presented
in [27] and [28].
Since capacitive sensors have a strongly nonlinear distance-capacitance
dependency that degrades the signal-to-noise ratio, advanced processing
techniques are required to improve the sensor performance. In [25] the
post-processing of the data collected from the sensors has been done by
exploiting some machine learning classifiers from the Weka collection.
It has been observed that the use of machine learning classifiers can effectively
mitigate sensor data variability and noise due to environmental conditions.
Comparing the localization performance of different algorithms, its variation
with the training set size, and the algorithm resource requirements for both
training and inference, the authors have found the Random Forest algorithm
to be the best solution for this purpose. In [30] all the details about the
architectures used for the neural network and the analysis of the obtained
results are summarized.
Figure 1.3 shows a schematic of the overall system.

Figure 1.3. Main building blocks of the capacitive sensor Node and Base
Station. Four sensor Nodes were connected to a single Base Station [25].

Furthermore, the use of capacitive sensors for human identification has
been explored. In [26] the authors have noticed that human bodies with
different BMI have different influences on electric fields at different frequencies,
concluding that capacitive sensors can successfully distinguish between people
with significant differences in weight, but that for a more accurate
identification a system with more sensor plates is needed. Remaining in the
field of human identification, in [29] the electric and dielectric properties of
human body tissues have been exploited in order to discriminate among
different users. Based on the fact that each body has a unique composition,
this method represents a refinement with respect to the previous work,
improving the sensitivity and discrimination capability of the sensor.
Starting from the indoor localization system presented in this section, this
thesis work aims to add to the capacitive sensor network an infrared thermal
sensor, exploiting the sensor fusion technique in order to improve its
performance, reliability and accuracy. More details about how the capacitive
sensor nodes have been used in this project will be given in Section 3.1.

1.5 Sensor fusion


Sensor fusion is a technique that allows combining data provided by different
sensor sources in order to obtain better results compared to what would have
been possible using the same sources individually. This can lead to more
precise, more complete and more reliable values.
Traditionally, systems have one sensor transmitting information to a single
application. This single-sensor measurement approach generally suffers from
the following problems [23]:

• Sensor deprivation: The breakdown of a sensor element causes a loss of


perception of the desired object.

• Limited spatial coverage: An individual sensor usually covers a restricted


region.

• Limited temporal coverage: Some sensors need a particular set-up time


to perform and to transmit a measurement, thus limiting the maximum
frequency of measurements.

• Imprecision: Measurements from individual sensors are limited to the


precision of the employed sensing element.

• Uncertainty: With a single sensor, uncertainty is difficult to reduce, since
it can depend on missing data (e.g., due to occlusions) or on ambiguity
in the collected data.

These problems can be overcome by using sensor fusion. In fact, by using


multiple sensors, either homogeneous or heterogeneous, robustness and reli-
ability are increased by adding redundancy, spatial and temporal coverage
are extended and resolution and uncertainty are improved.
In the sensor fusion scenario, different configurations of sensors can be
made. In general, they fall into three categories:

• Complementary: the sensors of the system do not directly depend on
each other, but they can be combined to give a more complete view of
the quantity under measurement.

• Competitive: the sensors of the system are independent of one another
and deliver measurements of the same property.

• Cooperative: the sensors of the system are independent of one another,
but the information they provide is used to derive information that
would not be available from a single sensor.

In the context of indoor positioning, sensor fusion allows combining two or
more of the techniques analyzed in the previous section in order to mitigate
the defects and limitations of one technique with the advantages of another,
when possible. In this work, a sensor fusion approach is used, combining in a
complementary way the data acquired independently from a capacitive sensor
network and an infrared sensor.
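As a toy illustration of the complementary configuration (the inverse-variance weighting below is a generic textbook scheme chosen only for the example; it is not the fusion method used in this thesis, whose actual processing is described in the following chapters):

#include <cstdio>

// Toy complementary fusion: combine two independent estimates of the same
// coordinate, weighting each by the inverse of its (assumed) error variance,
// so the sensor that is more reliable in a given zone dominates the result.
double fuse(double xCapacitive, double varCapacitive,
            double xInfrared,   double varInfrared) {
    double wCap = 1.0 / varCapacitive;
    double wIr  = 1.0 / varInfrared;
    return (wCap * xCapacitive + wIr * xInfrared) / (wCap + wIr);
}

int main() {
    // Near a wall the capacitive estimate is trusted more (smaller variance);
    // in the centre of the room the infrared estimate would dominate instead.
    double fusedNearWall = fuse(0.40, 0.01, 0.55, 0.20);
    double fusedCentre   = fuse(1.60, 0.30, 1.45, 0.02);
    std::printf("near wall: %.2f m, centre: %.2f m\n", fusedNearWall, fusedCentre);
    return 0;
}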

1.6 Main contribution of this thesis


This thesis work has the purpose of improving the performance, reliability
and accuracy of the existing capacitive sensor network localization system
described in Section 1.4 by implementing a complementary sensor fusion
technique with an infrared thermal sensor localization system designed for
this purpose. The goal is to obtain a low-cost, easy to use, unobtrusive, passive,
tag-less and privacy-aware indoor localization system that can be safely and
easily installed in smart homes and assisted living environments.
The main problem of capacitive sensors lies in the fact that their sensitivity
steeply decreases as the distance from them increases. By placing the
capacitive sensors on the four walls of a room, the area of greatest sensitivity
lies along the edges of the room itself, while the data acquired in the centre
of the room are the most affected by noise and errors. Placing an infrared
sensor on the ceiling at the centre of the room provides additional sensing
data in the part of the room where data were less accurate, improving the
reliability. Moreover, by using two different systems that extract the
localization information from two completely different physical quantities,
the obtained system is more robust, since the noises that affect one of the
two systems do not affect the other and vice versa.
In particular, the present work is organized as follows:
• The importance of indoor human localization has been highlighted,
underlining the characteristics an ideal indoor localization system should
have. A review of the state-of-the-art techniques for indoor localization
has been made, analyzing advantages and disadvantages for each method
(Chapter 1).
• An infrared sensor acquisition system has been designed using an
infrared sensor, in particular the D6T-44L-06 MEMS Thermal Sensor,
and an Arduino board to acquire data about the position of the user. The
acquired data have been sent via radio through an Xbee module to a
computer, where the elaboration and packing of the acquired information
have been done with MATLAB (Chapter 2).
• A complete experiment has been set up using four capacitive sensors,
the infrared sensor and an ultrasound sensor as reference. The operation
of each part has been tested separately (Chapter 3).
• Finally, experimental data from all the sensors have been collected
simultaneously in four different experiments. The obtained results have
been analyzed, filtered and merged (Chapter 4).


• Conclusions and observations on possible future developments have been
drawn (Chapter 5).

Chapter 2

Thermal Infrared Sensor acquisition System
2.1 Infrared sensor selection
With the prospect of using a sensor fusion technique with the capacitive
sensor system, an infrared sensor has been chosen for several reasons. First
of all, it is affected by kinds of noise that are often complementary to the
ones affecting capacitive sensors. Moreover, it is a tag-less, inexpensive,
privacy-aware and safe localization system that can be easily placed in the
target environments (smart homes and assisted living environments).
In the field of infrared radiation sensors, two main kinds of sensors exist:
1. Quantum well infrared photo-detector (QWIP) technology: it is a
technique that uses the photoelectric effect to detect long-wavelength
infrared (i.e., 8-12 µm) radiation. Its principle of operation is based on
the inter-band transition of electrons across the band gap (Eg), from
the valence band to the conduction band, when photons with energy
hν > Eg excite them. These photo-excited electrons create a current that
is proportional to the number of photons collected and that can be easily
measured [31]. This method is really sensitive and has a short response
time, but it is necessary to cool the photo-detector to a temperature
of 70 K in order to perform well, which makes it not suitable for the
intended application of this project.
2. Thermal detectors: these are based on the temperature increase of some
materials upon absorption of IR radiation, which causes changes in the
material characteristics. Among the advantages, their detectivity is nearly
wavelength-independent (flat spectral response), they do not need cooling,
they usually work at room temperature and they have a lower cost
compared to quantum detectors. Disadvantages include lower sensitivity
or detectivity and slower response speed (normally in milliseconds)
[32]. Today, the most common thermal detectors are:
• Microbolometer: it is based on the temperature dependence of an elec-
trical resistance whose value changes when heat radiation is ab-
sorbed. The change in resistance causes a change in the voltage
drop across the bolometer resistance, which can be easily measured. In
order to achieve high sensitivity and large specific detectivity, it is
necessary to use a material with a high temperature coefficient of
electrical resistance. A drawback is that this method is quite ex-
pensive and an initial temperature stabilization is necessary before
use.
• Pyroelectric sensors: also called Passive InfraRed (PIR) sensors, they are
made with materials that generate energy when exposed to heat
(pyroelectric materials). A PIR does not autonomously detect presence;
it detects sudden changes in temperature, which modify the state
that the PIR had previously memorized. When someone passes in
front of it, a rapid change in the detected temperature occurs and
the person is located. The movement of objects at the same temperature
as the background, of course, is not detected. In [34], the authors use
a set of PIRs to implement a human movement detecting system that
is able to recognize the direction of movement, the distance of the body
from the PIR sensors and the speed of movement during two-way,
back-and-forth walking, and to identify the walking subject. This kind
of system cannot be used for human localization purposes, since PIR
sensors can only detect changes in heat flow and would not detect the
presence of a stationary person.
• Thermopiles: they are composed of several thermocouples connected in
series that exploit the Seebeck effect in order to detect the tempera-
ture of the object under observation. In other words, they are stacks
of junctions of two different materials [33] that are called active and
passive junctions.
As can be seen in Figure 2.1, the active junctions are attached to
a thermally isolated membrane that is exposed to radiation, while
the passive ones are only influenced by the ambient temperature. A
voltage difference proportional to the temperature difference at the
junctions is then created. In this way, it is possible to measure the
effective temperature of the object.

Figure 2.1. Structure of a thermopile [5]

The response times of this technology are usually between 20 ms and
50 ms, so thermopiles are fast enough for the detection of human motion.

Summing up, for thermal human indoor localization the best solution is
to use thermopiles because:

1. unlike pyroelectric sensors, their output does not depend on the rate of
change of the object’s temperature;

2. unlike bolometer-based devices, thermopiles do not require special tem-
perature stabilization and they are cheap;

3. unlike quantum-based detectors, they do not need cooling.

For all these reasons, thermopile technology has been chosen for this work.

2.2 Omron D6T MEMS Thermal Sensor


The sensor chosen for this work is the D6T-44L-06 MEMS Thermal Sensor
by Omron [35], [36], [37]. This component uses the thermopile technique to
give information about the surface temperature of an object in an array of 16
pixels (4x4). The main characteristics of this sensor are reported in Table
2.1. As can be seen, it has a maximum object temperature output accuracy
of ±1.5°C. Its resolution can be expressed in terms of the "Noise
Equivalent Temperature Difference" (NETD), which is a measure of how well a thermal
imaging detector can distinguish very small differences in thermal
radiation in the image. For this sensor, the value is equal to 0.14°C.

Table 2.1. Overview of the characteristics of the Omron D6T MEMS Thermal Sensor

Item:                            D6T-44L-06
Power supply voltage:            4.5 to 5.5 VDC
View angle:                      44.2° (X direction), 45.7° (Y direction)
Temperature accuracy:            ±1.5°C max (VCC = 5.0 V, Ta = 25°C)
Temperature resolution (NETD):   0.14°C (typical)
Current consumption:             5 mA

In the following sections, some other details about this infrared thermal
sensor and the design choices will be discussed.

2.2.1 Operating principle

The inside of a D6T MEMS Thermal Sensor is shown in Figure 2.2. As
can be seen from the image, a silicon lens on the top of the sensor collects
the far-infrared rays emitted by an object onto the thermopile sensor present
in the module. These infrared rays reach the thermopile sensor inside and an
electromotive force is then created. A specialized downstream processing
circuit adjacent to the sensor chip is present in order to achieve low-noise
temperature measurements.
An analog circuit inside the module converts the electromotive force gen-
erated by the thermopile into temperature information, obtaining both the
temperature of the object and the temperature inside the module. Fi-
nally, the measured value is converted to digital information and output
through an I2C bus.

Figure 2.2. Inside detail of D6T MEMS Thermal Sensor [35]

2.2.2 Field of view


On top of the thermopile sensor, the silicon lens present in the D6T
module is optically designed to extend the sensitivity view angle of the
sensor. In order to calculate the actual area where the sensor can detect the
presence of a human, the Field of View specification must be taken into
account. The user manual defines the Field of View of the sensor as the
angular area within which the sensitivity is at least 50% of its maximum.
As can be seen from Figure 2.3, the Field of View in the X direction is equal
to 44.2°, while in the Y direction it is equal to 45.7°.
From these values it is possible to compute, by applying simple geometric
equations, the total measurable area (FOV) of the sensor, which enlarges as
the distance to the measured object increases. For example, when the sensor
is placed at a height of 1 meter from the floor, it covers an area of 81 cm x
84 cm on the floor, while if it is placed at a height of 3 meters, the covered
area is 244 cm x 253 cm, which is much larger (as can be seen in Figure 2.4).
However, as the distance increases, the occupancy ratio of objects (people) in
the FOV reduces and the measured temperature values are more strongly
influenced by the background temperature than by the temperature of the
intended object (people).

Figure 2.3. Angle of view of D6T-44L-06 MEMS Thermal Sensor by Omron [35]

In conclusion, to correctly detect the presence of a person using this sensor,
the measured object must not be too small with respect to the total FOV
area, since the sensor is designed only for close-distance applications.

Figure 2.4. Field of view area positioning the D6T sensor at 3 m and at 1 m from the floor [37]


2.2.3 Transmission of data through I2C


The I2C communication protocol is used by the D6T-44L-06 MEMS
Thermal Sensor to communicate the sensed temperature data array.
This hardware protocol requires two serial communication lines: SDA (Serial
DAta) for data and SCL (Serial CLock) for the clock. Together with these two
wires, a reference connection and a power supply line are also present (Figure
2.5). Both SDA and SCL are bidirectional lines, connected to a positive
supply voltage via pull-up resistors.

Figure 2.5. Outer view and connections of the Omron D6T-44L-06 MEMS Thermal Sensor [36]

The protocol is based on a master-slave mechanism in which the master is
in charge of initiating the data transfer on the bus and generating the clock
signals that permit that transfer. Any other device connected to the bus is
considered a slave and is addressable by a unique address [38].
The D6T-44L-06 MEMS Thermal Sensor acts as an I2C slave. A schematic
of the data transfer from the sensor to the master is represented in Figure 2.6.

Figure 2.6. I2C data line flow and output data composition from the
datasheet of the D6T-44L-06 MEMS Thermal Sensor [36]

In order to read the data acquired by the sensor, it is then necessary to connect
a microcontroller to the bus, acting as a master, and to follow these steps (a
condensed code sketch is given after the list):
1. The master must send a start condition that corresponds to a HIGH to
LOW transition on the SDA line while SCL is HIGH.
2. The master generates the clock on the SCL line while sending the 7-bit
address of the slave on the SDA line. In this case, the manual of the D6T
states that the address of the sensor is "0001010" in binary. At the end
of this address word, an eighth bit indicates whether the master wants to
read (bit at "1") or write (bit at "0") into the slave register. In this case,
the master sets this bit to 0 and sends on the SDA line the command word
"4C" in hexadecimal. In this way, the slave understands that it will be
requested to send the acquired data.
3. A repeated start condition is then sent by the master, followed by the
address of the slave and, this time, a read request.
4. If the addressed slave has received the command, it takes control of the
data line on the next high pulse of SCL and forces the line low
(acknowledge condition). In this way, the master can be sure that
the slave is ready to send the data.
5. The slave sends the data to the master in groups of 8 bits. As can
be seen in the bottom part of Figure 2.6, the total packet of output
data is composed of 35 bytes and includes the reference temperature
inside the sensor module (PTAT), the array of 16 temperatures read by
the sensor (from P0 to P15), and a byte for the cyclic redundancy check
(PEC).
6. Finally, the master terminates the transaction by sending a STOP
condition, which corresponds to a LOW to HIGH transition on the SDA
line while SCL is HIGH.
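The read transaction described in these steps can be condensed into the following Arduino sketch, based on the Wire and WireExt calls used in this work (the complete transmitter code is reported in Appendix A.1; the wrapping into a readD6TFrame() function is only for illustration):

#include <Wire.h>
#include <WireExt.h>            // helper library used in this project for the 35-byte reception

#define D6T_addr 0x0A           // 7-bit slave address "0001010"
#define D6T_cmd  0x4C           // command word that requests the measurement packet

uint8_t rbuf[35];               // PTAT (2 bytes) + 16 pixel temperatures (32 bytes) + PEC (1 byte)

void readD6TFrame() {
  // Steps 1-2: start condition, slave address with the write bit, command word 0x4C
  Wire.beginTransmission(D6T_addr);
  Wire.write(D6T_cmd);
  Wire.endTransmission();

  // Steps 3-6: repeated start with a read request, 35 data bytes, stop condition
  if (WireExt.beginReception(D6T_addr) >= 0) {
    for (int i = 0; i < 35; i++) {
      rbuf[i] = WireExt.get_byte();
    }
    WireExt.endReception();
  }
}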

In Figure 2.7, two images of the SDA and SCL signals taken with the oscilloscope
during a data transmission of the D6T sensor are presented. In particular,
a start condition is shown on the left, while a stop condition is shown on the
right.

Figure 2.7. Start and Stop of a transmission from D6T sensor to the mas-
ter using I2C protocol. Red lines represent the SCL signal while blue lines
represent the SDA signal.


2.3 System implementation using microcontroller

Figure 2.8. Schematic of the overall infrared thermal system. Image realized based on [35]

The acquisition of data from the sensor has been done using the micro-
controller of an Arduino Uno board. Arduino Uno is a microcontroller board
based on the ATmega328P, a CMOS 8-bit microcontroller [39].
Figure 2.8 shows the schematic of the overall system. As can be seen, the
sensor communicates the data through a bus using the I2C protocol to the
Arduino Uno board which, in turn, sends the collected data via radio to
a second Arduino board. The received data are then transferred to a PC
via a USB cable and processed using the MATLAB software. In the next
subsections, all the details about the electrical connections and the design
choices adopted will be given.

2.3.1 Connection with the Thermal Infrared Sensor


Figure 2.9 shows the schematic of the electrical connections adopted to
connect the sensor to the microcontroller.

Figure 2.9. Electrical connection between D6T sensor and MCU [36]

Following the guidelines of the I2C protocol, pull-up resistors with a value
between 3 kΩ and 10 kΩ have been connected between the two serial
communication lines, SDA and SCL, and the power supply VCC line. The aim
of these resistors is to keep the SDA and SCL lines at a HIGH level when
the bus is free and to ensure that the signals are pulled up from a LOW to a
HIGH level within the required rise time. Several tests were carried out to
choose a value suitable for this application. In fact, by choosing a pull-up
resistance too close to the upper limit and observing the SCL signal, it is
possible to see that the rise and fall times of the generated square wave are
longer than those observed when a lower pull-up resistance is chosen. On the
other hand, a pull-up resistor that is too small would give shorter rise and
fall times but higher power consumption and a sharper square wave, causing
worse cross-talk effects in the nearby wires. For these reasons, after some
trials, an intermediate value of 6.8 kΩ ±1% has been chosen for the resistors.
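As a rough sanity check for this choice, the 10%-90% rise time of an RC-limited line can be estimated as t_r ≈ 2.2 · R_pullup · C_bus. Assuming, for illustration only, a total bus capacitance of about 50 pF (the actual value was not measured), the chosen 6.8 kΩ gives

t_r ≈ 2.2 · 6.8 kΩ · 50 pF ≈ 0.75 µs

which stays within the 1 µs rise-time budget allowed by standard-mode (100 kHz) I2C.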

2.3.2 Wireless communication using Xbee module


Due to the position in which the sensor should be placed, i.e. the ceiling
of a room, it has been decided to send the data collected from the sensor
via radio to another Arduino board, in order to save and process the data
on a computer. For this purpose, a couple of Xbee modules have been used,
connecting them to the two Arduino boards using Xbee shields, as shown
in Figure 2.10.

Figure 2.10. Arduino connection to Xbee module through a shield

The Xbee module used is the Digi XBee® Embedded ZigBee module
[40]. It provides wireless end-point connectivity between two or more
devices using the IEEE 802.15.4 networking protocol. The XBee
Configuration and Test Utility (XCTU) platform has been used to configure and
test the Digi RF devices [41]. With this tool it has been possible to configure
the two modules so that one acts as a coordinator and the other as an
end device.
In particular, two unique addresses have been assigned to the two modules
and the interface data rate has been set to 9600 bps. A communication channel
different from the one used for the Xbee communication of the capacitive
sensors has been assigned in order to avoid interference.

2.4 Programming the Microcontroller using Arduino
The Arduino Software (IDE) has been used to write the programs and upload
them to the Arduino Uno boards. In Appendix A the Arduino codes for the
acquisition of data and the complete communication are present.
In order to handle the D6T-44L-06 thermal sensor, the Wire and WireExt
libraries have been used. They define some classes and functions that help with the
I2C communication with the sensor, following the rules explained in Section
2.2.3. In particular, the start condition with the address of the sensor device
is sent, followed by the command that starts the communication. Then a
repeated start condition is sent to the sensor with a read command. The
data read from the sensor are then saved byte by byte in a buffer until the
end of the transmission.

2.4.1 Error detection using CRC and retransmission


At the end of the reception, a check on the integrity of the received data
is made using CRC-8 (Cyclic Redundancy Check). In fact the sensor,
together with the data, also sends a PEC (Packet Error Code) byte that is ap-
pended at the end of each transaction. The CRC principle consists in treating
binary sequences as binary polynomials, i.e. polynomials whose coefficients
correspond to the binary sequence. In this case, the PEC is calculated by the
sensor using the following generator polynomial G (n + 1 bits) of order 8:

G(x) = x^8 + x^2 + x + 1

which corresponds to the binary number 100000111. Assuming that this poly-
nomial is known by both the transmitter and the receiver, and calling M the
message to be transmitted (any sequence of bits), the CRC is given by the
remainder of the division M/G. This division is carried out with bit-wise XORs
and left shifts of the bits, as represented in Figure 2.11.

Figure 2.11. Graphical representation of CRC-8

The receiver of the message, in this case the Arduino board, is able to
compute the packet error code from the received data by applying a bit-wise
algorithm that mimics the hardware shift register method. The function
defined in the code that implements the CRC algorithm is reported in Figure 2.12
and is called "calc_crc".

Figure 2.12. Function implementing CRC-8 algorithm

As can be seen, it receives a data byte that is shifted left by one bit at
a time and then, if the MSB is 1, the division is performed by a
bit-wise XOR with the number "00000111" in binary (which corresponds to 7
in decimal).
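Since Figure 2.12 is reproduced as an image, a textual sketch of the routine, reconstructed from the description above, is reported here for reference:

unsigned char calc_crc(unsigned char data)
{
  // CRC-8 with generator polynomial x^8 + x^2 + x + 1: the XOR with 0x07
  // ("00000111") applies the low bits of the generator 100000111, since the
  // MSB is dropped by the left shift.
  for (int index = 0; index < 8; index++) {
    unsigned char temp = data;
    data <<= 1;               // shift left by one bit
    if (temp & 0x80) {        // if the MSB that was shifted out was 1...
      data ^= 0x07;           // ...perform the division step with a bit-wise XOR
    }
  }
  return data;
}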
The call of the calc_crc function inside the main code is shown in Figure 2.13.

Figure 2.13. Call of the calc_crc function implementing the CRC-8 algorithm inside the main code

As can be observed, the first 34 bytes received from the sensor are fed to
the function one after the other, each XOR-ed bit-wise with the previously
computed CRC output before being passed to the function.
The resulting byte is then compared with the PEC received at the end of the
transmission from the sensor; in case of inequality, a transmission error is
detected and the sensor is requested to send the packet again. In particular,
the design choice of repeating the reading a maximum of 3 times has been
made, in order not to slow down the system too much. A "valid byte" is set to
1 if the transmission has been successful. This check improves the reliability
of the bus.

2.4.2 Radio transmission of data through Xbee module


For sending data via Xbee, the XBee.h library has been used. On the
transmitter side, a packet of data is composed in the code using the
Tx16Request() function, as shown in Figure 2.14.

Figure 2.14. Initialization of the packet of data to be sent via radio using Xbee.

In particular, the packet is composed of the address of the coordinator
(that is, the receiver), the data to be sent, the size of the data and some
transmission options, such as the handling of the acknowledge. Then, using
the function xbee.send(), the packet is transmitted.
On the receiver side, the Arduino board has been programmed with the
code in Appendix A.2. When the data are received, an error check
is made. If no transmission errors are present, the data are saved,
unpacked and transmitted to the computer through the serial port.

2.4.3 Sensor data acquisition period


The Arduino code presented in the previous section repeats itself contin-
uously in a loop while the system is running. However, the D6T thermal
infrared sensor needs some time to acquire the samples, pack them and send
them to the board. It has been observed experimentally that at least 80 ms
are required by the sensor for one acquisition and transmission of data. This
interval of time is not constant and, considering that if an error occurs the
sensor must resend the data, it can be longer. It has also been observed that,
by letting the Arduino continuously ask for data without a pause, the
communication between the board and the sensor gets worse and a communication
timeout frequently occurs due to communication errors. This timeout condition
is signalled with a low level on the SDA or SCL terminal for one second and
should be avoided, since for that second the sensor cannot communicate its
readings to the board and the corresponding samples are lost. For this reason,
at the end of the code, a timer is set in order to pause the system for the
time necessary to complete the period, computed as the difference between the
period and the elapsed time. In this way, data are received at a fixed sampling
rate. Figure 2.15 shows a schematic of how the sampling period is organized.

Figure 2.15. Schematic of how the sampling period is organized.

After some observations of the maximum acquisition time, made directly
on the signals with the oscilloscope and through software timestamps,
the minimum sampling period has been set to 125 ms, obtaining a rate of 8
samples/s.
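A condensed sketch of this pacing scheme is shown below. Here acquireAndSendFrame() is only a placeholder for the acquisition, CRC check and radio transmission steps described above, and the period is set to the 125 ms discussed in this section (the complete loop is in Appendix A.1):

const unsigned long PERIOD_MS = 125;     // target period, i.e. 8 samples/s

void loop()
{
  unsigned long start = millis();        // timestamp at the beginning of the cycle

  acquireAndSendFrame();                 // I2C read, CRC check, Xbee transmission
                                         // (variable duration, at least about 80 ms)

  // Wait for the remainder of the period so that frames leave the node at a
  // fixed rate regardless of how long the acquisition actually took.
  while (millis() < start + PERIOD_MS) {
    // idle
  }
}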

2.4.4 Data acquisition with MATLAB


Data received by the computer through the USB serial port have been ac-
quired using the MATLAB software. In this environment it is easier, with
respect to the Arduino software, to process the data, saving them in a matrix
and then creating spreadsheets and appropriate graphs.
The code used for the acquisition of data in MATLAB is shown in Appendix B.1.
In particular, the code is set to run for a defined amount of time,
exploiting the "tic" and "toc" functions and a while loop. Data from the serial
port COM5 are saved into an array using the fscanf() function. The timestamp
of the arrival time of each packet of data is simultaneously saved into an array
with a resolution of one millisecond. This has been done in order
to have a time reference useful for later synchronizing the data arriving from the
infrared sensor with those from the capacitive sensors.
A spreadsheet file in Comma Separated Values (CSV) format containing
the temperature data, the timestamps and the valid bytes is then created.
In order to have a graphical representation of the acquired data, a heat-
map graph is then created from each packet of data using the code reported
in Appendix B.2. In this graph, the temperature of each of the 16 squares
into which the field of view of the sensor is divided is represented with a colour
whose tone gets warmer as the perceived temperature increases. Figure
2.16 shows two examples of the output acquired without and with the presence
of a person in the room. It can be seen that the square occupied by the
person (i.e. the square with coordinates (4, 2) in Figure 2.16 B) has a higher
temperature, shown by a warmer colour.

Figure 2.16. Output scheme of the infrared sensor acquisition system. In
figure A, the image from above of the empty room; in figure B, the presence
of a person is highlighted by a pixel with higher temperature w.r.t. the
background, shown with a warmer color.
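As a simple illustration of how such a frame can be screened for the presence of a person, the hottest of the 16 pixels can be compared against the average of the remaining pixels, taken as the background. The function below is only a sketch: the threshold value is an assumption, and the actual localization in this work is performed later by the neural network and the sensor fusion, not by this rule.

// Returns the index (0-15) of the pixel presumably occupied by a person,
// or -1 if no pixel stands out from the background by at least 'threshold' degrees.
int hottestPixelAboveBackground(const double temp[16], double threshold)
{
  int hottest = 0;
  for (int i = 1; i < 16; i++) {
    if (temp[i] > temp[hottest]) hottest = i;
  }

  double background = 0.0;               // average of the other 15 pixels
  for (int i = 0; i < 16; i++) {
    if (i != hottest) background += temp[i];
  }
  background /= 15.0;

  return (temp[hottest] - background >= threshold) ? hottest : -1;
}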

Chapter 3

Experiment setup
The complete indoor localization system includes the thermal infrared sensor
system described in Chapter 2 and the capacitive system designed in [30].
In addition to this system an ultrasound sensor network has been installed
in order to have an accurate reference for the position of the person in the
room.

Figure 3.1. Representation of the overall experiment setup

Figure 3.1 shows a drawing representing the overall experiment setup. The
room used for the experiment is a 3 m x 3 m room with the ceiling at a height
of 3.05 m. In this structure, the capacitive sensor plates are placed at the
centre of the walls at a height of 120 cm from the floor, the infrared sensor
is placed at the central point of the ceiling, and the ultrasound sensors are
placed at the four corners of the ceiling and communicate with a tag on the
head of the person.

3.1 Capacitive Sensors System


In this section, more details about the capacitive sensor indoor localization
system used for the experiment will be given. Moreover, it will be explained
how the capacitive sensor nodes have been tested and which attempts were made
to improve the sensitivity of the sensors before performing the overall experiment.

3.1.1 Capacitive sensor module


The capacitive sensor module is based on a 16 cm x 16 cm metal plate
installed at a height of 120 cm from the ground, considering this as an average
height of a person’s bust. Defining as d the distance between the plate and the
human body, plate capacitance cannot be determined analytically for d much
longer than the plate diagonal. The measure of capacitance is then indirectly
obtained using a relaxation oscillator based on a 555 timer integrated circuit
(IC). The timer is configured in astable multivibrator configuration and the
schematic of the circuit can be seen in Figure 3.2.

Figure 3.2. 555-based capacitance-frequency converter [24].

The oscillation frequency f of the timer can be expressed by the following
formula, where R1 and R2 are the values of the resistors in the multivibrator
circuit and C is the capacitance of the plate:

f = 1 / (0.7 (R1 + 2 R2) C)    (3.1)

As can be seen, f is inversely proportional to the sensor plate capacitance C
through a constant determined by the resistor values. From this value of fre-
quency, the distance d of the person from the plate can then be empirically
estimated.
The frequency information is then collected by a micro-controller that
sends it to the base station using a Zigbee radio module. The sensor is
battery-powered and has no galvanic connection to the ground.
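Inverting Eq. 3.1 gives a direct way to map each frequency reading back to the plate capacitance. The snippet below is only a sketch of this conversion: the resistor values in the example are placeholders, since the actual R1 and R2 of the sensor nodes are not reported here.

// Plate capacitance implied by Eq. 3.1: C = 1 / (0.7 * (R1 + 2*R2) * f).
double plateCapacitance(double f_hz, double r1_ohm, double r2_ohm)
{
  return 1.0 / (0.7 * (r1_ohm + 2.0 * r2_ohm) * f_hz);
}

// Example with placeholder values: a reading of f = 500 kHz with
// R1 = R2 = 100 kOhm corresponds to about 9.5 pF of plate capacitance.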

3.1.2 Testing the module and implemented optimization

Each of the four capacitive sensor nodes has been tested separately in order
to check if there was room for improvement. In particular, a sensitivity test
has been made by moving in front of each sensor in a straight line, starting
from a distance of 1.8 meters and approaching each time by a small step of
30 centimeters. For each step, 50 samples have been taken at a sampling
rate of 5 samples per second.
The sensitivity test has been repeated twice, using two different kinds of 555
timer based on different technologies:
• LMC 555CN by Texas Instruments, based on CMOS logic [43].
• NE 555N by ST-Microelectronics, based on Transistor-Transistor Logic (TTL) [44].
In the graphs in Figure 3.3 it is possible to observe the sensitivity test
acquisitions made with the two different 555 ICs at the following distances from
the sensors: 180 cm, 150 cm, 120 cm, 90 cm, 60 cm and back to 180 cm. The
data have been normalized in order to obtain values that can be compared:
the average of the points at the "long range" distance of 1.8 m has been computed
and then all points have been divided by this average.
The drift has also been observed, by acquiring 6000 samples in a 20-minute
reading (5 samples per second) while the room was empty. The normalized
plots can be observed in Figure 3.4.
As can be seen, the TTL-based timer (NE 555) shows lower noise but also
lower sensitivity than the CMOS-based one (LMC 555). Since noise
can be removed by filtering the data, the CMOS-based timer has been chosen
after this experiment.
From this sensitivity test it has been observed that not all the sensors have
the same sensitivity. In particular, sensor 4 seems less susceptible to changes
in the capacitance of the plate and is able to detect and locate a person only
up to a shorter maximum distance compared with the other nodes.
This can be due to the position of the sensors in the room or to the hardware of
the sensor node.
In order to check whether the position of the sensor at the moment of the
acquisition could influence the reading, all the measurements have been re-
peated by placing all the sensors in the same position. No significant changes
have been observed with this change of position. For this reason, the hardware
reasons why the system was not working as well as possible have been an-
alyzed: all the contacts have been checked with a multimeter,
most of the contacts of the circuit board have been re-soldered, and the position
of the battery with respect to the plate has been changed, since it could introduce
some noise. The plate has also been cleaned and the LMC 555 IC has been
substituted with a brand new one of the same type.
All these operations have been repeated for all sensors in the same way,
but significant improvements have been observed only for two of them, in particular
sensors 1 and 3, whose before-and-after graphs are shown in Figure 3.5.
As it is possible to observe, an increase of the sensitivity for sensor 1 and
a reduction of high-frequency noise have been obtained. For the other two
sensors, no significant changes have been observed.


Figure 3.3. Sensitivity test results made for all the four capacitive sensor
nodes and repeated by changing the timer IC. In the upper part (A) data
acquired using LMC 555CN by Texas Instruments, in the bottom part (B)
data acquired using NE 555N by ST-Microelectronics


Figure 3.4. Drift acquisition for all four capacitive sensor nodes for
several 555 ICs. In the upper part (A) data acquired using LMC 555CN
by Texas Instruments, in the bottom part (B) data acquired using NE
555N by ST-Microelectronics


Figure 3.5. Plots of the sensitivity test for sensor nodes 1 and 3 before and
after the hardware debugging operations.


3.1.3 Filtering the noise


The obtained signal from this circuit is usually strongly affected by different
kinds of ambient noise that could for example come from nearby appliances,
electrostatic discharge, temperature and humidity changes. This noise falls
in two main categories:

• high-frequency noise from, for example, appliances and light switches;


• a low-frequency drift, that is a DC component that varies much more slowly
than the useful signal component. This drift can be attributed
to temperature and humidity changes or to a slow leak of static charges.

To remove this noise, some filtering techniques are used. The output of
the sensor node is sent to both a Median Filter (MF) and a Low-Pass
Filter (LPF): with the former the slow drift is extracted, while with the
latter the high-frequency noise is removed. Then, by simply subtracting the
median filter output from the LPF output, a clean signal without noise
is obtained (a sketch of this scheme is given below). This signal is the input
for the neural network.
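A minimal sketch of this scheme is shown below, assuming an exponential smoothing stage as the low-pass filter and a long sliding-window median as the drift estimator; the window length and smoothing factor are placeholders, since the actual filter parameters are not reported in this section.

#include <algorithm>
#include <deque>
#include <vector>

std::vector<double> removeDriftAndNoise(const std::vector<double> &x,
                                        std::size_t medianWindow = 201,
                                        double alpha = 0.1)
{
  std::vector<double> clean(x.size());
  std::deque<double> window;
  double lpf = x.empty() ? 0.0 : x[0];          // low-pass filter state

  for (std::size_t i = 0; i < x.size(); i++) {
    lpf += alpha * (x[i] - lpf);                // LPF: removes high-frequency noise

    window.push_back(x[i]);                     // MF: long median tracks the slow drift
    if (window.size() > medianWindow) window.pop_front();
    std::vector<double> tmp(window.begin(), window.end());
    std::nth_element(tmp.begin(), tmp.begin() + tmp.size() / 2, tmp.end());
    double drift = tmp[tmp.size() / 2];

    clean[i] = lpf - drift;                     // clean signal = LPF output - MF output
  }
  return clean;
}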

3.2 Ultrasound Sensors System


To provide real-time reference data on the exact position of the person inside
the room, and therefore to be able to test the localization system based on
capacitive and infrared sensors, an ultrasound sensor system has been used.
As already discussed in Section 1.3.5, this kind of sensor guarantees high
precision, reliability and low infrastructure costs, but makes use of tags for
localization. The commercial sensor system used for this project is the one
produced by Marvelmind [42], depicted in Figure 3.6. It is an off-the-
shelf indoor navigation system, designed to provide location data with an
accuracy of ±2 cm. The system is composed of a network of four stationary
ultrasonic beacons and a mobile beacon, called HedgHog, installed on the ob-
jects to be tracked. The beacons are interconnected via a radio interface, and a
modem providing a gateway to the system from a PC is also present. The mo-
dem is the central controller of the system and is in charge of setting up the
system, monitoring it and interacting with the user through the Marvelmind
Dashboard software.

Figure 3.6. Ultrasound Localization System by Marvelmind. It is composed
of a modem and five sensors, among which four are used as stationary
reference beacons and one is used as the mobile HedgHog beacon [42]

The principle of operation is simple: the Time-of-Flight, i.e. the propa-
gation delay, of an ultrasonic pulse between the stationary and mobile beacons
is calculated and then the position of the mobile beacon is computed using a
trilateration algorithm (the principle is sketched below). Of course, a direct
line of sight between the beacons and the HedgHog is needed.
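The internal algorithm of the Marvelmind system is not exposed to the user, but the underlying principle can be illustrated with a minimal two-dimensional sketch: linearizing the circle equations (x - xi)^2 + (y - yi)^2 = di^2 of three beacons against the first one yields a small linear system in the unknown position (x, y). With four beacons, as in this setup, the extra measurement can be used in a least-squares sense, and the vertical coordinate can be handled in the same way.

struct Point2D { double x, y; };

// Illustrative 2D trilateration: b[0..2] are beacon positions, d[0..2] the
// measured beacon-to-tag distances. The beacons must not be collinear.
Point2D trilaterate(const Point2D b[3], const double d[3])
{
  // Coefficients of the 2x2 system obtained by subtracting the equation of
  // beacon 0 from the equations of beacons 1 and 2.
  double a11 = 2.0 * (b[1].x - b[0].x), a12 = 2.0 * (b[1].y - b[0].y);
  double a21 = 2.0 * (b[2].x - b[0].x), a22 = 2.0 * (b[2].y - b[0].y);
  double r1 = d[0] * d[0] - d[1] * d[1]
            + b[1].x * b[1].x - b[0].x * b[0].x
            + b[1].y * b[1].y - b[0].y * b[0].y;
  double r2 = d[0] * d[0] - d[2] * d[2]
            + b[2].x * b[2].x - b[0].x * b[0].x
            + b[2].y * b[2].y - b[0].y * b[0].y;

  double det = a11 * a22 - a12 * a21;           // solve by Cramer's rule
  return { (r1 * a22 - r2 * a12) / det, (a11 * r2 - a21 * r1) / det };
}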
The four beacons have been placed on the ceiling at a height of 3.05 meters
at the corners of the room, while the HedgHog beacon has been mounted
on a helmet for stability reasons and worn by the person to locate. Using a
MATLAB script, the data from the mobile beacon provided by the Dashboard
have been analyzed.

3.3 Infrared Sensor System


In order to verify the correct behaviour of the D6T infrared thermal sen-
sor, a 9-hour temperature measurement has been made in an empty room. To
be sure that no one could influence the reading, the sensor was left to take
measurements during an entire night, collecting 1 sample every 3 seconds.
Together with the infrared sensor, the room temperature has been acquired
using a temperature and humidity sensor located inside the same room. The
sensor used was the DHT11 Humidity Temperature Sensor, which exploits an
NTC temperature measurement component [45]. The basic characteristics of
the DHT11 are reported in Table 3.1. As can be seen, for the temperature
reading it has an accuracy of ±2°C and a resolution of 1°C.

Table 3.1. Main characteristics of the DHT11 Humidity Temperature Sensor

Item:                   DHT11
Measurement range:      20-90% RH, 0-50°C
Humidity accuracy:      5% RH
Temperature accuracy:   ±2°C
Resolution:             1°C
Package:                4 pin single row

These values are worse than those of the infrared thermal sensor, which has
a temperature resolution (NETD) of 0.14°C and an object temperature output
accuracy of ±1.5°C. This means that, when comparing the data read from the
two different sensors, this difference in accuracy and resolution must be taken
into account. The difference in temperature between the value read from the
DHT11 temperature sensor and the average of the readings from the D6T
thermal sensor has been computed, and the plot of this value over the sample
number is reported in Figure 3.7.

Figure 3.7. D6T thermal sensor stability characterization. The temperature
difference between the values read from the DHT11 temperature sensor and the
average of the readings from the D6T thermal sensor is here plotted. The
difference remains in the interval of 1°C, the resolution of the DHT11.

As can be seen, there is at least 1°C of difference between the two sensors,
probably due to the different positions of the sensors in the room or to their
different accuracy. Moreover, the values remain within an interval of 1°C,
which is the resolution of the DHT11 sensor. While the ambient temperature
was decreasing over the night, the value measured by the D6T decreased
too, and the difference with the value measured by the DHT11 sensor
increased, as shown by the drift present in the plot. Then, at the end of the
drift, when the DHT11 also changed its value, the difference returned to
1°C.

3.3.1 Infrared sensor: area of sensing evaluation


The room of the experiment is 3.05 meters high and the infrared sensor has
been placed at the ceiling level. With this information, the infrared sensor field
of view at different heights can be easily computed using some geometrical
formulas. The field of view of the infrared sensor can be seen, for each direc-
tion, as an isosceles triangle where the height is the distance from the sensor
and the angle opposite to the base is the angle of view given by the D6T
datasheet. Referring to Figure 3.8, the following formulas can be exploited,
knowing α and h, in order to obtain the base a.

Figure 3.8. Infrared sensor field of view evaluation

β = (180° − α) / 2    (3.2)

h = (a / 2) · tan(β)    (3.3)

The variables that will be used from now on to compute the field of view
of the sensor are shown in Figure 3.9.

Figure 3.9. Values used for the field of view computation, referred to the
room. In the figure, h1 is the distance between the sensor and the floor,
h2 is the distance between the sensor and a person that is h3 = 1.65 m
tall. The angles of view αx and αy and the corresponding projections on the
floor ay and ax are also highlighted.

Along the x direction, with αx = 44.2° and h1 = 3.05 m, the maximum covered
area is:

ax = 2 · h1 / tan(βx) = 2.48 m    (3.4)

Along the y direction, with αy = 45.7° and h1 = 3.05 m, the maximum covered
area is:

ay = 2 · h1 / tan(βy) = 2.57 m    (3.5)
In this way the sensor covers a surface on the floor that is slightly smaller
than the total area of the room. However, considering that the points of
highest sensitivity for the capacitive sensors are the points near the walls
where the capacitive nodes are placed, a larger sensing area for the infrared
sensor is not necessary.
Moreover, it must be considered that the part that is best seen by the
infrared sensor is the head, both because of proximity and because it is one of
the warmest parts of the body, usually not covered by clothes. For this reason,
it is also interesting to consider the field of view not at the floor level
but at the head level. Considering that the average height of a human is 1.65
m, the same computation has been done with h2 = 3.05 m − 1.65 m = 1.4 m.
The obtained results are:
ax = 2 · h2 / tan(βx) = 1.13 m    (3.6)

ay = 2 · h2 / tan(βy) = 1.18 m    (3.7)
This area is about half of the sensing area at the floor level, but it
is characterized by a higher resolution, since the occupancy ratio of a person's
head in the FOV is higher.
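A small helper that reproduces the computation of Eqs. 3.2-3.5 and 3.7 is sketched below; the numbers in the comments are the values obtained above.

// Side of the footprint covered by the sensor (Eqs. 3.2-3.3):
// a = 2 * h / tan(beta), with beta = (180 - alpha) / 2 in degrees.
double coveredSide(double alpha_deg, double h_m)
{
  const double PI = 3.14159265358979;
  double beta_rad = (180.0 - alpha_deg) / 2.0 * PI / 180.0;
  return 2.0 * h_m / tan(beta_rad);
}

// coveredSide(44.2, 3.05) ~= 2.48 m   (a_x at floor level, Eq. 3.4)
// coveredSide(45.7, 3.05) ~= 2.57 m   (a_y at floor level, Eq. 3.5)
// coveredSide(45.7, 1.40) ~= 1.18 m   (a_y at head level,  Eq. 3.7)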

Chapter 4

Data acquisition and Sensor Fusion Results
After the instrumentation had been tested, the acquisition of data was carried out.
Considering the factors that could positively and negatively influence
the reading of the data by the capacitive and infrared sensors, different ex-
periments have been carried out under different conditions. In particular,
the experimental data were collected in four experiments by two different
people, hereafter called User One and User Two. User One is a male 182 cm
tall, wearing a cotton shirt and denim jeans, so that the exposed parts of the
body were the hands, neck and head. User Two is a female 163 cm tall, wearing
a cotton T-shirt and cotton trousers, so that the exposed parts of the body were
the arms, hands, neck and head. This information is provided here because the
taller a person is, the closer the infrared sensor is to their head; therefore, the
smaller the field of view area at the height of their head and the greater the
precision of the measurements from the infrared sensor. Furthermore, the parts
not covered by clothing are those which emit the most infrared radiation. Cotton
clothes have been chosen in order to avoid the readings being influenced by
textile materials which may accumulate electric charge.
Each experiment lasted half an hour. During these experiments, the person
was slowly and continuously walking inside the room while all three local-
ization systems were active. The first two experiments (one for each User)
were carried out in the evening after sunset, while the other two were carried
out during the afternoon, with the sun's rays penetrating the room through the
blinds. It was therefore possible to test the system under different temperature
conditions, with and without the interference of sunlight in the room. In the
third experiment, an element of disturbance to the infrared system has been
inserted in order to better test the sensor fusion technique. Since it has been
observed that the capacitive sensors have a higher drift immediately after they
are connected to the power supply and reach a more stable state after some
time, all experiments were carried out after leaving the capacitive sensors on
for at least half an hour. All the collected data will be commented on in the
next sections.

4.1 Experiments one and two - Evening


The first two experiments have been made during the evening by both Users
and each of them lasted 30 minutes. In the first experiment User One was
walking inside the room following some specific and repeated patterns. A
graph of the data from the ultrasound system can be seen in Figure 4.1.

Figure 4.1. Plot of the raw data acquired in Experiment 1 from the
ultrasound sensors system

As can be seen, some spikes are present in the readings, so it has been neces-
sary to filter them out. In particular, the Hampel filter function of MATLAB
has been used to detect and remove outliers from both the x-axis and y-axis
data. This filter computes the median of a window of k neighbouring samples for
each sample of the dataset. It also estimates the standard deviation of each
sample about its window median, using the median absolute deviation. If a
sample differs from the median by more than n_sigma standard deviations,
it is replaced with the median. The filter has been applied two times: the
first time with a large window (k = 25 and n_sigma = 2) in order to remove
the very high spikes, the second time with a smaller window (k = 10 and
n_sigma = 1) in order to smooth the remaining data. These values have been set
after some trials, looking for the combination of values that could improve the
data without removing useful information. The obtained result is shown in Figure
4.2.
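For reference, a sketch of the filtering rule described above is shown below. The actual processing used MATLAB's hampel function; the 1.4826 factor is the standard scaling that turns the median absolute deviation into an estimate of the standard deviation.

#include <algorithm>
#include <cmath>
#include <vector>

// Hampel-style outlier removal: each sample is compared with the median of a
// window of k neighbours on each side; if it deviates by more than n_sigma
// estimated standard deviations, it is replaced with that median.
std::vector<double> hampelFilter(const std::vector<double> &x, int k, double n_sigma)
{
  const int n = static_cast<int>(x.size());
  std::vector<double> y = x;
  for (int i = 0; i < n; i++) {
    int lo = std::max(0, i - k), hi = std::min(n - 1, i + k);

    std::vector<double> win(x.begin() + lo, x.begin() + hi + 1);
    std::nth_element(win.begin(), win.begin() + win.size() / 2, win.end());
    double med = win[win.size() / 2];

    for (double &w : win) w = std::fabs(w - med);           // absolute deviations
    std::nth_element(win.begin(), win.begin() + win.size() / 2, win.end());
    double sigma = 1.4826 * win[win.size() / 2];            // MAD -> std estimate

    if (std::fabs(x[i] - med) > n_sigma * sigma) y[i] = med;
  }
  return y;
}

In this work the filter was applied twice in cascade, first with k = 25 and n_sigma = 2 and then with k = 10 and n_sigma = 1, as described above.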

Figure 4.2. Plot of the data acquired in Experiment 1 from the ultrasound
sensors system after filtering them with Hampel filter

As can be seen from the image, this data-set does not cover all the spots
of the room, so in the subsequent experiments a more random walking pattern
has been used, trying to cover the whole surface of the room.
The normalized data from the capacitive sensors are shown in Figure 4.3.
For sensor one, some steps that are not related to the movement are present.
The same can be observed at the beginning of the acquisition for sensor four.
This can be due to the conditions of the room, to some electric devices that were
near sensor one during the execution of the experiment, or to some faults in the
sensors.
The second experiment has been carried out on the same evening, under the
same conditions, but by User Two, in order to test whether the interference seen
would be the same with a different user. With respect to the previous
experiment, efforts have been made to cover every spot of the floor, walking in
directions parallel to the x and y axes, diagonally and also in a random way.
The filtered data from the ultrasound system can be observed in Figure
4.4.

Figure 4.3. Plot of the data acquired in Experiment 1 from the capacitive sensors system

Figure 4.4. Plot of the data acquired in Experiment 2 from the ultrasound
sensors system after filtering them with Hampel filter

From the image, it is not possible to see the full path followed during the
experiment, but it is indicative of the coverage of the room. The data acquired
from the capacitive sensors are presented in Figure 4.5.

Figure 4.5. Plot of the data acquired in Experiment 2 from the capacitive sensors system

The data from sensor one present the same shifts as in Experiment one,
so they do not depend on the user or on the clothes worn. Notice
that nothing has been changed in the room or around the room when the
measurements were repeated for the other two experiments the day
after, but this behaviour did not appear during the other experiments, so
it could be due to external, non-controllable sources of noise. A drift at the
beginning of the acquisition by sensor 3 can also be observed, but it can be
removed with the technique illustrated in Section 3.1.3.
The data from the infrared sensor for both experiments are not af-
fected by noise, and the difference in temperature between the human and
the background reported by the infrared sensor is around 4°C, as can be
seen in Figure 4.6, where some of the data from the first experiment are
compared with the ground truth in the room.
The difference in temperature is high because the experiment has been
done during the evening, so the ambient temperature is much lower than the
user's temperature. However, when the human moves across pixel boundaries,
the sensor reports a lower body temperature, since the body is divided between
multiple pixels. In this way, the difference in temperature between the body and
the background gets lower, but it is still possible to recognize the position of
the person.


Figure 4.6. Images from the infrared sensor during Experiment 1 and com-
parison with a schematic showing the ground truth. From the top to the
bottom the images are in chronological order


4.2 Experiments three and four - Afternoon


The same experiment has been repeated the day after. What changed was
the time at which the experiment was made: the afternoon of a sunny day,
when the sunlight could introduce some noise into both the infrared
and capacitive sensors. In fact, the higher temperature of the room reduced
the difference in temperature between the user and the background, and this
could be a problem for the infrared sensor. Moreover, the sensor components
are affected by thermal drift.
In Experiment three, an additional source of error for the infrared sensor
has been added: a bottle of hot water has been left on the floor in a spot of
the room while User One was walking around. Of course, the presence of the
bottle has not been sensed by the ultrasound and capacitive sensors, but it has
been reported by the infrared sensor as a hotter point w.r.t. the background.
This has been done to test whether the complete system would understand that
the hot object on the floor was not a person, or whether it would give a wrong
result, as the infrared sensor system alone would do. Figure 4.7 shows the plot
of the data acquired in Experiment three from the ultrasound sensors system
after applying the Hampel filter. As can be seen, most of the room has been
covered during the 30 minutes of walking.

Figure 4.7. Plot of the data acquired in Experiment three from the Ultra-
sound sensors system after filtering them with Hampel filter

Figure 4.8 shows the data from the capacitive sensors system. In the plot
relative to sensor four, it is possible to observe that at a certain point a drift
gives the reading a decreasing trend. This can be explained by the fact that
at that point of the experiment the sunlight entered through the blinds, hitting
the plate of sensor node four directly. Except for this deviation, the data set
appears to be less noisy than the day before.

Figure 4.8. Plot of the data acquired in Experiment 3 from the capacitive sensors system

Some images taken from the readings of the infrared sensor are shown in
Figure 4.9, where a schematic of the actual situation of the room is also present.
As can be seen, the reading is influenced by sunlight. In fact, the pixels in the
upper right part of each data matrix are at a higher temperature w.r.t. the ones
at the bottom left, because the sun was hitting that part of the room and not
the other. Moreover, it is possible to spot the presence of the hot water bottle,
which is detected as having a slightly higher temperature than the background.
Actually, the real temperature of the water was significantly higher than the
room temperature but, since the distance from the sensor is around three
meters and the bottle was small with respect to the total field of view, the
temperature perceived by the infrared sensor was averaged with the background.
The temperature detected by the sensor for the human is about 2°C higher than
that of the hot water but, when the human passes through a region of the room
that spans two or more pixels, the detected temperature for the human is
comparable with that of the hot water.

Figure 4.9. Plot of the data acquired in Experiment 3 from the infrared
sensor and comparison with the ground truth. From the top to the bottom, the
images are in chronological order

The last experiment has been made without the hot object on the floor;
only User Two was in the room, walking both following some paths and moving
in a random way. The experiment has been done shortly after the third one,
but the curtains have been lowered so as to prevent the light from directly
hitting the sensors, having already ascertained in Experiment three the effect
it has on the various sensors. The sunlight was still heating the room, making
the difference in temperature between the person and the background less
evident compared to the experiment carried out the evening before.
An image from the infrared sensor acquisition data set is shown in Figure
4.10. As can be seen, the difference in the detected temperature between
the human body and the background is around 2°C.

Figure 4.10. Two samples acquired in Experiment 4 from the infrared sensors
system. In the upper image the room was empty, in the bottom one there was
a person in the position indicated by the scheme

The filtered output data from ultrasound sensors are plotted in Figure
4.11, where it is possible to notice that the room has been fully covered
during the experiment using random and non-random patterns.
The normalized data from the capacitive sensors are shown in Figure 4.12. As can be
seen, this data set is less affected by drift noise w.r.t. the previous acquisition.


Figure 4.11. Plot of the data acquired in Experiment four from the ultra-
sound sensors system after filtering them with Hampel filter

Figure 4.12. Plot of the data acquired in Experiment 4 from the capacitive sensors system


4.3 Experimental data merging


Data from the three sensor systems have been acquired simultaneously, with
approximately the same start and stop times, for exactly the same amount of
time. However, they were acquired by three separate systems, each with its
own sampling rate, which can be lower or higher than the expected one
depending on whether problems were encountered during the communica-
tion phase. Therefore, in order to have a data-set with the same sampling
rate and referred to the same timestamps, the following approach has been
used:

• The data were sampled by the various sensors with the highest sampling
rate that could be obtained without errors from each type: 8 Hz for the
infrared sensor, 5 Hz for the capacitive sensors and 3.5 Hz for the ul-
trasound sensor. During the acquisition phase, the timestamp related to
the acquired data has been saved. Although the systems communicated
with different computers, since all the computers on which the MAT-
LAB scripts ran were connected to the same internet network, the time
reference, obtained from the network, is the same for each system.

• After the acquisition, data from all sensors have been checked to verify
that the start and stop moments were common to all three. If not, some
samples have been discarded in order to have the same interval for all
the sensors.

• A vector of timestamps evenly spaced over the specified interval has been
obtained using the linspace() function of MATLAB. Then, using linear
interpolation, the data from the three systems have been resampled at
the exact times present in the obtained vector of timestamps. The final
sampling frequency has been set in this way to 5 Hz. The MATLAB
function interp1() has been used to perform the interpolation, generating
the missing samples for each timestamp (a sketch of this resampling step
is given after this list).
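A sketch of the resampling step is given below: it mimics, under the assumption of sorted timestamps and query times lying inside the acquired interval, what interp1() does for each sensor stream.

#include <vector>

// Linearly interpolate the samples v, taken at timestamps t, onto the uniform
// query timestamps tq (the equivalent of MATLAB's interp1 on a linspace grid).
std::vector<double> resample(const std::vector<double> &t,
                             const std::vector<double> &v,
                             const std::vector<double> &tq)
{
  std::vector<double> out(tq.size());
  std::size_t j = 0;
  for (std::size_t i = 0; i < tq.size(); i++) {
    // Advance to the segment [t[j], t[j+1]] that contains tq[i].
    while (j + 2 < t.size() && t[j + 1] < tq[i]) j++;
    double w = (tq[i] - t[j]) / (t[j + 1] - t[j]);   // interpolation weight
    out[i] = v[j] + w * (v[j + 1] - v[j]);           // generated sample at tq[i]
  }
  return out;
}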

The data have been saved in a CSV file reporting, for each row, the data
from all the sensors and a unique timestamp. It consists of a 23-column ma-
trix (with a number of rows depending on the number of samples obtained
during the experiment). Four columns correspond to the four capacitive
sensors, sixteen columns contain the data obtained from the IR sensor, and the
last two columns contain the X and Y axis information obtained from the
ultrasound sensor. This file will be the input of a neural network that will
analyze the data and obtain the best model to identify the final position of
a person in the room.

Chapter 5

Conclusion and future work
A sensor fusion technique between an infrared thermal sensor and a capacitive
sensors system was applied to an indoor location system to be used in smart
homes and assisted living environments.
Analyzing the two systems separately, it can be seen how their strengths
and weaknesses complement each other in the sensor fusion approach.
The capacitive sensor method applied in the localization field can bring
many advantages, since it is tag-less, unobtrusive and privacy-aware. The
installation is easy, since the plates can be attached to the wall, hidden
behind some covers and thus made invisible and unobtrusive for the
users. Moreover, it is cheap, both because of the material it is made of and
because of the low power consumption during use. The main problem
with this kind of sensor is that its sensitivity changes with the distance. This
leads to a low-sensitivity area in the middle of the room if only capacitive
sensors are used. Moreover, they are affected by different sources of noise that
cannot be easily controlled; above all, drift is the noise that affects this kind
of sensor the most.
The infrared thermal sensor suits the purpose of indoor human localization
well, because it acquires an image of the situation in the room without
violating the privacy of the user. It is tag-less and unobtrusive and does not
require a big installation effort. However, it has uneven sensitivity: when a hu-
man passes through an area covered by two different pixels, the sensed temper-
ature value is lower than the temperature sensed when the human occupies
the area covered by a single pixel. Moreover, considering 3 meters as the stan-
dard height of a room's ceiling, the floor area covered by an infrared sensor
with the same sensitivity as the one used in this work is around 2.5 m x 2.5
m, which means that the border of a standard 3 m x 3 m room is not covered.
Merging the data of the two sensing systems can create a much more accurate,
sensitive and robust system:
• Capacitive drift can be corrected by the use of a sensor like the infrared
one that is not affected by the same problem;
• The overall sensitivity range can be extended for both systems: the
lack of sensitivity of the infrared sensor at the borders of the room is
balanced by the high sensitivity of the capacitive sensors in the proximity
of the walls. In the same way, the lack of sensitivity of the capacitive
sensors in the middle of the room is compensated by the field
of view of the infrared one. In the parts where both systems have
good sensitivity, the non-homogeneous sensitivity of the infrared sensor
when the person crosses pixel boundaries can be corrected by the capacitive
sensors.
• Infrared sensor parallax errors can be corrected by the capacitive sensor.
In conclusion, the sensor fusion approach can lead to an improvement
of the system. In fact, robustness and reliability are increased by adding
redundancy, spatial and temporal coverage are extended, resolution is
improved and uncertainty is reduced.
To check the improvement obtained, it will be necessary to analyze the
data through neural networks. The localization performance should be compared
using first the data-set composed of data from the capacitive sensors only, then
the data from the infrared sensor only, and finally the merge of the two systems.
In this way, the actual improvement that the sensor fusion can give to both
systems will be clear in terms of accuracy and error.
It will also be possible to see whether the errors and drifts from one of the two
sides are actually recognized in the overall system. For example, using the
data-set obtained from Experiment three, it will be possible to see whether the
hot water bottle placed in the room during the experiment and sensed by
the infrared sensor will be recognized as the presence of a person, or whether
the contribution of the capacitive sensors will avoid this error.
Further improvements can be implemented in future work:
• The capacitive sensor system could be optimized both in its hardware com-
ponents and in its processing techniques in order to reduce drift and noise
levels. This could lead to more accurate measurements and could also
speed up the system, making the method applicable to more realistic and
dynamic contexts. Some improvements could also be made to reduce the
power consumption for long-term service using either battery or wireless
power.
• Regarding the infrared sensor system, the Arduino board used for col-
lecting and sending data from the sensor to the computer can be replaced
with a customized circuit containing a microcontroller IC and an Xbee
module. This would reduce the size of the system and improve its per-
formance.
• The size of the experimental room could be increased beyond 3 m x 3 m:
the number of capacitive plates and infrared sensors could be increased
in order to cover a bigger space, and other sensor positioning strategies
could be tested.

During this work, considerable effort has been devoted to making the system
work as well as possible and to approaching the problems encountered in a
scientific way. The expectation is that this system will be further enhanced
until it is ready and available to improve the lives of its users, especially
those who need care and assistance.

Appendix A

Microcontroller code
A.1 Transmitter
The code used for the transmission of the data from the infrared sensor is
reported below.

#include <Wire.h>
#include <WireExt.h>
#define D6T_addr 0x0A // Address of OMRON D6T is 0x0A in hex
#define D6T_cmd 0x4C // Standard command is 4C in hex

/*--------------------------------*/

#include <XBee.h>
XBee xbee = XBee();
uint8_t payload[33] ; // bytes to be transmitted via radio
uint8_t frameId = NO_RESPONSE_FRAME_ID; // XBee variable initialization
uint8_t option = DISABLE_ACK_OPTION;
Tx16Request tx = Tx16Request(0xDCBA, option, payload, sizeof(payload), frameId);
uint16_t frq;

/*--------------------------------*/

const long period = 333;


unsigned long start = millis();
int counter;
uint8_t rbuf[35]; // 35 bytes coming from the sensor.
uint8_t valid_byte;
int tdata[16]; // The data coming from the sensor is 16 elements, in a 16x1 array
int tPTAT; //mean value of temperature
int tPEC; // crc value from the sensor
uint16_t value[16];
unsigned char crc;
double temp;

int i;
int MAXREADS=3;

//------------------start--------------------------

void setup()
{
Wire.begin();
Serial.begin(9600);
xbee.setSerial(Serial);
pinMode(13, OUTPUT);
}

void loop()
{
start = millis();// save the information about the starting time of transmission

//digitalWrite(13, 0);

valid_byte=0;

for (i = 0; i < 35; i++)
{
rbuf[i] = 0;
}

//-------------------------communication and reading of data from d6t sensor----------------
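// D6T read-out frame: 2 bytes of PTAT reference temperature, 16 x 2 bytes of
// pixel temperatures and 1 byte of packet error check (PEC), 35 bytes in total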


Wire.beginTransmission(D6T_addr);
Wire.write(D6T_cmd);
Wire.endTransmission();

if (WireExt.beginReception(D6T_addr) >= 0)
{
i = 0;
for (i = 0; i < 35; i++)
{
rbuf[i] = WireExt.get_byte(); //read the data from the sensor
}
WireExt.endReception();
}

//----------------------------------------------------------------------------------------

// -----------------------crc calculation-----------------------

crc= calc_crc(0x15);
for (i = 0; i < 34; i++)
{
crc=calc_crc(rbuf[i] ^ crc);

}
tPEC= rbuf[34]; // packet error check (PEC) byte appended by the sensor

// flag the frame as valid when the first reading already passes the CRC check,
// so that frames that are correct at the first attempt are not discarded
if (crc == tPEC)
{valid_byte=1;}

// ---------------------------------------------------------------

//If the computed crc does not match the tPEC, repeat the reading up to MAXREADS times

for (counter = 0; (counter < MAXREADS) && (crc != tPEC); counter++)
{
//while (counter>0 && (crc!= tPEC)){
//Serial.print("ERROR, retrying the data read. Counter value:\n");
//Serial.print(counter);

if (WireExt.beginReception(D6T_addr) >= 0)
{
i = 0;
for (i = 0; i < 35; i++)
{
rbuf[i] = WireExt.get_byte();
}
WireExt.endReception();
}

crc= calc_crc(0x15);
for (i = 0; i < 34; i++)
{
crc=calc_crc(rbuf[i] ^ crc);

}
tPEC= rbuf[34];

if(crc== tPEC)
{valid_byte=1;}
} // end of the re-reading loop
//-------------------------------------------------------------------

tPTAT = (rbuf[0]+(rbuf[1]<<8)); //mean value of temperature, not used

//--------------conversion from bytes to 16 words------------------------

for (i = 0; i < 16; i++)
{
tdata[i]=(rbuf[(i*2+2)]+(rbuf[(i*2+3)]<<8));
value[i]=tdata[i] /* * 0.1*/; // raw value kept in 0.1 degC units; scaling is done on the PC side
}

//-----------------------------------------------------------------------------

//------------------------------sending of data via xbee----------------------
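// payload layout: bytes 0-31 carry the 16 temperature words (high byte first),
// byte 32 carries the valid_byte flag checked on the receiver side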


for (i=0; i<16; i++)
{
frq=value[i];

payload[i*2] = highByte(frq);
payload[i*2+1] = lowByte(frq);

}
payload[32]=valid_byte;

xbee.send(tx);

//-------------------------------------------------------------------------------------------

//----------------if the interval of time of 333ms is not elapsed wait-------------------

while (millis() < start + period){


}

//------------------------------------------------------------------------------------

//digitalWrite(13, 1);
}

//------------------------------------the end-----------------------

//------------------------------crc calculation function-------------------------
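// bitwise CRC-8 with polynomial 0x07, used to verify the PEC byte sent by the D6T sensor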


unsigned char calc_crc(unsigned char data)
{int index;
unsigned char temp;
for(index=0;index<8;index++)
{
temp= data;
data <<= 1;
if(temp & 0x80)
{
data ^= 0x07;
}
}
return data;
}

A.2 Receiver
The Arduino code used for the reception of the data via radio is reported below.
#define highWord(w) ((w) >> 16)
#define lowWord(w) ((w) & 0xffff)
#define makeLong(hi, low) (((long) hi) << 16 | (low))

#include <XBee.h>

XBee xbee = XBee();


XBeeResponse response = XBeeResponse();
//response objects for responses we expect to handle
Rx16Response rx16 = Rx16Response();

uint8_t rec_data[33]; // 32 temperature bytes plus 1 valid byte (index 32 is read below)
uint16_t add;
uint8_t check_error;
uint16_t value[16];

int i;
String address_frq;

void setup() {

// start serial
Serial.begin(9600);
xbee.setSerial(Serial);
}

// continuously read incoming packets, looking for RX16 responses

void loop() {

xbee.readPacket();

if (xbee.getResponse().isAvailable()) {
// got something

if (xbee.getResponse().getApiId() == RX_16_RESPONSE) {
xbee.getResponse().getRx16Response(rx16);
check_error = rx16.getErrorCode();
if (check_error == NO_ERROR) {

add = rx16.getRemoteAddress16();
String add_str = String(add, HEX);

for (i=0; i<16; i++)
{
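// reassemble each 16-bit temperature word from the two received bytes (high byte first)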
rec_data[i*2] = rx16.getData(i*2);
rec_data[i*2+1] = rx16.getData(i*2+1);
value[i]=(rec_data[(i*2+1)]+(rec_data[(i*2)]<<8));
Serial.println( value[i]);

}
rec_data[32] = rx16.getData(32);
Serial.println( rec_data[32]);// valid byte

}
}
}
}

Appendix B

MATLAB code
B.1 Acquisition of the data through the serial port
clear all;
times_to_run=180;
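% 180 frames correspond to about 1 minute of data at the ~3 frames/s rate (333 ms period)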
time_vector = zeros(times_to_run,1);
Data=zeros(times_to_run, 16);
valid_byte=zeros(times_to_run, 1);
temperature_dht11=zeros(times_to_run, 1);
MATRIX=zeros(4,4);
counter=0;
minutes_to_run=1;
end_time= minutes_to_run *60;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

serialPort = 'COM5';
s = serial(serialPort);
set(s,'BaudRate',9600);
fopen(s);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

start_time = tic;
while(toc(start_time) < end_time)
    counter = counter+1;
    for i=1:16
        arduino(i) = fscanf(s, '%f \n'); % read one pixel temperature value from the serial port
    end

    valid_byte(counter) = fscanf(s, '%f\n '); % read the CRC validity flag sent after the 16 values

    time_vector(counter) = now;

    Data(counter, :) = arduino;
end

fclose(instrfind);

Time_Stamp = datetime(time_vector,'ConvertFrom','datenum','Format','d-MMM-y HH:mm:ss:SSS');
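% the raw D6T readings are expressed in units of 0.1 degrees Celsius, so they are scaled here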

T= table( Data *0.1, Time_Stamp, valid_byte);


T2= table( Data*0.1 , time_vector, valid_byte);

writetable(T,'./infrared_output_date_time.csv');
writetable(T2,'./infrared_output.csv');

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


B.2 Data plotting


clear all;
MATRIX = zeros(4,4);
counter = 0;
T = readtable('./infrared_output.csv');
toDelete = T.valid_byte < 1;        % discard the frames whose CRC check failed
T(toDelete,:) = [];
[times_to_run, col] = size(T);
Data(:,:) = T(:, 1:end-2).Variables;
writetable(T(:, 1:end),'./infrared_output_validbit.csv');
mkdir images;
for counter = 1:times_to_run
    % rearrange the 16 values of each frame into the 4x4 pixel matrix
    for i = 1:4
        for j = 1:4
            index = i + 4*(j-1);
            MATRIX(j, i) = Data(counter, index);
        end
    end

    heatmap(MATRIX,'ColorMap', jet, 'ColorLimits', [10, 40]);

    saveas(gcf,sprintf('./images/infrared_%05d',counter),'jpg');
end

Acknowledgements
I would like to dedicate this space to people who, with their support, have
helped me in the realization of this thesis and during my university career.
A heartfelt thanks to my supervisors Mihai Lazarescu and Luciano Lavagno
for their infinite availability, for their indispensable advice and for the knowl-
edge transmitted throughout this thesis work.
Thanks to Osama Bin Tariq for helping and guiding me with practical
tips when running the experiments. The long waits, hoping that the sensors
would work well, were less tedious together.
Thanks to the Politecnico di Torino, for welcoming me and providing the
tools and knowledge necessary to train me. Thanks to all the professors I met
during my university experience: each of them gave me the opportunity to
learn and grow. Among them, special thanks go to Professor Passerone for
his kindness and for giving me the opportunity to see the Politecnico under
the starlight.
Thanks to the Politecnico choir, which I had the honour of being part of
during part of my journey.
Thanks to the course colleagues: we supported, helped and encouraged
each other during the long exam sessions. Among these, a heartfelt thanks
to the one who most of all had to support and endure me in recent years,
Nicoletta. In you, I found a good colleague and friend. Thanks also to the
new friends met in Turin and to the old friends spread all over the world. In
particular, I would like to thank my bachelor's degree friends for being present
despite the distance.
Last but not least, I would like to thank my family and Giuseppe who
have always been by my side, often not physically but with the heart, in
good and bad times. Without your support, I would never have come to this
point. Thanks for being an inexhaustible source of love, support and joy.
