Indoor Human Localization
Supervisors: prof. Mihai Teodor Lazarescu, prof. Luciano Lavagno
Candidate: Irene Castro (ID: 242266)
Summary
A reliable system that detects the exact position of a person in a room attracts considerable interest, since it could lead to numerous improvements in everyday life. Fields of application range from smart homes, where indoor human localization can be used to adjust the lights or the heating in the user's proximity, to hospitals and hospices, where it can help to monitor patients remotely. For these and other applications, interest in this field has grown in recent years.
In this context, a research team at the Department of Electronics and Telecommunications (DET) of Politecnico di Torino is working on the design of a low-cost, easy-to-use, unobtrusive, passive, tag-less and privacy-aware indoor localization system that can be safely and easily installed in smart homes and assisted living environments, based on long-range capacitive sensors, digital filters and neural networks. Within this project, this thesis aims to improve the performance, reliability and accuracy of the overall system by implementing a complementary sensor fusion between the existing capacitive sensor network and an infrared thermal sensor localization system designed for this purpose.
At the beginning of this thesis work, a study of the state of the art in indoor human localization has been carried out. The advantages and disadvantages of the analyzed systems have been identified and a comparison among them has been made. From this analysis it emerged that, even though infrared and capacitive sensors do not have the best accuracy among the systems analyzed, their accuracy can be sufficient for indoor localization purposes. Moreover, both are cheap, safe, tag-less, consume little power and are privacy-aware. In addition, most of the weak points of these two techniques are complementary.
The main problem of capacitive sensors lies in the fact that their sensitivity decreases steeply as the distance from them increases. Furthermore, they are affected by different sources of noise that cannot be easily controlled; above all, drift is the issue that affects this kind of sensor the most. On the other hand, the infrared thermal sensor is affected by kinds of noise that are often complementary to those affecting capacitive sensors. For this reason, a sensor fusion between the two has been implemented.
During the experimental part of this thesis, the infrared sensor system has been designed. In detail, a MEMS thermal sensor has been chosen for the complementary acquisition system. It uses the thermopile technique to provide the surface temperature of objects in an array of 16 pixels (4x4). Attached to the ceiling of a room, it detects a person passing underneath as pixels warmer than the floor temperature, allowing the user's position to be detected while respecting privacy.
Data from the infrared sensor have been collected by an Arduino Uno board through the I2C communication protocol. Data reliability has been improved by checking the integrity of the received data with a CRC-8 (Cyclic Redundancy Check) and by requesting retransmission in case of error. The samples collected on the board have been sent via radio to a second Arduino Uno board connected to a computer. For this purpose, two XBee modules have been programmed, one as transmitter and the other as receiver.
The output wave signals from the sensor have been analyzed with an oscilloscope to choose the maximum reliable sampling rate for the infrared sensor system, which turned out to be 8 Hz. Data processing and packing have been carried out in MATLAB. For each set of data, a heat-map of the room has been plotted to obtain a graphical representation of the data. The system has been tested and a stability characterization has been carried out by comparing the output temperature of the infrared sensor with a reference DHT11 humidity and temperature sensor. It has been observed that the values read from the infrared sensor changed in synchrony with those of the DHT11 sensor.
The operation of each of the four capacitive sensor nodes has been tested separately. In particular, a sensitivity test has been carried out by moving in front of each sensor in a straight line, starting from a distance of 1.8 meters and approaching in steps of 30 centimetres while recording the acquired data. The test has been repeated under different conditions, by changing the location of the sensors in the room, by changing some electronic components of the circuit and by re-soldering some contacts. In the end, an increase in sensitivity and a reduction of high-frequency noise have been obtained for most sensor nodes.
After testing all system components separately, a complete experiment has been set up. In addition to the two systems described, an ultrasound sensor network has been installed to provide an accurate reference for the position of the person in the room. This system is composed of a network of four stationary ultrasonic beacons and a mobile beacon, called hedgehog, worn by the person to be tracked. The room used for the experiment is a 3 m x 3 m room with a ceiling at a height of 3.05 m. In this structure, the capacitive sensor plates are placed at the centre of the walls at a height of 120 cm from the floor, the infrared sensor is placed at the central point of the ceiling, and the ultrasound sensors are placed at the four corners of the ceiling and communicate with a tag on the person's head.
Data from all sensors have been acquired simultaneously and the obtained results have been analyzed. Considering the factors that could positively or negatively influence the readings of the capacitive and infrared sensors, different experiments have been carried out under different conditions. In particular, experimental data were collected in four experiments executed by two different people, each lasting half an hour. During these experiments, the person was slowly and continuously walking inside the room while all three localization systems were active. The first two experiments (one for each person) were carried out in the evening after sunset, while the other two during the afternoon with the sun's rays penetrating the room through the blinds. It was therefore possible to test the system under different temperature conditions and with and without the interference of sunlight in the room. In the third experiment, an element of disturbance to the infrared system has been introduced to better test the sensor fusion technique.
From the experimental results it has been observed that by merging the data from the two sensing systems a much more accurate, sensitive and robust system can be created. The capacitive drift can be corrected by using the infrared sensor, since it is not affected by the same problem. The overall sensitivity range can be extended for both systems: the lack of sensitivity of the infrared sensor at the borders of the room is balanced by the high sensitivity of the capacitive sensors in the proximity of the walls. In the same way, the lack of sensitivity of the capacitive sensors at the middle point of the room is compensated by the field of view of the infrared one. In the regions where both systems have good sensitivity, the non-homogeneous sensitivity of the infrared sensor when a person crosses pixel boundaries can be corrected by the capacitive sensors.
To quantify the improvement obtained, it will be necessary to analyze the data with neural networks, comparing the localization performance using first the dataset composed of data from the capacitive sensors only, then only data from the infrared sensor, and finally the merged data of the two systems. In this way, the actual improvement that sensor fusion brings to both systems will become clear in terms of accuracy and error.
During this work, considerable effort has been devoted to making everything work as well as possible and to approaching the problems encountered scientifically. The expectation is that this system will be further enhanced to improve the lives of users, especially those who need care and assistance.
Contents
List of Tables 10
List of Figures 11
3 Experiment setup 49
3.1 Capacitive Sensors System . . . . . . . . . . . . . . . . . . . 50
3.1.1 Capacitive sensor module . . . . . . . . . . . . . . . . 50
3.1.2 Testing the module and implemented optimization . . . 51
3.1.3 Filtering the noise . . . . . . . . . . . . . . . . . . . . 56
3.2 Ultrasound Sensors System . . . . . . . . . . . . . . . . . . . . 56
3.3 Infrared Sensor System . . . . . . . . . . . . . . . . . . . . . . 57
3.3.1 Infrared sensor: area of sensing evaluation . . . . . . . 59
A Microcontroller code 79
A.1 Transmitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
A.2 Receiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
B MATLAB code 85
B.1 Acquisition of the data through serial port . . . . . . . . . . . 85
B.2 Data plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Bibliography 88
List of Tables
1.1 Comparison between the most used Indoor positioning systems 25
2.1 Overview of the characteristics of Omron D6T MEMS Ther-
mal Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1 Main characteristics of DHT11 Humidity Temperature Sensor 58
List of Figures
1.1 Localization with ultrasound sensors system . . . . . . . . . . 18
1.2 Overview of the main capacitive sensing techniques . . . . . . 24
1.3 Main building blocks of capacitive sensor Node and Base Station 28
2.1 Structure of a thermopile . . . . . . . . . . . . . . . . . . . . . 34
2.2 Inside detail of D6T MEMS Thermal Sensor . . . . . . . . . . 36
2.3 Angle of view of D6T-44L-06 MEMS Thermal Sensor by Omron 37
2.4 Field of view area positioning D6T sensor at 3 m and at 1 m
from the floor . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.5 Outer view and connections of the Omron D6T-44L-06 MEMS
Thermal Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.6 I2C data line flow and Output data composition of D6T-44L-
06 MEMS Thermal Sensor . . . . . . . . . . . . . . . . . . . . 39
2.7 Start and Stop of a transmission from D6T sensor to the mas-
ter using I2C protocol . . . . . . . . . . . . . . . . . . . . . . 40
2.8 Schematic of the overall infrared thermal system . . . . . . . . 41
2.9 Electrical connection between D6T sensor and MCU . . . . . . 42
2.10 Arduino connection to Xbee module through a shield . . . . . 43
2.11 Graphical representation of CRC-8 . . . . . . . . . . . . . . . 44
2.12 Function implementing CRC-8 algorithm . . . . . . . . . . . 45
2.13 Call of the calc_crc function implementing CRC-8 algorithm
inside the main code . . . . . . . . . . . . . . . . . . . . . . . 45
2.14 Initialization of the packet of data to be sent via radio using
Xbee. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.15 Schematic of how the sampling period is organized. . . . . . . 47
2.16 Output scheme of the infrared sensor acquisition system. . . . 48
3.1 Representation of the overall experiment setup . . . . . . . . 49
3.2 555-based capacitance-frequency converter . . . . . . . . . . . 50
3.3 Sensitivity test results made for all the four capacitive sensor
nodes and repeated by changing the timer IC . . . . . . . . . 53
3.4 Drift acquisition for all four capacitive sensor nodes for several
555 ICs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.5 Plots of the sensitivity test for sensor nodes 1 and 3 before
and after the hardware debugging operations. . . . . . . . . . 55
3.6 Ultrasound Localization System By Marvelmind . . . . . . . . 57
3.7 D6T thermal sensor stability characterization . . . . . . . . . 58
3.8 Infrared sensor field of view evaluation . . . . . . . . . . . . . 59
3.9 Values used for the field of view computation referred to the
room. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.1 Plot of the raw data acquired in Experiment 1 from the ultra-
sound sensors system . . . . . . . . . . . . . . . . . . . . . . . 63
4.2 Plot of the data acquired in Experiment 1 from the ultrasound
sensors system after filtering them with Hampel filter . . . . . 64
4.3 Plot of the data acquired in Experiment 1 from the capacitive
sensors system . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.4 Plot of the data acquired in Experiment 2 from the ultrasound
sensors system after filtering them with Hampel filter . . . . . 65
4.5 Plot of the data acquired in Experiment 2 from the capacitive
sensors system . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.6 Images from the infrared sensor during Experiment 1 and com-
parison with a schematic showing the ground truth. . . . . . . 68
4.7 Plot of the data acquired in Experiment 3 from the ultrasound sensors system after filtering them with Hampel filter . . . . . . 69
4.8 Plot of the data acquired in Experiment 3 from the capacitive
sensors system . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.9 Plot of the data acquired in Experiment 4 from the infrared
sensors and comparison with ground truth . . . . . . . . . . . 71
4.10 Two samples acquired in Experiment 4 from the infrared sen-
sors system . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.11 Plot of the data acquired in Experiment 4 from the ultrasound sensors system after filtering them with Hampel filter . . . . . . 73
4.12 Plot of the data acquired in Experiment 4 from the capacitive
sensors system . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Chapter 1
Introduction to Indoor Human Localization
• Safe and Secure: The system must not affect the health of the people being localized (safe), and the information about their position or about the presence or absence of people inside the rooms must be protected and encrypted so that it cannot be used for malicious activities (secure).
• Easy to use, Passive and Device free: Considering that the end-user could be an elderly person or someone without any knowledge of technology, ideally the system should be as easy to use as possible, so that, once installed, it operates automatically without requiring any specific activity to be localized.
Many of the best indoor localization systems make use of tags that the user must carry around in order to be located. Although these methods may be simple and inexpensive to implement, especially by exploiting technologies that are already widespread such as smartphones and smartwatches, they may ultimately be unreliable because the user could forget to carry the tag while moving around the rooms of the house. Moreover, the user could be uncomfortable and reluctant to wear a device at every moment of the day, especially when relaxing. For this reason, passive and tag-less localization systems are considered the most suitable for the scope of this work.
• Privacy aware: The system must be usable and accepted by the user even in environments where, for privacy reasons, the user does not want to be filmed. For this reason, the acquisition of high-resolution images for human localization should be excluded. In fact, even with the guarantee that the captured images would be encrypted, obscured and not used for other purposes, the user would not trust this technology.
• Unobtrusive: The system should not interfere with the user's daily activities and movements. Furthermore, considering also surveillance applications, it should be invisible and not easy to disable by thieves and intruders.
• Cheap and easy to install: Cost plays an essential role in the spread of a product to as many users as possible. It is also important that the system can be easily installed in existing buildings without difficult and expensive masonry work.
the system on. A low power system for long battery life or a wireless
power supply system is preferable.
analyzing the difference in the arrival time of the sound at three or more microphones placed in different spots of the room and interpolating the data using triangulation [8], [9]. The accuracy and reliability of the system can be improved by increasing the number of microphones.
This technique obtains the position and distance of a person with centimetre-scale accuracy, in a quite inexpensive way and without any annoyance to the user. However, the sensing can be easily influenced by other audio signals or noise, so it is prone to false detections.
• Angle of Arrival (AOA): the angle between an anchor point and the sensor with respect to a coordinate system is measured, and from this information the position is obtained.
• Signal Strength: exploits the fact that the signal from a radio transmitter gradually falls off in strength as the receiver moves further away from the transmitter.
• Phase: the phase difference between the transmitted and the received signal is used to measure the distance.
on the RF signal strength [13] claim that the human body produces both constructive and destructive interference in a wireless radio network environment, changing the RF communication pattern between the wireless transceivers. This radio irregularity, usually considered a drawback, has been exploited to locate the human presence in indoor environments and even to discriminate human activities or gestures [1].
Among all the techniques developed, the RSSI method is quite widespread [14]. First, the system must perform an offline measurement in order to learn the signal strengths at all locations in the area of interest when there is no human presence. Then, the online real-time measurements are compared with the offline ones stored in the database to estimate the user location by analyzing the differences in the signals.
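As a purely illustrative sketch of this comparison step (not part of this thesis work), a nearest-fingerprint search could look like the following, where F is a hypothetical offline database with one mean RSSI vector per surveyed location and r is the current online measurement:

% Illustrative RSSI fingerprint matching: pick the surveyed location whose
% stored fingerprint is closest (Euclidean distance) to the live reading.
F = [-40 -62 -71; -48 -55 -69; -57 -52 -60];   % offline database, one row per location [dBm]
grid_points = [0.5 0.5; 1.5 0.5; 2.5 0.5];     % (x, y) of each surveyed location [m]
r = [-49 -56 -68];                             % online RSSI measurement [dBm]
[~, idx] = min(vecnorm(F - r, 2, 2));          % nearest fingerprint
estimated_position = grid_points(idx, :)       % estimated (x, y)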
The advantages of this method lie in the fact that the infrastructure is already present in many homes, so there are no additional costs for purchasing and placing sensors; only an adequate algorithm to analyze the characteristics of the RF signal is needed. Another positive point is that the fingerprints collected during the offline phase are on average stable over time, unless the Access Point is relocated or new bulky objects are introduced in the area, which normally does not happen often.
However, it requires additional time and work from the user to collect the data in the offline phase. Moreover, other radio devices transmitting at the same frequency as Wi-Fi could interfere with the system and cause faults. Another aspect to consider is safety: the authors in [15] claim that repeated studies show that Wi-Fi exposure causes oxidative stress, sperm/testicular damage, neuropsychiatric effects including EEG changes, apoptosis, cellular DNA damage, endocrine changes, and calcium overload.
One of the most intuitive and traditional positioning technologies consists in installing a grid of pressure sensors under the floor and using the change in pressure produced by a person passing over it to localize him or her [19]. This method has many positive aspects because, once installed under the floor, it is invisible, unobtrusive and privacy-aware. Moreover, taking advantage of the differences in weight that may exist between the inhabitants of a house, it can be used for user identification, making it possible to distinguish between different users, between an adult and a child, or between a human and a pet [20], [21].
The main disadvantage lies in the installation. In fact, it is laborious and expensive and requires sufficient space beneath the floor surface and flexible flooring on top of it. Even maintenance work is not easy to carry out, because the floor would need to be dismantled.
Table 1.1. Comparison between the most used indoor positioning systems. Data in the table have been taken from different articles cited in this chapter, in particular from [1], [2], [14], [17].
Among the systems analyzed, the vision-based system could reach the highest accuracy and would give much more information about the person to localize with respect to the other techniques. However, its lack of privacy makes it unsuitable for indoor localization purposes. The system that best combines high accuracy and privacy awareness is the ultrasound sensor system, which, however, could be difficult to use in a smart home or assisted living environment because it is not tag-less. Regarding costs, the cheapest systems could be the ones that use existing infrastructure or equipment that is already in the pockets of many users. Among these are the motion-based technique, which uses the smartphone for localization; RSSI, which uses the Wi-Fi connection present in most of our homes; and the sound-based system, which can exploit any kind of microphone. However, the first is not tag-less, and it could be uncomfortable for the user to always carry the smartphone. The second could be tag-less, but it could suffer from interference from other radio devices transmitting at the same frequency, and safety concerns due to the constant exposure to radio frequency could also arise. The third method, based on the sounds the person produces, could be error-prone, since sound sources other than the user could interfere. Another cheap localization system is represented by the VLC-based method, which is a novel and interesting approach to the indoor localization problem, but the technology is not widespread and users could have difficulty in finding the instrumentation needed.
The other systems analyzed need an initial installation phase, which in the worst-case scenario can also involve masonry work. This is the case of pressure sensors, since installing them requires removing the floor. Despite the advantages that such a system can have (accurate, unobtrusive), the initial installation cost makes it inaccessible from an economic point of view. The infrared and capacitive sensors, instead, although they need an installation phase, require no major masonry work: the capacitive sensors can be attached to the existing walls and covered with non-conductive materials to make them invisible, while the infrared sensors simply have to be glued to the ceiling or anywhere in the room. These two systems do not have the best accuracy among all the systems analyzed, but it is sufficient for indoor localization purposes. Moreover, both of them are cheap, safe, tag-less, consume little power and are privacy-aware.
received, each sensor's data are processed using digital filters; then the data, labelled with the position of the person within the room, have been used to train and test some machine learning classifiers to infer the location of the person in the room.
In order to increase the sensing range, several experiments on the capacitive plates and several data processing techniques have been applied. In particular, in [24] the design, implementation and experimental results of the capacitive sensor node have been presented. Four capacitive sensors, each attached to a wall of a 3 m x 3 m room, have been used for the localization of a single person inside the room. Different plate sizes (4 cm x 4 cm, 8 cm x 8 cm and 16 cm x 16 cm) and several localization algorithms have been tested in terms of precision, recall, average distance error and detected walking path. It has been observed that all these parameters improve significantly as the plate area increases. The 16 cm square sensor plate has been chosen and an ad hoc conditioning circuit has been designed. The details of the capacitive-sensor front-end interface design have been presented in [27] and [28].
Since capacitive sensors have a strongly nonlinear distance-capacitance dependency that degrades the signal-to-noise ratio, advanced processing techniques are required to improve the sensor performance. In [25] the post-processing of the data collected from the sensors has been done using some machine learning classifiers from the Weka collection. It has been observed that machine learning classifiers can effectively mitigate sensor data variability and noise due to environmental conditions. Comparing the localization performance of different algorithms, its variation with the training set size, and the algorithm resource requirements for both training and inference, the authors found the Random Forest algorithm to be the best solution for this purpose. In [30] all the details about the neural network architectures used and the analysis of the results obtained are summarized.
A schematic of the overall system is shown in Figure 1.3.
Figure 1.3. Main building blocks of capacitive sensor Node and Base Station. Four sensor Nodes were connected to a single Base Station [25]
Furthermore, the use of capacitive sensors for human identification has been explored. In [26] the authors noticed that human bodies with different BMI influence electric fields differently at different frequencies, concluding that capacitive sensors can successfully distinguish between people with significantly different weights, but that a system with more sensor plates is needed for a more accurate identification. Remaining in the field of human identification, in [29] the electric and dielectric properties of human body tissues have been exploited to discriminate among different users. Based on the fact that each body has a unique composition, this method represents a refinement with respect to the previous work, improving the sensitivity and discrimination capability of the sensor.
Starting from the indoor localization system presented in this section, this thesis work aims to add an infrared thermal sensor to the capacitive sensor network, exploiting sensor fusion to improve its performance, reliability and accuracy. More details about how the capacitive sensor nodes have been used in this project will be given in Section 3.1.
• Competitive: the sensors of the system are independent of one another and deliver measurements of the same property.
• Cooperative: the sensors of the system are independent of one another, but the information they provide is used to derive information that would not be available from a single sensor.
Chapter 2
Summing up, for thermal human indoor localization the best solution is
to use thermopiles because:
1. unlike pyroelectric sensors, their output does not depend on the rate of
change of the object’s temperature;
For all these reasons, thermopile technology has been chosen for this work.
Model: D6T-44L-06
Power supply voltage: 4.5 to 5.5 VDC
Accuracy: ±1.5°C max (VCC = 5.0 V, Ta = 25°C)
Temperature resolution: 0.14°C
Current consumption: 5 mA (typical)
Angle of view: 44.2° (X direction), 45.7° (Y direction)
In the following sections, some other details about this infrared thermal
sensor and the design choices will be discussed.
Figure 2.3. Angle of view of D6T-44L-06 MEMS Thermal Sensor by Omron [35]
Figure 2.6. I2C data line flow and Output data composition from the
datasheet of D6T-44L-06 MEMS Thermal Sensor [36]
of this address word, an eighth bit indicates whether the master wants to read (bit at "1") or write (bit at "0") in the slave register. In this case, it sets the last bit to 0 and sends on the SDA line the command word 0x4C. In this way, the slave understands that it will be requested to send the acquired data.
3. A repeat start condition is then sent by the master followed by the
address of the slave and, this time, a Read request.
4. If the addressed slave has received the command, it takes control of the data line on the next high pulse of SCL and forces the line low (acknowledge condition). In this way, the master can be sure that the slave is ready to send the data.
5. The slave sends the data to the master in groups of 8 bits. As can be seen in the bottom part of Figure 2.6, the total output packet is composed of 35 bytes and includes the reference temperature inside the sensor module (PTAT), the array of 16 temperatures read by the sensor (from P0 to P15), and a byte for the cyclic redundancy check (PEC).
6. At the end of the transmission, the master declares the end of the trans-
mission by sending a STOP condition that corresponds to a LOW to
HIGH transition on the SDA line while SCL is HIGH.
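As an illustrative sketch, the received packet can be unpacked as follows in MATLAB (assuming each value is a 16-bit word sent low byte first in units of 0.1°C; rbuf is a hypothetical 35-element vector holding the received bytes):

% Unpacking a 35-byte D6T packet: PTAT, P0..P15 (2 bytes each, assumed low
% byte first and in 0.1 °C units) and the final PEC byte.
ptat   = (rbuf(1) + 256*rbuf(2)) / 10;            % internal reference temperature [°C]
pix    = rbuf(3:34);
temps  = (pix(1:2:end) + 256*pix(2:2:end)) / 10;  % P0..P15 [°C]
pec    = rbuf(35);                                % checked against the computed CRC-8
map4x4 = reshape(temps, 4, 4).';                  % assumed row-wise 4x4 pixel layout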
In Figure 2.7, two images of the SDA and SCL signals, taken with the oscilloscope during a data transmission of the D6T sensor, are presented. In particular, a start condition is shown on the left and a stop condition on the right.
Figure 2.7. Start and Stop of a transmission from D6T sensor to the mas-
ter using I2C protocol. Red lines represent the SCL signal while blue lines
represent the SDA signal.
The acquisition of data from the sensor has been done using the microcontroller of an Arduino Uno board. The Arduino Uno is a microcontroller board based on the ATmega328P, a CMOS 8-bit microcontroller [39].
Figure 2.8 shows the schematic of the overall system. As can be seen, the sensor communicates the data over a bus using the I2C protocol to the Arduino Uno board which, in turn, sends the collected data via radio to a second Arduino board. The received data are then transferred to a PC via a USB cable and processed with the MATLAB software. In the next subsections, all the details about the electrical connections and the design choices adopted will be given.
Figure 2.9. Electrical connection between D6T sensor and MCU [36]
The XBee module used is the Digi XBee® Embedded ZigBee module [40]. It allows wireless end-point connectivity between two or more devices using the IEEE 802.15.4 networking protocol. The XBee Configuration and Test Utility (XCTU) platform has been used to configure and test the Digi RF devices [41]. With this tool it has been possible to configure the two modules so that one acts as a coordinator and the other as an end device.
In particular, two unique addresses have been assigned to the two modules and the interface data rate has been set to 9600 bps. A communication channel different from the one used for the XBee communication of the capacitive sensors has been assigned in order to avoid interference.
I2C communication with the sensor, following the rules explained in Section 2.2.3. In particular, the start condition with the address of the sensor device is sent, followed by the command that starts the communication. Then a repeated start condition is sent to the sensor with a read command. The data read from the sensor are then saved byte by byte in a buffer until the end of the transmission.
G(x) = x^8 + x^2 + x + 1,
which corresponds to the binary number 100000111. Assuming that this generator polynomial is known by both the transmitter and the receiver, and the message (M) to be transmitted being any sequence of bits, the CRC is given by the remainder of the division M/G. This division is carried out with bit-wise XORs and left shifts of the bits, as represented in Figure 2.11.
The receiver of the message, in this case the Arduino board, by applying a bit-wise algorithm that mimics the hardware shift-register method, is able to compute the packet error code from the received data. The function defined in the code that implements the CRC algorithm is reported in Figure 2.12 and is called "calc_crc".
As can be seen, it receives a data byte that is shifted left one bit at a time and, whenever the MSB is at 1, the division is performed by a bit-wise XOR with the generator polynomial.
As can be observed, all the 34 bytes received from the sensor are fed to the function one after the other, each one XORed bit-wise with the previously computed CRC output before being passed to the function.
The resulting byte is then compared to the PEC received at the end of the packet.
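For reference, the same check can be sketched in MATLAB (a minimal sketch: the function name check_d6t_packet is illustrative, rbuf is the 35-byte packet as a numeric vector, and the 0x15 seed mirrors the call used in the Arduino code of Appendix A.1):

function ok = check_d6t_packet(rbuf)
% Verify the PEC of a 35-byte D6T packet (PTAT, P0..P15, PEC), mirroring
% the CRC-8 check of Appendix A.1.
rbuf = double(rbuf(:));
crc = calc_crc(hex2dec('15'));          % seed: slave address with the read bit set
for n = 1:34
    crc = calc_crc(bitxor(rbuf(n), crc));
end
ok = (crc == rbuf(35));                 % compare with the transmitted PEC byte
end

function data = calc_crc(data)
% One CRC-8 step, generator x^8 + x^2 + x + 1 (0x07 once the leading bit
% of the generator is dropped), processing one byte at a time.
for k = 1:8
    msb  = bitand(data, 128);                  % MSB before shifting
    data = bitand(bitshift(data, 1), 255);     % shift left, keep 8 bits
    if msb
        data = bitxor(data, 7);                % XOR with the generator
    end
end
end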
Figure 2.14. Initialization of the packet of data to be sent via radio using Xbee.
time is not constant and, considering that, if an error occurs, the sensor must resend the data, it could be longer. It has been observed that, by letting the Arduino continuously ask for data without a pause, the communication between the board and the sensor is worse and a communication timeout frequently occurs due to communication errors. This timeout condition is signalled by a low level on the SDA or SCL line for one second and should be avoided, since during that second the sensor cannot communicate its readings to the board and the samples for that period are lost. For this reason, at the end of the code, a timer pauses the system for the time necessary to complete the period, by subtracting the elapsed time from the period. In this way, data are received with a fixed sampling rate. The drawing in Figure 2.15 represents a schematic of how the sampling period is organized.
After some observation of the maximum acquisition time, made directly
on the signals with the oscilloscope and through software with timestamps,
the minimum sampling period has been set to 125 ms, obtaining a rate of 8
samples/s.
is shown. In particular, the code is set to run for a defined amount of time, exploiting the "tic" and "toc" functions and a while loop. Data from the serial port COM5 are saved into an array using the fscanf() function. The timestamp of the arrival time of each packet of data is simultaneously saved in an array with a resolution of one millisecond. This has been done in order to have a time reference useful for later synchronizing the data arriving from the infrared sensor with those from the capacitive sensors.
A spreadsheet file in Comma Separated Values (CSV) format containing the temperature data, the timestamps and the valid bytes is then created.
In order to have a graphical representation of the acquired data, a heat-map graph is then created from each packet of data using the code reported in Appendix B.2. In this graph, the temperature of each of the 16 squares into which the field of view of the sensor is divided is represented with a colour whose tone becomes warmer as the perceived temperature increases. Figure 2.16 shows two examples of the acquired output, without and with the presence of a person in the room. It can be seen that the square occupied by the person (i.e. the square with coordinates (4, 2) in Figure 2.16 B) has a higher temperature, shown by a warmer colour.
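A minimal sketch of this plotting step is the following (it assumes that one acquired frame is a row of 16 values in tenths of a degree stored in the Data matrix of Appendix B.1, and that the pixels are arranged row-wise; these assumptions and the colormap are illustrative):

% Heat-map of one 16-pixel infrared frame: warmer colour = higher temperature.
frame  = Data(counter, :);             % one acquired frame (16 values, assumed 0.1 °C units)
MATRIX = reshape(frame, 4, 4).' / 10;  % 4x4 temperature map in °C (assumed row-wise order)
h = heatmap(MATRIX);                   % one coloured cell per pixel
h.Colormap = hot;                      % warmer tone for higher temperature
h.Title = sprintf('Infrared frame %d', counter);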
Chapter 3
Experiment setup
The complete indoor localization system includes the thermal infrared sensor
system described in Chapter 2 and the capacitive system designed in [30].
In addition to these systems, an ultrasound sensor network has been installed in order to have an accurate reference for the position of the person in the room.
is placed at the central point of the ceiling, and the ultrasound sensors are placed at the four corners of the ceiling and communicate with a tag on the head of the person.
can be removed by filtering the data, the CMOS-based timer has been chosen after this experiment.
From this sensitivity test it has been observed that not all the sensors have the same sensitivity. In particular, sensor 4 seems less susceptible to changes in the capacitance of the plate and can detect the presence of a person and locate it only up to a shorter maximum distance compared with the other nodes. This can be due to the position of the sensors in the room or to the hardware of the sensor node.
In order to check whether the position of the sensor at the moment of the acquisition could influence the reading, all the measurements have been repeated by placing all the sensors in the same position. No significant changes have been observed with this change of position. For this reason, the hardware reasons why the system was not performing as well as possible have been analyzed. In particular, all the contacts have been checked with a multimeter, most of the contacts of the circuit board have been re-soldered, and the position of the battery with respect to the plate has been changed, since it could introduce some noise. The plate has also been cleaned and the LMC 555 IC has been replaced with a brand-new one of the same type.
All these operations have been repeated for all sensors in the same way, but significant improvements have been observed only for two of them, namely sensors 1 and 3, whose before-and-after plots are shown in Figure 3.5. As can be observed, an increase in sensitivity for sensor 1 and a reduction of high-frequency noise have been obtained. For the other two sensors, no significant changes have been observed.
Figure 3.3. Sensitivity test results made for all the four capacitive sensor
nodes and repeated by changing the timer IC. In the upper part (A) data
acquired using LMC 555CN by Texas Instruments, in the bottom part (B)
data acquired using NE 555N by ST-Microelectronics
Figure 3.4. Drift acquisition for all four capacitive sensor nodes for
several 555 ICs. In the upper part (A) data acquired using LMC 555CN
by Texas Instruments, in the bottom part (B) data acquired using NE
555N by ST-Microelectronics
Figure 3.5. Plots of the sensitivity test for sensor nodes 1 and 3 before and
after the hardware debugging operations.
To remove this noise, some filtering techniques are used. The output of the sensor node has been sent to both a Median Filter (MF) and a Low-Pass Filter (LPF): the former extracts the slow drift, while the latter removes the high-pitched noise. Then, by simply subtracting the median filter output from the LPF output, a clean, noise-free signal has been obtained (a minimal sketch of this step is given below). This signal is the input for the neural network.
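The drift-removal step can be sketched in MATLAB as follows, under purely illustrative assumptions (the file name, window length and cutoff frequency are placeholders, not the values used for the real sensor nodes):

% Drift removal for one capacitive node: subtract the slow drift (median
% filter) from the low-pass filtered signal. Parameters are illustrative.
fs    = 5;                                   % capacitive sampling rate [Hz]
x     = readmatrix('capacitive_node1.csv');  % hypothetical raw sensor output (one column)
drift = medfilt1(x, 501);                    % long median filter tracks the slow drift
lp    = lowpass(x, 0.5, fs);                 % low-pass filter removes the high-pitched noise
clean = lp - drift;                          % drift-free, low-noise signal for the NN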
the data read from the two different sensors, this difference in accuracy and resolution must be taken into account. The difference between the value read from the DHT11 temperature sensor and the average of the readings from the D6T thermal sensor has been computed, and the plot of this value over the sample number is reported in Figure 3.7.
As can be seen, there is at least 1°C of difference between the two sensors, probably due to the different positions of the sensors in the room or to their different accuracy. Moreover, the values remain within an interval of 1°C, which is the resolution of the DHT11 sensor. While the ambient temperature was decreasing over the night, the value measured by the D6T decreased too and the difference with the value measured by the DHT11 sensor was increasing, as shown by the drift present in the plot. Then, at the end of the drift, when the DHT11 also changed its value, the difference returned to 1°C.
β = (180 − α) / 2        (3.2)

h = (a / 2) · tan(β)        (3.3)
The variables that will be used from now on to compute the field of view of the sensor are shown in Figure 3.9.
Figure 3.9. Values used for the field of view computation, referred to the room. In the figure, h1 is the distance between the sensor and the floor, and h2 is the distance between the sensor and the head of a person who is h3 = 1.65 m tall. The angles of view αx and αy and the corresponding projections on the floor ax and ay are also highlighted.
Along the x direction, with αx = 44.2° and h1 = 3.05 m, the maximum covered area is

ax = 2 · h1 / tan(βx) = 2.48 m        (3.4)

Along the y direction, with αy = 45.7° and h1 = 3.05 m, the maximum covered area is

ay = 2 · h1 / tan(βy) = 2.57 m        (3.5)
In this way the sensor covers a surface on the floor that is slightly smaller than the total area of the room. However, considering that the points of highest sensitivity for the capacitive sensors are the points near the walls where the capacitive nodes are placed, a larger sensitivity area for the infrared sensor is not necessary.
Moreover, it must be considered that the part best seen by the infrared sensor is the head, both because of its proximity and because it is one of the warmest parts of the body usually not covered by clothes. For this reason it is also interesting to consider the field of view not at floor level but at head level. Considering that the average human height is 1.65 m, the same computation has been done with h2 = 3.05 m − 1.65 m = 1.4 m. The obtained results are:
ax = 2 · h2 / tan(βx) = 1.13 m        (3.6)

ay = 2 · h2 / tan(βy) = 1.18 m        (3.7)
This area is about half of the sensitive area at floor level, but it is characterized by a higher resolution, since the head occupies a larger fraction of the field of view.
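The numbers above can be reproduced with a short MATLAB check of Equations (3.2)-(3.7):

% Projected coverage of the D6T field of view at floor level (h1) and at
% head level (h2), reproducing Equations (3.2)-(3.7).
alpha_x = 44.2;  alpha_y = 45.7;          % angles of view [deg], Table 2.1
h1 = 3.05;  h2 = 3.05 - 1.65;             % sensor-floor and sensor-head distances [m]
beta_x = (180 - alpha_x) / 2;             % Eq. (3.2)
beta_y = (180 - alpha_y) / 2;
a = @(h, beta) 2 * h ./ tand(beta);       % Eq. (3.3) solved for the covered side
floor_coverage = [a(h1, beta_x), a(h1, beta_y)]   % ~[2.48 2.57] m
head_coverage  = [a(h2, beta_x), a(h2, beta_y)]   % ~[1.13 1.18] m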
Chapter 4
Figure 4.1. Plot of the raw data acquired in Experiment 1 from the
ultrasound sensors system
As can be seen, some spikes are present in the readings, so it has been necessary to filter them out. In particular, the Hampel filter function of MATLAB has been used to detect and remove outliers from both the x-axis and y-axis data. This filter computes the median of a window of k neighbouring samples for each sample of the dataset. It also estimates the standard deviation of each sample about its window median using the median absolute deviation. If a sample differs from the median by more than n_sigma standard deviations, it is replaced with the median. The filter has been applied twice.
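A minimal sketch of this outlier-removal step (the variable names, window half-length and threshold below are illustrative, not necessarily the values used in the experiments):

% Hampel outlier removal on the ultrasound coordinates (illustrative
% window half-length k and threshold n_sigma).
k = 7;  n_sigma = 3;
x_clean = hampel(x_raw, k, n_sigma);   % x-axis positions
y_clean = hampel(y_raw, k, n_sigma);   % y-axis positions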
Figure 4.2. Plot of the data acquired in Experiment 1 from the ultrasound
sensors system after filtering them with Hampel filter
As can be seen from the image, this dataset does not cover all the spots of the room, so in the subsequent experiments a more random walking direction has been used, trying to cover the whole surface of the room.
The normalized data from the capacitive sensors are presented in Figure 4.3. For sensor one, some steps that are not related to the movement are present. The same can be observed at the beginning of the acquisition for sensor four. This can be due to the conditions of the room, to some electric devices that were near sensor one during the execution of the experiment, or to some faults in the sensors.
The second experiment has been carried out on the same evening, under the same conditions, but by User two, in order to test whether the interference observed would be the same with a different user. With respect to the previous experiment, efforts have been made to cover every spot of the floor, moving parallel to the x and y axes, diagonally and also randomly. The filtered data plot from the ultrasound system can be observed in Figure 4.4.
Figure 4.4. Plot of the data acquired in Experiment 2 from the ultrasound
sensors system after filtering them with Hampel filter
From the image it is not possible to see the full path followed during the experiment, but it is indicative of the coverage of the room. The data acquired from the capacitive sensors are presented in Figure 4.5.
The data from sensor one present the same shifts as in experiment one, so they do not depend on the user or on the clothes worn. Notice that nothing has been changed in or around the room when the measurements have been repeated for the other two experiments on the following day, yet this behaviour did not appear during those experiments, so it could be due to external, uncontrollable sources of noise. A drift at the beginning of the acquisition of sensor 3 can also be observed, but it can be removed with the technique illustrated in Section 3.1.3.
The data from the infrared sensor for both experiments are not affected by noise, and the difference in temperature between the human and the background reported by the infrared sensor is around 4°C, as can be seen from Figure 4.6, where some of the data from the first experiment are compared with the ground truth in the room.
The difference in temperature is high because the experiment has been done during the evening, so the ambient temperature is much lower than the user's temperature. However, when the human moves across pixel boundaries, the sensor reports a lower body temperature, since the body heat is split between multiple pixels. In this way the difference in temperature between the body and the background gets lower, but it is still possible to recognize the position of the person.
Figure 4.6. Images from the infrared sensor during Experiment 1 and com-
parison with a schematic showing the ground truth. From the top to the
bottom the images are in chronological order
Figure 4.7. Plot of the data acquired in Experiment 3 from the ultrasound sensors system after filtering them with Hampel filter
Figure 4.8 shows the data from the capacitive sensors system. In the plot relative to sensor four it is possible to observe that, at a certain point, a drift gives the reading a decreasing trend. This can be explained by the fact that at that point of the experiment the sunlight entered through the blinds, directly hitting the plate of sensor node four. Except for this deviation, the data set appears to be less noisy than the day before. Some images taken from the infrared sensor readings are shown in Figure 4.9, together with a schematic of the actual situation in the room. As can be seen, the reading is influenced by sunlight. In fact, the pixels in the upper right part of each data matrix are at a higher temperature w.r.t. the ones at the bottom left, because the sun was hitting that part of the room and not the other. Moreover, it is possible to spot the presence of the hot water bottle, which is detected to have a slightly higher temperature than the background. Actually, the real temperature of the water was significantly higher than the room temperature, but since the distance from the sensor is around three meters and the bottle of water was small with respect to the total field of view, the temperature perceived by the infrared sensor was averaged with the background. The temperature detected by the sensor for the human is about 2°C higher than that of the hot water but, when the person crosses a region of the room that spans two or more pixels, the detected temperature for the person is comparable with that of the hot water.
Figure 4.9. Plot of the data acquired in Experiment 4 from the infrared
sensors and comparison with ground truth. From the top to the bottom, the
images are in chronological order
The last experiment has been carried out without the hot object on the floor; only User 2 was in the room, walking both along some paths and randomly. The experiment has been done shortly after the third one, but the curtains have been lowered so as to prevent the light from directly hitting the sensors, the effect of which on the various sensors had already been ascertained in experiment three. The sunlight was still heating the room, making the difference in temperature between the person and the background less evident compared with the experiments carried out the evening before. An image from the infrared sensor acquisition data set is shown in Figure 4.10. As can be seen, the difference in the detected temperature between the human body and the background is around 2°C.
Figure 4.10. Two samples acquired in Experiment 4 from the infrared sensors
system. In the upper image the room was empty, in the bottom one there was
a person in the position indicated by the scheme
The filtered output data from the ultrasound sensors are plotted in Figure 4.11, where it is possible to notice that the room has been fully covered during the experiment using random and non-random patterns.
The normalized data from the capacitive sensors are shown in Figure 4.12. As can be seen, this data set is less affected by drift noise with respect to the previous acquisitions.
Figure 4.11. Plot of the data acquired in Experiment 4 from the ultrasound sensors system after filtering them with Hampel filter
• The data were sampled by the various sensors at the highest sampling rate that could be obtained without errors from each type: 8 Hz for the infrared sensor, 5 Hz for the capacitive sensors and 3.5 Hz for the ultrasound sensor. During the acquisition phase, the timestamp related to the acquired data has been saved. Although the systems communicated with different computers, all the computers on which the MATLAB scripts ran were connected to the same internet network, so the time reference, taken from the web, is the same for each system.
• After the acquisition, data from all sensors have been checked to verify
that the start and stop moments were common to all three. If not, some
samples have been discarded in order to have the same interval for all
the sensors.
• A vector of timestamps evenly spaced over the specified interval has been obtained using the linspace() function of MATLAB. Then, using linear interpolation, the data from the three systems have been resampled at the exact times present in this vector of timestamps. The final sampling frequency has been set in this way to 5 Hz. The MATLAB function interp1() has been used to perform the interpolation, generating the missing samples for each timestamp (a minimal sketch of this step is given after this list).
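The merging step can be sketched as follows (a minimal sketch under illustrative assumptions: t_ir, t_cap and t_us are hypothetical vectors with the acquisition timestamps in seconds, and ir, cap and us the corresponding data matrices with one row per sample; the output file name is also a placeholder):

% Resample the three streams on a common, evenly spaced time base (~5 Hz)
% and merge them into one 23-column matrix (timestamp + 4 + 16 + 2 columns).
fs = 5;                                           % common sampling rate [Hz]
t0 = max([t_ir(1),   t_cap(1),   t_us(1)]);       % common start time
t1 = min([t_ir(end), t_cap(end), t_us(end)]);     % common stop time
n  = floor((t1 - t0) * fs) + 1;                   % number of samples (~1/fs spacing)
t  = linspace(t0, t1, n).';                       % evenly spaced timestamps
cap_rs = interp1(t_cap, cap, t, 'linear');        % 4 capacitive columns
ir_rs  = interp1(t_ir,  ir,  t, 'linear');        % 16 infrared columns
us_rs  = interp1(t_us,  us,  t, 'linear');        % 2 ultrasound columns (x, y)
writematrix([t, cap_rs, ir_rs, us_rs], 'sensor_fusion_dataset.csv');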
The data have been saved in a CSV file reporting, for each row, the data from all the sensors and a unique timestamp. It consists of a 23-column matrix (with the number of rows depending on the number of samples obtained during the experiment). Four columns correspond to the four capacitive sensors, sixteen columns contain the data obtained from the IR sensor, and the last two columns contain the X and Y axis information obtained from the ultrasound sensor. This file will be the input of a Neural Network that will analyze the data and obtain the best model to identify the final position of a person in the room.
Chapter 5
During this work, considerable effort has been devoted to making everything work as well as possible and to approaching the problems encountered scientifically. The expectation is that this system will be further enhanced to be ready and available to improve the lives of users, especially those who need care and assistance.
Appendix A
Microcontroller code
A.1 Transmitter
The code used for the transmission of data from the infrared sensor is reported here.
#include <Wire.h>
#include <WireExt.h>
#define D6T_addr 0x0A // Address of OMRON D6T is 0x0A in hex
#define D6T_cmd 0x4C // Standard command is 4C in hex
/*--------------------------------*/
#include <XBee.h>
XBee xbee = XBee();
uint8_t payload[33] ; // bytes to be transmitted via radio
uint8_t frameId = NO_RESPONSE_FRAME_ID; // xbee variable initialization
uint8_t option = DISABLE_ACK_OPTION;
Tx16Request tx = Tx16Request(0xDCBA, option, payload, sizeof(payload), frameId);
uint16_t frq;
/*--------------------------------*/
int i;
int MAXREADS=3;              // maximum number of read attempts per sample
unsigned long start;         // start time of the current sampling period (ms)
uint8_t rbuf[35];            // raw packet received from the D6T sensor
uint8_t crc, tPEC;           // computed CRC-8 and received PEC byte
uint8_t valid_byte;          // 1 if the CRC check succeeded, 0 otherwise
//------------------start--------------------------
void setup()
{
Wire.begin();
Serial.begin(9600);
xbee.setSerial(Serial);
pinMode(13, OUTPUT);
}
void loop()
{
start = millis();// save the information about the starting time of transmission
//digitalWrite(13, 0);
valid_byte=0;
if (WireExt.beginReception(D6T_addr) >= 0)
{
i = 0;
for (i = 0; i < 35; i++)
{
rbuf[i] = WireExt.get_byte(); //read the data from the sensor
}
WireExt.endReception();
}
//----------------------------------------------------------------------------------------
// -----------------------crc calculation-----------------------
crc= calc_crc(0x15);
for (i = 0; i < 34; i++)
{
crc=calc_crc(rbuf[i] ^ crc);
}
tPEC= rbuf[34];
// ---------------------------------------------------------------
//If the so computed crc does not match the tPEC repeat the reading for a maximum of 3 times
if (WireExt.beginReception(D6T_addr) >= 0)
{
i = 0;
for (i = 0; i < 35; i++)
{
rbuf[i] = WireExt.get_byte();
}
WireExt.endReception();
}
crc= calc_crc(0x15);
for (i = 0; i < 34; i++)
{
crc=calc_crc(rbuf[i] ^ crc);
}
tPEC= rbuf[34];
if(crc== tPEC)
{valid_byte=1;}
//-------------------------------------------------------------------
//-----------------------------------------------------------------------------
payload[i*2] = highByte(frq);
payload[i*2+1] = lowByte(frq);
}
payload[32]=valid_byte;
xbee.send(tx);
//-------------------------------------------------------------------------------------------
//------------------------------------------------------------------------------------
//digitalWrite(13, 1);
}
//------------------------------------the end-----------------------
A.2 Receiver
The Arduino code used for the reception of data via radio is reported here.
#define highWord(w) ((w) >> 16)
#define lowWord(w) ((w) & 0xffff)
#define makeLong(hi, low) (((long) hi) << 16 | (low))
#include <XBee.h>
XBee xbee = XBee();                  // xbee object used to read the incoming frames
Rx16Response rx16 = Rx16Response();  // container for a received 16-bit-address frame
uint8_t rec_data[33];                // received payload (32 data bytes + valid byte)
uint16_t add;
uint8_t check_error;
uint16_t value[16];
int i;
String address_frq;
void setup() {
  // start serial
  Serial.begin(9600);
  xbee.setSerial(Serial);
}

void loop() {
  xbee.readPacket();
if (xbee.getResponse().isAvailable()) {
// got something
if (xbee.getResponse().getApiId() == RX_16_RESPONSE) {
xbee.getResponse().getRx16Response(rx16);
check_error = rx16.getErrorCode();
if (check_error == NO_ERROR) {
add = rx16.getRemoteAddress16();
String add_str = String(add, HEX);
}
rec_data[32] = rx16.getData(32);
Serial.println( rec_data[32]);// valid byte
}
}
}
}
Appendix B
MATLAB code
B.1 Acquisition of the data through serial port
clear all;
times_to_run=180;
time_vector = zeros(times_to_run,1);
Data=zeros(times_to_run, 16);
valid_byte=zeros(times_to_run, 1);
temperature_dht11=zeros(times_to_run, 1);
MATRIX=zeros(4,4);
counter=0;
minutes_to_run=1;
end_time= minutes_to_run *60;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
serialPort = 'COM5';
s = serial(serialPort);
set(s,'BaudRate',9600);
fopen(s);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
start_time = tic;
while (toc(start_time) < end_time)
    counter = counter + 1;
    for i = 1:16
        arduino(i) = fscanf(s, '%f \n'); % read one temperature value from the serial port
    end
end
fclose(instrfind);
writetable(T,'./infrared_output_date_time.csv');
writetable(T2,'./infrared_output.csv');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
saveas(gcf,sprintf('./images/infrared_%05d',counter),'jpg');
end
Bibliography
[1] S. Shukri, L. Munirah Kamarudin, and M. Hafiz Fazalul Rahiman,
‘Device-Free Localization for Human Activity Monitoring’, in Intelligent
Video Surveillance, A. J. R. Neves, Ed. IntechOpen, 2019.
[2] T. Kivimäki, T. Vuorela, P. Peltola, and J. Vanhala, ‘A Review on Device-
Free Passive Indoor Positioning Methods’, IJSH, vol. 8, no. 1, pp. 71–94,
Jan. 2014, doi: 10.14257/ijsh.2014.8.1.09.
[3] T. B. Moeslund and E. Granum, ‘A Survey of Computer Vision-Based
Human Motion Capture’, Computer Vision and Image Understanding,
vol. 81, no. 3, pp. 231–268, Mar. 2001, doi: 10.1006/cviu.2000.0897.
[4] I. Jegham, A. Ben Khalifa, I. Alouani, and M. A. Mahjoub, "Vision-based human action recognition: An overview and real world challenges", Forensic Science International: Digital Investigation, vol. 32, 2020, 200901, ISSN 2666-2817, https://doi.org/10.1016/j.fsidi.2019.200901.
[5] D. Hauschildt and N. Kirchhof, "Advances in thermal infrared localization: Challenges and solutions", Int. Conf. Indoor Positioning and Indoor Navigation, 2010, pp. 1-8.
[6] H.-H. Hsu, W.-J. Peng, T. Shih, T.-W. Pai, and K. Man, "Smartphone Indoor Localization with Accelerometer and Gyroscope", Proceedings - 2014 International Conference on Network-Based Information Systems, NBiS 2014, 2015, pp. 465-469, doi: 10.1109/NBiS.2014.72.
[7] C. Hsu and C. Yu, "An Accelerometer Based Approach for Indoor Local-
ization," 2009 Symposia and Workshops on Ubiquitous, Autonomic and
Trusted Computing, Brisbane, QLD, 2009, pp. 223-227. doi: 10.1109/UIC-
ATC.2009.90
[8] Scott J., Dragovic B. (2005) Audio Location: Accurate Low-Cost Loca-
tion Sensing. In: Gellersen H.W., Want R., Schmidt A. (eds) Pervasive
Computing. Pervasive 2005. Lecture Notes in Computer Science, vol 3468.
Springer, Berlin, Heidelberg
Acknowledgements
I would like to dedicate this space to people who, with their support, have
helped me in the realization of this thesis and during my university career.
A heartfelt thanks to my supervisors Mihai Lazarescu and Luciano Lavagno
for their infinite availability, for their indispensable advice and for the knowl-
edge transmitted throughout this thesis work.
Thanks to Osama Bin Tariq for helping and guiding me with practical
tips when running the experiments. The long waits hoping that the sensors
worked well have been less tedious together.
Thanks to the Politecnico di Torino, for welcoming me and providing the
tools and knowledge necessary to train me. Thanks to all the professors I met
during my university experience: each of them gave me the opportunity to
learn and grow. Among them, a special thanks goes to Professor Passerone for
his kindness and for giving me the opportunity to see the Politecnico under
the starlight.
Thanks to the Politecnico choir, which I had the honour of being part of
during part of my journey.
Thanks to the course colleagues: we supported, helped and encouraged
each other during the long exam sessions. Among these, a heartfelt thanks
to the one who most of all had to support and endure me in recent years,
Nicoletta. In you, I found a good colleague and friend. Thanks also to the
new friends met in Turin and to the old friends spread all over the world. In
particular I would like to thank my bachelor degree friends for being present
despite the distance.
Last but not least, I would like to thank my family and Giuseppe who
have always been by my side, often not physically but with the heart, in
good and bad times. Without your support, I would never have come to this
point. Thanks for being an inexhaustible source of love, support and joy.