International Journal of Building Pathology and Adaptation
Purpose: In the field of heritage science, especially as applied to buildings and artefacts made of organic hygroscopic materials, analyzing the microclimate has always been of extreme importance. In particular, in many cases, knowledge of the outdoor/indoor microclimate may support the decision process in conservation and preservation matters for historic buildings. This knowledge is often gained by implementing long and time-consuming monitoring campaigns that allow atmospheric and climatic data to be collected.
Design/methodology/approach: Sometimes the collected time series may be corrupted, incomplete and/or affected by sensor errors because of the remoteness of the historic building location, the natural aging of the sensors or the lack of a continuous check on the data-downloading process. For this reason, this work proposes an innovative approach for reconstructing the indoor microclimate of heritage buildings from knowledge of the outdoor one alone. This methodology is ba...
Residue Interaction Networks (RINs) play a central role in the interpretation of protein structures and interactions. They are derived from protein structures based on the geometrical and physico-chemical properties of the amino acids, and they are a way to represent contacts (non-covalent interactions) in a protein. Several software tools provide RINs, such as PIC (2007), NAPS (2016) and RING (2011). In this project, we develop, present and study a software tool that evaluates the propensity/probability of a contact (residue-residue pair) to belong to each of the six contact types defined by RING. In particular, our software (BondBNet) is based on a Bayesian Neural Network (BNN) and, by using statistical methods, it is able to predict the RING classification starting from the PDB structure, associating an error with each prediction through uncertainty estimation.
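To give a rough idea of the kind of prediction described above, the following minimal sketch (an illustration under stated assumptions, not the actual BondBNet code) classifies residue-residue pairs into six contact classes and attaches an uncertainty to each prediction via Monte Carlo dropout as an approximate Bayesian treatment. The feature dimension, layer sizes, dropout scheme and the exact class labels are assumptions made here for illustration.

# Illustrative sketch only (not the actual BondBNet implementation):
# approximate Bayesian classification of residue-residue contacts into
# six RING-style classes using Monte Carlo dropout.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Contact types commonly reported by RING (assumed labels for this sketch)
CONTACT_TYPES = ["HBOND", "VDW", "SSBOND", "IONIC", "PIPISTACK", "PICATION"]

class ContactClassifier(nn.Module):
    def __init__(self, n_features=40, n_classes=6, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=100):
    """Keep dropout active at inference time and average the sampled softmax
    outputs; the per-class standard deviation is used as the uncertainty
    attached to each prediction."""
    model.train()  # keeps the dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

# Example usage with random residue-pair descriptors (placeholders for features
# that would be derived from a PDB structure):
model = ContactClassifier()
features = torch.randn(5, 40)            # 5 residue-residue pairs, 40 descriptors each
mean_p, std_p = predict_with_uncertainty(model, features)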
Salt marshes are coastal ecosystems found between land and open saltwater that are regularly flooded by the tides. They play a large role in the aquatic food web and the delivery of nutrients to coastal waters; furthermore, they support terrestrial animals and provide coastal protection. Because of their key role in the coastal ecosystem and their recent endangerment, monitoring the properties of salt marshes is relevant for tracking the rapidly evolving coastal landscape. In particular, this work focuses on the salt marshes of the Venice lagoon. The goal of the project is to build a model able to reproduce water quality at a specific location in the lagoon, corresponding to the "Minidot" sensor; more precisely, the focus is on the temporal trend of dissolved oxygen, a key environmental indicator for sea life. The model exploits measurements of solar radiation, temperature and salinity collected by the "Samanet" group of sensors in the lagoon, together with tide data from ISPRA, atmospheric measurements taken from the ARPAV database and the activation record of the MOSE system. The model is built following the typical phases of ecological modeling, i.e. conceptualization of the model and choice of equations, verification, sensitivity analysis (by calculating, for the state variable x and the parameter p, the condition number (∂x/∂p)/(x/p)), calibration and validation. Finally, goodness-of-fit measures (RMSE, Nash-Sutcliffe efficiency) are estimated. In this work we propose a calibrated model that faithfully traces the average trend but fails to reproduce the high variability on small time scales. We attribute this to the upstream simplifying assumptions and to the nature of the collected data, which we believe to be one of the main limiting factors in the reconstruction.
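For concreteness, the sketch below (assumed, not taken from the project code) shows how the quantities named in the abstract can be computed: the condition number (∂x/∂p)/(x/p) estimated by finite differences, and the goodness-of-fit measures RMSE and Nash-Sutcliffe efficiency. The toy dissolved-oxygen model and the parameter values are hypothetical placeholders.

# Minimal numerical sketch of the sensitivity index and goodness-of-fit measures.
import numpy as np

def condition_number(model, p, x_ref, eps=1e-4):
    """Relative sensitivity of the state variable x to the parameter p:
    (dx/dp) / (x/p), with dx/dp estimated by a central finite difference."""
    dx_dp = (model(p * (1 + eps)) - model(p * (1 - eps))) / (2 * eps * p)
    return dx_dp / (x_ref / p)

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Toy example (hypothetical): dissolved oxygen relaxing towards a saturation value.
t = np.linspace(0, 10, 200)
def do_model(k, do_sat=9.0, do_0=6.0):
    return do_sat - (do_sat - do_0) * np.exp(-k * t)

k = 0.5
sim = do_model(k)
obs = sim + np.random.normal(0, 0.2, sim.shape)   # synthetic "observations"

cn = condition_number(do_model, k, x_ref=sim)      # one sensitivity value per time step
print(rmse(obs, sim), nash_sutcliffe(obs, sim))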
This report presents the implementation of a UART (Universal Asynchronous Receiver-Transmitter) interface combined with a FIR (Finite Impulse Response) filter on an FPGA (Field-Programmable Gate Array). The FPGA is an integrated circuit designed to be configured after manufacturing, hence the term 'field-programmable'. It consists of an array of logic blocks that can be reprogrammed: each block is individually programmed to perform a logic function. It also contains reconfigurable interconnections that allow the different blocks to be wired together to create different configurations. The device can be designed with the hardware description language VHDL (Very High Speed Integrated Circuit Hardware Description Language). The use of FPGA components has some advantages over ASICs (Application-Specific Integrated Circuits): they are standard devices whose functionality is not set by the manufacturer, who can therefore produce them on a large scale at a low price. Their generic nature makes them suitable for a large number of applications in the consumer, communications, automotive and other sectors. They are programmed directly by the end user, which reduces the time needed for design, verification by simulation and field testing of the application. The big advantage over ASICs is therefore that they allow the end user to make changes or correct errors simply by reprogramming the device at any time.
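To clarify what the FIR block computes, here is a minimal software sketch of the FIR difference equation y[n] = sum_k b[k]*x[n-k]; a reference ("golden") model of this kind is commonly used to cross-check the output of a hardware filter, but the coefficients and test signal below are purely illustrative and are not those of the reported design.

# Software reference sketch of a direct-form FIR filter.
import numpy as np

def fir_filter(x, b):
    """Each output sample is a weighted sum of the current and previous
    input samples, with weights given by the filter coefficients b."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k, coeff in enumerate(b):
            if n - k >= 0:
                y[n] += coeff * x[n - k]
    return y

b = [0.25, 0.25, 0.25, 0.25]               # simple 4-tap moving-average coefficients
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])    # unit impulse
print(fir_filter(x, b))                     # impulse response equals the coefficients
# Equivalent to np.convolve(x, b)[:len(x)]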
Image segmentation is a topic of paramount importance in our society; it finds applications ranging from computer vision to medical image analysis, robotic perception, video surveillance and many others. Currently, various algorithms are deployed for the semantic segmentation task, and the most common are deep learning models such as convolutional neural networks (CNNs). In this report we present, implement and study three different algorithms to perform semantic segmentation of aerial images under the constraints of limited data and few classes. The first approach is a fully convolutional neural network (FCNN) designed by us, taking inspiration from U-Net. The second approach adapts the pretrained Xception classifier using transfer learning and fine-tuning (XTFT). The third and last approach is a Random Forest classifier (RF). The models are trained on the same dataset and in the same environment (same system specifics). Thanks to this, we provide a complete comparison of the three models, showing that the best approach in our case is an FCNN with a contained number of parameters.
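To make the transfer-learning idea concrete, the sketch below shows one possible way (not the authors' exact XTFT architecture) to adapt a pretrained Xception backbone to per-pixel classification in Keras: the classification head is removed, the backbone is frozen, and a small segmentation head is added. The input size, number of classes and the head itself are assumptions for illustration.

# Illustrative transfer-learning sketch: pretrained Xception backbone + per-pixel head.
import tensorflow as tf
from tensorflow.keras import layers, models

N_CLASSES = 6                               # assumed small label set
backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(256, 256, 3))
backbone.trainable = False                  # transfer learning: freeze pretrained weights

inputs = layers.Input(shape=(256, 256, 3))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = backbone(x, training=False)             # (8, 8, 2048) feature map
x = layers.Conv2D(N_CLASSES, 1, activation="softmax")(x)
outputs = layers.UpSampling2D(size=32, interpolation="bilinear")(x)  # back to pixel resolution
model = models.Model(inputs, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Fine-tuning step: after the head has converged, unfreeze the backbone and
# retrain with a much smaller learning rate, e.g.:
# backbone.trainable = True
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
#               loss="sparse_categorical_crossentropy")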