Valentina Lenarduzzi
  • Tampere

Code smells and architectural smells (also called bad smells) are symptoms of poor design that can hinder code understandability and decrease maintainability. Several bad smells have been defined in the literature for both generic and specific architectures. However, cloud-native applications based on microservices can be affected by other types of issues. In order to identify a set of microservice-specific bad smells, researchers collected evidence of bad practices by interviewing 72 developers with experience in developing systems based on microservices. Then, they classified the bad practices into a catalog of 11 microservice-specific bad smells frequently considered harmful by practitioners. The results can be used by practitioners and researchers as a guideline to avoid experiencing the same difficult situations in the systems they develop. Microservices are currently enjoying increasing popularity and diffusion in industrial environments, being adopted by several big players such as Amazon, LinkedIn, Netflix, and SoundCloud. Microservices are relatively small and autonomous services that work together, are modeled around a business capability, and have a single and clearly defined purpose [1, 2]. Microservices enable independent deployment, allowing small teams to work on separate and focused services, using the most suitable technologies for their job, that can be deployed and scaled independently [1, 2]. Microservices are a relatively new architectural style, and several patterns and platforms such as nginx (www.nginx.org) and Kubernetes (kubernetes.io) exist on the market. During the migration process, practitioners often face common problems, which are due mainly to their lack of knowledge regarding bad practices and patterns [3, 4]. In this article, we provide a catalog of bad smells that are specific to systems developed using a microservice architectural style, together with possible solutions to overcome these smells. To produce this catalog, we surveyed and interviewed 72 experienced developers over the course of two years, focusing on bad practices they found during the development of microservice-based systems and on how they overcame them. We identified a catalog of 11 microservice-specific bad smells by applying an open and selective coding procedure [5] to derive the smell catalog from the practitioners' answers. The goal of this work is to help practitioners avoid these bad practices altogether or deal with them more efficiently when developing or migrating monoliths to microservice-based systems. As with code and architectural smells, which are patterns commonly considered symptoms of bad design [1, 6], we define microservice-specific bad smells (called "microservice smells" hereafter) as indicators of situations, such as undesired patterns, antipatterns, or bad practices, that negatively affect software quality attributes such as understandability, testability, extensibility, reusability, and maintainability of the system under development.
Context. In recent years, smells, also referred to as bad smells, have gained popularity among developers. However, it is still not clear how harmful they are perceived to be from the developers' point of view. Many developers talk about them, but only a few know what they really are, and even fewer really take care of them in their source code. Objective. The goal of this work is to understand the perceived criticality of code smells both in theory, when reading their description, and in practice. Method. We executed an empirical study as a differentiated external replication of two previous studies. The studies were conducted as surveys involving only highly experienced developers (63 in the first study and 41 in the second one). First, the perceived criticality was analyzed by presenting the descriptions of the smells; then, different pieces of code infected by the smells were presented; finally, the participants' ability to identify the smells in the analyzed code was tested. Results. To the best of our knowledge, this is the largest study so far investigating the perception of code smells with professional software developers. The results show that developers are very concerned about code smells in theory, nearly always considering them as harmful or very harmful (17 out of 23 smells). However, when they were asked to analyze an infected piece of code, only a few infected classes were considered harmful, and even fewer were considered harmful because of the smell. Conclusions. The results confirm our initial hypotheses that code smells are perceived as more critical in theory than in practice.
Software maintenance has dramatically evolved in the last four decades to cope with continuously changing software development models and programming languages and to adopt increasingly advanced prediction models. In this work, we present the initial results of a Systematic Literature Review (SLR), highlighting the evolution of the metrics and models adopted in the last forty years.
Software and hardware development organizations that consider the adoption of new methods, techniques, or tools often face several challenges, namely, to guarantee process quality, reproducibility, and standard compliance. They need to compare existing solutions on the market, and they need to select the technologies that are most appropriate for each process phase, taking into account the specific context requirements. Unfortunately, this kind of information is usually not easily accessible; it is incomplete, scattered, and hard to compare. Our goal is to report on an empirical study with high-level practitioners, extending our previous work on a classification schema for development technologies in the avionic domain. We investigate the acceptance of the schema and possible improvements to it, with the aim of helping decision makers easily find, compare, and combine existing methods, techniques, and tools based on previous experience. The study was carried out with five technical leaders for the development of flight control systems from Liebherr-Aerospace Lindenberg GmbH, and the results show that the schema helps to transfer knowledge between projects, guaranteeing quality, reproducibility, and standard compliance.
[Background] The effort required to systematically collect historical data is not always allocable in agile processes, and historical data management is usually delegated to the developers' experience, i.e., developers need to remember previous project details. However, even if well trained, developers cannot precisely remember a huge number of details, resulting in wrong decisions being made during the development process. [Aims] The goal of this paper is to operationalize the Experience Factory in an agile way, i.e., to define a strategy for collecting historical project data using an agile approach. [Method] We provide a mechanism for understanding whether a measure must be collected or not, based on the Return on Invested Time (ROIT). In order to validate this approach, we instantiated the factory with an exploratory case study, comparing four projects that did not use our approach with one project that used it after 12 weeks out of 37 and two projects that used it from the beginning. [Results] The proposed approach helps developers to constantly improve their estimation accuracy, with a very positive ROIT of the collected measure. [Conclusions] From this first experience, we can conclude that the Experience Factory can be applied effectively to agile processes, supporting developers in improving their performance and reducing potential decision mistakes.
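The abstract does not spell out how the ROIT check is computed; the sketch below is one plausible way to operationalize it, assuming ROIT is simply the time saved thanks to a measure divided by the time invested in collecting it. The record fields, threshold, and example numbers are illustrative assumptions, not the paper's actual definition.

from dataclasses import dataclass

@dataclass
class MeasureRecord:
    name: str
    collection_minutes: float   # time invested per sprint to collect the measure
    saved_minutes: float        # estimated time saved thanks to the measure (e.g., fewer re-estimations)

def roit(record: MeasureRecord) -> float:
    """Return on Invested Time: time saved per unit of time invested."""
    if record.collection_minutes == 0:
        return float("inf")
    return record.saved_minutes / record.collection_minutes

def should_keep_collecting(record: MeasureRecord, threshold: float = 1.0) -> bool:
    """Keep collecting a measure only if it pays back at least the time it costs."""
    return roit(record) >= threshold

# Example: a velocity measure costing 15 minutes per sprint that saves about 40 minutes of re-planning.
velocity = MeasureRecord("sprint velocity", collection_minutes=15, saved_minutes=40)
print(roit(velocity), should_keep_collecting(velocity))   # 2.67 True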
Context: Eliciting requirements from customers is a complex task. In Agile processes, the customer talks directly with the development team and often reports requirements in an unstructured way. The requirements elicitation process is up to the developers, who split it into user stories by means of different techniques. Objective: We aim to compare the requirements decomposition process of an unstructured process and three Agile processes, namely XP, Scrum, and Scrum with Kanban. Method: We conducted a multiple case study with a replication design, based on the project idea of an entrepreneur, a designer with no experience in software development. Four teams developed the project independently, using four different development processes. The requirements were elicited by the teams from the entrepreneur, who acted as product owner and was available to talk with the four groups during the project. Results: The teams decomposed the requirements using different techniques, based on the selected development process. Conclusion: Scrum with Kanban and XP resulted in the most effective processes from different points of view. Unexpectedly, decomposition techniques commonly adopted in traditional processes are still used in Agile processes, which may reduce project agility and performance. Therefore, we believe that decomposition techniques need to be addressed to a greater extent, both from the practitioners' and the research points of view.
[Context]: Communication plays an important role in any development process. However, communication overhead has rarely been compared among development processes. [Objective]: The goal of this work is to compare the communication overhead and the different channels applied in three agile processes (XP, Scrum, Scrum with Kanban) and in an unstructured process. [Method]: We designed an empirical study asking four teams to develop the same application with the four development processes, and we compared the communication overhead among them. [Results]: As expected, face-to-face communication is the channel most frequently employed in the teams. Scrum with Kanban turned out to be the process that requires the least communication. Unexpectedly, despite requiring much more time to develop the same application, the unstructured process required communication overhead (25% of the total development time) comparable to that of the agile processes.
Background. Function Point Analysis is the most used technique for sizing software functional specifications. Function Point measures are widely used to estimate the effort needed to develop software, hence the cost of software. However, Function Point Analysis adopts the point of view of the end user and, consistently, considers a software application as a whole. This approach does not allow for assessing the role of reusable components in software development. In fact, reusing available components decreases the cost of software development, but standard Function Point measures are not able to account for the savings deriving from component reuse. Objective. We aim at modifying the definition of Function Point Analysis so that the role of components can be taken into account. More specifically, we redefine the measurement so that when no components are used the resulting measure is the same yielded by the standard measurement process, but in the presence of components, our modified measure is less than the standard measure (the bigger the role of components, the smaller the measure). Method. Components partly support the realization of elementary processes. Therefore, we split elementary processes into sub-processes, such that each sub-process is either totally supported by a component or not supported at all by any component; the size of the elementary process is defined to be inversely proportional to the size of the sub-processes supported by components. Results. The proposed approach was applied to a Web application, which was developed in two versions: one from scratch and one using available components. As expected, the 'component-aware' measures obtained are smaller than the standard measures. We also compared the reduction in size with the reduction in development effort. Conclusions. The proposed method proved effective in taking into account the usage of components in the development of the considered application. However, the observed decrease in size is smaller than the decrease in development effort. The latter result suggests that this initial proposal needs further experimentation to support accurate effort estimation.
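The abstract describes the discount only informally; the following is a minimal sketch of one way such a component-aware size could be computed, assuming each elementary process is split into weighted sub-processes and that the share supported by components is discounted by a fixed factor. The function name, weighting scheme, and discount factor are illustrative assumptions, not the measurement rules defined in the paper.

# Hypothetical sketch: discount the Function Point contribution of an elementary
# process according to how much of it is supported by reusable components.
# The discount scheme below is an illustrative assumption, not the paper's
# measurement rules or IFPUG counting practice.

def component_aware_size(standard_fp: float,
                         subprocess_weights: list[tuple[float, bool]],
                         reuse_discount: float = 1.0) -> float:
    """standard_fp: size of the elementary process under standard FPA.
    subprocess_weights: (relative_weight, supported_by_component) per sub-process;
    relative weights are assumed to sum to 1.
    reuse_discount: fraction of a supported sub-process that is not counted."""
    supported_share = sum(w for w, supported in subprocess_weights if supported)
    return standard_fp * (1.0 - reuse_discount * supported_share)

# An elementary process worth 6 FP where 40% of its sub-processes are realized by
# an existing component shrinks to 3.6 FP; with no component support it stays at 6 FP.
print(component_aware_size(6, [(0.6, False), (0.4, True)]))   # 3.6
print(component_aware_size(6, [(1.0, False)]))                # 6.0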
Context: One of the most important steps of the Lean Startup methodology is the definition of the Minimum Viable Product (MVP), needed to start the learning process by integrating the early adopters' feedback as soon as possible. Objective: This study aims at identifying the common definitions of MVP proposed in the literature and the key factors identified to help entrepreneurs efficiently define their MVP, reducing errors due to unconsidered unknown factors. Method: We identified the MVP definitions and key factors by means of a systematic mapping study, defining the research questions and the protocol to be used. We selected the bibliographic sources, the keywords, and the selection criteria for searching the relevant papers. Results: We found 97 articles and, through inclusion and exclusion criteria, removed 75 articles, which reduced the total to 22 at the end of the process. The results are a classification schema for characterizing the definition of Minimum Viable Product in Lean Startups and a set of common key factors identified in the MVP definitions. Conclusion: The identified key factors are related to technical characteristics of the product as well as market and customer aspects. We found a positive improvement of the state of the art regarding the MVP and the definition of Minimum Viable Product.
In SCRUM projects, effort estimations are carried out at the beginning of each sprint, usually based on story points. The usage of functional size measures, specifically selected for the type of application and development conditions, is expected to allow for more accurate effort estimates. The goal of the work presented here is to verify this hypothesis, based on experimental data. The association of story measures with actual effort and the accuracy of the resulting effort model were evaluated. The study shows that the developers' estimates are more accurate than those based on functional measurement. In conclusion, our study shows that easy-to-collect functional measures do not help developers improve the accuracy of effort estimation in Moonlight SCRUM.
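For context, a common way to compare the two kinds of estimates in studies like this is the Mean Magnitude of Relative Error (MMRE); the sketch below shows such a comparison on made-up numbers. The data, the one-parameter size model, and the variable names are purely illustrative and are not taken from the study.

# Illustrative accuracy comparison: MMRE of expert estimates versus a simple
# linear model built on a functional size measure. The numbers are invented
# for illustration; they are not the study's data.

def mmre(actual: list[float], estimated: list[float]) -> float:
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

actual_effort   = [21.0, 34.0, 13.0, 55.0]   # person-hours per sprint item
expert_estimate = [22.0, 33.0, 12.0, 56.0]   # planning-poker estimates
size_measure    = [10.0, 18.0, 8.0, 30.0]    # functional size of each item

# Naive one-parameter model: effort = productivity * size, fitted by least squares.
productivity = sum(a * s for a, s in zip(actual_effort, size_measure)) / sum(s * s for s in size_measure)
model_estimate = [productivity * s for s in size_measure]

print(f"expert MMRE: {mmre(actual_effort, expert_estimate):.2f}")
print(f"model  MMRE: {mmre(actual_effort, model_estimate):.2f}")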
[Context]: Unhandled code exceptions are often the cause of a drop in the number of users. In the highly competitive market of Android apps, users commonly stop using applications when they find some problem generated by unhandled exceptions. This is often reflected in a negative comment in the Google Play Store, and developers are usually not able to reproduce the issue reported by the end users because of a lack of information. [Objective]: In this work, we present an industrial case study aimed at prioritizing the removal of bugs related to uncaught exceptions. Therefore, we (1) analyzed crash reports of an Android application developed by a public transportation company, (2) classified the uncaught exceptions that caused the crashes, and (3) prioritized the exceptions according to their impact on users. [Results]: The analysis of the exceptions showed that seven exceptions generated 70% of the overall errors and that it was possible to solve more than 50% of the exception-related issues by fixing just six Java classes. Moreover, as a side result, we discovered that the exceptions were highly correlated with two code smells, namely "Spaghetti Code" and "Swiss Army Knife". The results of this study helped the company understand how to better focus their limited maintenance effort. Additionally, the adopted process can be beneficial for any Android developer in understanding how to prioritize the maintenance effort.
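The sketch below illustrates the kind of crash-report triage described above: group uncaught exceptions by type and by the application frame where they were raised, then rank the groups by how many crashes they explain. The record format, field names, and example values are assumptions for illustration, not the company's actual crash-report schema.

# Illustrative triage of crash reports: group by (exception type, top frame)
# and rank by frequency, printing the cumulative share of crashes explained.
from collections import Counter

crash_reports = [
    {"exception": "NullPointerException", "top_frame": "TimetableActivity.onResume"},
    {"exception": "NullPointerException", "top_frame": "TimetableActivity.onResume"},
    {"exception": "IllegalStateException", "top_frame": "MapFragment.onDestroyView"},
    {"exception": "NullPointerException", "top_frame": "TicketAdapter.getView"},
    {"exception": "NetworkOnMainThreadException", "top_frame": "StopLoader.load"},
]

by_signature = Counter((r["exception"], r["top_frame"]) for r in crash_reports)
total = sum(by_signature.values())

cumulative = 0
for (exception, frame), count in by_signature.most_common():
    cumulative += count
    print(f"{exception:<30} {frame:<30} {count:>3} crashes "
          f"({100 * cumulative / total:.0f}% cumulative)")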
Context: Several companies, particularly Small and Medium Sized Enterprises (SMEs), often face software maintenance issues due to the lack of Software Quality Assurance (SQA). SQA is a complex task that requires a lot of effort and expertise, often not available in SMEs. Several SQA models, including maintenance prediction models, have been defined in research papers. However, these models are commonly defined as "one-size-fits-all" and are mainly targeted at the big industry, which can afford software quality experts who undertake the data interpretation tasks. Objective: In this work, we propose an approach to continuously monitor the software operated by end users, automatically collecting issues and recommending possible fixes to developers. The continuous exception monitoring system will also serve as a knowledge base to suggest a set of quality practices to avoid (re)introducing bugs into the code. Method: First, we identify a set of SQA practices applicable to SMEs, based on the main constraints of such companies. Then, we identify a set of prediction techniques, including regressions and machine learning, keeping track of bugs and exceptions raised by the released software. Finally, we provide each company with a tailored SQA model, automatically obtained from the company's bug/issue history. Developers are then provided with the quality models through a set of plug-ins for integrated development environments, which suggest a set of SQA actions that should be undertaken in order to maintain a certain quality level and to remove the most severe issues with the lowest possible effort. Conclusion: The collected measures will be made available as a public dataset, so that researchers can also benefit from the project's results. This work is developed in collaboration with local SMEs and with existing Open Source projects and communities.
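As a rough illustration of the monitoring-and-recommendation idea, the sketch below matches exceptions observed in production against a company's own issue history and suggests previously applied fixes or quality practices. The knowledge-base format, the exception names, and the suggested actions are illustrative assumptions, not the tooling proposed in the paper.

# Hypothetical sketch: a tailored knowledge base built from a company's issue
# history maps exception types to quality actions that worked in the past.

fix_history = {
    "NullPointerException": ["add null checks / use Optional", "enable the related static analysis rule"],
    "SQLException": ["wrap DB access in retry logic", "add an integration test for schema changes"],
}

def recommend(exception_type: str) -> list[str]:
    """Suggest quality actions for a newly observed exception type."""
    return fix_history.get(exception_type, ["open an issue and add a regression test"])

# Simplified monitoring loop: each incoming production exception yields suggestions.
incoming = ["NullPointerException", "ArrayIndexOutOfBoundsException"]
for exc in incoming:
    print(exc, "->", recommend(exc))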
In recent years, cloud-native architectures have emerged as a target platform for the deployment of microservice architectures. The migration of existing monoliths into cloud-native applications is still in an early phase, and only a few companies have already started their migration. Therefore, success and failure stories about different approaches are not yet available in the literature. This context also connects to the recently discussed DevOps approach, where development and continuous deployment are closely linked.
Open Source Software (OSS) communities do not often invest in marketing strategies to promote their products in a competitive way. The web pages of OSS products are the main communication channel with potential users, and they should act as a product's shopping window. However, even the home pages of well-known OSS products show technicalities and details that are not relevant to the vast majority of users. As a result, end users, and even developers who are interested in evaluating and potentially adopting an OSS product, are often negatively impressed by the product's web portal and turn to proprietary software solutions or fail to adopt OSS that may be useful in their activities.
Software development effort estimation is a very important issue in software engineering, and several models have been defined to this end. In this paper, we carry out an empirical study on the estimation of software development effort broken down by phase, so that estimation can be used along the software development lifecycle. More specifically, our goal is twofold: at any given point in the software development lifecycle, we estimate the effort needed for the next phase, and we also estimate the effort for the remaining part of the software development process. Our empirical study is based on historical data from the ISBSG database. The results show a set of statistically significant correlations between: (1) the effort spent in one phase and the effort spent in the following one; (2) the effort spent in a phase and the remaining effort; (3) the cumulative effort up to the current phase and the remaining effort. However, the results also show that these estimation models come with different degrees of goodness of fit. Finally, including further information, such as the functional size, does not significantly improve estimation quality.
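The sketch below illustrates the simplest form of such a phase-to-phase model: a linear regression of the next phase's effort on the effort already spent, fitted on historical projects. The phase names and effort values are made up for illustration; they are not taken from the ISBSG database.

# Illustrative phase-wise estimation: fit effort(build) as a linear function
# of effort(design) over historical projects, then apply it to a new project.
import statistics

# Historical projects: (design effort, build effort) in person-hours (invented values).
history = [(120, 340), (80, 260), (200, 610), (150, 420), (95, 280)]

xs = [d for d, _ in history]
ys = [b for _, b in history]
mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
slope = sum((x - mean_x) * (y - mean_y) for x, y in history) / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def estimate_build_effort(design_effort: float) -> float:
    """Estimate the effort of the next phase from the effort of the current one."""
    return intercept + slope * design_effort

print(f"build = {intercept:.1f} + {slope:.2f} * design")
print(estimate_build_effort(130))   # estimated build effort for a new project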
Reliability is a very important non-functional aspect of software systems and artefacts. In the literature, several definitions of software reliability exist, and several methods and approaches exist to measure the reliability of a software project. However, no works focus on the applicability of these methods in all the development phases of real software projects. In this paper, we describe the methodology we adopted during the S-CASE FP7 European Project to predict reliability both for the S-CASE platform and for the software artefacts automatically generated by using the S-CASE platform. Two approaches have been adopted to compute reliability: the first one is the ROME Lab Model, a traditional approach widely adopted in industry; the second one is an empirical approach defined by the authors in a previous work. An extensive dataset of results has been collected during all the phases of the project. The two approaches can complement each other to support the prediction of reliability during all the development phases of a software system, in order to facilitate project management from a non-functional point of view.
Effort estimation is often influenced by several factors, including social ones. This study aims at understanding the interactions between social factors and effort during effort estimation. I want to analyze the dynamics that occur when a developer estimates the effort for a specific task and the influence of the work team and the work conditions. I conducted semi-structured interviews across three different projects with different developers working in Agile and Scrum processes, asking them which factors and social aspects they take into account when they estimate effort during the development process. Results show an important influence of social factors during the effort estimation phase and call for future work on a large-scale survey for a more accurate identification of these factors.
In Italy, the adoption of modern software technologies is strongly limited by the current critical economic situation. The adoption of Open Source Software (OSS) solutions can mitigate this problem, because the nature of OSS products helps to cut down on costs by providing modern and flexible products. However, very often school managers and teachers are unaware of the availability of powerful OSS tools that can support them in their daily work.
To help developers during Scrum planning poker, in our previous work we ran a case study on a Moonlight Scrum process to understand whether it is possible to introduce functional size metrics to improve estimation accuracy and to measure the accuracy of expert-based estimation. The results of that original study showed that expert-based estimations are more accurate than those obtained by means of models calculated with functional size measures. To validate the results and to extend them to plain Scrum processes, we replicated the original study twice, applying an exact replication to two plain Scrum development processes. The results of this replicated study show that the effort estimated by the developers is very accurate, and more accurate than that obtained through functional size measures. In particular, SiFP and IFPUG Function Points have low predictive power and thus do not help to improve estimation accuracy in Scrum.
Entrepreneurs and Small and Medium Enterprises usually have issues in developing new prototypes, developing new ideas, or testing new techniques. To help them, in recent years academic Software Factories, a new concept of collaboration between universities and companies, have been developed. Software Factories provide a unique environment for students and companies. Students benefit from the possibility of working in a real work environment, learning how to apply the state of the art of existing techniques and showing their skills to entrepreneurs. Companies benefit from a risk-free, protected environment where they can develop new ideas. Universities, finally, benefit from this setup as a perfect environment for empirical studies in an industry-like setting. In this paper, we present the network of academic Software Factories in Europe, showing how companies have already benefited from existing Software Factories and reporting success stories. The results of this paper can help grow the network of factories and help other universities and companies set up similar environments to boost the local economy.
Software Quality Assurance is a complex and time-consuming task. In this study, we want to observe how agile developers react to just-in-time metrics about the code smells they introduce, and how the metrics influence the quality of the output.
Context: SMEs cannot always afford the effort required for software quality assurance; therefore, there is a need for easy and affordable practices to prevent issues in the software they develop. Objective: In this paper, we propose an approach to allow SMEs to access SQA practices, using an SQA approach based on continuous issue and error monitoring and a recommendation system that suggests quality practices, recommending a set of quality actions based on the issues that previously caused errors, so as to help SMEs maintain quality above a minimum threshold. Method: First, we aim to identify a set of SQA practices applicable in SMEs, based on the main constraints of SMEs, and a set of tools and practices to fulfill a complete DevOps pipeline. Second, we aim to define a recommendation system to provide software quality feedback to micro-teams, suggesting which action(s) they should take to maintain a certain quality level and allowing them to remove the most severe issues with the lowest possible effort. Our approach will be validated by a set of local SMEs. Moreover, the tools developed will be published with an Open Source license.