
Industry–Academia Research Collaboration and Knowledge Co-creation: Patterns and Anti-patterns

Published: 07 March 2022

Abstract

Increasing the impact of software engineering research in the software industry and society at large has long been a high-priority concern for the software engineering community. The problem of two cultures, research conducted in a vacuum (disconnected from the real world), and misaligned time horizons are just some of the many complex challenges standing in the way of successful industry–academia collaborations. This article reports on the experience of research collaboration and knowledge co-creation between industry and academia in software engineering as a way to bridge the research–practice collaboration gap. Our experience spans 14 years of collaboration between researchers in software engineering and the European and Norwegian software and IT industry. Using the participant observation and interview methods, we collected and subsequently analyzed an extensive record of qualitative data. Drawing upon these findings and the experience gained, we provide a set of 14 patterns and 14 anti-patterns for industry–academia collaborations, intended to support other researchers and practitioners in establishing and running research collaboration projects in software engineering.

1 Introduction

The relevance of software engineering (SE) research and its potential to make a meaningful impact for industry practitioners has been an ongoing concern in the SE community. Bridging the industry–academia (IA) collaboration gap is a long-standing ambition of the SE community, recently given renewed attention by some of the prominent researchers in the field. For example, Briand acknowledges the limited impact of SE research on practice [7], discussing some of the root causes of this situation, such as limited focus on real engineering problems and a flawed reward system that gives most credit to publication metrics. He also argues that there is a large disconnect between research and practice in SE [4]: because research is not grounded in a real-world setting, its output is often neither applicable nor scalable. To overcome this problem, he advocates context-driven research [8], meaning that research must focus on problems driven by concrete needs in specific domains in order to be impactful. In a similar vein, Shneiderman [58] believes that a problem-oriented approach to research, which incorporates both theoretical developments and validated solutions ready for deployment, is the key to bridging the IA gap. Based on four decades of experience in the software industry and SE research, Selic [53] further claims that transferring the output of software research into useful industrial products often fails; when research and its productization are not intertwined processes, both the research and the productization are likely to fail.
Many existing approaches to implementing IA collaboration are based on some form of a technology or knowledge transfer process, such as Gorschek's model for technology transfer in practice [23]. A typical technology transfer process assumes that the research problem and its solution are (primarily) created by researchers and then transferred to practitioners. We, however, take a different stance on IA collaborations. We believe that research knowledge co-creation is key to successful IA collaboration. We define co-creation as the process of participative value creation, in which industry and academia actively participate in problem definition and solving, aiming to develop more relevant solutions for all participating parties and thus reducing the risk of the research collaboration failing. In this article, we report on our 14-year experience of applying research knowledge co-creation to IA collaboration in SE. Our experience is derived from and validated in three different environments. The first is an 8-year research collaboration project between a research lab in SE and the Norwegian software industry. The second is a 5-year collaboration between a Norwegian start-up company and several international academic partners, conducting industry-relevant research. The third is an ongoing interdisciplinary collaboration in the context of a European H2020 project, involving 12 partners from across Europe. Using a combination of data collection and analysis methods, we have collected and analyzed an extensive record of qualitative data on research knowledge co-creation. We synthesize this knowledge in the form of 28 patterns and anti-patterns, as a set of good and bad solutions to the challenges of IA collaboration in SE. Compared to existing work in this area, our patterns and anti-patterns are presented as reusable solutions that describe the exact problem they address and its circumstances, a potential negative solution together with how to recognize it and understand its implications, and a working solution and its benefits.

2 Background

IA collaborations present opportunities to yield benefits for both participating parties, such as increased access to new knowledge for industry and a validation context for scientific results for researchers [32]. However, building scalable and effective research collaborations between industry and academia in SE has proved to be a substantial challenge [5]. Next, we review some of these challenges, followed by lessons learned on IA collaboration.

2.1 Challenges in Industry–Academia Collaborations

Challenges in IA collaborations exist on both sides of the collaboration. Chimalakonda [11] notes the trend whereby most practitioners believe that researchers work on dated or futuristic theoretical challenges that are divorced from industrial practice, while researchers believe that practitioners are looking for quick fixes instead of using systematic methods. Researchers seem to be more interested in proposing new techniques and tools, focusing on technical novelty, while practitioners are interested in solutions that work in their context, regardless of novelty. Such views often make researchers and practitioners seem like separate islands: on one side there are practitioners who do not see much benefit in research results, and on the other side there is the “ivory tower” of academia. Runeson [51] draws attention to the challenge of the different time horizons of IA collaboration projects, which are generally shorter in industry than in academia. This circumstance, as he observes, can cause friction and frustration on both sides of the collaboration. Bern [22] further argues that one of the reasons for the disconnect between research and practice lies in how difficult it is for practitioners to make use of the knowledge produced by academic researchers. Academic knowledge is typically disseminated through scientific papers, which many practitioners find difficult to absorb. He suggests that in addition to disseminating research results as scientific papers, researchers could use blogs, videos, and interactive posters to engage and interact with practitioners. Ivanov [27] gives more examples of gaps between research and practice, indicating a wide disconnect between research topics discussed at academic conferences and the problems that practitioners are looking to solve.
A panel at the International Conference on Software Engineering 2011, titled “What industry wants from research,” tried to identify key challenges and ways forward for bringing theory and practice closer together. Some of the panelists pointed out that potential reasons for the gap between the two lie in the constrained scalability of research solutions, researchers' limited interest in industry needs compared to publications, and insufficient attention given to the integration of research into practice. To reduce the gap between industry and academia, suggestions were made for industry to tell researchers their real problems and share their case studies, and for researchers to work on improving the usability of tools and methods and to consider how to improve value, especially earnings.
Furthermore, Wohlin [62] analyzes success factors for two IA collaboration projects in SE in Sweden and Australia, concluding that the industrial side of the collaboration is the key element for success. He also discusses the top 10 challenges for IA collaborations, some of which include trust and respect, roles and goals, and knowledge exchange instead of technology transfer, which we have also seen as important in our collaboration projects. In addition, he proposes five levels of closeness between industry and academia [13], ranging from least to most close: not in touch, hearsay, sales pitch, offline, and one team.
We have observed many of these (and some other) challenges in our IA collaboration experience. In the collection of patterns and anti-patterns presented in this article, we discuss potential solutions to such challenges. In Table 7, we summarize which of our proposed solutions are relevant for these (and other) challenges reported in the context of IA collaboration projects.

2.2 Lessons Learned on Industry–Academia Collaboration

Lessons learned from IA collaborative projects are often reported in the literature. For example, Sandberg [1] reports lessons learned in a long-term IA collaboration, describing success factors enabling research activities and results, including continuity, communication ability, industry goal alignment, and deployment impact. They target collaborative practice research (CPR), a collaboration practice relying on action research, experiments, and practice studies. While CPR is similar to the collaboration practices explored in our work, their experience involves only one company and an academic research institute, whereas our experience stems from a much larger context, involving five academic groups and 17 industry partners in total. Moreover, while that study presents insightful success factors for IA collaboration projects, applying these success factors requires a systematic description of the context and problems they are useful for, as provided in our study. Dittrich [63] summarizes the experience of applying the Cooperative Method Development (CMD) approach to IA research projects. In their experience, action research, as one of the applied principles, consists of three phases: understanding, deliberating change, and implementation and evaluation of improvement. While these three phases can generally be considered part of the collaboration framework we applied in our collaboration contexts, one of the differences with CMD is that we also focus on the importance of building a one-team culture, in which participants from industry and academia equally take part in all stages of research knowledge creation, deployment, and use. While CMD aims to answer the questions of how software development practitioners tackle their everyday work and how methods, processes, and tools can be improved to address the problems experienced by practitioners, we additionally aim to uncover how to involve practitioners, as the final users of research knowledge, in the knowledge generation process. Marijan [36] reports the Certus collaboration model, consisting of seven elements of IA collaboration that enable a culture of participative knowledge creation. These elements include problem scoping, knowledge conception, knowledge and technology development, knowledge and technology transfer, knowledge and technology exploitation, organizational adoption, and market research. While this model provides a systematic way of structuring the different phases of collaboration, thus hopefully increasing its success, we further see a need to provide a catalog of patterns and anti-patterns that researchers and practitioners can use for the wide range of practical challenges occurring at different phases of a collaboration. Barroca [3] presents lessons learned from three agile projects with industry. These lessons include building trust, having written agreements in place, flexibility, providing outputs tailored to different audiences, cash investment from industry, having relevant research expertise, and regular contact with industry. The concept of agile research targeted in their work is similar to our collaboration practices. However, their work reports overall lessons learned from applying agile research in three case studies, without a special focus on explaining how specific collaboration challenges can be addressed by specific solutions. Mathiassen [44] summarizes experiences from an IA collaboration focusing on goals, approaches, and results.
He underlines the importance of combining action research, experiments, and conventional practice to balance relevance and rigor in IA collaborative research. While this work provides useful findings and lessons learned, these findings lack evaluation in a longer and larger context. Moreover, no mapping is provided between the lessons and the specific collaboration challenges they address. Sjoo [59] applies a systematic literature review to identify general factors enabling collaborative innovation between industry and academia across a project timeframe. These factors include resources, Intellectual Property Rights (IPR), boundary-spanning functions, prior collaborative experience, culture, status (reputation), and environmental factors (geographical proximity). Some of these factors, such as IPR, experience, and culture, have proved essential in our experience. Based on a large number of reported IA collaboration projects in SE, Garousi [18, 19] synthesizes a comprehensive list of 63 collaboration challenges and 127 best practices applicable to these challenges. This is, to date, the most comprehensive review and mapping between various reported challenges and their potential solutions. While we observe many of these 63 challenges in our collaboration experience, our work also presents several novel challenges. There are also a few challenges summarized in [18] that we did not encounter in our experience (see Table 7). In Table 7, we provide an extensive comparison of our presented patterns and anti-patterns with the best practices synthesized in [18], relative to the set of challenges coming both from our experience and identified in [18]. From this table, we can also see which challenges are unique to our experience, as well as solutions to particular challenges that differ from previously reported solutions for the same challenges. In another two studies by Garousi [17, 20], the authors analyze 10 projects in two countries they participated in [17] and survey 101 external projects in 21 countries [20] to investigate the relative impact of the collaboration challenges and related solutions, as reported in [18], on the projects they study. Different from these studies [17, 18, 20], our work presents a collection of reusable solutions, explaining when to use and when not to use a particular solution. The reusability is achieved by describing the problem and its causes, the context and circumstances under which the problem is applicable, the negative solution (a solution that seems good but is actually ineffective), the effective solution that works within the scope of the given context, the benefits and limitations of the solution, and the consequences and related solutions (see Figure 2). In another study [21], Garousi reports seven lessons learned on how to conduct successful IA collaborations, based on experience from 26 projects. The lessons are: working as one team (also recognized by [13]), identifying the research problem, ensuring practicality and applicability of approaches, conducting cost-benefit analysis of approaches, the need for maturity of research prototype tools, encouraging further adoption, and long-term benefits and benefits beyond the involved parties.
While providing useful insights, the description of the lessons does not discuss what specific problems the lessons address, nor does it mention alternative but ineffective ways of addressing a challenge, how one can recognize whether a lesson should be applied, or what the benefits and consequences of using a lesson are. What makes our study different from this and other existing work is a reusable presentation of solutions to IA collaboration challenges, which enables other researchers and practitioners to reuse the solutions in their context and avoid mistakes. Our findings originate from our own experience of IA collaboration projects, unlike studies [18] and [20], which report literature review and survey findings, respectively. Furthermore, while [20] recognizes only the industrial impact created by solutions produced in the collaboration, we also give importance to academic impact. In our experience, academic impact (in addition to industrial impact) proved to contribute to the commitment and sustainability of IA collaboration projects.

3 Research Knowledge Co-creation

IA collaboration projects often structure the collaboration practice using some form of a technology transfer process, such as [6, 24]. Technology transfer has traditionally been understood as “the process of transferring technology from the person or organization that owns or holds it to another person or organization.” We believe that such a practice does not bring forth the best results in IA collaborations. This is because transferring technology “from” an organization “to” an organization implies a disconnect between the two organizations, with few opportunities for aligning the goals of both parties for the solution being developed and for creating common ground with respect to the desired solution “-ilities” [53], denoting scalability, usability, dependability, integrability, evolvability, maintainability, and security, to name a few. Instead, we argue that research knowledge co-creation, where industry and academia actively participate in all phases of the collaboration project, from problem definition to solution exploitation, creates a culture of co-ownership and stands the best chance of creating benefits for both parties. More specifically, we define research knowledge co-creation, in the context of industry and academia collaboration in SE, as the process of collaboratively solving problems that are collaboratively defined by academic researchers and industry practitioners, in order to generate both scientific knowledge and innovation in the form of new methodologies and guidelines, software libraries/tools, publications, and patents. The fundamental tenet of co-creation is that ideas are created and improved together instead of individually on each side.
The culture of co-creation is graphically presented in Figure 1. A joint team is created per project, consisting of academic researchers and industry practitioners, who work together through all the stages of co-creation. At the beginning, intensive focus group discussions are held to thoroughly understand the industry context. The state-of-the-art is analyzed in relation to the observed industry practice, to identify any existing solution that can assist problem solving. Based on this knowledge, a joint problem is defined, which is redefined as industry practice changes and as more knowledge is produced in the collaboration. The outputs of co-creation are new tools and methodologies, patents, know-how, and co-authored academic publications, all of which, when applied to industry practice, can improve different aspects of it.
Fig. 1. Co-creation process.
While the co-creation concept is well known in the business context for customer relationship building [15, 26], it is not widely used in the IA collaboration context. Our motivation for using co-creation to organize IA collaboration is to involve industry practitioners, as the final users of research knowledge, in the knowledge generation process. As a result, they come to own the created knowledge instead of having to invest extra effort to absorb it from academic publications, which are often written in a form targeted towards researchers rather than average practitioners [22]. Furthermore, involving industry practitioners as problem owners in the problem definition process helps ensure that the relevant (right) problem is addressed and solved during the collaboration.

3.1 Relation to Other Forms of IA Collaboration

We contrast research knowledge co-creation, as considered in this article, with action research, innovation, and technology transfer.
Action research [2] is a well-known paradigm in the context of SE research for IA collaboration. Action research is intervention-focused: an action is introduced by a researcher with the goal of understanding the consequences of the action in the studied context. The researcher also participates in implementing the action and making observations of it. Action research is iterative, in the sense that the action is improved over multiple cycles. An analogy of action research in public health could be to administer a vaccine within a population and observe infection levels and health. The interventionist approach to research is the centerpiece of action research. One of the drawbacks of the interventionist mindset is that implementing the action and recording observations depend very much on the researcher since, as the word suggests, it is an intervention in the routine of a practitioner. Co-creation, on the other hand, does not start with an interventionist mindset. Instead, co-creation aims to generate commensurate value for all stakeholders in a team where the researcher works with a practitioner to create artifacts such as tools, publications, and patents. An intervention may also result in artifacts such as tools and publications, but the sponsoring thought behind an intervention is to intervene based on a predetermined hypothesis.
Innovation is the practical implementation of ideas that result in the introduction of new goods or services, or improvements in offering goods or services [52]. Innovation is not tied to a particular approach to implementation such as co-creation. However, co-creation can very well be a means to innovate or create innovative goods and services.
Technology transfer, as the name suggests, takes results and prototypes conceived in a research institute or a university and attempts to commercialize them through various instruments, such as licensing, spin-offs, and patents, often followed by a market study also financed in part by the government. The technology transfer office (TTO) at a research institute or university is responsible for technology transfer and other aspects of the commercialization of research. The starting point for co-creation, on the other hand, is to define a problem together, whose solution may lead to a product, instead of aiming to commercialize a preconceived prototype or approach. The ownership structure in co-creation is very different from that of a TTO: ideas from both research and practice, as well as the commercial value of research results, are debated from the very beginning. People involved in the co-creation process feel a sense of ownership and are given a fair chance to think about risk and resources at the time of conception. In the context of this article, co-creation is focused on generating SE research knowledge in the form of patents, publications, software tools, services, standards, and the like. In the case of a TTO, there is often a lack of a driving force, as the inventor, who is a researcher, may not be business savvy and may expect someone else to take the idea further into the market, keeping only a peripheral and supervisory involvement and a hands-off approach. One reason for this is the risk involved in taking leave from, or quitting, a stable academic position. Moreover, many research results from a university may not consider commercial value early in the process of conception. The TTO has an administrative role in the project, with the goal of securing legal rights to ideas and inventions for the university or research institute through selective investment in projects. Hence, TTOs are poorly suited to facilitate co-creation and foster mutually beneficial ownership structures between researchers and industry practitioners. A summary comparison of co-creation with the other three forms of IA collaboration is given in Table 1.
| | Co-creation | Action research | Innovation | Technology transfer |
| Implies how-to (the process of collaboration) | ✓ | ✓ | | |
| I and A involved in all stages of collaboration | ✓ | | Can be | |
| Based on reciprocity | ✓ | | Can be | |
| Iterative | ✓ | Can be | Can be | |
| Free of pre-determined hypothesis | ✓ | | Can be | |
| Knowledge creation and exploitation separated | | Can be | Can be | ✓ |

Table 1. Comparison of Different Industry (I) and Academia (A) Collaboration Methodologies

3.2 Output of Co-creation

The output of co-creation is manifested in different forms, including knowledge, products, services, and methods. Co-created knowledge is embodied as (a) know-how contributing to intangible benefits, such as learning and knowledge transfer, as well as increased innovation capacity of industry practitioners and the ability to absorb external scientific knowledge disseminated through academic publications, (b) publications with joint authorship between academia and industry, and (c) patents with some form of joint ownership between legal entities represented by industry and academia. Co-created products, in the context of SE, refer to software libraries, tools, and cloud-based services with documentation, released with a software license that both academia and industry agree to. For instance, a typical approach is to release an open-source community version of a software library or a tool and a closed-source customized version used and improved by industry practitioners. Co-created services are consulting, educational, and certification services created by IA collaboration partners. Such services may include requirements elicitation to fit solutions developed in co-creation to new contexts emanating from within the existing partnership or from new partners from outside the consortium. For instance, a service could be to systematically adapt a software tool for testing highly-configurable systems, which was initially developed for video systems, to testing robotic systems. Similarly, services could involve creation of evidence in the form of validation studies to support certification. Co-created methods are jointly developed systems or ways of doing something in the context of SE, such as requirements specification, software implementation, software architectural design, software evolution, and software validation and testing. The essence of co-creation here is to customize methods and their timeliness to the needs of both industry and academia.

4 Collaboration Contexts

Our experience of research knowledge co-creation stems from three IA collaboration projects. One is a Large-Scale and Long-Term (C1: LS-LT) collaboration, the second is a Small-Scale and Mid-Term (C2: SS-MT) collaboration, and the third is a Large-Scale and Mid-Term (C3: LS-MT) collaboration, henceforth referred to as C1, C2, and C3. The differences and similarities between these three collaborations are described in Table 2.
| | C1: LS-LT | C2: SS-MT | C3: LS-MT |
| Duration | 8 years | 5 years | 3.5 years |
| Number of partners | 8 | 4 | 12 |
| Domain | Software V&V | IoT | Industry 4.0 |
| Collaboration experience | Extensive | Little | Medium |
| Average team size | 5 persons | 5 persons | 6 persons |
| Team skillset | Computer scientist, software engineer, software tester, configuration engineer, product manager | Computer scientist, industrial designer, software developer, hardware engineer, physicist | Machine learning engineer, data quality expert, computer scientist, software architect, hardware engineer |

Table 2. Three Studied Collaboration Contexts

4.1 A Large-scale Long-term Collaboration Context

The large-scale long-term collaboration context (C1) is a Norwegian 8-year research-based innovation project that ran from 2011 to 2019. The project was hosted by Simula Research Laboratory (SRL), engaging an SE research group at SRL consisting of six senior researchers, six PhD students, and one research engineer. On the industrial side, the collaboration involved four large companies in the networking, robotics, and oil and gas industries, one Small and Medium-sized Enterprise (SME), and two public sector institutions. Table 3 summarizes the domains and key competences of all eight participating partners. Besides industry and academia, the project also involved the government, through the Research Council of Norway (RCN), forming a Triple Helix system [22]. The project was funded by the RCN and an in-kind investment from our industrial partners. The motivation for the industrial partners to engage in such a collaborative research project was the opportunity to access knowledge and resources they did not possess at the time, but which were essential for developing new software V&V tools and frameworks and improving existing ones. The motivation for the RCN to support such an IA collaborative project was to foster economic and social impact in Norway by improving software quality and thus the competitiveness of the Norwegian software industry and public sector services.
| Partner | Domain and competence | Type | Goals for collaboration |
| SIMULA | Research on software V&V | RI | Develop, validate, and deploy novel technologies for cost-effective software V&V |
| CISCO | Video communication software | LI | Reduce regression test costs with test optimization |
| ABB | Robot control software | LI | Reduce the cost of test selection and scheduling |
| FMC TECHNOLOGIES | Subsea control software | LI | Reduce the costs of system (re)configuration |
| KONGSBERG MARITIME | Subsea control software | LI | Improve the cost-effectiveness of safety analysis and certification |
| NORWEGIAN CUSTOMS | Customs accounting | PS | Automate non-regression testing. Improve the quality of tests |
| ESITO | SW tools for domain driven development | SME | Create new market opportunities with research-based innovation |
| NORGES KRAFTREGISTERT | Cancer registry system | PS | Transform a manual system into an ICT-based system, while ensuring system quality |

Table 3. C1: LS-LT Collaboration Context
Type: RI = research institute, LI = large industry, PS = public sector service, SME = small to medium enterprise.
The main research direction of the collaboration project was software testing and V&V, targeting real-time embedded software systems, highly-configurable software systems, and data-intensive software systems. The research output of the project consists of scientific publications, methodology and process guidelines, and software prototype tools and services. The project was structured into six scientific sub-projects, each dealing with a specific thematic area of the broader field of software V&V, and three governing sub-projects, dealing with overall project management and dissemination. Scientific sub-projects addressing V&V problems common to several partners cut across all partner domains where there was interest. In this way, we achieved increased interaction and communication between the partners around concrete problems and their technical solutions, which allowed common areas of interest to arise and create synergies.

4.2 A Small-scale Mid-term Collaboration Context

Our small-scale mid-term collaboration context involves an Internet of Things (IoT) startup Sweetzpot (SZ) and several academic partners. SZ develops a sensor to monitor breathing patterns from ribcage and abdominal incursions and excursions. SZ also develops web and mobile applications to interface with sensors to receive raw breathing data and extract relevant information such as breathing rate, respiratory flow, and breathing patterns of interest, such as apneas. At the point of data collection, SZ had about 12 employees with very diverse skills, consisting of computer scientists, a physicist, an industrial designer, software developers, a hardware engineer, along with people in sales and marketing. The partners in the collaboration are listed in Table 4, along with their competencies and interest areas.
| Partner | Domain and competence | Type | Goals for collaboration |
| SWEETZPOT | Sensor hardware and software | SME | Develop and test breathing sensor hardware and software applications for research |
| RITMO CENTER | Research on time and motion | RI | Research breathing rhythm during musical performance and listening |
| UNIVERSITY OF OSLO | Research on sleep apnea | U | Research detection of obstructive sleep apnea with low-cost sensors |
| UNIVERSITY OF LEEDS | Research on pollution | U | Research effect of pollution on respiratory flow |

Table 4. C2: SS-MT Collaboration Context
Type: SME = small to medium enterprise, RI = research institute, U = university.
The main research direction for the collaboration was to create knowledge and insight from sensor data. The motivation for SZ to engage in collaborative research projects with academic teams stems from the creation of mutual value in the form of (a) scientific validation of SZ's technology in diverse scenarios, (b) creation of open-source software for community building around SZ's technology, and (c) development and bi-directional transfer of skills between a startup and academic teams. The motivation for academia to participate in collaboration with a startup is supported by various funding instruments and encouraged by funding agencies. Such collaborations create skills necessary for improving the innovation capacity of companies and furthering the economic development of countries.
The scientific outputs of the collaboration are open-source code repositories, mobile apps for scientific data collection, mutual development of skills, and several scientific publications. Collaboration between some of the academic groups happened when results, such as open-source software or data quality studies, from one group were of relevance to another group. Data collection in this collaboration context was performed during a 5-year period, from 2015 to 2020.

4.3 A Large-scale Mid-term Collaboration Context

Our large-scale mid-term collaboration context is an interdisciplinary 40-month European project entitled Interlinked Process, Product and Data Quality framework for Zero-Defects Manufacturing (InterQ) that had its kick-off in 2020, during the COVID-19 pandemic. In this article, we discuss the collaborative context of a specific work package in the project, called InterQ-Data, which is concerned with improving data quality in Industry 4.0. The collaborative context involved partners from Spain, Italy, Norway, Greece, France, and Germany. The objective of the collaboration is to perform different tasks on sensor data obtained while observing a manufacturing process, such as in-motion data quality validation, historical data quality validation, erroneous data repair, and data quality as a service. Sensor data are acquired from Computer Numerical Control (CNC) milling, turning, and broaching machines, in addition to sensors in harsh environments, such as thermocouples in machine tool tips and acoustic emission sensors on a noisy shop floor. The data acquired from sensors at sub-microsecond sampling periods can have missing values due to connection losses, can be duplicated (a.k.a. time collision), and can be outside the operating range of a sensor due to high temperatures. The project needs to ensure high accuracy and completeness of the sensor data, as well as to perform erroneous data repair when low-quality data are detected.
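To make the kinds of quality checks named above concrete, the following sketch illustrates how completeness, time collisions, and range violations might be computed for a timestamped sensor signal. It is an illustration only, assuming hypothetical column names (timestamp, value), a hypothetical sampling period, and a hypothetical operating range; it is not the InterQ project's actual tooling.

```python
# Illustrative sketch of the data quality issues described above: missing samples,
# duplicate timestamps (time collisions), and out-of-range readings. Column names,
# the sampling period, and the valid range are assumptions made for this example.
import pandas as pd

def basic_quality_report(df: pd.DataFrame, expected_period_us: float,
                         valid_range: tuple[float, float]) -> dict:
    """Compute simple data quality indicators for a timestamped sensor signal."""
    low, high = valid_range
    # Completeness: compare received samples with the number expected for the
    # observed time span and nominal sampling period.
    span_us = (df["timestamp"].max() - df["timestamp"].min()).total_seconds() * 1e6
    expected = int(span_us / expected_period_us) + 1
    completeness = len(df) / expected if expected > 0 else 1.0

    # Time collisions: samples sharing the same timestamp.
    duplicates = int(df["timestamp"].duplicated().sum())

    # Range violations: readings outside the sensor's operating range.
    out_of_range = int(((df["value"] < low) | (df["value"] > high)).sum())

    return {"completeness": round(completeness, 3),
            "duplicate_timestamps": duplicates,
            "out_of_range_values": out_of_range}

# Example usage with synthetic data.
data = pd.DataFrame({
    "timestamp": pd.to_datetime(["2021-01-01 00:00:00.000001",
                                 "2021-01-01 00:00:00.000002",
                                 "2021-01-01 00:00:00.000002",   # time collision
                                 "2021-01-01 00:00:00.000005"]), # gap -> missing samples
    "value": [20.1, 20.3, 20.3, 950.0],                          # last value out of range
})
print(basic_quality_report(data, expected_period_us=1.0, valid_range=(-50.0, 600.0)))
```

A report of this kind can act as the trigger for the erroneous data repair step mentioned above: windows with low completeness or many out-of-range values become the candidates for repair.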
The project consists of both academic and industrial partners with complementary roles, with competence in SE, machine learning, artificial intelligence (AI), data quality management, and advanced manufacturing. Table 5 summarizes the domains and key competences of all 10 participating partners.
| Partner | Domain and competence | Type | Goals for collaboration |
| IDEKO S COOP | Advanced manufacturing | MI | In-motion data validation from machine tools |
| DNV GL AS | Data quality as a service | LI | Develop data quality as a service |
| SINTEF | IoT and AI systems | RI | Virtual sensing for erroneous data repair |
| RENAULT ESPANA SA | Cylinder heads for electronic cars | LI | Provide data from sensors during head milling |
| COMAU FRANCE SAS | Machine service provider | LI | Data acquisition from CNC machines |
| DANOBAT | Manufacturer of machine tools | LI | Provide machine tools for manufacturing |
| INLECOM | Digitization technologies | MI | Anomaly detection in sensor data |
| FUNDACION TEKNIKER | Advanced manufacturing research | ML | Achieve data quality in the energy sector |
| TU DARMSTADT | Data science in production | U | Uncertainty estimation |
| PREDICT SAS | Predictive maintenance in machining | MI | Temporal clustering of historical data from machining processes |

Table 5. C3: LS-MT Collaboration Context
Type: MI = medium-size industry, LI = large industry, RI = research institute, U = university.
One of the hallmarks of this collaboration is that, due to the COVID-19 pandemic, it has been forced to be fully remote, with all communication organized using the Microsoft Teams communication platform. This implies that partners do not physically meet each other or visit industrial sites.

5 Research Method

In this article, we are interested in answering the question “What practices to follow and what to avoid to enable successful IA collaboration in SE?” Given a certain IA collaboration challenge (e.g., industry practitioners not being committed to the collaboration), we are interested in knowing what practices are useful in avoiding or mitigating that challenge.
To answer this research question, we applied the participant observation and interview methods to collect a comprehensive record of qualitative data pertaining to three co-creation-based research collaborations spanning 14 years in total. We analyzed and correlated the collected data using a grounded theory approach, and observed that many of the findings are consistent across multiple projects. Consequently, we synthesized 28 patterns and anti-patterns for guiding the process of building and running successful IA collaborative projects in SE.

5.1 Data Collection

Our goal for data collection was to capture both positive and negative experiences of IA collaboration, and to find out what worked, what did not, and why, in different collaboration settings. We used a combination of research methods and data sources to collect qualitative data for our study. The primary data sources were field notes from participant observation and transcripts from interviews. Other data sources included emails exchanged between the two observers and other project members on the topic of IA collaboration. Participant observation is a method of systematic and unobtrusive data collection through social interaction between an observer and informants [14, 30, 60], while an interview is a method of listening and talking to people to gain knowledge from individuals [31]. Throughout the data collection process, as more data became available from different sources, we applied data triangulation, combining evidence coming from different sources, to increase the precision of our empirical study.
Data collection took place in cycles, interwoven with the data analysis cycles. In the first phase, we observed collaboration contexts C1 and C2. The observations took place on a number of occasions, such as project meetings, workshops and focus group discussions, and daily technical interactions among project members. As the two observers, who are also the authors of this article, were active researchers in the two collaboration projects, they observed at least 90% of all meetings, workshops, focus group discussions, and technical interactions (the other 10% corresponds to the time they were off from work). The informants consisted of industry practitioners and academic researchers. The size of the group under observation varied from two to seven informants for project meetings, and up to 30 informants for workshops. The informants' backgrounds were in SE, product management, hardware design, scientific research, data science, computer science, and machine learning. The average work experience of each informant in their respective area of expertise was 9 years. Note taking during observation was performed as unobtrusively as possible, with the observer acting as a “normal participant,” so as not to make informants feel observed. The notes taken included diverse information such as interaction patterns between the participants, expectations from the collaboration, terminology, decisions about technical choices, and the frequency of status updates. Immediately after each observation session, the observers would read the notes and augment them with more details and their own reflections. After the first cycle of data collection, we analyzed the data using the process described in Section 5.2. Next, we continued with another cycle of data collection using the semi-structured interview method. Specifically, the two observers interviewed five participants from each collaboration context, C1 and C2. The interviewees were selected purposively, based on the observers' judgement of which participants would provide the richest information relative to the questions that arose during the previous cycle of data analysis. In some data collection cycles, we also used a maximum variation sampling strategy, to ensure a wide range of different backgrounds and expertise. Industry practitioners and academic researchers were equally represented among the selected interviewees, consisting of software engineers, software testers, machine learning engineers, data scientists, product managers, and researchers. The average work experience of each interviewee in their respective area of expertise was seven years. We performed semi-structured interviews discussing specific questions pertaining to IA collaboration challenges and desired ways of working, but also probing deeper into IA collaboration aspects that the respondents seemed to have strong opinions about. Some examples of the interview questions are “What was the most critical challenge for the project?”, “What solution, if any, worked for that challenge?”, “What were the benefits of that solution?”, “What were the limitations of that solution?”, “Was there any solution you tried but it did not work?”, and “What is the most important learning on IA collaboration from the project?” Immediately after each interview, the observers would prepare interview transcripts.
After the second cycle of data collection, we analyzed the data again, with the newly collected data providing additional information needed for a full understanding of the good and bad IA collaboration practices we set out to study. The cycles of data collection and analysis continued to interweave until newly collected data started yielding redundant information. The observers could clearly see the patterns and different categories of findings, which is when we started approaching saturation. At that point, we applied the member checking technique [33], presenting the findings to 15 stakeholders from the two studied collaboration contexts, with the goal of getting their opinion on the validity of the findings. They confirmed the validity of the findings, indicating that data saturation had been reached. In total, we performed six cycles of data collection, three of which used participant observation and the other three semi-structured interviews. This process yielded 10 patterns and 14 anti-patterns. In the second phase, we observed the collaboration context C3, replicating the process of data collection and analysis described for the first phase. Specifically, after each data collection cycle, we would augment the corpus previously collected from C1 and C2, and then go back and forth between the data points of the entire corpus, comparing them to find interesting findings. The goal of this process was to ensure that both already-defined and newly-defined patterns and anti-patterns hold for all collaboration contexts. After saturation had been reached (i.e., new data stopped adding new insights to the patterns), we augmented the set of previously defined patterns with four additional patterns (P11–P14), which also held for the C1 and C2 contexts. Finally, we applied member checking with stakeholders of all three collaboration contexts, confirming the validity of the complete set of 14 patterns and 14 anti-patterns. In total, we interviewed 45 interviewees and observed 90 informants.

5.2 Data Analysis

For data analysis, we used an approach motivated by grounded theory, where the two observers first manually converted field notes from the observations and transcripts from the interviews into excerpts using open coding. Next, we compared and contrasted the excerpts with each other, looking for sets that relate to the same concepts. For example, the following two excerpts looked similar: “When we explain our daily practice to our academic partner, it is very difficult to use the same language they do, so they can understand us” and “Researchers often complicate things with their scientific language.” We grouped together such sets of excerpts (in the example above, forming the code “practitioners have difficulty communicating with academia”). After we had built a set of codes, we started comparing them with one another. For example, the two codes “practitioners have difficulty communicating with academia” and “academics have difficulty communicating with practitioners” came under a category named “difficulties in communication” (later translated into the name of AP6, The incomplete tower of Babel). The codes and categories were built by the observer who collected the data (one for each collaboration context), and then reviewed by the other observer, to prevent bias and reduce the subjectivity of coding. In case of dilemmas and opposing opinions, which did not happen, we would have involved another project member to resolve the tie. Each cycle of data analysis was followed by a new cycle of data collection, as described in Section 5.1, after which we analyzed more excerpts and compared them with the existing codes and categories to see whether they contradicted, expanded, or supported them (the latter indicating saturation). At the end, we had 28 distinct categories, which we present next.
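The coding itself was done manually by the observers; purely to make the excerpt-to-code-to-category structure concrete, a toy representation of the example above could look as follows (the excerpts, codes, and category are the ones quoted in the text; the grouping step is illustrative only).

```python
# Toy illustration of the excerpt -> code -> category structure described above.
# The actual coding was performed manually by the observers; this snippet only
# makes the data organization concrete, reusing the labels quoted in the text.
from collections import defaultdict

excerpt_to_code = {
    ("When we explain our daily practice to our academic partner, it is very "
     "difficult to use the same language they do, so they can understand us"):
        "practitioners have difficulty communicating with academia",
    "Researchers often complicate things with their scientific language":
        "practitioners have difficulty communicating with academia",
}

code_to_category = {
    "practitioners have difficulty communicating with academia":
        "difficulties in communication",
    "academics have difficulty communicating with practitioners":
        "difficulties in communication",
}

# Group excerpts by category; a category later becomes the name of a pattern
# or anti-pattern (here, AP6: The incomplete tower of Babel).
excerpts_by_category = defaultdict(list)
for excerpt, code in excerpt_to_code.items():
    excerpts_by_category[code_to_category[code]].append(excerpt)

for category, excerpts in excerpts_by_category.items():
    print(f"{category}: {len(excerpts)} excerpts")
```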

6 Patterns for Industry–Academia Collaboration

Patterns are reusable solutions to commonly occurring problems. In SE, they are often used in the context of design patterns [16]. Patterns encapsulate best practices that can be easily reused or adapted for solving recurring problems in a variety of situations. To support reusability, patterns are specified using templates. We use the following template for describing patterns: Problem ♦ Context ♦ Pattern Solution ♦ Benefits and Consequences ♦ Related Solutions, illustrated in Figure 2. A pattern relates to a particular problem (in IA collaboration). The context describes the circumstances under which the problem occurs or is applicable. A solution describes how the problem can be resolved or eased within the scope of the context. The solution provides benefits, but may also have remaining unresolved issues, called consequences. Finally, the solution can be related to other solutions.
Fig. 2. Patterns and anti-patterns.

6.1 P1: Reciprocity Between Stakeholders

Problem: Stakeholders do not reciprocate in terms of effort.
Context: The research collaboration process often requires an investment of time, ideas, and effort from different stakeholders to make the collaboration mutually beneficial. We quote author James Baldwin, who said, “Allegiance, after all, has to work two ways; and one can grow weary of an allegiance which is not reciprocal,” as well as a researcher in C1 who said, “It is so frustrating when our industry partner does not prepare what we asked for in the last meeting.” Reciprocation can be hindered by many factors, such as a lack of clarity of tasks, a lack of resources, or external factors that take focus away from the project.
Solution: To develop a norm of reciprocity between stakeholders, encourage open and unambiguous communication. The give-and-take psychology [25] can help construct a reciprocal relationship. Identify the motivations and needs of all participants. Promote active dialog and frequent sharing of progress. It is important to acknowledge that not all project members are able to contribute equally to all project phases. The goal is to encourage a regular and well-balanced exchange of knowledge where everyone contributes, to the best of their capacity, some knowledge that other members of the project need. Reciprocity is often observed when the stakeholders reach a tipping point in the collaboration. The tipping point of mutual trust is established when the joint activity is clearly linked to well-defined and manageable goals that the stakeholders manage to realize. These goals may include a publication for a specific conference or journal with a deadline, an application for project funding with a deadline, or a demo of a software tool to a third party at an event such as a conference or a trade fair. The time-boundedness and concreteness of the activity greatly help establish reciprocity. Reciprocity can be measured in terms of commitment to tasks and attendance at meetings, co-authorship of research results (e.g., in terms of the number of words written by a stakeholder), and time spent discussing a problem by all stakeholders. Our approach to addressing the reciprocity problem is to have, on the one hand, researchers who provide research artifacts that simplify the daily routine of practitioners, and on the other hand, practitioners who can then effortlessly provide useful feedback to researchers on the practical usefulness of their research results. For instance, in C1, researchers provided a tool, Depict [56], that Toll Customs used to verify customs declarations in their daily work. “Depict seamlessly connected to our test database and automatically generated reports that made it easy for us to present what we have been working on with minimal effort. This allowed us to keep focus on our priority task, while integrating research ideas to simplify our routine activities.” SRL's strategy was to build tools that could minimize effort from practitioners, such that they could seamlessly use a tool in a matter of minutes to produce a useful contribution that could be presented and used for reasoning. In C2, reciprocity was an issue for the partner with fewer resources. SZ had several customers with diverging requirements but only a few personnel to handle the customer needs. For a commercial business, the extra hours required to reciprocate requests from stakeholders, such as academia, had to be compensated as a paid consulting activity or by involvement in national and European projects to receive partial public funding. SZ also took the role of a sensor vendor with third-party support in many of these publicly funded projects. In C3, reciprocity between stakeholders is built into tasks where partners are required to co-create together, often even without prior joint work experience. For instance, advanced manufacturing companies are required to acquire high-velocity sensor data from the machining process and provide it to scientists working on computing data quality hallmarks and performing erroneous data repair for missing values.
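As a rough illustration of how the measures listed above (task commitment, meeting attendance, co-authorship, discussion time) could be folded into a single reciprocity indicator, consider the following sketch. The fields, equal weighting, and numbers are assumptions made for the example, not a metric used in C1–C3.

```python
# Illustrative sketch of a simple reciprocity indicator built from the kinds of
# quantities mentioned above. The fields and the equal weighting across dimensions
# are assumptions made for this example, not a metric prescribed by the article.
from dataclasses import dataclass

@dataclass
class Contribution:
    tasks_completed: int      # tasks the stakeholder committed to and completed
    meetings_attended: int    # project meetings attended
    words_coauthored: int     # words contributed to joint publications/reports
    discussion_hours: float   # hours spent in joint problem discussions

def reciprocity_balance(contributions: dict[str, Contribution]) -> dict[str, float]:
    """Return each stakeholder's average share of the total contribution across dimensions."""
    dims = ["tasks_completed", "meetings_attended", "words_coauthored", "discussion_hours"]
    totals = {d: sum(getattr(c, d) for c in contributions.values()) or 1 for d in dims}
    shares = {}
    for name, c in contributions.items():
        per_dim = [getattr(c, d) / totals[d] for d in dims]
        shares[name] = round(sum(per_dim) / len(dims), 2)
    return shares

# Hypothetical example: a researcher and a practitioner over one reporting period.
print(reciprocity_balance({
    "researcher":   Contribution(tasks_completed=6, meetings_attended=10, words_coauthored=4000, discussion_hours=12),
    "practitioner": Contribution(tasks_completed=5, meetings_attended=9,  words_coauthored=1500, discussion_hours=10),
}))
```

Shares close to an even split across stakeholders suggest a balanced exchange; persistent outliers are a signal to revisit task clarity or resourcing, the hindering factors named in the Context above.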
Benefits: Mutual learning based on active exchange of knowledge, which creates more meaningful collaborations, increases enthusiasm, creates synergies, and helps the project progress faster.
Related Solutions: P8, AP4.

6.2 P2: Standardized Data Exchange

Problem: Limited or cumbersome sharing of potentially reusable artifacts.
Context: Participants in a co-creation project dealing with related problems often have the possibility of exchanging (and reusing) datasets, algorithms, code repositories, and tools. However, this seldom happens because these artifacts are not easily interfaced with each other.
Solution: These artifacts can be seen as services. It is worthwhile implementing standard protocols for interfacing between these services, so as to avoid reinventing the wheel. Seeing artifacts as services implies that an artifact is associated with a standard for information exchange that allows interaction with a larger ecosystem. In the case of a dataset, such an associated standard could be a database technology based on SQL standards, such as ISO/IEC 9075:2016. When the dataset is stored in a relational database management system (RDBMS) supporting the Structured Query Language (SQL) 2016 standard, SQL can be used to read, write, and modify the data. Similarly, algorithms can be encapsulated as RESTful web services where functions to get/set variables and perform some form of computation obey the REST constraints. RESTful web Application Programming Interfaces (APIs) are typically based on Hypertext Transfer Protocol methods to access resources via URL-encoded parameters and use JSON or XML to transmit data. Stakeholders from industry and academia adhere to such standards to ensure their data and algorithms can be plugged into a larger ecosystem with minimal effort, increasing business opportunities. However, concerns with intellectual property rights and with making artifacts easily understandable and usable may hinder the flow of artifact exchange across stakeholders. In C1, SRL developed tools that adhered to industry standards, along with simple and user-friendly documentation, to help stakeholders use artifacts in less than 30 minutes. The tool Depict [56], for instance, could be deployed by Toll Customs in a matter of minutes on their large test and production databases. Depict could interface with several standard RDBMSs, such as Sybase, Oracle, PostgreSQL, and MySQL, to verify certain properties in the data. This allowed Toll Customs to easily transition from Sybase to Oracle during their migration. In C2, the use of standard protocols greatly facilitated interfacing SZ's sensors with many other devices. For instance, the standardization activity between SZ and RITMO was based on the Open Sound Control protocol. This created the possibility for RITMO to interface the SZ breathing sensor with all other sensors and actuators in their laboratory and simultaneously gather a large amount of research data. In C3, there is no standard interface connecting the tools developed by computer scientists in the project and the data generated by machine tool developers. However, streaming data from a machine tool are available 24/7 from a cloud service in JSON format for a specified input time period. The machine tools often work around the clock to manufacture parts. Computer scientists have access to this data stream through a REST API. All computer scientists who perform tasks such as anomaly detection, erroneous data repair, and data profiling have developed a common understanding of the type of data they can develop their tools on, based on the REST API. Docker containers are used as a common approach to build tools that can be exchanged and used by partners without special installation requirements and deployed on both edge devices and the cloud.
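As a minimal illustration of the "artifact as a service" idea, the sketch below shows a client pulling a time window of sensor samples as JSON over a REST API, the arrangement described for C3. The endpoint URL, query parameters, and response fields are hypothetical; they do not reproduce the actual InterQ or Depict interfaces.

```python
# Illustrative client for an "artifact as a service" style data endpoint:
# fetch sensor samples for a given time window as JSON over HTTP. The base URL,
# route, parameters, and response schema are placeholders for this example.
import requests

BASE_URL = "https://example.org/api/v1"   # placeholder for a partner's data service

def fetch_sensor_window(sensor_id: str, start_iso: str, end_iso: str) -> list[dict]:
    """Fetch all samples for one sensor in [start, end) as a list of JSON records."""
    response = requests.get(
        f"{BASE_URL}/sensors/{sensor_id}/samples",
        params={"from": start_iso, "to": end_iso},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g., [{"timestamp": "...", "value": 21.7}, ...]

if __name__ == "__main__":
    samples = fetch_sensor_window("spindle-temp-01",
                                  "2021-05-01T00:00:00Z", "2021-05-01T01:00:00Z")
    print(f"received {len(samples)} samples")
```

Because the contract is only HTTP plus JSON, a tool written against such an interface can be packaged in a Docker container and handed to partners without special installation requirements, matching the deployment approach described above.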
Benefits: Greatly increased the ease of reuse between different projects and the possibility to benefit from other participants’ experience. A software engineer participant in C3 pointed out that “The use of standards to exchange data combinatorially increases the possibilities in terms of how collaboration can occur between multiple data providers.” Generic tools built based on standard data exchange formats can be downloaded and used by projects around the world. This can increase the impact of tools from scientific research and help secure both publicly and privately funded projects.
Related Solutions: AP2.

6.3 P3: Quantifying Impact

Problem: Difficulties in measuring the diverse impacts of an IA collaboration. Typical impact metrics include publication and patent counts. However, these metrics do not capture the multifaceted impact of IA collaborations.
Context: In a simple interpretation, making an impact in IA collaboration means, for researchers, changing the current state-of-the-art for the better, and for practitioners, changing the current state-of-the-practice for the better. There is also a need to go beyond organizational impact and quantify how collaborations can affect one or more of the UN's 17 Sustainable Development Goals. Accurately measuring the factors contributing to that change is challenging [48], yet important because, as Peter Drucker said, “if you cannot measure it, you cannot improve it.”
Solution: Measuring impact in IA co-creation requires public recognition of reciprocal actions resulting in the creation of tangible and intangible assets that benefit industry and society. Tangible assets can be new or improved technology, such as open source software, algorithms, methods, tools, process guidelines, and scientific publications, while intangible assets include innovation capacity, human relations, shared management, and the visibility and brand value of the collaboration. For tangible assets, impact can be determined by measuring return on investment or other quantitative metrics, for example, economic benefits. For intangible assets, impact could be gauged based on the timeliness, completeness and follow-up, consistency, relative quality, and complementarity of reciprocal actions for a successful collaboration. The latter can often lead to breakthroughs in science when the partners involved are from diverse domains and are able to make small sacrifices to explore uncertain problem domains. In C1, we measured impact by computing the amount of time saved in completing testing activities in a company. In C2, one way to measure impact is to look at the diversity and extent of the use of the Flow sensor developed by SZ. The sensor found application in health, sports, and music. The sensor was used by the University of Oslo as a low-cost alternative to predict obstructive sleep apnea [34]. A future impact of the technology can be quantified as the percentage of the target market for obstructive sleep apnea solutions captured by the SZ sensor. In C3, the impact of improved data quality is quantified in the form of improved auditability of data quality in manufacturing and the fostering of a quality culture. Auditability is measured as the traceability of data quality hallmarks, or statistical properties of sensor data quality, observed throughout the manufacturing process. Quality culture is indirectly measured through the reduction of the scrap rate in manufacturing. Maintaining a trace of data quality is valuable feedback to the entire manufacturing ecosystem, helping improve the performance of operators and engineers. Thinking about societal impact, sustainable development is a principal factor in governance, and the impact of the co-creation process needs to be in alignment with the UN's sustainable development goals [45]. These goals include no poverty, zero hunger, good health and well-being, quality education, gender equality, clean water and sanitation, affordable and clean energy, decent work and economic growth, industry, innovation, and infrastructure, reduced inequalities, sustainable cities and communities, responsible consumption and production, climate action, life below water, life on land, peace, justice, strong institutions, and partnerships. In C1, we collaborated on the improvement of video conferencing systems developed by Cisco Systems. This effort is indirectly connected to the minimization of the need for air travel (number of trips avoided) and consequently addresses the UN's goal of responsible consumption. In C2, a project manager at SZ said, "We always adhere to responsible production aiming to minimize defects, which consequently minimizes waste in the production of our Flow sensors. This was achieved by keeping a close eye on the inventory (number of sensors) and using low cost production technology such as a pick-and-place machine for local production of electronic printed circuit boards."
Benefits: Calculating quantitative and qualitative metrics corresponding to different types of impact (value), tangible and intangible, enables more comprehensive impact measurement and effort recognition. It is often an intellectual exercise to connect our seemingly negligible local contributions to a larger goal such as sustainability. Both public funding bodies and investors increasingly request that such impact be quantified and presented as part of the reported work.
Related Solutions: AP13.

6.4 P4: Active Dialog

Problem: Goal misalignment between industry and academia, as well as limited transparency regarding intermediate results produced at different stages of the collaboration, ultimately lead to a lack of interest and disengagement.
Context: IA collaborations often suffer from an inadequate level of communication, both between and within industrial and academic teams [12, 47, 50], on different aspects including goals, expectations, results, time frames, and responsibilities. As industry and academia are generally two culturally different environments, insufficient communication between the two leads to a lack of commitment and further deepens the gap between them.
Solution: Promote active dialog between industry and academia at all stages of the collaboration, from problem understanding and definition to solution validation. Active dialog implies that words and actions occur in close succession or concurrently; regular dialog does not necessarily mean that words are acted upon, hence the emphasis on active dialog. Such a practice helps build a trusting environment in which participative and mutually beneficial value creation can be implemented. Promote active dialog across academic teams, as well as across industry partners, which will lead to a higher level of reuse of research results. Active dialog can be promoted by providing different channels of interaction, such as physical and virtual collaboration and meeting space, as well as different communication and knowledge exchange tools. In C1, SRL created several platforms to maintain active dialog. Bi-annual partner workshops were organized at a conference location to assemble all stakeholders in one place for a day and a half of discussions to gain clarity and define problems that are both interesting for researchers and useful for industrial partners. Long-term projects can lose momentum and require constant re-invigoration. Therefore, SRL invited several external keynote speakers and organized hands-on workshops to invigorate stakeholders with new ideas and increased clarity. The keynotes also helped keep the stakeholders updated on the state of the art. In C2, the projects were of shorter duration, leading to intense bursts of joint work. However, it was essential for SZ to keep stakeholders in the loop and build a network between customers for the sharing of ideas and artifacts, and also for the establishment of new projects. In C3, we have the additional challenge of remote work, due to COVID-19, where active dialog is maintained by weekly meetings over Microsoft Teams and through close follow-up on research ideas and subsequent action points in these meetings. Due to the large size of the project, there is a hierarchical communication structure in which smaller working groups work on specific topics and the results are summarized and communicated upwards in a leaders' meeting. The different working groups also help manage potential conflicts of interest between competing companies.
Benefits: Continuous goal alignment and progress updates between the collaborating partners, which lead to shared ownership of both the problem and the solution, which in turn is crucial to prevent a drop in interest and commitment. A researcher in C2 pointed out, "I was amazed to see how quickly we converged on which feature should be developed first, once we started talking details with the engineers." Maintaining active dialog and keeping people in the loop is a continual process. However, there is a thin line between annoying stakeholders with unnecessary emails and actually engaging in communication that is mutually beneficial. It is necessary to adjust the volume and the velocity of the communication to maintain a healthy active dialog.
Related Solutions: P7, P14, AP6, AP14.

6.5 P5: Power of Synergy

Problem: Confined research value creation due to dissociated efforts made by industry and academia separately.
Context: The growing complexity of industrial software systems developed by the majority of our partners requires a combination of knowledge, skills, and efforts to develop cost-effective solutions for software engineering and testing. Such a combination of knowledge and skills is hardly found in any individual or in a team whose members share the same background. Moreover, co-creating with experts from different domains can push the engineering and validation of software systems to new horizons. Still, it happens that IA collaboration projects lack collective value creation, as each side of the collaboration works in isolation.
Solution: Enact managed research value co-creation, which entails a project plan with deliverables, milestones, and a timeline, as well as an analysis of risks and how to mitigate them. A project leader conducts regular meetings with all stakeholders to hear about updates and find synergy during co-creation. Partners have regular internal meetings with individual stakeholders in order to meet deadlines and develop satisfactory results. Activities need to be performed consistently, ideally avoiding a last-minute dash. All meeting notes and decisions are documented for easier onboarding of new members. In such a managed co-creation environment, individuals with different backgrounds, expertise, and perspectives are brought together to share knowledge and work on a joint problem whose solution brings benefits to all participating parties. Consequently, there is great potential for the identification of additional common areas of interest and synergies between the partners, which can lead to breakthroughs in research and practice. For instance, in C1, the testing of high-end video conferencing systems developed by Cisco Systems, with several variable parameters and requirements for high video throughput, pushed software testing research at SRL to handle a very large combinatorial problem. This led to novel algorithms published as scientific articles [35, 38, 39, 40, 41, 43] and software. In C2, SZ's Flow sensor was used as a wearable in health (obstructive sleep apnea), sports (force in a rowing oar), and music (breathing rhythms while performing). All these application domains pushed the scientists, hardware, and software developers in the collaboration to improve many aspects of the product: increasing battery life through the low energy consumption needed for overnight recordings, long-distance Bluetooth communication and re-connection logic due to uncertain connectivity, and machine learning models to predict variables of interest such as respiratory flow and apnea events from raw sensor data. These innovative activities were not foreseen during the conception phase of the project. As a hardware engineer at SZ mentioned, "I would have never thought that we will get this far with all these new features we developed for Flow sensor. It's because we all worked together bringing ideas from so many different fields." In C3, one example of synergy is when computer scientists and electronic engineers co-create to enable data quality management at different levels of abstraction (real-time and aggregated data on the cloud) and across different domains of expertise (signal processing and machine learning). For example, electrical engineers from IDEKO manage data quality during analog-to-digital conversion in a real-time controller, through data conversion and aggregation. Computer scientists use this aggregated data from the real-time controller to perform data profiling. They use machine learning models to create virtual sensors, which are used to repair erroneous or missing data in one sensor based on data available from other sensors.
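As an illustration of the virtual-sensor idea described above, the following minimal sketch trains a regression model to predict one (synthetic) sensor from two others and uses it to fill a gap in the target sensor's readings. The sensor names, the synthetic data, and the choice of a linear model are assumptions made purely for illustration, not the actual C3 models.

```python
# Minimal sketch of a "virtual sensor": a regression model trained to predict
# one sensor's readings from the others, then used to fill in missing values.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
temperature = rng.normal(40, 5, n)        # auxiliary sensor 1 (hypothetical)
spindle_load = rng.normal(0.6, 0.1, n)    # auxiliary sensor 2 (hypothetical)
vibration = 0.02 * temperature + 0.5 * spindle_load + rng.normal(0, 0.01, n)

# Train on time steps where the target sensor reported valid data.
X = np.column_stack([temperature, spindle_load])
model = LinearRegression().fit(X[:400], vibration[:400])

# Later, the vibration sensor drops out; the virtual sensor repairs the gap.
repaired = model.predict(X[400:])
print("mean absolute repair error:", np.mean(np.abs(repaired - vibration[400:])))
```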
Benefits: Collective creativity leads to more effective problem solving, often resulting in unexpected and non-obvious ways of value creation. The synergistic influences from different domains can make a tool or a product far more robust and appealing to many users.
Related Solutions: P1.

6.6 P6: Minimum Viable Tools

Problem: Lack of objective evidence showing the practical value of the knowledge created in early stages of a collaborative project.
Context: Practitioners engage in research collaborations with the objective of applying the knowledge created in the collaboration, relevant to their problems, in their own context to solve practical problems. Researchers, on the other hand, are typically concerned with creating generalizable knowledge, and it may take a while before the first results of contextualizing such knowledge are available. Meanwhile, practitioners are kept in the dark, which leaves them less invested in the collaboration effort, hampering long-term success. As an SE practitioner in C1 described, "Every time researchers present an update, they explain some new scientific stuff, but I never seem to quite get what are they talking about."
Solution: Develop a minimum viable proof of concept (MVPC) and tools able to demonstrate practical benefits of even simple research concepts in early stages of collaboration. Such MVPCs help get started with co-creation and create a sense of concreteness in a joint activity. MVPCs enable feedback provision from industry to academia in the early stage of solution development, which increases the probability of creating outcomes of mutual benefit. A minimal viable tool could be an artifact that allows automation of tasks that are normally performed by humans or by older and less optimal software tools. We emphasize the co-creation of tools as a way to encapsulate a solution for a given problem context, as opposed to CMD by Dittrich, in order to emphasize the creation of an asset that can be configured and executed. CMD, on the other hand, is less tangible and is not an asset that manifests ideas in software or a physical object such as a sensor system. In C1, tools like TITAN and Depict were developed over continual interaction and testing with the respective industry partners. This lean approach to development gave the industry practitioners a sense of ownership over the tools developed, and the tools were adapted to suit their work environment. However, it is also very important to make sure the lean approach to development extends to more than just one partner. For instance, after being initially developed with the Norwegian Toll Customs, Depict was also improved due to tests with the database systems at the Cancer Registry of Norway. In C2, the Flow sensor developed by SZ provided only raw data from forces measured during the expansion and contraction of the ribcage. This was a minimal signal that could be used to create many new value propositions with many partners. These included the prediction of sleep apnea, respiratory flow, and breathing patterns for stress reduction, to name a few. It would have been much more expensive and uncertain to launch a product in the market if SZ had had to develop it secretively for a specific target over several months. In C3, researchers collaborate with practitioners to develop an MVPC on a publicly available dataset to predict tool wear in manufacturing. The publicly available dataset protects the industrial practitioner from sharing intellectual property while testing out ideas along with a researcher. The MVPC is developed to be generalizable to new datasets that will eventually be available from partners in the manufacturing industry. One such example of a co-created MVPC is the erroneous data repair system available on Github,5 which was demonstrated on the public dataset on CNC milling tool wear.6
Benefits: Research value co-creation from the beginning of the collaboration, where industry gets to understand practical benefits of abstract research concepts developed at first by researchers, as well as to influence solution development by providing timely feedback.
Related Solutions: P7, AP2, AP3.

6.7 P7: Frequent Iteration

Problem: Progress on technical developments is reported sparsely on each side of the collaboration. Consequently, useful feedback to guide further development is also captured sparsely.
Context: In technology projects where software tools are key artifacts in a collaboration, it is important to demonstrate progress frequently and receive feedback to direct technology improvement efforts. A lack of feedback from researchers to practitioners makes practitioners disengaged. A lack of feedback from practitioners to researchers makes researchers base their solution on possibly inaccurate working assumptions (about the requirements and constraints for the solution needed by industry). Inaccurate working assumptions in software tool development can lead to technical choices that may not be easily correctable later on. For example, we encountered such an issue while developing TITAN in collaboration with Cisco Systems, where inaccurate working assumptions led to a limited choice of how the tool could be used. TITAN was designed as a standalone tool, while Cisco's preference was to use it as a service.
Solution: Demonstrate progress and intermediate results of technical developments often. Researchers can obtain feedback from practitioners on the soundness of technology choices. Practitioners can keep researchers up to date on their technical developments, which is necessary for smooth integration of all technical artifacts developed. Frequency should be relatively high, daily or weekly, until a tipping point is reached, when the stakeholders using the co-created knowledge become self-sufficient. Ideally, the shorter this time, the better. The mutually acceptable frequency of iteration often reflects the interest level and investment of stakeholders in a project. It is necessary to take it as a cue and adapt accordingly. For instance, in C1, SRL was wary of introducing too many meetings that do not appeal to stakeholders or sometimes feel like a waste of time. Instead, the frequency was adjusted according to the level of intensity experienced in the collaboration. We recollect that the frequency increased when it was about trying tools hands-on and obtaining results, while it decreased when it was about writing scientific papers. In C2, the frequency was very high before an event where the sensors were demonstrated, such as the MusicLab organized by the University of Oslo or the World Rowing Masters regatta. The exhibition of results to a larger audience or the public often creates more intensity of collaboration and frequent improvements. In C3, there is a high frequency of iteration in requirements specification and task allocation processes, due to the diversity of the partners. The partners are experienced in their respective domains and have tools available to treat manufacturing sensor data. Hence, frequent iterations entail defining requirements and tasks for each partner, while minimizing conflicts of interest and ensuring the protection of intellectual property for commercial businesses. As a researcher at SINTEF pointed out, "I think just because we are meeting so frequently, we avoid any conflicts over who is working on what."
Benefits: Bidirectional feedback provided frequently helps validate results regularly and ensures that potential misinterpretations and missed requirements are identified and fixed at an appropriate time. This can be early on or when the tool needs to be demonstrated to a larger audience. The resulting technology becomes applicable, usable, and scalable in the relevant context.
Related Solutions: P4, P6.

6.8 P8: Two-faceted Problem and Impact

Problem: Problems addressed in a collaboration are not relevant for one side of the collaboration.
Context: Academic researchers and industry practitioners have different motives for collaborative projects. Researchers are driven by scientific challenges and practitioners by practical solutions. Although their drives are different in nature, for the collaboration to succeed, all stakeholders need to have the opportunity to reach their objectives and create impact that matters to them.
Solution: Define problems with two sides to them, i.e., as a practical problem and, deriving from it, a research problem, the two being closely linked. A practical problem is a need to improve unsatisfactory aspects of industry practice, caused by a concrete condition, for example, the lack of test automation. The practical problem caused by this condition may be the high cost of testing. A research problem is a gap in knowledge whose understanding is crucial for solving a practical problem. Thus, by solving a research problem, we create the ground for solving its related practical problem. In C1, SRL defined the problem of test case selection and prioritization based on the needs of its industry partners ABB Robotics and Cisco. For researchers, this meant researching test optimization techniques, and for practitioners, this meant developing a tool that could alleviate the problem of manual test selection. Manual test selection was tedious and some of the tests were redundant, so there was a need to research the problem of automatically selecting a minimal set of test cases and prioritizing them based on the changes made in the code base. A researcher in C1 indicated that "A two-faceted problem definition allowed us to, on the one hand, advance the state of test optimization for highly-configurable software, while on the other develop a tool TITAN [42] that reduced the effort of manual testing for Cisco Systems Norway." Similarly, in C2, researchers at the University of Oslo developed algorithms to predict sleep apnea from the breathing data obtained by the Flow sensor. This helped SZ define problems such as being energy efficient for overnight recordings, and addressing issues with Bluetooth connectivity and reliable data storage. In C3, the problem initially defined was to perform erroneous data repair. IDEKO did not see a benefit in solving this problem, although it was written as part of the description of work (DoW). Therefore, the problem specifications evolved slightly beyond the initial DoW. For instance, when SINTEF presented some of their previous work on data profiling with constraints on data, IDEKO became interested in using that technology in their manufacturing setup, as they saw it being immediately useful compared to erroneous data repair. SINTEF initially followed the DoW, but soon realized that they could bring more value to the project through data profiling with constraints. Therefore, it was necessary for SINTEF to be flexible and give higher priority to the real industrial problem than to a challenging research problem that was not immediately useful. Once trust is established through solving simpler and more pressing problems, it becomes easier to address more scientifically stimulating problems later on.
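As a simplified illustration of the research side of this two-faceted problem, the sketch below selects and prioritizes test cases based on the files changed in a commit. The coverage map, failure history, and ranking heuristic are hypothetical and far simpler than the techniques actually used in TITAN.

```python
# Minimal sketch of change-based test selection and prioritization.
# The coverage map, failure history, and ranking heuristic are illustrative.
coverage_map = {
    "test_call_setup": {"signaling.c", "media.c"},
    "test_video_layout": {"layout.c"},
    "test_encryption": {"crypto.c", "signaling.c"},
}
failure_history = {"test_call_setup": 3, "test_video_layout": 0, "test_encryption": 5}

def select_and_prioritize(changed_files):
    # Select only tests that exercise at least one changed file.
    selected = [t for t, files in coverage_map.items() if files & changed_files]
    # Rank by overlap with the change first, then by past failures.
    return sorted(
        selected,
        key=lambda t: (len(coverage_map[t] & changed_files), failure_history[t]),
        reverse=True,
    )

print(select_and_prioritize({"signaling.c"}))
# -> ['test_encryption', 'test_call_setup']
```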
Benefits: Both sides of IA collaboration see the benefits of solving joint problems, and consequently creating impact, each from their own perspective, i.e., academic and industrial impact. This condition greatly contributes to active commitment to the collaboration from all partners.
Related Solutions: P1, P3, P11.

6.9 P9: Joint Authorship of Scientific Articles

Problem: Industry practitioners are often reluctant to participate in joint authorship of scientific articles.
Context: Authoring articles based on scientific methods combined with insights from industry experts can bring increased credibility and reusable knowledge from the co-creation process. However, an important challenge is to involve industry professionals in the cumbersome writing process that requires several rounds of proofreading, editing, and precision in data and statistics, followed by peer review and publication. Industry practitioners are highly sensitive to the risk of revealing trade secrets and flaws in industrial systems through publications that could cast the business in a bad light. Some have a strict internal publication vetting process before any form of external communication of scientific results. This can often discourage all stakeholders from performing thorough scientific studies. Nevertheless, transparency through published scientific articles about technology or methodology, such as high quality software testing practices, can build trust in a company.
Solution: Writing and peer review help clarify complex concepts that companies often grapple with. Joint authorship of scientific articles must be seen as a way to distill complexity into a sequence of words and illustrations that clearly communicate the outcome of a co-creation process. Industry practitioners should ideally be engaged in describing their viewpoints in ways they are initially comfortable with. This can be in the form of user stories, answering a multiple-choice survey where the questions are carefully crafted, or interviews about their experiences, for example, with software tools developed by scientists. The peer-reviewed publication and its acceptance by the larger community of researchers and practitioners can increase the enthusiasm of industry practitioners for the scientific publication process. Increased motivation can lead to practitioners taking up industrial PhD positions in collaboration with a research group. For instance, in C1, an employee from ABB Robotics decided to pursue an industrial PhD while on partial leave. During the PhD he published scientific articles [46] in conferences and journals in the software engineering and testing community. Another practitioner from C1 pointed out that "It was only after we wrote the first paper together with researchers that we started appreciating such a writing exercise, as it helped us understand scientific writing, which is often a great source of innovative ideas." In C2, there were no instances of co-authorship; however, the industrial partner SZ has been acknowledged in several publications by research groups, which gave credit to SZ and hopefully increased the company's visibility. For instance, a study at the University of Oslo helped validate the accuracy of the Flow sensor developed by SZ in comparison to other low-cost sensors for sleep apnea studies [34]. In C3, some of the commercial businesses have an interest in publishing results from the project. For instance, PREDICT approached SINTEF to work on temporal clustering of manufacturing data with a publication in sight. The publication can document approaches implemented by PREDICT, which has limited resources for communicating its research to the public. Often, ideas are lost in source code among companies that do not have the capacity to document and test their ideas. This, unfortunately, increases their technical debt in the project in the long term.
Benefits: The consequences of joint authorship can have many advantages such as: (a) documented industrial impact of scientific research, (b) skill upgrade for industry practitioner, such as through industrial PhD programs, (c) higher credibility for industrial products and processes due to transparency and evaluation in published scientific articles, (d) more efficient use of resources because researchers can help practitioners write papers of high rigor, while practitioners can help strengthen the experimental evaluation by providing data and case studies, and (e) increased ability of industry practitioners to absorb academic knowledge disseminated in the community typically through scientific papers.
Related Solutions: P5, P10.

6.10 P10: Managed Intellectual Property Rights

Problem: The rights to the intellectual property created during research knowledge co-creation need to be managed professionally to increase trust between partners.
Context: Intellectual property is a category of property that includes intangible creations of the human intellect. It can be categorized in the form of patents, copyrights, and trade secrets. For instance, in C2, SZ patented a sensor for respiratory inductance plethysmography. Similarly, partners in C1, such as Cisco Systems, have several trade secrets in the domain of video conferencing systems. Academic partners, such as SRL, developed software programs that are automatically protected under copyright as a defense mechanism against copyright infringement. Every stakeholder in a collaboration has an interest in intellectual property, and the outputs of a co-creation process can generate property that some partners can exploit, as per a clearly specified agreement. However, as the observer in C1 noted during the observation sessions, "Managing intellectual property is difficult, as it is very hard to trace where ideas originate from and how they evolve into tangible products."
Solution: A collaboration needs to be established based on a consortium agreement specifying who the producers and consumers of intellectual property will be, as well as how the intellectual property will be maintained, if required, in the future. For instance, in C1, academic partners such as SRL are financed by the Norwegian Research Council through agreements. The consortium agreement stated that some partners, such as SRL, will publish research articles about their work with the industry after a careful scrutiny of the results together with the industry partners. Esito, for instance, was identified as an exploitation partner that would commercialize software, handle licensing agreements with other interested industrial partners, and provide maintenance support, if needed in the future. In C2, SZ obtained a US patent on its invention of the breathing sensor Flow. The patent allowed SZ not only to attract investors, but also to license its technology to other companies. In C3, a consortium agreement has been signed by all partners on how intellectual property will be shared. All research results and ideas that are performed on the data made available by industry partners and that are accepted for publication in scientific journals can be used by all partners. However, there are unwritten restrictions and a mutual understanding among competing partner companies to not reveal too much to each other. Neutral partners, such as research institutes, play an important role in brokering the interaction and protecting the interests of competing industrial partners in a large project.
Benefits: Establishing a legal framework such as a consortium agreement or a patent can simplify the collaboration and stem doubts that may arise around ownership of intellectual property.
Related Solutions: P9.

6.11 P11: Deriving Research and Business-Critical Questions Comes First and Data Sharing Later

Problem: Business-critical questions for partners in a co-creation process need to be vetted before seeking to curate data to address them.
Context: Often, researchers are eager to work on available data and generate insights. However, very often industrial partners need a good reason to share data, which is considered to be intellectual property and can take several years to curate. Similarly, researchers themselves need to ask which business questions are of scientific value and worth addressing before seeking to use whatever data are available to generate research questions and answer them. Working only from available data also creates a bias towards specifying obvious questions that can be addressed with that data.
Solution: Deriving research and business-critical questions should ideally be performed before the extraction of data. Here, deriving questions entails discussions to understand which variables and metadata from a stakeholder can be of relevance to address a problem. This is in contrast to requesting all possible data and then formulating questions based on that. Data are usually an asset in a company, and sharing data is not necessarily useful unless the intention is clear. This also relates to the data minimization principle of the General Data Protection Regulation (GDPR). Data minimization is a principle stating that data collected and processed should not be held or further used unless this is essential for reasons that were clearly stated in advance, in order to support data privacy. In GDPR terms, this is defined as data that are adequate. To derive research and business-critical questions, workshops where research and business questions are formulated, re-formulated, and prioritized by stakeholders can help. However, it is assumed that stakeholders know what data are available and can help guide discussions towards formulating research and business questions that are within a given scope. What is most important is the satisfaction of stakeholders with the formulation of the questions, in terms of both their business and scientific value. In C1, SRL organized workshops between stakeholders to come up with research questions relevant to businesses. These workshops were intended to specify questions for research-based innovation. The questions need to be arguable with previous scientific evidence, if possible have quantifiable answers, and be answerable using prototypical development and experimentation. The questions should lead a narrative, but also be mutually independent of one another. Once the questions were agreed on during these workshops, the rest of the half-year was used to co-create software prototypes to answer them. For instance, Toll Customs asked the question: Can we verify that all customs declarations are sent to the statistical bureau of Norway before being archived? This led to the development of the tool Depict that was used to model and verify this requirement. In C2, SZ and UiO were interested in detecting obstructive sleep apnea from abdominal breathing data. Before the data collection took place, it was very important to define the research questions that needed to be answered. One of the first questions was about the quality of the breathing data and how good it is for clinical decision making, addressed in [34]. Similarly, in C3, a research question of interest is the data quality of sensor data from machine tools, in particular the completeness and consistency of the data. Addressing this question entails the development of data acquisition systems that ensure at least 99% data completeness and consistency. At the beginning of the project, there was a phase of gaining trust through presentations and refining research questions that can bring new value to industrial partners. For instance, an engineer from IDEKO explained, "We use real-time systems to acquire high frequency vibration data from machine tools, but do not perform data quality checks. This is where we use the help of SINTEF to specify assertions on the data and to verify them in near real-time on data arriving at a frequency of 1Hz." This need had to be identified before SINTEF was given access to their REST API for the data.
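The following minimal sketch illustrates what specifying and verifying assertions on a 1 Hz data stream could look like. The field names, value ranges, and example records are assumptions chosen for illustration and are not the actual IDEKO/SINTEF setup.

```python
# Minimal sketch of near-real-time data quality assertions on a 1 Hz sensor
# stream. Field names and ranges are illustrative assumptions.
ASSERTIONS = {
    "spindle_speed_in_range": lambda rec, prev: 0 <= rec["spindle_speed"] <= 24000,
    "temperature_plausible": lambda rec, prev: -20 <= rec["temperature"] <= 150,
    "timestamp_monotonic": lambda rec, prev: prev is None or rec["ts"] > prev["ts"],
}

def check_record(rec, prev):
    """Return the names of all assertions violated by this record."""
    return [name for name, ok in ASSERTIONS.items() if not ok(rec, prev)]

stream = [
    {"ts": 0, "spindle_speed": 12000, "temperature": 35},
    {"ts": 1, "spindle_speed": 25000, "temperature": 36},   # speed out of range
    {"ts": 1, "spindle_speed": 11800, "temperature": 200},  # stale ts, implausible temp
]

prev = None
for rec in stream:
    violations = check_record(rec, prev)
    if violations:
        print(f"ts={rec['ts']}: {violations}")
    prev = rec
```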
Benefits: Deriving research and business-critical questions first ensures agreement on topics and that time is spent on something that is truly beneficial to the stakeholders in co-creation. It also places much less pressure on stakeholders who own data to give the data away without seeing immediate benefits from it.
Related Solutions: P8.

6.12 P12: Minimize Moving Parts

Problem: A co-creation project can have many moving parts, such as team size, partnerships, technology changes, and the evolution of use cases, leading to many sources of uncertainty.
Context: Both short- and long-term co-creation projects can have partners that suffer from employee churn leading to changes in team size, partners joining and leaving consortia, and the evolution of use cases over time. These moving parts can lead to uncertainty in terms of human resources and project objectives. If one partner is more prone to frequent changes, this can also lead to a deterioration of trust with other, more stable partners.
Solution: Minimizing moving parts in a project is typically necessary for its stability and longevity. When dialogues are established between specific people, it is best to select compatible people and maintain the dialogue instead of replacing them. Changing personnel in a dialogue can entail a large cost in on-boarding and re-establishing trust. If a company is large enough, it may even be necessary to have at least two employees following up on a project (bus factor of 2), such that the project stays on track even if one of them leaves the company. Similarly, in terms of use cases and their owners, it is ideal if partners and problem definitions stay stable during the course of the project, with minor variations. The choice of technological solutions to implement in a project should include a good mix of stable and experimental elements. The experimental aspects with high activity can be carried out in parallel, without affecting the long-term goals of a project hinged on more stable technological solutions. In C1, SRL relied on developing tools based on the stable Eclipse framework. The experimental elements were built as extensible plugins in the project without affecting the base software. Both TITAN [42] and Depict [54, 56] were tools developed on the Eclipse platform. Most employees recruited on the team were PhD students or postdoctoral fellows with a work contract of 2–3 years, enabling long-term stability in the project. In C2, a handful of critical members were hired full-time by SZ to ensure sustained development and customer support in a resource-constrained startup setup. These included two software developers, one industrial designer, one electronics engineer, and the CEO. The other employees were hired only part-time, and some of them served roles in other organizations. In C3, efforts from partners are bounded by the number of person-months. There are usually two or three members from each partner attending all meetings in the project. Meeting minutes are documented, and many of the meetings have been recorded since the start of COVID-19. The dialogue and relationships between partners have matured due to conversations among the same people for a long period of time.
Benefits: As a project manager in C3 summarized, “Minimizing moving parts brings stability to a collaboration and allows going deeper into solving a problem by a team. It also helps minimize the unpredictable costs from on-boarding new members, learning new and experimental software tools, and dealing with changes in use cases under study.”
Related Solutions: None.

6.13 P13: Anticipate Unavailability of Data

Problem: Researchers are eager to obtain data, perform statistical analysis and machine learning, and generate visualizations that bring valuable insight. However, very often the data obtained from partners are not conducive to statistically significant conclusions or adequate for tasks such as machine learning.
Context: When data are collected from an observation, they are not always stored with a clear purpose. Some variables can be observed and their evolution over time stored for potential future use. For instance, data can be in the form of logs produced while executing a software program. These logs represent daily use, but are not necessarily obtained from a controlled experiment. Using such data for making scientific conclusions may not be easy, as they do not allow a clear comparison between two different scenarios, such as in a randomized controlled trial. Many companies may collect years of such data and expect some value from it. Similarly, researchers seek high-quality data amenable to making scientific conclusions with strong statistical significance. The mismatch between the data made available and their expected utility for generating scientific evidence is a problem all parties in a co-creation should be wary of.
Solution: It is important to anticipate the unavailability of data necessary to generate scientific evidence and plan for it. This can be done by: (1) defining scientific problems that can be addressed satisfactorily with the existing stream of data, or (2) specifying and carrying out a new data collection protocol under controlled conditions amenable to high statistical significance. In C1, researchers decided to define scientific problems based on available data. For instance, in a collaboration with Cisco, researchers were provided with daily logs from automated testing. In [57], SRL researchers use deep learning to help find the most failure-prone test cases in continuous engineering processes at Cisco Systems, without enforcing a new data collection protocol. On the other hand, studying the impact of the mobile game intervention FightHPV [49], developed in collaboration with the Cancer Registry of Norway, required a stricter data collection protocol, where data about people using the app were collected based on informed consent, using their electronic national ID, also called BankID. Data about how people used the app and the learning from it were used as part of an observational study comparing the uptake of screening and HPV vaccination before and after the launch of FightHPV. In C2, SZ required data from the Norwegian Institute of Sports Science to create physical and deep learning models to predict respiratory minute ventilation from ribcage movement data. Since the data collection process is cumbersome, the development of the deep learning models was based on maximizing the use of techniques that can generalize reasonably well from small data, and on discussing the risks of faulty prediction for unforeseen data [55]. In C3, sensor data from machine tools such as CNC milling, turning, broaching, and grinding are needed to develop tools to validate and improve data quality for Industry 4.0. The sensor data from machine tools are sometimes not available due to technical issues such as variable frequencies, and sometimes due to security measures preventing data tampering. Our strategy to address this lack of data is to summarize the data based on the top five frequencies and discrete machine states, derived at the edge from a high-frequency acquisition of accelerometer data that is itself unavailable. This summary data are secure and contain adequate information for other partners in the team to derive data quality hallmarks.
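The summarization strategy mentioned above can be illustrated with a short sketch that reduces a high-frequency accelerometer window to its five dominant frequencies, so that only the summary, not the raw signal, needs to be shared. The sampling rate and synthetic signal are assumptions chosen only to demonstrate the idea.

```python
# Minimal sketch: summarize a high-frequency window by its top five frequencies.
import numpy as np

fs = 10_000                      # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)    # one-second window
signal = (np.sin(2 * np.pi * 120 * t)
          + 0.5 * np.sin(2 * np.pi * 750 * t)
          + 0.1 * np.random.default_rng(1).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
top5 = freqs[np.argsort(spectrum)[-5:][::-1]]   # five strongest bins, largest first

summary = {"top_frequencies_hz": np.round(top5, 1).tolist()}
print(summary)   # dominant components, e.g., around 120 Hz and 750 Hz
```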
Benefits: As a researcher in C3 mentioned, “Anticipating the unavailability of data is very important to manage expectations in a team.” It is also important for thinking creatively about generating value from available data or establishing a data collection protocol very early in the process, in order to obtain statistically significant results.
Related Solutions: None.

6.14 P14: Define Long-term Tasks that Enable Remote Collaboration

Problem: The continuous engineering practice used in industry, with short release cycles, is often also expected from research activities.
Context: Continuous engineering practices in industry are based on rapid releases of product updates, with a customer base and utility that are relatively well-defined compared to a co-creation process involving research. This agile approach is beneficial when goals are clear and tasks are well-defined. However, problems that require long-term research and experimentation are often expected to be broken down into small time-bounded tasks, as seen in continuous engineering best practices. This is not always feasible for problems where ideas need to mature from uncertain and unpredictable experiments carried out in collaboration between researchers and industry practitioners.
Solution: Researchers need to study existing practices, read recent research, and think in isolation to arrive at mature insights that can help industrial stakeholders think out of the box. Research activities need to be formulated as long-term tasks running in parallel and should be enabled by remote collaboration to draw on minds and expertise from around the world. Remote collaboration has become even more pronounced during the COVID-19 pandemic, which has forced many teams to collaborate remotely and made this the norm. This is especially necessary for members of the collaboration obtaining a PhD degree or undergoing postdoctoral training. On the other hand, industrial partners focus on short-term tasks that have a granularity small enough to be completed in a two-week sprint. This makes it difficult for industry to think long-term; hence, it is a perfect marriage and a win-win situation to work with neutral researchers who can look into topics that require long-term thinking and experimentation. In C1, SRL researchers explored long-term research topics such as the use of deep learning for test case prioritization and selection in continuous integration testing at Cisco Systems [57]. The researcher was a PhD student whose work helped optimize nightly build and test runs of Cisco's conferencing systems. In C2, SZ had the long-term goal of developing a mathematical model to predict respiratory flow from ribcage movement data obtained from their Flow sensor. This work required collaboration with the Norwegian School of Sports Science to obtain data to verify the validity of a physical model and also test new techniques in deep learning [55]. Data acquisition from an exercise spirometer and the Flow sensor developed by SZ was a long-term task that was performed in parallel and remotely by the sports school. Meanwhile, SZ was focused on selling their sensor devices to stakeholders who could build their own applications. In C3, most manufacturing companies have a continual production of parts and have been engaged with researchers to use the data from sensors monitoring the manufacturing process. For instance, RENAULT produces 1,500 cylinder heads for their Cleon engine per day. Their CNC machines can generate several hours of high-frequency sensor data while production is ongoing. Researchers from SINTEF use this stream of multivariate sensor data to do long-term research, i.e., compute metrics for data quality to ensure that it has completeness and consistency close to one hundred percent. The data quality research runs in parallel, without interrupting the manufacturing process. The data quality research is in turn used to attain the long-term goal of zero defect manufacturing. As a researcher at SINTEF said, "Long-term tasks facilitate remote work, as we work remotely in Norway and collaborate with partners in France and Spain to acquire relevant data for long-term research."
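As an illustration of the data quality metrics mentioned above, the sketch below computes completeness and consistency for a window of streamed sensor values. The expected sampling rate, value range, and example data are assumptions made for illustration only.

```python
# Minimal sketch of two stream-level data quality metrics: completeness
# (received vs. expected samples for a window) and consistency (fraction of
# values within an agreed range). Rates and thresholds are assumptions.
def completeness(timestamps, window_seconds, rate_hz=1.0):
    expected = window_seconds * rate_hz
    return min(len(timestamps) / expected, 1.0)

def consistency(values, low, high):
    in_range = sum(low <= v <= high for v in values)
    return in_range / len(values) if values else 1.0

ts = list(range(0, 3600))[:3540]               # 60 samples lost in one hour
vals = [42.0] * 3500 + [9999.0] * 40           # 40 out-of-range readings
print(f"completeness = {completeness(ts, 3600):.3f}")    # ~0.983
print(f"consistency  = {consistency(vals, 0, 200):.3f}")  # ~0.989
```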
Benefits: Defining long-term tasks allows stakeholders to perform activities that require maturity. It also aligns well with the goals of young researchers who wish to establish themselves over a period of 2–3 years (PhDs and postdocs), not just as problem solvers but also as thinkers and philosophers in their domain. Long-term tasks also facilitate remote work, which is necessary for isolated thinking and reflection and has become inevitable during the COVID-19 pandemic.
Related Solutions: P4, P7.

7 Anti-patterns for Industry–Academia Collaboration

Anti-patterns are an extension of patterns. Anti-patterns are frequently used but ineffective responses to commonly occurring problems [10]. They are useful because they describe the causes of failures, which are important to understand so that the corresponding failures are avoided and mitigated rather than commonly repeated. Similar to patterns, anti-patterns are specified using templates, which supports their reusability. We use the following template: Synopsis ♦ Context ♦ AntiPattern Solution ♦ Symptoms and Consequences ♦ Refactored Solution ♦ Benefits and Consequences ♦ Related Solutions, illustrated in Figure 2. Anti-patterns are related to a specific problematic solution, which occurs in a specific context. They could be patterns that have become problematic over time, or that do not work in a context other than the one they were initially proposed for. It is important to document how the problematic solution became problematic by incorrectly resolving underlying problems that exist in the context. Symptoms are useful for recognizing a specific problematic solution (anti-pattern) and understanding its implications. Consequences describe the implications of the problematic solution. The refactored solution is an effective method of resolving the problematic solution. It provides benefits, but it may also have remaining unresolved issues, called consequences. Finally, the refactored solution can be related to other solutions.

7.1 AP1: Miss the Forest for the Trees

Synopsis: Focus on individual evaluation parameters and expect the emergence of impact.
Context: The success of an IA collaboration project is typically measured on each side of the collaboration using specific metrics. Such measurements help organizations evaluate the benefits of investing resources into such projects and serve as a compass for future investments.
AntiPattern Solution: During collaboration, scientists are asked to produce output such as a high number of publications, successful grants, and graduated PhD students. The h-index is often used to measure impact, defined as the maximum number h of publications cited at least h times. However, there are many ways to boost the h-index, including self-citation, obliging authors to cite papers during a blind review process, or collaborating with mature authors, to name a few. Similarly, industry practitioners are interested in sales and profit, as their bonuses are linked to such numbers. In addition, they may overlook the ethical and environmental impact of their products and services.
Symptoms and Consequences: Focusing on parameters that evaluate scientists and industry practitioners individually often takes the focus away from the real expected impact that can be achieved as a team. Francis Darwin, the son of Charles Darwin, once said, "In science, the credit goes to the man who convinces the world, not to whom the idea first occurs." This quote succinctly summarizes the difference between impact and output. Individual evaluation parameters may be well perceived for an individual's career progression, but they are not necessarily a good indicator of impact on our society. Boosting individual parameters in academic excellence may overshadow innovative ideas among younger researchers. Many potentially world-changing inventions born in academia end up in the so-called technological valley of death, as researchers lack the skills or support to convince funding agencies that their inventions are worth investing in and can change the world. Similarly, many companies die both slow and quick deaths by focusing only on short-term goals, such as sales, while ignoring ideas developed in research labs and startups or societal trends.
Refactored Solution: High impact is defined by the Cambridge English dictionary as the ability to withstand great force. Although this refers to materials, the same concept can be applied to scientific research or industrial products. Research that can stand the query and test of the brightest minds outside the echo chambers of confined research communities has the potential for impact. Similarly, products and services offered by companies need to be able to beat their competition in several aspects to achieve real impact. High impact can be achieved when academia and industry can pinpoint the improvement or creation that end users see as a welcome change. For example, in C1, one of the industrial partners, Cisco Systems, was interested in building the highest quality of video conferencing systems with special emphasis on security for high-end conversations. This drove them to make their testing methods of a world-class standard, in collaboration with SRL researchers. A number of articles published on testing video conferencing systems increased the trust in the quality of Cisco's products. In another C1 project, a project manager at Toll Customs said, "We call this a high impact project; we see the improvement in our daily practice by using Depict, and we have joint publications showing this." In C2, the impact at the startup phase of SZ translated to increased sales in sensors. SZ achieved impact by collaborating with numerous partners and having them purchase sensors and use them in their research. The published results and communication led to increased contacts from more individuals, companies, and universities worldwide for sales of sensors. SZ collaborated with artists and studios to promote the technology through installations in galleries and the public space. These activities generated media coverage and direct interaction between SZ's technology and the general public. In C3, it is important for individual partners to have well-defined problems to solve. In monthly meetings, we constantly re-align the partners by asking them how their work contributes to the key performance indicators (KPIs) of 99% average completeness and consistency in manufacturing data quality. These metrics can be realized individually; however, continuous re-alignment of the partners' metrics increases the quality culture in the entire manufacturing line. As operators are made aware of the quality of the data that is acquired, it gives them higher confidence in perceiving quality. This helps enable the ultimate goal of zero defect manufacturing and the reduction of industrial waste, addressing the UN's sustainability goal 12 on responsible consumption and production.
Benefits: Greater impact creation as a result of focusing on the bigger picture of how co-creation in a team brings real change to society, rather than on individual evaluation metrics.
Related Solutions: P3, AP13.

7.2 AP2: Non-reusable Minimal Viable Solutions

Synopsis: Developing solutions with minimal features for problem owners to validate the feasibility of ideas quickly, but overlooking solution extensibility and sustainability.
Context: It has become a common mantra to develop minimal viable software products in projects to impress and engage stakeholders.
AntiPattern Solution: A minimal viable product, or MVP, is quickly developed by an academic group to address an immediate industrial need. It is often done with impressive speed by reusing open source solutions and existing long-standing libraries and tools. The development of an MVP is a well-known concept in the lean philosophy for startups and has found its way into academia.
Symptoms and Consequences: Many IA collaboration projects need to be realized in short periods of time, resulting in the development of minimal viable products. These products often end up becoming project-specific. There may be similarities between such projects that run simultaneously, but since they last for a limited time, there is very little time to abstract from the different repositories and develop a coherent (and possibly more generalized) artifact that could be reused. Therefore, in the worst-case scenario, there is a risk of reinventing the wheel for each project.
Refactored Solution: Research groups need to spend time developing and maintaining a vision and software artifacts that can stand the test of time. It is important to develop a core engine that can interface with many sources and sinks, and can be deployed on many operating platforms, with features that represent building blocks for constructing solutions for projects with different requirements and stakeholders. The abstraction of what can be relevant for a long time requires continuous dialog and exchange of ideas between different projects that execute simultaneously. The goal is to enable abstracting from artifacts such as prototypical source code developed in projects. However, such communication between projects, especially projects spread over time, is also contingent on the clauses of an intellectual property rights agreement (if one exists). It is also necessary to build software artifacts with an architecture that is easy to use and extend. In C1, a core engine for test case optimization was developed and applied to problems presented by several industry partners, including Cisco Systems and ABB Robotics. In C2, SZ developed a skeleton app with an extensible architecture that could be easily adapted and tested for the needs of various stakeholders in a short period of time. A software developer at SZ indicated, "Developing an extensible app saved us so much time when reusing the solution for different applications." In C3, all software artifacts are developed with extensibility in mind. For instance, SINTEF developed a system for erroneous data repair that is based on a data pipeline that can be improved with new machine learning models or new components developed by other project partners, such as for explainable AI.
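One possible reading of the "minimal extensible engine" idea is a core pipeline with a small plugin interface, as in the sketch below; the step names and record format are hypothetical and not taken from the actual C1–C3 tools.

```python
# Minimal sketch of an extensible core engine: a pipeline with a tiny plugin
# interface, so new processing steps (e.g., a new repair model) can be added
# by partners without touching the core. Step names are illustrative.
from typing import Callable, List

class Pipeline:
    """Core engine: runs registered steps in order over a batch of records."""
    def __init__(self):
        self._steps: List[Callable[[List[dict]], List[dict]]] = []

    def register(self, step):
        self._steps.append(step)
        return step  # allows use as a decorator

    def run(self, records):
        for step in self._steps:
            records = step(records)
        return records

pipeline = Pipeline()

@pipeline.register
def drop_incomplete(records):
    return [r for r in records if r.get("value") is not None]

@pipeline.register
def flag_outliers(records):
    return [dict(r, outlier=abs(r["value"]) > 100) for r in records]

print(pipeline.run([{"value": 3}, {"value": None}, {"value": 250}]))
```

The design choice is that the core only defines the step interface and the execution order; everything domain-specific lives in plugins that partners can supply and exchange.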
Benefits: Moving from the idea of a minimal viable product to the concept of developing a minimal extensible engine can greatly improve reuse and applicability to a number of diverse problem domains. This mindset helps a research group position itself as the maintainer of a constantly reused software artifact, giving the group recognition in its community.
Related Solutions: P6, AP3, AP8.

7.3 AP3: Premature Results in Short-term Collaborations

Synopsis: Short-term IA collaborative projects may pose lower risk for industry investments, but inevitably lead to limited practical and scientific impact.
Context: The industry side of an IA collaboration typically prefers short-term projects over long-term ones [9, 29], for example, to cater to requirements for a short-horizon investment of resources.
Anti-pattern Solution: Short-term projects may align well with the goals of most companies; for instance, they speed up learning about whether investing in the collaboration is worthwhile. However, short-termism is not always aligned with the long-term research vision of the project.
Symptoms and Consequences: In such short-term collaborations research goals are often left yearning for scientific abstraction and maturity. A researcher at C1 indicated, “Our students often struggle with publishing the results coming from short-term projects, because they have not reached the maturity required for publication.”
Refactored Solution: Projects aiming to achieve both practical and research impact should aim for a mid-term duration. This gives enough time for the thinking and abstraction process that is necessary in scientific research, but also allows the creation of time-tested tools that industry can effectively apply and assimilate. From our experience, both the C1 and C2 collaboration contexts benefited from structuring longer projects into sprints of at most three months, after which they were re-evaluated and adjusted. This was an optimal way to keep the project partners engaged in short bursts of a few months. Scientific maturity was attained through incremental publication and going through a peer-review process every six months or so. The scientific publication process gave us external validity for our chosen direction, while also helping us address questions that we had not foreseen. This led to more maturity in the scientific aspects of the collaborations. In C3, certain deliverables must be developed and accepted within a very short time in the early stages of the project. It is often important to communicate the premature nature of some of these deliverables to reviewers. Moreover, the project is planned in an incremental manner, such that new versions of deliverables are produced every year and the quality and maturity of the project results improve with time.
Benefits: Achievement of satisfactory levels of both scientific maturity and practical readiness of the results created in a collaboration.
Related Solutions: P6, AP2.

7.4 AP4: No Skin in the Game

Synopsis: “Free” IA collaboration projects may attract industry partners, but are prone to the risk of insufficient investment from industry partners and overall dissatisfaction.
Context: Funding programs for IA collaboration projects involve academia and industry at varying levels of investment from each partner.
Anti-pattern Solution: In some IA collaboration projects, no investment is required from industry, thus making such projects “free” for industry. Researchers spend several months identifying a relevant problem and coming up with a solution that can benefit the industry partner. However, the industry partner has limited involvement in the project.
Symptoms and Consequences: The consequences of a partner not having skin in the game are reduced intensity and involvement in the project and a constant feeling of fragility. The partner with more skin in the game has to go out of its way to make sure the partnership holds with partners that have less at stake. The success and longevity of a relationship is also in jeopardy when some partners give less and receive services for free.
Refactored Solution: “Skin in the game” means having incurred risk (monetary or otherwise) by being involved in achieving a goal. It is important for partners to have skin in the game in a co-creation project, such that they incur a cost for, for example, leaving the project midway. However, it is difficult to enforce such costs for long-term projects spanning several years because relationships, trends, and enthusiasm can vary greatly. Hence, such costs should only be associated with short-term projects of three to at most six months. The simplest approach to achieving skin in the game is introducing “in-cash” investment into a collaboration, so that all parties have a vested interest in the project. However, in-cash investment also has the potential of souring relationships between partners, as trust may be lost and one partner may want to pull the collaboration in a different direction. In C1, SRL established a consortium agreement where partners had to commit at least one person-year every year over the 8-year project. In C2, partners were required to purchase sensors developed by SZ in order to obtain support. The alternative of lending sensors did not always work very well, as some partners did not have a concrete plan for how to use them. A project manager at SZ noted, “Once we decided to put a price on the sensor, we could clearly see who of our collaborators is truly interested in collaboration.” In C3, most partners are invited into a consortium based on prior experience of working together. Partners who do not meet their commitments often undergo a natural selection process of being excluded from funding opportunities in subsequent projects. Hence, it is of paramount importance for both industry and research to attain a level of satisfaction in the collaboration by delivering what is promised. Skin in the game is ensured through monitoring the effort reported in deliverables, attendance lists, and the level of participation in meetings.
Benefits: Having skin in the game will change the way partners collaborate, taking joint ownership of the project. It may also mean that intellectual property disputes can arise over the work put into the project.
Related Solutions: P1, P5, P8, AP5.

7.5 AP5: Inequitable Value Creation

Synopsis: Partners optimize their own KPIs to create value, thus endangering equitable value creation.
Context: Researchers and industry stakeholders in a project are measured based on KPIs that quantify the value they generate.
Anti-pattern Solution: Value creation for partners is an essential aspect of a research-based co-creation process. Value creation is any process that creates outputs that are more valuable than their inputs. This is the basis of efficiency and productivity and is often quantified as KPIs. Value for businesses can be quantified in the form of KPIs such as the number of improved processes, products, sales, and services. Value for researchers can be quantified in the form of KPIs such as publications in journals with a high impact factor, or the creation of open-source tools and open datasets for research. It can also be seen as enhancement of knowledge or well-being in the workforce for all partners.
Symptoms and Consequences: There can easily be discord over which path creates value for each partner. For instance, researchers may be too focused and isolated while writing scientific publications, neglecting the need for value creation for an industrial partner, which may entail improving a product that is used on a daily basis. Similarly, industry partners may not see the value in a validation study of their products and services, which can create more trust in their company among shareholders and customers.
Refactored Solution: Value creation must be discussed with clarity at the very start of a project. Value creation can take place both in the short term of a few months and in the long term over a few years. Value creation must be quantified such that each partner comes out of the project feeling that their inputs resulted in something of value to themselves. There must be an equitable balance in value creation for all partners. For instance, if researchers obtain a high-impact publication, enough effort and time must be spent in ensuring that the result also increases sales or leads to improved products and services for industrial partners. Quantifying such creation of value over a common index is a possible way forward. In C1, SRL defined value creation in terms of a balance in invested effort in the project, along with metrics about artifacts created by different partners, which include reports, research papers, software, or new sales leads. These metrics, when seen together, gave the consortium an idea of where to invest more effort to keep the value creation equitable. In C2, equitable value creation was measured in terms of markets made available to SZ based on the research collaboration. For instance, prediction of obstructive sleep apnea by the University of Oslo opened a very large market that helped SZ refine its business strategy. In C3, making value creation equitable entailed finding a balance between creating data quality tools for manufacturing partners and publication of scientific results. The scientific results were essential for the research institutes, such as SINTEF and TU Darmstadt, while the tools developed during the project would eventually become prototypical components in the product lines of partners in the manufacturing sector, such as IDEKO, PREDICT, and DANOBAT.
Benefits: Equitable value creation in a co-creation project can increase its sustainability and strengthen the cohesion of the generated value. For instance, establishing a causal link between high-impact research resulting in increased visibility and sales, and improved stock valuation for an industry stakeholder can be a mutually acceptable win-win situation. A researcher at C1 said, “Equitable value creation is a must for sustainable collaboration, otherwise, one side of the collaboration will give up.”
Related Solutions: AP4.

7.6 AP6: The Incomplete Tower of Babel

Synopsis: Industry and academia speak different languages, especially differing on degree of mathematical formalism, which leads to less clarity in co-creation.
Context: Stakeholders do not speak the same language, as in the biblical story of the Tower of Babel, which remained uncompleted because the workmen could not understand one another’s language. Industry practitioners and academic researchers are trained in different environments, from which they inherit different vocabularies with different degrees of formality. However, when practitioners and academics are involved in a coordinated effort to solve a problem together, it becomes important to communicate ideas in the same terminology, in order to avoid confusion and ambiguity, and to make sure that arguments and ideas are conveyed in a clear and convincing way. For instance, from our experience, the term testing for academics is less about physical and real-world testing (which is how industry usually interprets it) and more about generating variations of test cases in a constrained domain. Moreover, what academics call software tools are only prototypes to practitioners, lacking robustness and many important -ilities [61].
Anti-pattern Solution: Academics like to strip technology of its hype and specify a well-defined problem in the language of mathematics, while industry practitioners build their careers on technologies that are contemporary and cool. As an SE practitioner in C2 pointed out, “When we explain our daily practice to our academic partner, it is very difficult to use the same language they do, so they can understand us.” In a collaborative project, academics talk to industry practitioners, take notes, and create a mental image of the problem. After that, academics typically isolate themselves and transform the mental model into mathematical terms and formulas. They essentially convert an ill-conditioned problem from the real world into a well-conditioned problem using their preferred mathematical formalism. This formal specification is then used to exchange data and perform experiments, to solve a problem that concerns the industry practitioner.
Symptoms and Consequences: Differences in terminology and language between academics and industry practitioners distance the stakeholders over time. A lack of dynamic and business-focused communication between academia and industry often leads to scientific research straying too far from the changing needs of industry practitioners.
Refactored Solution: Understanding provides the foundation for effective collaboration. Therefore, both parties in a collaboration must make an effort, from the early stages of collaboration, to understand the terminology used by the other party and stay synchronized. This can be done spontaneously in day-to-day interaction, but also with a business focus, through lightning talks on specific topics. Such an approach helps readjust academic language to continually evolving industry needs, while practitioners learn to think clearly in terms of well-defined mathematical formalisms. For example, in C1, SRL researchers gave short talks on topics relevant to the work being done, such as combinatorial testing or multi-objective optimization, along with a presentation of how a real-world problem may be formalized. Practitioners, such as those from ABB Robotics and Cisco Systems, demonstrated how they test product features, which was eventually formalized as a combinatorial interaction testing problem. The important aspect of such knowledge exchange is asking frequent questions and clarifications, instead of assuming what a term means. SRL and its partners spent a large amount of time understanding and defining what testing really meant for each partner. For instance, testing for the Norwegian Toll Customs entailed verification of databases consisting of customs declarations, while testing for ABB Robotics meant verifying that the operation of a painting robot adhered to a trajectory specification. In C2, there were differences in terminology between machine learning experts and those performing signal processing on the sensor data. For instance, signal processing experts use techniques for transforming signals from the time domain to the frequency domain, to obtain different features for reasoning about sensor data, while machine learning experts directly use raw time series data and a neural network to obtain outputs of interest with minimal feature engineering. We bridged the gap between the two schools of thought by comparing results from both approaches, as part of the software pipeline, from the early stages of the project. In C3, partners in the manufacturing domain and those in ICT often speak different languages. For instance, SINTEF formulated the problem of erroneous data repair in manufacturing sensor data as a machine learning and SE problem, involving concepts such as data version control, data pipelines, convolutional neural networks, epochs, training and test sets, and feature engineering. Despite the mathematical formulation of the machine learning problem, it was made quite clear to the stakeholders in the manufacturing domain how the output of a neural network would contribute to improved data quality.
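To illustrate how the two schools of thought in C2 could be compared in code rather than argued about in the abstract, the sketch below contrasts the two views of the same sensor signal: a hand-engineered frequency-domain feature obtained with a Fourier transform, and the raw, windowed time series that a neural network would consume directly. This is a simplified, hypothetical example of ours, not code from the actual C2 pipeline.

import numpy as np

# A synthetic "sensor signal": a 2 Hz oscillation sampled at 50 Hz, plus noise.
fs = 50.0
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(t.size)

# Signal-processing view: transform to the frequency domain and extract an
# interpretable feature (the dominant frequency of the signal).
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
dominant_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component
print(f"Dominant frequency: {dominant_freq:.2f} Hz")

# Machine-learning view: no feature engineering, just cut the raw time series
# into fixed-length windows that a neural network would consume directly.
window = int(2 * fs)  # 2-second windows
segments = signal[: signal.size // window * window].reshape(-1, window)
print(f"Raw model input: {segments.shape[0]} windows of {window} samples each")

Placing both views side by side makes the trade-off between interpretable features and end-to-end learning a concrete, testable question rather than a terminological dispute.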
Benefits: Continuous synchronization between academia and industry to improve mutual understanding of concepts, models, and experimental results can help both academics and practitioners to achieve a higher level of clarity in co-creation.
Related Solutions: P4.

7.7 AP7: Immeasurable Objectives

Synopsis: Objectives serve the purpose of informing the partners of what can be expected from the collaboration. However, immeasurable objectives serve no purpose other than to contribute to the discontent of the stakeholders.
Context: Measuring the progress of a project at different stages of an IA collaboration is a good practice that helps reach the project goals faster.
Anti-pattern Solution: When the problem analysis is conducted at too high a level and the requirements of the target solution are collected casually, it is difficult to define clear and measurable objectives. An example of an unclear objective is to develop an effective tool for test case prioritization. The problem with this objective is that it is too vague and unspecific, providing no means to measure when the objective has been met.
Symptoms and Consequences: Ill-conceived objectives nearly always guarantee dissatisfaction with the outcome for the problem owner.
Refactored Solution: It is necessary to analyze the problem to be solved thoroughly, to capture detailed requirements, and to understand the desired outcome. This information should then be translated into an objective in a formalism with well-defined syntax and semantics (e.g., logic, a mathematical function). Objectives need to be specific and measurable. In C1, for instance, we defined the objective as developing a tool that applies test case prioritization with high fault detection as the objective function. The effectiveness of the tool can be measured in terms of the fault detection capability of the prioritized test cases, compared to manual test selection, using historical test data. In C2, we specified an objective as a machine learning problem: given overnight breathing patterns with manual annotation of apnea events, create a machine learning model to automatically predict sleep apnea from new overnight breathing patterns. The effectiveness of the objective can be measured in the form of prediction accuracy and recall, and represented as a receiver operating characteristic (ROC) curve. As a researcher in C2 observed, “These measurements give clarity and confidence to a collaboration, as they serve as scientific evidence.” In C3, it was necessary to break down the high-level KPIs of achieving 99% data completeness and consistency into smaller, manageable, and measurable goals. SINTEF organized presentations for each partner in the group and produced a synthesis of who does what with the manufacturing data. This information was then used to break down the work into time-bound goals for each partner, to improve data quality and help achieve the goal of 99% completeness and consistency. For instance, SINTEF will develop a module for data profiling and erroneous data repair, INLECOM will develop a module for anomaly detection, PREDICT will develop a module for temporal clustering of manufacturing data, and DNV GL will develop the architecture for data quality as a service. All these individual efforts have a measurable impact on the KPIs.
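To show what “measurable” can mean in practice, the sketch below turns an objective like the C2 apnea-prediction example into concrete numbers: given ground-truth annotations and model outputs, it reports accuracy, recall, and the area under the ROC curve, against which progress can be tracked over time. The data and threshold are hypothetical; this is a generic illustration of ours, not the evaluation code used in C2.

from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

# Hypothetical evaluation data: 1 = apnea event, 0 = normal breathing.
y_true = [0, 0, 1, 1, 0, 1, 0, 1, 0, 0]                          # manual annotations
y_score = [0.1, 0.3, 0.8, 0.6, 0.2, 0.9, 0.4, 0.45, 0.05, 0.2]   # model confidence
y_pred = [1 if s >= 0.5 else 0 for s in y_score]                 # thresholded decision

# Each metric yields a number that can be compared against a target agreed
# upon by all partners (e.g., "recall of at least 0.9 on held-out patients").
print(f"Accuracy: {accuracy_score(y_true, y_pred):.2f}")
print(f"Recall:   {recall_score(y_true, y_pred):.2f}")
print(f"ROC AUC:  {roc_auc_score(y_true, y_score):.2f}")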
Benefits: Well-defined objectives lead to measurable progress and more satisfaction with the results produced.
Related Solutions: P11.

7.8 AP8: Confound Lab Setup with the Real World

Synopsis: In SE, only research knowledge and prototypes tested in the wild stand a chance of success when deployed in operation.
Context: To prove useful in practice, the results of IA collaborative projects need to be thoroughly tested for their envisioned application scenarios.
Anti-pattern Solution: SE researchers often work to solve practical problems. For example, in our collaboration with Cisco, we used test logs to improve the cost-effectiveness of testing video conferencing systems. In the collaboration with ABB, we used operational data to predict maintenance activities of ABB robots. When designing solutions to such practical problems, researchers may overlook the requirements and constraints of the solution to be deployed in practice, such as the volume of data being produced daily, or they may test the solution in overly simplistic scenarios. Failing to take real requirements into consideration results in limited scalability of the candidate solution. What seemed a promising technology when tested in the lab suddenly cannot be transferred into a working solution in practice.
Symptoms and Consequences: Knowledge developed and tested in the lab does not generalize to the real-world environment.
Refactored Solution: During early stages of collaboration, researchers need to understand as much of the domain as possible, to identify the technical and non-technical requirements of the solution sought. Because the requirements form the basis for deriving working assumptions that define what the target solution should look like, early requirements capture is crucial. Later, during solution testing, the requirements can be used to generate a range of more realistic test scenarios to validate the robustness of the system once deployed in operation. In C1, SRL came close to recreating a real-world setup, given budgetary restrictions. The research lab acquired a video conferencing system from Cisco Systems and a UR3 collaborative robot from Universal Robotics to perform testing research. This industrial equipment follows the same standards as the higher-end versions in the product lines of the partners. Experiments performed with this equipment were transferable to a real-world setup in a factory, due to the use of the same standards and software programs. In C2, sensors were tested in low-temperature ice baths and, for waterproofing, under a small amount of pressure, to ensure that their connectivity and thermal sensitivity were understood and managed before deployment to a partner. These experiments were repeated several times to ensure that there were no confounding factors in the measurements. In C3, initial experiments on data quality validation were performed using a publicly available dataset on CNC milling.7 However, demonstrating the algorithms for data quality validation and repair on this dataset led to obtaining access to IDEKO’s live CNC machine with real data from production operating 24/7.
Benefits: Research knowledge and prototypes, when deployed in practice, scale to the complexity of the real-world environment. An SE practitioner in C1 said, “We seamlessly deployed the tool in our test framework, largely because the tool was developed and tested by working with real video conferencing system and data.”
Related Solutions: AP2.

7.9 AP9: Sunk Cost Fallacy in Public Research Grants

Synopsis: Investments in IA collaboration projects should not be contingent on previously invested resources, but on clear signs of the collaboration’s sustainability and its ability to create industrial and societal impact in the longer term.
Context: Academia aims to obtain long-term public grants (3–10 years) from national research funding agencies or the European Research Council for financial stability.
Anti-pattern Solution: Academic researchers create consortia with industrial partners and apply for funding from national and European research councils. These grants are aimed at supporting long-term value creation and knowledge transfer from research labs to industry. The research councils often tend to reinvest in the same people, companies, and research groups over several years or sometimes decades.
Symptoms and Consequences: The model of supporting businesses and academia with public grants has been applied for decades in several countries with reasonable success, in terms of exploitable intellectual property and improved training of young professionals prepared for industry. However, many research programs continue to be funded because they have received funding previously. The funding is often obtained by the same person growing into a leadership role. This prevents new and innovative ideas from getting enough attention and funding to develop, because the priorities, vision, and tone are set by the leader. Moreover, there is little accountability in terms of industrial and societal impact that is financially sustainable without public grant support.
Refactored Solution: Research groups should address industrial and societal impact and its financial sustainability during a long cycle of funding. This can involve taking risks, such as spinning off a business with private investments to demonstrate adequate skin in the game from various stakeholders, innovation through patents, and job creation. This effort should be recognized by public grant bodies and used as evidence for a new round of research funding. In C1, SRL made one attempt at spinning off a software testing startup based on the technology developed through the research collaborations. However, due to lack of resources, the initiative did not continue. A researcher in C1 who was leading the spin-off initiative reflected that “While eventually we did not manage to get the startup off the ground due to the lack of resources, through the venture with investors we proved the innovation potential underlying the technology, which was followed by some seed money.” In C2, maintaining financial stability required SZ to pivot from a consumer product company to a consulting firm that allows white labelling of its sensor product to larger actors. SZ did not actively seek public grants due to the large effort and relatively low chances of success. It is also important to note that the creation of business entities requires that people with business, marketing, sales, and design acumen meet scientific researchers, and that such a meeting place is provided. In C1, SRL established the Simula Garage, an incubator to facilitate such interactions. In C3, the ongoing project is a follow-up of another EU project, MC-Suite,8 which was smaller than InterQ in terms of budget and the number of partners. InterQ had to mitigate the perception of financing the same partners as in MC-Suite and involve new partners that they had not worked with earlier. The addition of new partners, such as TU Darmstadt, INLECOM, and DNV GL, has helped bring many new ideas into the project that were not foreseen by the older members.
Benefits: Nudging use-inspired research and co-creation towards financial sustainability helps minimize the dependence of research groups on purely public funding. Public grant bodies, on the other hand, can diversify their investments and avoid the sunk cost fallacy.
Related Solutions: AP11.

7.10 AP10: Adhering to Traditional Reward Mechanisms

Synopsis: Adherence to traditional reward mechanisms may suit individual partners, but only adherence to bilateral reward mechanisms can bring about successful IA research co-creation.
Context: All partners in a co-creation project are driven by traditional reward systems internalized from years of operation in specific contexts. These reward mechanisms are followed to generate activity and quantifiable numbers that help individuals advance in their careers.
Anti-pattern Solution: Industry partners are rewarded for product quality, the number of customers acquired, and customer satisfaction, while researchers are rewarded for publications in high-impact-factor journals. In C1, for instance, for SRL researchers one metric of interest was publications in high-impact journals and top conferences, while for the industry partner Cisco the main metric has been maintaining high product quality through rigorous testing. Similarly, in C2, the main reward mechanism for SZ was high-volume sales of their sensors and simplification of customer support, while for their academic partners it was about high-quality publications.
Symptoms and Consequences: Mismatches in rewards mechanisms can create isolation of partners, as each partner tries to optimize metrics that interest their organization.
Refactored Solution: Each partner is the quintessential frog in its own well, unless it understands the reward mechanisms of the others. For instance, it is important for a research organization to realize that companies can only thrive financially when they obtain sales of their products and services. One solution is to have researchers work part-time in a company to understand its reward mechanisms. Similarly, it is also worthwhile for an industry practitioner to spend some time in a research group and understand the complexity of publishing high-quality scientific articles and their benefits for society. In C1, we had an industrial PhD student from ABB Robotics who spent some time in a research group and published several papers. In addition, he facilitated the political decisions to incorporate research artifacts produced in the collaboration for testing ABB’s painting robots. A researcher at SRL pointed out that “This factor greatly streamlined the collaboration between SRL and ABB robotics.” Similarly, in C2, the collaboration between SZ and the University of Oslo was better aligned after the University of Oslo included SZ in research grant proposals as a partner and a vendor, to help increase sales for SZ. In C3, reward mechanisms are defined as KPIs as part of the consortium agreement, before the commencement of the project. These KPIs are something all partners in the project agree to and see as feasible goals to attain, despite their own reward systems in their companies. These KPIs are often specified to be simple, easy to achieve, and amenable to interpretation. For instance, in InterQ, a KPI is to achieve 99% data completeness in acquiring manufacturing data, and what is most relevant to the project is demonstrating this goal.
Benefits: Matching reward mechanisms can make it easy to develop a win-win situation for collaborators in a project. A collaboration can flourish if the partners understand each other’s reward mechanisms and the need for them in society. Ideally, companies that gain an edge from research will see increased sales of their products and services, while researchers can create high impact from their research work.
Related Solutions: P3, P8, AP13.

7.11 AP11: Arranged Marriage

Synopsis: Long-term IA collaboration projects need to welcome change in different aspects of the collaboration (leadership, resources), as an opportunity to improve defective parts and be able to co-create value.
Context: National and European councils that fund scientific research set criteria that require the establishment of long-term co-creation projects between diverse stakeholders, many of whom may never have worked together in the past. It is in the best political and financial interests of the research councils to ensure that these long-term projects succeed, very much like arranged marriages in India.
Anti-pattern Solution: Whenever a governmental research council decides to allocate funds to a project involving academic and industry partners, a consortium agreement is signed that outlines how time will be spent by each partner and how the intellectual results will be exploited. It also outlines how conflicts will be managed and mitigated.
Symptoms and Consequences: The stakeholder responsible for running the project is often expected to keep the funding bodies and research council happy and satisfied about the ongoing collaboration. This can often lead to phrasing the results and outcomes of a project with a steady streak of positivity. The non-engagement of some partners and poor outcomes are not presented with full clarity, since it is in nobody’s interest, especially when the money comes from public funding with no real ownership. Reviewers appointed to evaluate projects find their role limited to going through a checklist, as they do not really have “skin in the game.” Elected politicians may sometimes be seen as in charge of setting priorities and allocating grants; however, their own terms are often shorter than the length of long-term co-creation projects. It has become very common to smooth the communication of results to hide bigger issues underlying a project. This often leads to a tremendous loss of resources and public taxpayer money over several years. It also leads to sub-optimal and forced relationships between stakeholders in projects that lack the level of synergy necessary to stand the test of time and be beneficial to society.
Refactored Solution: Negative results should be given as much importance as positive results in co-creation projects. When partnerships do not reach a mutually beneficial agreement, it should be easy to replace partners in long-running projects. Intellectual property agreements should ensure that partners are fairly compensated for their contributions in case they are replaced; this may be seen as analogous to a divorce in a marriage. Similarly, changes of personnel and leadership should be considered in research projects when conflicts arise frequently and are not resolved. These collaborations should be treated similarly to changes in leadership in businesses, where a board elects a CEO. All in all, the funding bodies should learn to embrace both positive and negative results as their own and see the value in the honesty. In C1, SRL had to part ways with Norwegian Toll Customs and FMC Technologies midway through the project, as the companies’ strategies did not align with the goals of the project. They did not see value in spending more time and effort in the project after a few years of effort. This was communicated to the Norwegian Research Council and a new partner, the Cancer Registry of Norway, was added to the project. In C2, most public grants were secured by the University of Oslo and the collaboration with SZ lasted for short periods of time, whenever it was relevant. Here, the person of contact or the role of leader in a project varied depending on who was best suited to manage the collaboration. In C3, the funding criteria require at least three European countries in a project, which can be seen as a peace project that fosters collaboration while countries get to know each other despite cultural differences. A good approach to a successful collaboration is to have a solid foundation of partners who have worked with each other, and then add new partners to the project. For instance, SINTEF, IDEKO, DANOBAT, and TEKNIKER as one group, and Renault, Predict, and Comau as another group, have previously worked together, while INLECOM, DNV GL, and TU Darmstadt are newcomers to the consortium. Despite the strong collaborative foundation, some of the new members have had reservations about the collaboration, due to potential conflicts of interest in intellectual property and in business partnerships. We mitigate the effect of such reservations by creating separate groups of industrial partners with conflicts of interest, with neutral researchers ensuring that project results are shared equally by both parties.
Benefits: There are many benefits to being flexible with agreements in long-term collaboration projects. When there is a lack of synergy, partners can leave and be replaced. As a project manager at SZ reflected, “It is in the best interest of all parties to say when things are not working, instead of hiding it, so that we can focus energy on what is working.” Similarly, the leadership of a publicly funded project should be mutable, as in businesses. This can inspire honesty in projects and hope for change when projects are not running optimally and need a fresh breath of energy. The flexibility, of course, comes at a cost that needs to be clearly quantified before major structural changes are made. The funding bodies should take an active role in evaluating alternative paths when they see that projects are not bringing the benefits to society they initially hoped for.
Related Solutions: P1, AP4, AP9.

7.12 AP12: Repackaging Ideas

Synopsis: Stakeholders repackage old ideas and concepts into terminology developed for new trends and hype.
Context: New technologies find themselves on the Gartner hype cycle every year and are much discussed in the news and media. Funding agencies prepare call texts based on what is up and coming. The technologies move quickly along the hype cycle, and hence it becomes necessary for stakeholders, both academic and industrial, to repackage something they have been working on for years to meet the demands and pressure of new trends and terminology.
Anti-pattern Solution: Industrial partners want to stay relevant by staying abreast of and using the latest technological innovations. In an IA co-creation process, industry partners often rely on researchers to repackage what has been done for years with a dated technology into a new technology. This is mainly due to the lack of resources to experiment with new technology in industry. For instance, industry would like to experiment with NoSQL databases instead of relational databases, to become scalable to more data and users. Researchers, on the other hand, repackage their old concepts with new terminology. For instance, as an industry practitioner in C1 said, “Constraint programming has been around for a couple of decades, but in contemporary times a preferred term would be symbolic AI to stay relevant and possibly increase a chance of obtaining funding. But in essence, if constraint programming did not work in practice because of scalability issues, neither will symbolic AI.”
Symptoms and Consequences: Repackaging old ideas into new terms and technologies creates the following issues: (a) new technologies may not be relevant to the problem at hand, (b) the use of a new technology may be only superficial if its real advantages are not understood, (c) new technologies and terms may not stand the test of time and may die out too early, and (d) a lot of money and person-hours can be spent in pursuit of repackaging and reselling old ideas in a new framework.
Refactored Solution: It is often important to embrace the emergence of a new technological trend, as it is the culmination of what growing computational power and storage, for instance, have made possible. Both academia and industry want to stay relevant and up to date in their respective domains. Instead of repackaging old ideas using new terminology and technologies, we suggest that the stakeholders, as a first step, identify the core problem in a mathematical formalism that is independent of new trends and technology. This helps the team think clearly without being blinded by hype; it is also where old and new concepts have many aspects in common. The implementation of a solution, however, can use new technologies and terminology. In C1, researchers in SRL stripped a complex problem down to mathematical symbols and statements to understand the real complexity. For instance, the problem of test selection and prioritization of video conferencing software at Cisco was first specified as a learning problem [37], followed by the choice of technologies, such as TensorFlow or PyTorch for machine learning, at a larger scale. In C2, the problem of predicting respiratory minute ventilation from ribcage movement data was initially attempted by creating a biophysical model of respiration. This had its limitations, as ribcage movement measurements varied from person to person. Therefore, it was necessary to address the same problem with contemporary deep learning technology. Scientists in the co-creation project specified the same problem as a deep learning problem, based on data collected from different people. The deep learning model was eventually trained and tested using machine learning frameworks, such as TensorFlow and PyTorch, with better results. In C3, the idea of controlling the manufacturing process using observational data is not new. However, the use of deep learning models is trendy, and hence the idea of using deep neural networks was repackaged for tasks such as improving data quality from faulty sensors.
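As a small illustration of stripping a problem down to a technology-agnostic formulation, the sketch below states test-case selection as a plain optimization problem, maximize expected fault detection subject to a time budget, and solves it with a simple greedy heuristic. This is our own simplified example, not the formulation used in [37]; whether the per-test fault-detection estimates later come from historical data, constraint solving, or a neural network is a separate implementation choice.

# Technology-agnostic formulation: given test cases with an estimated
# probability of detecting a fault and an execution cost, select a subset
# that maximizes expected fault detection within a time budget.
tests = {
    "t1": {"fault_prob": 0.30, "cost": 5.0},
    "t2": {"fault_prob": 0.10, "cost": 1.0},
    "t3": {"fault_prob": 0.25, "cost": 2.0},
    "t4": {"fault_prob": 0.05, "cost": 0.5},
}
budget = 4.0

# Greedy heuristic: pick tests by expected fault detection per unit of cost.
ranked = sorted(tests, key=lambda t: tests[t]["fault_prob"] / tests[t]["cost"], reverse=True)
selected, spent = [], 0.0
for t in ranked:
    if spent + tests[t]["cost"] <= budget:
        selected.append(t)
        spent += tests[t]["cost"]

expected = sum(tests[t]["fault_prob"] for t in selected)
print(f"Selected: {selected}, expected faults detected: {expected:.2f}, cost: {spent}")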
Benefits: The simplification of a problem to its bare-bones mathematical formulation makes it agnostic to trends, newfangled terminology, and technology. This brings clarity to the co-creation process, and all stakeholders are aligned on the core problem, instead of placing a lot of focus on finding a new wrapper as a solution to a problem. Eventually, once the problem is well defined, stakeholders can make an informed decision on the type of technology to choose to best solve the problem.
Related Solutions: AP6.

7.13 AP13: McNamara Fallacy

Synopsis: Decisions in a co-creation process are made as if only quantifiable results, outcomes, and observations matter.
Context: A co-creation process between researchers and practitioners needs to be evaluated on a regular basis by a grant-giving body. For a project spanning several years, such evaluation often takes place annually, midway, and at the end.
Anti-pattern Solution: The evaluation of a co-creation project is often performed using selected metrics. For researchers, these metrics include the number of publications, citation indices, and the number and monetary value of grants obtained, while industry practitioners are evaluated based on sales figures or, in the case of consultancy firms, the monetary value of projects acquired.
Symptoms and Consequences: Robert McNamara earned an MBA from Harvard University in 1939. He was the first president of the Ford Motor Company from outside the Ford family, and he eventually became the US Secretary of Defense during the Vietnam War. During his time at Ford, McNamara was known for selecting data points and ruthlessly optimizing efficiency, costs, and quality. He brought the same approach to the Vietnam War, where his metric for success was body count. This was a poor measure of how the war was progressing, because it reduced a deeply human process to a mere figure, a mistake now known as the McNamara fallacy. Similarly, co-creation between researchers and industry practitioners is a deeply human process spanning several months to years. A researcher in C2 reflected, “Measuring outcomes based on quantifiable metrics make us forget about the social and human process behind the success or failure, which is wrong, because this process can be incredibly insightful and valuable to learn from.” Very little time is spent on understanding the human experience of running a project. A sole focus on metric-based academic productivity has been associated with poorer physical health, increased burnout, and reduced productivity [25].
Refactored Solution: The human process of co-creation needs to be recognized for its beauty and depth, and also for the costs incurred. The process of co-creation should be documented with narrative, because it is valuable and insightful to know that behind technological innovations there are humans with flaws and challenges. Writing and documenting the process helps externalize the non-quantifiable aspects that lead to quantifiable outcomes, such as a publication or a software artifact. For instance, in C2, there was a need to create awareness about improving physical health through a focus on breathing, using the Flow sensor developed by SZ. This was achieved through simpler popular science communication in a press note.9 The press note led to increased awareness about breathing as a metric that we often ignore in sports. In C3, gaining trust between research institutes and commercial partners in the manufacturing industry is of paramount importance, because researchers are seen as neutral between two or more companies competing in the same sector. An important, non-quantifiable achievement is demonstrating that researchers can balance the trade-off between the openness needed for research and the protection of the intellectual property and interests of individual commercial partners in the project.
Benefits: Focusing not only on quantifiable results places emphasis on the often challenging human process that results in a technological innovation. Knowledge of the human process can help improve co-creation and enhance the state of flow among collaboration participants, which can drastically mitigate burnout rates. The management of stakeholders by neutral parties, such as research institutes, is never quantified, but the trust developed during the project helps such research institutes be invited to new collaborations in the future.
Related Solutions: P3, P8.

7.14 AP14: Not Invented Here

Synopsis: Collaboration partners show bias against internalizing the knowledge that originates from a different field of expertise.
Context: Collaborative projects between industry and academia are developed with the idea that each side of the collaboration has expertise required for addressing the project challenges that the other side does not possess. For example, in C1, SRL researchers had unique expertise in combinatorial test optimization, which Cisco engineers did not have, and which was needed for optimizing the testing of video-conferencing systems at Cisco. On the other hand, Cisco possessed video-conferencing systems and had deep knowledge of the challenges and constraints of testing such systems.
Anti-pattern Solution: In a typical scenario of a collaborative project, industry and academia initially meet to discuss industry practice and identify the challenges that industry is looking to solve. Discussions lead to the identification of a set of requirements for the solution to be developed. Afterwards, the two teams part ways and start working on the project with sparse interaction and limited opportunity to update each other on the direction of their work. A few months later, academia is ready for industry to deploy the research prototype developed in the lab. However, industry avoids using the prototype, believing that if it has not been developed in-house, it is not as valuable.
Symptoms and Consequences: Industry’s bias against external knowledge develops as a consequence of not having insight into the knowledge development process, understanding of the underlying research concepts, or access to decision making about technical choices during prototype development. In C1, in the example of Cisco video-conferencing system testing, the initial research prototype developed by SRL researchers met a similar fate, because there were technological incompatibilities between the prototype and the testing toolchain into which it should have been integrated. Other reasons for the “not invented here” syndrome can include intellectual property concerns, costly absorption due to a steep learning curve of the core research concepts, or simply the risks associated with concepts unproven in the real world.
Refactored Solution: It is critical to establish co-creation as a form of collaboration in which industry and academia engage in continuous interaction during all stages of collaboration, from problem definition to solution deployment. Co-creation builds mutual trust and partnership between collaborating parties that leaves no room for the notion of “external” knowledge. There is only one body of knowledge developed, and it is equally owned by everyone involved in the project. For example, in C1, co-creation was established between SRL researchers and ABB engineers through several strategies. Researchers spent a lot of time at the industry site, working side by side with ABB’s engineers. There was an industrial PhD student employed by ABB and supervised by an SRL researcher, which helped catalyze co-creation, as this person was able to connect knowledge from both teams and make them realize the value of one another’s expertise. In C2, one challenge for the startup SZ was to demonstrate that its invention was cheaper and better in accuracy than existing solutions. The UiO research group on obstructive sleep apnea, in fact, tested the Flow sensor against several other low-cost sensors [34], to demonstrate that the Flow sensor could indeed be used for clinical studies. This external validation in a clinical study helped overcome the not-invented-here bias and positioned SZ’s Flow sensor for medical health applications, going beyond the initial focus on sports. In C3, it was not easy for the research institutes to obtain access to the manufacturers’ data for AI research, as the benefits of using AI were not immediate, although advocated initially. Access to a data stream required several months of presentations and convincing with experiments on publicly available datasets. Yet, the most useful application for the manufacturing sensor data turned out to be data profiling, and the applications of AI models were only a bonus. The use of AI is not yet widespread in manufacturing, and it is hard for industry actors to pause and try to comprehend the benefits, as production runs 24 hours a day. Renault, for instance, mills 1,500 cylinder heads per day in its Valladolid factory in Spain and has to see how AI benefits its process while in operation. Making an effort to understand how another partner’s expertise can improve one’s processes is key to mitigating the not-invented-here syndrome.
Benefits: More efficient use of time and resources, less reinvention and duplication. There is no negative attitude towards using the knowledge developed in the collaboration project. Instead, there is a strong sense of ownership of such knowledge, as it was created by a joint effort from industry and academia. A researcher in C1 mentioned that “At the beginning of collaboration, we often see a disinterest of practitioners for the ideas we present. But once we establish trust and start interacting frequently, this bias completely disappears.”
Related Solutions: P4, P5, P7, AP6.

8 Discussion

In this section, we suggest how to use the patterns and anti-patterns and discuss their implications for practice. Next, we relate our patterns and anti-patterns to the best practices for IA collaboration suggested by previous studies, relative to the set of IA collaboration challenges observed in our experience and identified by others. Finally, we discuss limitations and threats to validity of our findings.

8.1 Implications for Practice

Applying the patterns and avoiding the anti-patterns provided in this article has been shown to improve the success of IA collaborations, as observed across three projects in SE (see Section 4). We believe that reusing these patterns in another IA collaboration project in SE will have a positive effect on the course of collaboration, especially if the patterns and anti-patterns are adapted to the collaboration context at hand. For example, in P4: Active dialog, as general advice we recommend fostering an active dialog between industry and academia, which entails meeting and discussing regularly and following up on the agreed actions in the agreed time-frame. However, we expect that the optimal frequency and volume of interaction will differ between projects and is, therefore, up for adjustment by individual participants.
The patterns and anti-patterns provided in this article are recommendations, not prescriptions. The more of the patterns applied and anti-patterns avoided, the lower the risk of IA collaboration challenges, and the higher the probability of a successful collaboration. Not applying the patterns does not mean failure, nor does applying them guarantee success. It may be that even after applying the patterns and avoiding the anti-patterns the collaboration still fails. For example, due to an unstable financial situation, a company participating in an IA collaboration may need to prioritize other projects and pull resources out of the IA collaboration project, despite the established reciprocity between industry and academia, the synergy in place, a functioning active dialog, and clear goals for the collaboration.
The set of 28 patterns and anti-patterns is not complete; it only captures our 14-year experience collected in three IA collaboration projects. Had we continued observing some of these projects further or started observing new collaboration projects, the set of patterns and anti-patterns might have been extended.
Furthermore, among the 28 patterns and anti-patterns presented in this article, some are more general and thus applicable to a wider range of collaboration contexts, while others are more context-dependent. For example, a more general pattern is P1: Reciprocity between stakeholders, which entails that all participants must both give and receive in a collaborative project for the project to be successful. We strongly believe that without the practice of reciprocity any collaboration will be derailed, not just those in SE. On the other hand, P2: Standardized data exchange, which promotes enabling the reuse of data, algorithms, and software code, is a pattern more relevant for the SE domain and less so for non-computer-science domains. Other such SE-relevant patterns and anti-patterns may be P6: Minimum viable tools, P11: Deriving research and business critical questions comes first and data sharing later, P13: Anticipate unavailability of data, AP2: Non-reusable minimum viable solutions, and AP8: Confound lab setup with the real world.
The risk of applying the patterns wrongly, or applying them when they do not fit, is minimal, since they are presented in a format conducive to reproducibility. The pattern description includes the context, explaining the circumstances under which the problem occurs, and the solution, explaining how the problem may be solved within that context. The anti-pattern description includes the symptoms, useful for recognizing a problematic solution, and the consequences, explaining the implications of the problematic solution. If the patterns and anti-patterns are still used wrongly, the risk will be similar to not using them at all.
Finally, the presented patterns and anti-patterns are equally representative of both practitioners’ and researchers’ experience of IA collaboration. In Table 6, we show which of the patterns and anti-patterns were initially mentioned by researchers and which by practitioners. However, all the patterns and anti-patterns were discussed with both researchers and practitioners, in order to develop the initially collected findings into the final set of 28 solutions presented in this article.
Table 6. Origin of Patterns and Anti-patterns
Initially mentioned by | Solutions
Practitioners | P1, P2, P3, P4, P5, P6, P7, P9, P10, P11, P12, AP1, AP5, AP6, AP7, AP8, AP9, AP11, AP12
Researchers | P1, P3, P4, P5, P7, P8, P9, P13, P14, AP1, AP2, AP3, AP4, AP5, AP6, AP7, AP10, AP13, AP14

8.2 Relation to Existing Evidence

In 2016, Garousi [19] reviewed 33 studies discussing IA collaborations in SE and synthesized a set of 127 best practices addressing 63 challenges identified across these studies. In Table 7, we map our patterns (P) and anti-patterns (AP) to the challenges summarized in [19] and to other challenges identified in our experience (denoted with a ✥ mark in front of the challenge), as well as to previously suggested related best practices synthesized in [19]. If we have not observed a specific challenge in our experience, and thus do not provide a solution, this is shown as ✗ in the column “Our solutions.” If there are no best practices addressing a related challenge identified in [19], this is shown as ✗ in the column “Comparison with other solutions.”
Table 7.
Challenges (adapted from [19] and identified by us ✥) | Our solutions | Comparison with other solutions (synthesized in [19])
Research results are not relevant for practice | P8, P11 | We suggest making a two-faceted problem definition, i.e., starting from a practical problem and deriving a research problem from it, as well as to focus on business critical questions at the beginning of collaboration. Whereas others suggest to make long-term commitments, provide frequent access for researchers, work in a team, use a use case study method, and pilot a solution with industry practitioners.
Research results are not measurable and exploitable | P3, AP7
Researchers do not understand relevant problems from an industry point of view | P4 | We suggest building and maintaining active dialog throughout the collaboration to understand as much of the domain knowledge as possible, by means of workshops and seminars, which is in line with solutions proposed by others.
University education not focused on industrial needs
Research topic selection not driven by relevance | P8, P11
Validity of research not properly addressed | P6, AP8
Running a flexible research project is challenging | P12, AP11
Research in its nature is risky | AP9
Difficult to assess if research addresses future industry needs making it challenging to decide on solution | P7
Integrating new/improved solutions in existing context | P5, P7
Deficiencies in software engineering education | ✗ | While we have not encountered this challenge in our experience, others suggest to work as a team, to make long-term commitment such that industry gets involved in education, and to use a use case study method for spreading knowledge.
Lack of training, experience, and skills
Deficiencies in research skills for practitioners | P4, P7
Lack of commitment and difficulty to assess research results and forums | P9
Deficiencies in domain knowledge for researchers | P4, P7 | We suggest building an active dialog at all stages of collaboration using physical and virtual channels of interaction, as well as ensuring frequent interaction to demonstrate progress and intermediate results of technical developments often. Whereas others suggest to co-locate researchers on industry sites, to use established guidelines and data collection methods, and to employ researchers.
Lack of commitment to invest money
Lack of commitment to provide access and timeP1, AP4, AP5We suggest to identify the motivations and needs of all participants and develop a norm for reciprocity between stakeholders, as well as to ensure that all participants have “skin in the game.” Whereas others focus on champions, making long-term commitments, proper presentation by researchers in early meetings, proper topic selection, showing benefits of the research solution for the industrial partner, collocating researchers on industry side, ensuring frequent interaction through meetings, managing intellectual property rights, and piloting the solution with industry practitioners.
Lack of commitment due to human factorsAP14
Lack of commitment due to competitive businessP1, AP4
Different time horizons for industry and academiaP14
Different interests and objectivesP4, P8
Different perception of what solutions are usefulP4, AP1, AP13
Different terminology and ways of communicatingAP6We suggest making an effort from early stages of collaboration to understand and synchronize terminology, through day-to-day interaction and lightning talks on specific topic. Whereas others suggest having prior positive experience to facilitate communication, as well as to personally interact with practitioners during data collection.
Different reward systemsAP1, AP10
Different communication channels and directions of information flowP4, AP6We suggest building an active dialog between industry and academia at all stages of collaboration, while focusing on a common vocabulary. Whereas others focus on workshops and seminars to increase visibility show relevance, strength and ability.
Different culturesP4, P8, AP1
Different expectations on quality of evidence in researchP6, AP3
Different focus on scale of solutionsAP2, AP3, AP8
Different types of knowledge availableP5, P7
Technology push from academia greater than technology pull from industryAP14
Different contextsP4, P8, P9
Different business models
Different perception of challengesP4, P8, AP8
Different requirements on noveltyP14, AP3, AP12
Communication gaps between researchers and practitionersAP6
Difficulty of managing multiple research partners
Difficulty to elicit information from developersP4, P7
Communicating on time-frames, topics, and responsibilitiesP4, P7
Lack of prior relationships between a company and academia
Resistance to change and inflexibilityAP11, AP14We suggest to welcome change in different aspects of the collaboration as an opportunity to improve defective parts, as well as to embrace openness for knowledge that originates from a different field of expertise. Whereas other suggest to use the case study method.
Difficulties in training practitioners due to high training cost and lack of time
Lack of organizational stability and continuityWe have not seen this challenge in our experience, while others focus on ensuring management engagement on industry side.
Intangible human factors with organization-wide impactAP13
Competition between industrial and external researchersP5, AP14
Hard to find championsAP4
Difficulty to achieve clear and realistic goalsP4, P7Our solutions include active dialog and frequent iteration to align goals, which is in accordance with solutions by others, which focus on proper presentation and communication by researchers in early meetings.
Solution incompatible with organizational cultureP7
Lack of willingness to invest time/effortP1, P6, AP4, AP5We suggest to develop a norm of reciprocity between partners, develop a minimum viable proof of concept in early stages of collaboration, to demonstrate practical benefits of research concepts, to have incurred risk for all partners by being involved in achieving a goal, and to discuss value creation with clarity in the start of a project. Solutions proposed by others, which include showing benefits of the research solution for the industrial partner, are in line with our experience.
Difficult to find the right project infrastructure (management, collaboration environments)
Difficulty to integrate external competenceP7, P10, AP14
Time-critical windows of opportunity for product research
Lack of openness to disclose weaknessesAP11
Loss of champions in projects
Lack of resources due to over-investment
Financial investment risky from academic side
Licensing restrictions on toolsP10
Lack of resources to provide technical support for research solutionsP10
Intellectual property rights and privacy access to dataP10, P13
Difficulty in managing intellectual property rightsP10
Missing trust and respectP4, P7, P12, AP4We focus on active dialog and frequent iteration to build trust and align goals, as well as to minimize moving parts to increase project stability, and ensure everyone has incurred risk in the project. Whereas others suggest to establish common and simple terminology, show benefits of research solutions for the industrial partner, to work in a team, and personally interact with practitioners during data collection.
Incorporating new methods and solutions in research contacts
✥ Limited sharing of potentially reusable artifactsP2
✥ Lack of objective evidence showing the practical value of the knowledge created in early stages of a collaborative projectP6
✥ Difficult to transfer agile practices from industry to research knowledge creationP14, AP3
✥ Developing and testing research results in the lab vs real worldAP8
✥ Keeping abreast of latest technology innovationsAP12
Table 7. Mapping of Patterns and Anti-patterns to Different Challenges and Previous Solutions for IA Collaborations
✥ denotes a novel challenge identified by us. ✗ denotes either a challenge we have not observed in our experience (2nd column) or that no best practices have been mapped to the specific challenge in [19] (3rd column).

8.3 Limitations and Threats to Validity

Construct Validity: To reduce a potential threat to construct validity, each time we asked a question during the interviews, we made sure that the interviewee understood the question and interpreted it in the same way as the interviewer. Another threat to construct validity is related to the observer-expectancy effect in the observation sessions, meaning that the observer’s presence influences the informants’ behavior. To mitigate this threat, the observers acted as “normal” participants, taking notes on a laptop and trying to be as unobtrusive as possible. A further validity threat relates to the length of the data collection period: during interviews, our respondents may have forgotten to mention an important aspect of IA collaboration, as it occurred long before the interviews were conducted. To mitigate this threat, we did not rush the discussion during interviews, and after the interviews, we encouraged the respondents to contact us later if they thought of any feedback they would like to provide related to the interviews.
External Validity: The reported patterns and anti-patterns are empirically tested in three different collaboration contexts: a large-scale long-term context, a small-scale mid-term context, and a large-scale mid-term context. They are not tested in a wide variety of IA collaboration setups, which could reduce the generalizability of our findings. However, as mentioned by Neill et al. [28], an important aspect of patterns and anti-patterns is the “rule-of-three,” which means that a solution must have been used successfully in practice three times to be called a pattern or anti-pattern. Furthermore, our patterns and anti-patterns have been tested in different countries (two of the three collaboration contexts are international), in different organizations, and in different industries (software, hardware, manufacturing, and the public sector). To further improve the external validity of our findings, we will revisit, expand, and improve the set of 28 patterns and anti-patterns based on collected experiences from the collaboration projects. Another threat to external validity is that our results may be less representative of the SE areas not covered by our three collaboration contexts. However, our studied collaboration contexts cover a wide range of SE areas, mitigating this particular threat. Furthermore, although the patterns and anti-patterns are derived from the three collaboration contexts studied, they address IA collaboration challenges found in other contexts (see Table 7), and thus we believe the patterns and anti-patterns could be transferable to other contexts. However, the list of patterns and anti-patterns is not exhaustive; it reflects only the experiences we have from the three collaboration contexts in SE. Observing another collaboration context may lead to new patterns.
Internal Validity: Participant observation used for data collection has the limitation of being subjective, as it represents the perspective of an observer. To minimize this internal validity threat, we had two participant observers who collected data, and during data analysis we applied observer triangulation, where conclusions made by one observer were checked by the other. Furthermore, we applied data source triangulation, combining and relating data coming from different sources, to increase the validity of our empirical study. Next, our selected respondents may not represent all relevant participants in the collaboration. However, we conducted participant observation so as to involve participants with different experience and expertise. For example, software engineers and researchers were observed as part of joint teams, and managers were observed in focus group discussions and workshops. In interviews, we applied purposive sampling based on the observer’s judgment, selecting participants who could provide the most useful information related to the questions that arose during the previous cycle of data analysis, as well as to ensure that both positive and negative aspects of the collaboration were captured. As there is a risk of the selected sample not being representative of the population, in some data collection cycles we applied maximum variation sampling, selecting interviewees so as to vary their backgrounds and expertise. Some of the interview participants expressed negative experiences of some phases of collaboration, for example, the lack of information flow between industry and academia. We captured such hurdles with special interest, probing deeper into the aspects that did not work, their effect on the collaboration, and possible ways of resolving the hurdles. The negative experiences collected provided valuable input for anti-pattern definition. Further factors that can limit the internal validity of our study are external factors that could affect the cause–effect relationship between the patterns and anti-patterns and the results of co-creation: for example, the background of the researchers and practitioners, the timing of co-creation, the financial status of companies, market dynamics, employee churn, or force majeure events such as the COVID-19 pandemic, which changed the way industry and academia interact, from real-world to virtual interaction.
Conclusion Validity: To reduce the threat of reaching wrong conclusions from the data, we triangulated using multiple sources of data, such as field notes from participant observation, findings from interviews, and emails and feedback from the stakeholders about the derived patterns and anti-patterns. We also applied observer triangulation, where two observers collected and analyzed data while reviewing each other’s findings, to reduce conclusion bias. Furthermore, feedback from the stakeholders was collected using a member checking technique, where the patterns and anti-patterns were presented to the stakeholders to obtain their opinion.

9 Conclusion

In this article, we discuss our experience of co-creation, as a means to IA collaboration, gained in three collaboration setups in the area of SE research. Throughout this experience, we have observed a set of 28 recurring best practices and issues to avoid, which we provide as the reported patterns and anti-patterns. We exemplify the patterns and anti-patterns using three different IA collaboration projects. Such exemplified insights into recurring patterns of successes and failures can positively contribute to other IA collaboration projects in SE.

Footnotes

1
Wikipedia.org, Technology transfer.

References

[1]
A. Sandberg, L. Pareto, and T. Arts. 2011. Agile collaborative research: Action principles for industry–academia collaboration. IEEE Software 28, 4 (Jul. 2011), 74–83. DOI:
[2]
David E. Avison, Francis Lau, Michael D. Myers, and Peter Axel Nielsen. 1999. Action research. Communications of the ACM 42, 1 (1999), 94–97.
[3]
Leonor Barroca, Helen Sharp, Dina Salah, Katie Taylor, and Peggy Gregory. 2018. Bridging the gap between research and agile practice: An evolutionary model. International Journal of System Assurance Engineering and Management 9, 2 (Apr. 2018), 323–334. DOI:
[4]
V. Basili, L. Briand, D. Bianculli, S. Nejati, F. Pastore, and M. Sabetzadeh. 2018. Software engineering research and industry: A symbiotic relationship to foster impact. IEEE Software 35, 5 (2018), 44–49. DOI:
[5]
Jan Bosch. 2014. Continuous Software Engineering: An Introduction. Springer International Publishing, Cham, 3–13.
[6]
Samantha R. Bradley, Christopher S. Hayter, and Albert N. Link. 2013. Models and Methods of University Technology Transfer. UNCG Economics Working Papers 13-10. University of North Carolina at Greensboro, Department of Economics. Retrieved from https://ideas.repec.org/p/ris/uncgec/2013_010.html.
[7]
L. Briand. 2012. Embracing the engineering side of software engineering. IEEE Software 29, 4 (2012), 96–96. DOI:
[8]
L. Briand, D. Bianculli, S. Nejati, F. Pastore, and M. Sabetzadeh. 2017. The case for context-driven software engineering research: Generalizability is overrated. IEEE Software 34, 5 (2017), 72–75. DOI:
[9]
L. C. Briand. 2011. Useful software engineering research - leading a double-agent life. In 2011 27th IEEE International Conference on Software Maintenance (ICSM’11). 2–2. DOI:
[10]
W. J. Brown, R. C. Malveau, H. W. McCormick III, and T. J. Mowbray. 1998. AntiPatterns, Refactoring Software, Architectures and Projects in Crisis. Wiley Computer Publishing.
[11]
Sridhar Chimalakonda, Y. Raghu Reddy, and Rakesh Shukla. 2015. Moving beyond: Insights from 1st International Workshop on Software Engineering Research and Industrial Practices (SER-IPs 2014). SIGSOFT Software Engineering Notes 40, 2 (Apr. 2015), 28–31. DOI:
[12]
A. M. Connor, J. Buchan, and K. Petrova. 2009. Bridging the research-practice gap in requirements engineering through effective teaching and peer learning. In 2009 6th International Conference on Information Technology: New Generations. 678–683. DOI:
[13]
C. Wohlin. 2013. Software engineering research under the lamppost. In 8th International Joint Conference on Software Technologies.
[14]
R. Emerson, R. Fretz, and L. Shaw. 2001. Participant observation and fieldnotes. In Handbook of Ethnography. SAGE Publications Ltd., 352–369.
[15]
M. Galvagno and D. Dalli. 2014. Theory of value co-creation: A systematic literature review. Managing Service Quality 24, 6 (2014), 643–683.
[16]
E. Gamma, R. Helm, R. Johnson, and J. Vlissides. 1994. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
[17]
Vahid Garousi, Matt M. Eskandar, and Kadir Herkiloğlu. 2017. Industry–academia collaborations in software testing: Experience and success stories from Canada and Turkey. Software Quality Journal 25, 4 (2017), 1091–1143.
[18]
Vahid Garousi, Michael Felderer, João M. Fernandes, Dietmar Pfahl, and Mika V. Mäntylä. 2017. Industry–academia collaborations in software engineering: An empirical analysis of challenges, patterns and anti-patterns in research projects. In 21st International Conference on Evaluation and Assessment in Software Engineering (EASE’17). 224–229. DOI:
[19]
Vahid Garousi, Kai Petersen, and Baris Ozkan. 2016. Challenges and best practices in industry–academia collaborations in software engineering: A systematic literature review. Information and Software Technology 79 (2016), 106–127. DOI:
[20]
V. Garousi, D. Pfahl, and J. Fernandes. 2019. Characterizing industry–academia collaborations in software engineering: Evidence from 101 projects. Empirical Software Engineering 24 (2019), 2540–2602.
[21]
Vahid Garousi, David C. Shepherd, and Kadir Herkiloglu. 2020. Successful engagement of practitioners and software engineering researchers: Evidence from 26 international industry–academia collaborative projects. IEEE Software 37, 6 (2020), 65–75. DOI:
[22]
B. Gislason Bern. 2018. From theory to practice: Experiences of industry–academia collaboration from a practitioner. In 2018 IEEE/ACM 5th International Workshop on Software Engineering Research and Industrial Practice (SER IP’18). 22–23.
[23]
Tony Gorschek, Per Garre, Stig Larsson, and Claes Wohlin. 2006. A model for technology transfer in practice. IEEE Software 23, 6 (2006), 88–95. DOI:
[24]
Tony Gorschek, Per Garre, Stig Larsson, and Claes Wohlin. 2006. A model for technology transfer in practice. IEEE Software 23, 6 (Nov. 2006), 88–95. DOI:
[25]
Brad Hodge, Brad Wright, and Pauleen Bennett. 2020. Balancing effort and rewards at university: Implications for physical health, mental health, and academic outcomes. Psychological Reports 123, 4 (2020), 1240–1259.
[26]
N. Ind and N. Coates. 2013. The meanings of creation. European Business Review 25, 1 (2013), 86–95.
[27]
Vladimir Ivanov, Alan Rogers, Giancarlo Succi, Jooyong Yi, and Vasilii Zorin. 2017. What do software engineers care about? Gaps between research and practice. In 2017 11th Joint Meeting on Foundations of Software Engineering (ESEC/FSE’17). 890–895. DOI:
[28]
Colin J. Neill, Philip A. Laplante, and Joanna F. DeFranco. 2012. Antipatterns: Managing Software Organizations and People (2nd. ed.). Taylor and Francis.
[29]
S. Jain, M. Ali Babar, and J. Fernandez. 2013. Conducting empirical studies in industry: Balancing rigor and relevance. In 2013 1st International Workshop on Conducting Empirical Studies in Industry (CESI’13). 9–14. DOI:
[30]
K. M. DeWalt and B. Wayland. 2001. Participant Observation: A Guide for Fieldworkers. AltaMira Press.
[31]
S. Kvale. 1996. Interviews. SAGE Publications, London.
[32]
Y. S. Lee. 2000. The sustainability of university–industry research collaboration: An empirical assessment. The Journal of Technology Transfer 25, 2 (2000), 111–133.
[33]
Y. Lincoln and E. Guba. 1985. Naturalistic Inquiry. SAGE Publications, Thousand Oaks, California.
[34]
Fredrik Løberg, Vera Goebel, and Thomas Plagemann. 2018. Quantifying the signal quality of low-cost respiratory effort sensors for sleep apnea monitoring. In 3rd International Workshop on Multimedia for Personal Health and Health Care (HealthMedia’18). 3–11. DOI:
[35]
Dusica Marijan. 2015. Multi-perspective regression test prioritization for time-constrained environments. In 2015 IEEE International Conference on Software Quality, Reliability and Security. 157–162. DOI:
[36]
D. Marijan and A. Gotlieb. 2021. Industry–academia research collaboration in software engineering: The certus model. Information and Software Technology 132 (2021), 106473.
[37]
Dusica Marijan, A. Gotlieb, and Marius Liaaen. 2019. A learning algorithm for optimizing continuous integration development and testing practice. Software: Practice and Experience 49 (2019), 192–213.
[38]
Dusica Marijan, Arnaud Gotlieb, and Sagar Sen. 2013. Test case prioritization for continuous regression testing: An industrial case study. In 2013 IEEE International Conference on Software Maintenance. 540–543. DOI:
[39]
Dusica Marijan and Marius Liaaen. 2016. Effect of time window on the performance of continuous regression testing. In 2016 IEEE International Conference on Software Maintenance and Evolution (ICSME’16). 568–571. DOI:
[40]
Dusica Marijan and Marius Liaaen. 2017. Test prioritization with optimally balanced configuration coverage. In 2017 IEEE 18th International Symposium on High Assurance Systems Engineering (HASE’17). 100–103. DOI:
[41]
Dusica Marijan and Marius Liaaen. 2018. Practical selective regression testing with effective redundancy in interleaved tests. In 2018 IEEE/ACM 40th International Conference on Software Engineering: Software Engineering in Practice Track (ICSE-SEIP’18). 153–162.
[42]
D. Marijan, M. Liaaen, A. Gotlieb, S. Sen, and C. Ieva. 2017. TITAN: Test suite optimization for highly configurable software. In 2017 IEEE International Conference on Software Testing, Verification and Validation (ICST’17). 524–531. DOI:
[43]
Dusica Marijan, Marius Liaaen, and Sagar Sen. 2018. DevOps improvements for reduced cycle times with integrated test optimizations for continuous integration. In 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC’18), Vol. 01. 22–27. DOI:
[44]
Lars Mathiassen. 2000. Collaborative Practice Research. Springer US, Boston, MA, 127–148. DOI:
[45]
Roger Miller and Serghei Floricel. 2004. Value creation and games of innovation. Research-Technology Management 47, 6 (2004), 25–37. DOI:
[46]
Morten Mossige, Arnaud Gotlieb, and Hein Meling. 2017. Deploying constraint programming for testing ABB painting robots. AI Magazine 38, 2 (Jul. 2017), 94–96. DOI:
[47]
Kai Petersen and Emelie Engström. 2014. Finding relevant research solutions for practical problems: The serp taxonomy architecture. In 2014 International Workshop on Long-Term Industrial Collaboration on Software Engineering (WISE’14). 13–20. DOI:
[48]
Teade Punter, René L. Krikhaar, and Reinder J. Bril. 2006. Sustainable technology transfer. In 2006 International Workshop on Software Technology Transfer in Software Engineering (TT’06). 15–18. DOI:
[49]
Tomás Ruiz-López, Sagar Sen, Elisabeth Jakobsen, Ameli Tropé, Philip E. Castle, Bo Terning Hansen, and Mari Nygård. 2019. FightHPV: Design and evaluation of a mobile game to raise awareness about human papillomavirus and nudge people to take action against cervical cancer. JMIR Serious Games 7, 2 (2019), e8540.
[50]
Per Runeson and Sten Minör. 2014. The 4+1 view model of industry–academia collaboration. In 2014 International Workshop on Long-Term Industrial Collaboration on Software Engineering (WISE’14). 21–24. DOI:
[51]
Per Runeson, Sten Minör, and Johan Svenér. 2014. Get the cogs in synch: Time horizon aspects of industry–academia collaboration. In 2014 International Workshop on Long-Term Industrial Collaboration on Software Engineering (WISE’14). 25–28. DOI:
[52]
Joseph A. Schumpeter. 1982. The theory of economic development: An inquiry into profits, capital, credit, interest, and the business cycle. Transaction Publishers 244 (1982), 1912–1934.
[53]
B. Selic. 2015. The iceberg effect: On technology transfer from research to practice. In 2015 IEEE/ACM 2nd International Workshop on Software Engineering Research and Industrial Practice. 58–61. DOI:
[54]
Sagar Sen, Stefano Di Alesio, Dusica Marijan, and Arnab Sarkar. 2015. Evaluating reconfiguration impact in self-adaptive systems – an approach based on combinatorial interaction testing. In 2015 41st Euromicro Conference on Software Engineering and Advanced Applications. 250–254. DOI:
[55]
Sagar Sen, Pierre Bernabé, and Erik Johannes B. L. G. Husom. 2020. DeepVentilation: Learning to predict physical effort from breathing. In 29th International Joint Conference on Artificial Intelligence (IJCAI’20). Christian Bessiere (ed.). ijcai.org, 5231–5233. DOI:
[56]
S. Sen, D. Marijan, C. Ieva, A. Grime, and A. Sander. 2017. Modeling and verifying combinatorial interactions to test data intensive systems: Experience at the Norwegian Customs Directorate. IEEE Transactions on Reliability 66, 1 (2017), 3–16. DOI:
[57]
Aizaz Sharif, Dusica Marijan, and Marius Liaaen. 2021. DeepOrder: Deep learning for test case prioritization in continuous integration testing. In IEEE International Conference on Software Maintenance and Evolution (ICSME’21). 525–534. DOI:
[58]
Ben Shneiderman. 2018. Twin-win model: A human-centered approach to research success. Proceedings of the National Academy of Sciences 115, 50 (2018), 12590–12594. DOI:
[59]
Karolin Sjoo and Tomas Hellstrom. 2019. University–industry collaboration: A literature review and synthesis. Industry and Higher Education 33, 4 (2019), 275–285. DOI:
[60]
S. J. Taylor and R. Bogdan. 1984. Introduction to Qualitative Research Methods. John Wiley and Sons, New York.
[61]
J. Voas. 2004. Software’s secret sauce: The “-ilities” [software quality]. IEEE Software 21, 6 (2004), 14–15. DOI:
[62]
C. Wohlin, A. Aurum, L. Angelis, L. Phillips, Y. Dittrich, T. Gorschek, H. Grahn, K. Henningsson, S. Kagstrom, G. Low, P. Rovegard, P. Tomaszewski, C. van Toorn, and J. Winter. 2012. The success factors powering industry–academia collaboration. IEEE Software 29, 2 (2012), 67–73.
[63]
Y. Dittrich, K. Rönkkö, J. Eriksson, C. Hansson, and O. Lindeberg. 2008. Cooperative method development: Combining qualitative empirical research with method, technique and process improvement. Empirical Software Engineering 13, 3 (2008), 231–260.
