6 Patterns for Industry–Academia Collaboration
Patterns are reusable solutions to commonly occurring problems. In SE, they are often used in the context of design patterns [16]. Patterns encapsulate best practices that can be easily reused or adapted for solving recurring problems in a variety of situations. To support reusability, patterns are specified using templates. We use the following template for describing patterns: Problem ♦ Context ♦ Pattern Solution ♦ Benefits and Consequences ♦ Related Solutions, illustrated in 2. A pattern relates to a particular problem (in IA collaboration). Context describes the circumstances under which the problem occurs or is applicable. A solution describes how the problem can be resolved or eased within the scope of the context. The solution provides benefits, but may also have remaining unresolved issues, called consequences. Finally, the solution can be related to other solutions.
6.1 P1: Reciprocity Between Stakeholders
Problem: Stakeholders do not reciprocate in terms of effort.
Context: A research collaboration process often requires an investment of time, ideas, and effort from different stakeholders to make the collaboration mutually beneficial. We quote author James Baldwin, who said, “Allegiance, after all, has to work two ways; and one can grow weary of an allegiance which is not reciprocal,” as well as a researcher in C1 who said, “It is so frustrating when our industry partner does not prepare what we asked for in the last meeting.” Reciprocation can be hindered by many factors, such as a lack of clarity in tasks, a lack of resources, or external factors that take focus away from the project.
Solution: To develop a norm for reciprocity between stakeholders, encourage open and unambiguous communication. The give-and-take psychology [25] can help construct a reciprocal relationship. Identify the motivations and needs of all participants. Promote active dialog and frequent sharing of progress. It is important to acknowledge that not all project members are able to contribute equally to all project phases. The goal is to encourage a regular and well-balanced exchange of knowledge where everyone contributes, to the best of their capacity, some knowledge that other members of the project need. Reciprocity is often observed when the stakeholders reach a tipping point in collaboration. The tipping point of mutual trust is established when the joint activity is clearly linked to well-defined and manageable goals that stakeholders manage to realize. These goals may include a publication to a specific conference or journal with a deadline, an application for project funding with a deadline, or a demo of a software tool to a third party at an event such as a conference or a trade fair. The time boundedness and the concreteness of the activity greatly help establish reciprocity. Reciprocity can be measured in terms of commitment to tasks and attendance at meetings, co-authorship of research results (e.g., in terms of the number of words written by a stakeholder), and time spent discussing a problem by all stakeholders. Our approach to addressing the reciprocity problem is to have, on the one hand, researchers who provide research artifacts that simplify the daily routine of practitioners, and on the other hand, practitioners who can then effortlessly provide useful feedback for researchers on the practical usefulness of their research results. For instance, in C1, researchers provided a tool, Depict [56], that Toll Customs used to verify customs declarations in their daily work.
“Depict seamlessly connected to our test database and automatically generated reports that made it easy for us to present what we have been working on with minimal effort. This allowed us to keep focus on our priority task, while integrating research ideas to simplify our routine activities.” SRL’s strategy was to build tools that minimize effort from practitioners, such that they could seamlessly use a tool in a matter of minutes to produce a useful contribution that could be presented and used for reasoning. In C2, reciprocity was an issue for the partner with fewer resources. SZ had several customers with diverging requirements but only a few personnel to handle the customer needs. For a commercial business, the extra hours required to reciprocate requests from stakeholders, such as academia, had to be compensated as a paid consulting activity or by involvement in national and European projects to receive partial public funding. SZ also took the role of a sensor vendor with third-party support in many of these publicly funded projects. In C3, reciprocity between stakeholders is built into tasks where partners are required to co-create together, often even without prior joint work experience. For instance, advanced manufacturing companies are required to acquire high-velocity sensor data from the machining process and provide it to scientists working on computing data quality hallmarks and performing erroneous data repair for missing values.
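The measures of reciprocity mentioned above (commitment to tasks, meeting attendance, co-authored words) could be combined into a simple balance score. The following is a minimal sketch; the metric design, stakeholder roles, and numbers are our own illustrative assumptions, not a method used in C1–C3.

```python
# Hypothetical reciprocity balance: average each stakeholder's share of the
# total contribution per metric, so that no single metric (e.g., raw word
# count) dominates. Equal contribution across the board gives every
# stakeholder the same share.

def reciprocity_scores(contributions):
    """contributions: {stakeholder: {metric: value}} -> {stakeholder: share}."""
    metrics = {m for values in contributions.values() for m in values}
    shares = {person: 0.0 for person in contributions}
    for metric in metrics:
        total = sum(values.get(metric, 0) for values in contributions.values())
        for person, values in contributions.items():
            shares[person] += values.get(metric, 0) / total
    return {person: round(s / len(metrics), 2) for person, s in shares.items()}

contributions = {
    "researcher":   {"tasks": 12, "meetings": 10, "words": 4000},
    "practitioner": {"tasks": 8,  "meetings": 10, "words": 1000},
}
print(reciprocity_scores(contributions))
# → {'researcher': 0.63, 'practitioner': 0.37}
```

A persistent imbalance sustained over months would be the cue to revisit task clarity or resourcing, per the context above.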
Benefits: Mutual learning based on active exchange of knowledge, which creates more meaningful collaborations, increases enthusiasm, creates synergies, and helps the project progress faster.
Related Solutions: P8, AP4.
6.2 P2: Standardized Data Exchange
Problem: Limited or cumbersome sharing of potentially reusable artifacts.
Context: Participants in a co-creation project dealing with related problems often have the possibility of exchanging (and reusing) datasets, algorithms, code repositories, and tools. However, this seldom happens because these artifacts are not easily interfaced with each other.
Solution: These artifacts can be seen as services. It is worthwhile implementing standard protocols for interfacing between these services, so as to avoid reinventing the wheel. Seeing artifacts as services implies that an artifact is associated with a standard for information exchange that allows interaction with a larger ecosystem. In the case of a dataset, such an associated standard could be a database technology based on SQL standards, such as ISO/IEC 9075:2016. When the dataset is stored in a relational database management system (RDBMS) supporting the Structured Query Language (SQL) 2016 standard, SQL can be used to read, write, and modify the data. Similarly, algorithms can be encapsulated as RESTful web services where functions to get/set variables and perform some form of computation obey the REST constraints. RESTful web Application Programming Interfaces (APIs) are typically based on Hypertext Transfer Protocol (HTTP) methods to access resources via URL-encoded parameters and use JSON or XML to transmit data. Stakeholders from industry and academia adhere to such standards to ensure their data and algorithms can be plugged into a larger ecosystem with minimal effort, increasing business opportunities. However, concerns with intellectual property rights and with making artifacts easily understandable and usable may hinder the flow of artifact exchange across stakeholders. In C1, SRL developed tools that adhered to industry standards, along with simple and user-friendly documentation, to help stakeholders use artifacts in less than 30 minutes. The tool Depict [56], for instance, could be deployed by Toll Customs in a matter of minutes on their large test and production databases. Depict could interface with several standard RDBMSs, such as Sybase, Oracle, PostgreSQL, and MySQL, to verify certain properties in the data. This allowed Toll Customs to easily transition from Sybase to Oracle during their migration. In C2, the use of standard protocols greatly facilitated the interfacing of SZ’s sensors with many other devices. For instance, the standardization activity between SZ and RITMO was based on the Open Sound Control protocol. This created the possibility for RITMO to interface the SZ breathing sensor with all other sensors and actuators in their laboratory and simultaneously gather a large amount of research data. In C3, there is no standard interface to connect tools developed by computer scientists in the project and data generated by machine tool developers. However, streaming data from a machine tool are available 24/7 from a cloud service in JSON format for a specified input time period. The machine tools often work around the clock to manufacture parts. Computer scientists have access to this data stream through a REST API. All computer scientists who perform tasks such as anomaly detection, erroneous data repair, and data profiling have developed a common understanding of the type of data they can develop their tools on, based on the REST API. Docker containers are used as a common approach to build tools that can be exchanged and used by partners without special installation requirements and deployed on both edge devices and the cloud.
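The dataset-as-a-service idea can be illustrated with SQLite, which implements a large subset of standard SQL: once the data sits behind a SQL interface, any partner’s tool can query it without bespoke glue code. The schema and values below are invented for illustration and are not the actual Toll Customs data.

```python
import sqlite3

# Hypothetical dataset exposed through standard SQL: any SQL-speaking tool
# can read, write, and verify it, regardless of who built the tool.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE declarations (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO declarations (id, value) VALUES (?, ?)",
                 [(1, 120.5), (2, -3.0), (3, 87.2)])

# A property check in the spirit of Depict, expressed as plain SQL:
# count declarations with a non-positive declared value.
(bad,) = conn.execute(
    "SELECT COUNT(*) FROM declarations WHERE value <= 0").fetchone()
print(bad)  # → 1
```

Because the interface is standard SQL, the same check runs unchanged whether the backing store is SQLite, PostgreSQL, or Oracle.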
Benefits: Greatly increases the ease of reuse between different projects and the possibility of benefiting from other participants’ experience. A software engineer participant in C3 pointed out that “The use of standards to exchange data combinatorially increases the possibilities in terms of how collaboration can occur between multiple data providers.” Generic tools built on standard data exchange formats can be downloaded and used by projects around the world. This can increase the impact of tools from scientific research and help secure both publicly and privately funded projects.
Related Solutions: AP2.
6.3 P3: Quantifying Impact
Problem: Difficulties in measuring the diverse impacts of an IA collaboration. Typical impact metrics include publication and patent counts. However, these metrics do not capture the multifaceted impact of IA collaborations.
Context: In a simple interpretation, making an impact in IA collaboration means, for researchers, changing the current state of the art for the better, and for practitioners, changing the current state of the practice for the better. There is also a necessity to go beyond organizational impact and quantify how collaborations can have an impact on one or more of the UN’s 17 Sustainable Development Goals. Accurately measuring the factors contributing to that change is challenging [48], yet important because, as Peter Drucker said, “if you cannot measure it, you cannot improve it.”
Solution: Measuring impact in IA co-creation requires public recognition of reciprocal actions resulting in the creation of tangible and intangible assets that benefit industry and society. Tangible assets can be new or improved technology, such as open source software, algorithms, methods, tools, process guidelines, and scientific publications. Intangible assets include innovation capacity, human relations, shared management, and the visibility and brand value of the collaboration. For tangible assets, impact can be determined by measuring return on investment or other quantitative metrics, for example, economic benefits. For intangible assets, impact can be gauged based on the timeliness, completeness and follow-up, consistency, relative quality, and complementarity of reciprocal actions for a successful collaboration. The latter can often lead to breakthroughs in science when the partners involved are from diverse domains and are able to make small sacrifices to explore uncertain problem domains. In C1, we measured impact by computing the amount of time saved in completing testing activities in a company. In C2, one way to measure impact is to look at the diversity and extent of use of the Flow sensor developed by SZ. The sensor found applications in health, sports, and music, and was used by the University of Oslo as a low-cost alternative to predict obstructive sleep apnea [34]. A future impact of the technology can be quantified as the percentage of the target market for obstructive sleep apnea solutions captured by the SZ sensor. In C3, the impact of improved data quality is quantified in the form of improved auditability of data quality in manufacturing and the fostering of a quality culture. Auditability is measured as the traceability of data quality hallmarks, or statistical properties of sensor data quality, observed in the manufacturing process. Quality culture is indirectly measured through the reduction of the scrap rate in manufacturing. Maintaining a trace of data quality provides valuable feedback to the entire manufacturing ecosystem, helping improve the performance of operators and engineers. Considering societal impact, sustainable development is a principal factor in governance, and the impact of the co-creation process needs to be in alignment with the UN’s Sustainable Development Goals [45]. These goals include no poverty; zero hunger; good health and well-being; quality education; gender equality; clean water and sanitation; affordable and clean energy; decent work and economic growth; industry, innovation, and infrastructure; reduced inequalities; sustainable cities and communities; responsible consumption and production; climate action; life below water; life on land; peace, justice, and strong institutions; and partnerships. In C1, we collaborated on the improvement of video conferencing systems developed by Cisco Systems. This effort is indirectly connected to minimizing the need for air travel (number of trips avoided) and consequently addresses the UN’s goal of responsible consumption. In C2, a project manager at SZ said,
“We always adhere to responsible production aiming to minimize defects, which consequently minimizes waste in the production of our Flow sensors. This was achieved by keeping a close eye on the inventory (number of sensors) and using low cost production technology such as a pick-and-place machine for local production of electronic printed circuit boards.”
Benefits: Calculating quantitative and qualitative metrics corresponding to different types of impact (value), tangible and intangible, enables more comprehensive impact measurement and effort recognition. It is often an intellectual exercise to connect our seemingly negligible local contributions to a larger goal such as sustainability. Both public funding bodies and investors increasingly request that such impact be quantified and presented as part of the work.
Related Solutions: AP13.
6.4 P4: Active Dialog
Problem: Goal misalignment between industry and academia, as well as limited transparency regarding intermediate results produced at different stages of collaboration, ultimately leads to loss of interest and disengagement.
Context: IA collaborations often suffer from an inadequate level of communication, both between and within industrial and academic teams [12, 47, 50], on different aspects including goals, expectations, results, time frames, and responsibilities. As industry and academia are generally two culturally different environments, insufficient communication between the two leads to a lack of commitment and further deepens the gap between them.
Solution: Promote active dialog between industry and academia at all stages of the collaboration, from problem understanding and definition to solution validation. Active dialog implies that words and actions occur in close succession or concurrently; regular dialog does not necessarily mean words are acted upon, hence the emphasis on active dialog. Such a practice helps build a trusting environment in which participative and mutually beneficial value creation can be implemented. Promote active dialog across academic teams, as well as across industry partners, which will lead to a higher level of reuse of research results. Active dialog can be promoted by providing different channels of interaction, such as physical and virtual collaboration and meeting spaces, as well as different communication and knowledge exchange tools. In C1, SRL created several platforms to maintain active dialog. Bi-annual partner workshops were organized at a conference location to assemble all stakeholders in one place for a day and a half of discussions to gain clarity and define problems that are both interesting for researchers and useful for industrial partners. Long-term projects can get weary and require constant re-invigoration. Therefore, SRL invited several external keynote speakers and organized hands-on workshops to invigorate stakeholders with new ideas and increased clarity. The keynotes also helped keep the stakeholders updated on the state of the art. In C2, the projects were of shorter duration, leading to intense bursts of joint work. However, it was essential for SZ to keep stakeholders in the loop and build a network between customers for the sharing of ideas and artifacts, as well as for the establishment of new projects. In C3, we have the additional challenge of remote work, due to COVID-19, where active dialog is maintained by weekly meetings over Microsoft Teams and through close follow-up on research ideas and subsequent action points in these meetings.
Due to the large size of the project, there is a hierarchical communication structure in which smaller working groups work on specific topics and the results are summarized and communicated upwards in a leaders’ meeting. The different working groups also help manage potential conflicts of interest between competing companies.
Benefits: Continuous goal alignment and progress updates between the collaborating partners, which lead to shared ownership of both the problem and the solution, which in turn is crucial to prevent a drop in interest and commitment. A researcher in C2 pointed out, “I was amazed to see how quickly we converged on which feature should be developed first, once we started talking details with the engineers.” Maintaining active dialog and keeping people in the loop is a continual process. However, there is a fine line between annoying stakeholders with unnecessary emails and engaging in communication that is mutually beneficial. It is necessary to adjust the volume and velocity of communication to maintain a healthy active dialog.
Related Solutions: P7, P14, AP6, AP14.
6.5 P5: Power of Synergy
Problem: Confined research value creation due to dissociated efforts made by industry and academia separately.
Context: The growing complexity of industrial software systems developed by the majority of our partners requires a combination of knowledge, skills, and effort to develop cost-effective solutions for software engineering and testing. Such a combination of knowledge and skills is hardly found in any individual or team whose members share the same background. Moreover, co-creating with experts from different domains can push the engineering and validation of software systems to new horizons. Still, it happens that IA collaboration projects lack collective value creation, as each side of the collaboration works in isolation.
Solution: Enact managed research value co-creation, which entails a project plan with deliverables and milestones, a project timeline, and an analysis of risks and how to mitigate them. A project leader conducts regular meetings with all stakeholders to hear about updates and find synergies during co-creation. Partners have regular internal meetings with individual stakeholders in order to meet deadlines and develop satisfactory results. Activities need to be performed consistently, ideally avoiding a last-minute dash. All meeting notes and decisions are documented for easier onboarding of new members. In such a managed co-creation environment, individuals with different backgrounds, expertise, and perspectives are brought together to share knowledge and work on a joint problem whose solution brings benefits to all participating parties. Consequently, there is great potential for the identification of additional common areas of interest and synergies between the partners, which can lead to breakthroughs in research and practice. For instance, in C1, the testing of high-end video conferencing systems developed by Cisco Systems, with several variable parameters and requirements for high video throughput, pushed software testing research at SRL to handle a very large combinatorial problem. This led to novel algorithms published as scientific articles [35, 38, 39, 40, 41, 43] and software. In C2, SZ’s Flow sensor was used as a wearable in health (obstructive sleep apnea), sports (force in a rowing oar), and music (breathing rhythms while performing). All these application domains pushed the scientists, hardware and software developers in the collaboration to improve many aspects of the product: increasing battery life through the low energy consumption needed for overnight recordings, long-distance Bluetooth communication and re-connection logic due to uncertain connectivity, and machine learning models to predict variables of interest, such as respiratory flow and apnea events, from raw sensor data. These innovative activities were not foreseen during the conception phase of the project. As a hardware engineer at SZ mentioned,
“I would have never thought that we will get this far with all these new features we developed for Flow sensor. It’s because we all worked together bringing ideas from so many different fields.” In C3, one example of synergy is when computer scientists and electronic engineers co-create to enable data quality management at different levels of abstraction (real-time and aggregated data on the cloud) and across different domains of expertise (signal processing and machine learning). For example, electrical engineers from IDEKO manage data quality during analog to digital conversion in a real-time controller, by data conversion and aggregation. Computer scientists use this aggregated data from the real-time controller to perform data profiling. They use machine learning models to create virtual sensors, which are used to repair erroneous or missing data in one sensor based on data available from other sensors.
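The virtual-sensor repair described for C3 can be sketched as follows: fit one sensor as a function of a correlated sensor on historical data, then use the fit to fill in a missing reading. The linear model and all readings below are simplified assumptions; the actual C3 systems use richer machine learning models.

```python
# Minimal virtual sensor: ordinary least-squares fit of sensor_b as a
# linear function of sensor_a, used to repair a dropped sensor_b sample.
# All readings are invented for illustration.

def fit_linear(xs, ys):
    """Return (slope, intercept) for the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Historical, co-recorded readings from two correlated sensors.
sensor_a = [1.0, 2.0, 3.0, 4.0]
sensor_b = [2.1, 4.0, 6.1, 8.0]
slope, intercept = fit_linear(sensor_a, sensor_b)

# Sensor B dropped a sample while sensor A read 5.0; repair it.
repaired = slope * 5.0 + intercept
print(round(repaired, 1))  # → 10.0
```

The same pattern generalizes to many input sensors and nonlinear models, which is where the machine learning work in C3 comes in.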
Benefits: Collective creativity leads to more effective problem solving, often resulting in unexpected and non-obvious ways of value creation. The synergistic influences from different domains can make a tool or a product far more robust and appealing to many users.
Related Solutions: P1.
6.6 P6: Minimum Viable Tools
Problem: Lack of objective evidence showing the practical value of the knowledge created in early stages of a collaborative project.
Context: Practitioners engage in research collaborations with the objective of applying, in their own context, the knowledge created in the collaboration that is relevant to solving their practical problems. Researchers, on the other hand, are typically concerned with creating generalizable knowledge, and it may take a while before the first results of contextualizing such knowledge are available. Meanwhile, practitioners are kept in the dark, which makes them less invested in the collaboration effort, hampering long-term success. As an SE practitioner in C1 described, “Every time researchers present an update, they explain some new scientific stuff, but I never seem to quite get what are they talking about.”
Solution: Develop a minimum viable proof of concept (MVPC) and tools able to demonstrate the practical benefits of even simple research concepts in the early stages of collaboration. Such MVPCs help get co-creation started and create a sense of concreteness in a joint activity. MVPCs enable feedback from industry to academia in the early stages of solution development, which increases the probability of creating outcomes of mutual benefit. A minimum viable tool could be an artifact that automates tasks normally performed by humans or by older and less optimal software tools. We emphasize the co-creation of tools as a way to encapsulate a solution for a given problem context, as opposed to CMD by Dittrich, in order to emphasize the creation of an asset that can be configured and executed. CMD, on the other hand, is less tangible and not an asset that is a manifestation of ideas in a software or physical object, such as a sensor system. In C1, tools like TITAN and Depict were developed through continual interaction and testing with the respective industry partners. This lean approach to development gave the industry practitioners a sense of ownership over the tools developed, and the tools were adapted to suit their work environment. However, it is also very important to make sure the lean approach to development extends to more than just one partner. For instance, after being initially developed with the Norwegian Toll Customs, Depict was also improved through tests with the database systems at the Cancer Registry of Norway. In C2, the Flow sensor developed by SZ provided only raw data from forces measured from the expansion and contraction of the ribcage. This was a minimal signal that could be used to create many new value propositions with many partners, including the prediction of sleep apnea, respiratory flow, and breathing patterns for stress reduction, to name a few. It would have been much more expensive and uncertain to launch a product in the market if SZ had had to secretively develop it for a specific target over several months. In C3, researchers collaborate with practitioners to develop an MVPC on a publicly available dataset to predict tool wear in manufacturing. The publicly available dataset protects the industrial practitioner from sharing intellectual property while testing out ideas together with a researcher. The MVPC is developed to be generalizable to new datasets that will eventually be available from partners in the manufacturing industry. One such example of a co-created MVPC is the erroneous data repair system available on GitHub, which was demonstrated on the public dataset on CNC milling tool wear.
Benefits: Research value co-creation from the beginning of the collaboration, where industry gets to understand the practical benefits of abstract research concepts developed first by researchers, as well as to influence solution development by providing timely feedback.
Related Solutions: P7, AP2, AP3.
6.7 P7: Frequent Iteration
Problem: Progress on technical developments is reported sparsely on each side of the collaboration. Consequently, useful feedback to guide further development is also captured sparsely.
Context: In technology projects where software tools are key artifacts in a collaboration, it is important to demonstrate progress frequently and receive feedback to direct technology improvement efforts. A lack of feedback from researchers to practitioners leaves practitioners disengaged. A lack of feedback from practitioners to researchers leads researchers to base their solutions on possibly inaccurate working assumptions (about the requirements and constraints for the solution needed by industry). Inaccurate working assumptions in software tool development can lead to technical choices that may not be easily correctable later on. For example, we encountered such an issue while developing TITAN in collaboration with Cisco Systems, where inaccurate working assumptions led to a limited choice of how the tool could be used: TITAN was designed as a standalone tool, while Cisco’s preference was to use it as a service.
Solution: Demonstrate progress and intermediate results of technical developments often. Researchers can obtain feedback from practitioners on the soundness of technology choices. Practitioners can keep researchers up to date on their technical developments, which is necessary for the smooth integration of all technical artifacts developed. Frequency should be relatively high, daily or weekly, until a tipping point is reached, when the stakeholders using the co-created knowledge become self-sufficient. Ideally, the shorter this time, the better. The mutually acceptable frequency of iteration often reflects the interest level and investment of stakeholders in a project. It is necessary to take it as a cue and adapt accordingly. For instance, in C1, SRL was wary of introducing too many meetings that do not appeal to stakeholders or sometimes feel like a waste of time. Instead, the frequency was adjusted according to the level of intensity experienced in the collaboration. We recollect that the frequency increased when it was about trying tools hands-on and obtaining results, while it decreased when it was about writing scientific papers. In C2, the frequency was very high before events where the sensors were demonstrated, such as the MusicLab organized by the University of Oslo or the World Rowing Masters regatta. The exhibition of results to a larger audience or the public often creates more intense collaboration and frequent improvements. In C3, there is a high frequency of iteration in the requirements specification and task allocation processes, due to the diversity of the partners. The partners are experienced in their respective domains and have tools available to treat manufacturing sensor data. Hence, frequent iterations entail defining requirements and tasks for each partner, while minimizing conflicts of interest and ensuring the protection of intellectual property for commercial businesses.
As a researcher at SINTEF pointed out, “I think just because we are meeting so frequently, we avoid any conflicts over who is working on what.”
Benefits: Bidirectional feedback, provided often, helps validate results and ensures that potential misinterpretations and missed requirements are identified and fixed at the appropriate time. This can be early on or when the tool needs to be demonstrated to a larger audience. The resulting technology becomes applicable, usable, and scalable in the relevant context.
Related Solutions: P4, P6.
6.8 P8: Two-faceted Problem and Impact
Problem: Problems addressed in a collaboration are not relevant for one side of the collaboration.
Context: Academic researchers and industry practitioners have different motives for collaborative projects: researchers are driven by scientific challenges and practitioners by practical solutions. Although their drives are different in nature, for the collaboration to succeed, all stakeholders need to have the opportunity to reach their objectives and create impact that matters to them.
Solution: Define the problems with two sides to them i.e., as a practical problem and deriving from it, a research problem, which are closely linked. A practical problem is a need to improve unsatisfactory aspects of industry practice, caused by a concrete condition, for example, the lack of test automation. The practical problem caused by this condition may be the high cost of testing. A research problem is a gap in knowledge whose understanding is crucial for solving a practical problem. Thus by solving a research problem we create ground for solving its related practical problem. In C1, SRL defined the problem of test case selection and prioritization based on the needs of its industry partners ABB Robotics and Cisco partners. For researchers, this meant researching test optimization techniques, and for practitioners, this meant developing a tool that can alleviate the problem of manual test selection. As manual test selection was tedious, some of these tests were redundant and there was a need to research the problem of automatically selecting a minimal set of test cases and prioritizing them based on what changes were made in the code base. A researcher in C1 indicated that
“A two-faceted problem definition allowed us to, on the one hand, advance the state of test optimization for highly-configurable software, while on the other develop a tool TITAN [
42]
that reduced the effort of manual testing for Cisco Systems Norway.” Similarly, in C2, researchers at the University of Oslo developed algorithms to predict sleep apnea from the breathing data obtained by the flow sensor. This helped SZ define problems such as being energy efficient about overnight recordings, addressing issues with Bluetooth connectivity and reliable data storage. In C3, the problem initially defined was to perform erroneous data repair. IDEKO did not see a benefit in solving this problem, although it was written as part of the
description of work (DoW). Therefore, the problem specifications evolved slightly beyond the initial DoW. For instance, when SINTEF presented some of their previous work on data profiling with constraints on data, IDEKO became interested in using that technology in their manufacturing setup, as they saw it being immediately useful compared to erroneous data repair. SINTEF initially followed the DoW, but soon realized that they could bring more value to the project through data profiling with constraints. It was therefore necessary for SINTEF to be flexible and give higher priority to the real industrial problem than to a challenging research problem that was not immediately useful. Once trust is established through solving simpler and more pressing problems, it becomes easier to address more scientifically stimulating problems later on.
Benefits: Both sides of IA collaboration see the benefits of solving joint problems, and consequently creating impact, each from their own perspective, i.e., academic and industrial impact. This condition greatly contributes to active commitment to the collaboration from all partners.
Related Solutions: P1, P3, P11.
6.9 P9: Joint Authorship of Scientific Articles
Problem: Industry practitioners are often reluctant to participate in joint authorship of scientific articles.
Context: Authoring articles that combine scientific methods with insights from industry experts can bring increased credibility and reusable knowledge from the co-creation process. However, an important challenge is to involve industry professionals in the cumbersome writing process that requires several rounds of proofreading, editing, and precision in data and statistics, followed by peer review and publication. Industry practitioners are highly sensitive to the risk of revealing trade secrets and flaws in industrial systems through publications that could cast the business in a bad light. Some have a strict internal vetting process before any form of external communication of scientific results. This can discourage all stakeholders from performing thorough scientific studies. Nevertheless, transparency through published scientific articles about technology or methodology, such as high-quality software testing practices, can build trust in a company.
Solution: Writing and peer review help clarify complex concepts that companies often grapple with. Joint authorship of scientific articles should be seen as a way to distill complexity into a sequence of words and illustrations that clearly communicate the outcome of a co-creation process. Industry practitioners should ideally be engaged in describing their viewpoints in ways they are initially comfortable with. This can be in the form of user stories, answering a multiple-choice survey with carefully crafted questions, or interviews about their experiences, for example with software tools developed by researchers. The acceptance of a peer-reviewed publication by the larger community of researchers and practitioners can increase the enthusiasm of industry practitioners for the scientific publication process. Increased motivation can lead to practitioners taking up industrial PhD positions in collaboration with a research group. For instance, in C1, an employee from ABB Robotics decided to pursue an industrial PhD while on partial leave. During the PhD he published scientific articles [
46] in conferences and journals in the software engineering and testing community. Another practitioner from C1 pointed out that
“It was only after we wrote the first paper together with researchers that we started appreciating such a writing exercise, as it helped us understand scientific writing, which is often a great source of innovative ideas.” In C2, there were no instances of co-authorship; however, the industrial partner SZ has been acknowledged in several publications by research groups, which gave credit to SZ and hopefully increased the company's visibility. For instance, a study at the University of Oslo helped validate the accuracy of the Flow sensor developed by SZ in comparison to other low-cost sensors for sleep apnea studies [
34]. In C3, some of the commercial businesses have an interest in publishing results from the project. For instance, PREDICT approached SINTEF to work on temporal clustering of manufacturing data with a publication in sight. The publication can document approaches implemented by PREDICT, which has limited resources for communicating its research to the public. Ideas are often lost in source code at companies that do not have the capacity to document and test them. This, unfortunately, increases their technical debt in the long term.
Benefits: Joint authorship can have many advantages, such as: (a) documented industrial impact of scientific research, (b) skill upgrades for industry practitioners, such as through industrial PhD programs, (c) higher credibility for industrial products and processes due to transparency and evaluation in published scientific articles, (d) more efficient use of resources, because researchers can help practitioners write papers of high rigor, while practitioners can help strengthen the experimental evaluation by providing data and case studies, and (e) increased ability of industry practitioners to absorb academic knowledge disseminated in the community, typically through scientific papers.
Related Solutions: P5, P10.
6.10 P10: Managed Intellectual Property Rights
Problem: The rights to the intellectual property created during research knowledge co-creation need to be managed professionally to increase trust between partners.
Context: Intellectual property is a category of property that includes intangible creations of the human intellect. It can be categorized in the form of patents, copyrights, and trade secrets. For instance, in C2, SZ patented a sensor for respiratory inductance plethysmography. Similarly, partners in C1, such as Cisco Systems, have several trade secrets in the domain of video conferencing systems. Academic partners, such as SRL, developed software programs that are automatically protected under copyright as a defense against infringement. Every stakeholder in a collaboration has an interest in intellectual property, and the outputs of a co-creation process can generate property that some partners can exploit, as per a clearly specified agreement. However, as the observer in C1 noted during the observation sessions, “Managing intellectual property is difficult, as it is very hard to trace where ideas originate from and how they evolve into tangible products.”
Solution: A collaboration needs to be established based on a consortium agreement that specifies who the producers and consumers of intellectual property will be, as well as how the intellectual property will be maintained, if required, in the future. For instance, in C1, academic partners such as SRL are financed by the Norwegian Research Council through agreements. The consortium agreement stated that some partners, such as SRL, will publish research articles about their work with industry after careful scrutiny of the results together with the industry partners. Esito, for instance, was identified as an exploitation partner that would commercialize software, handle licensing agreements with other interested industrial partners, and provide maintenance support, if needed, in the future. In C2, SZ obtained a US patent on their invention of the breathing sensor Flow. The patent allowed SZ not only to attract investors, but also to license its technology to other companies. In C3, a consortium agreement has been signed by all partners on how intellectual property will be shared. All research results and ideas that are based on the data made available by industry partners and that are accepted for publication in scientific journals can be used by all partners. However, there are unwritten restrictions and a mutual understanding among competing partner companies not to reveal too much to each other. Neutral partners, such as research institutes, play an important role in brokering the interaction and protecting the interests of competing industrial partners in a large project.
Benefits: Establishing a legal framework such as a consortium agreement or a patent can simplify the collaboration and stem doubts that may arise around ownership of intellectual property.
Related Solutions: P9.
6.11 P11: Derive Research and Business-Critical Questions First, Share Data Later
Problem: Business-critical questions for partners in a co-creation process need to be vetted before seeking to curate data to address them.
Context: Researchers are often eager to work on available data and generate insights. However, industrial partners usually need a good reason to share data, which is considered intellectual property and can take several years to curate. Similarly, researchers themselves need to ask which business questions are of scientific value and worth addressing, rather than using whatever data are available to generate and answer research questions. Relying only on available data also creates a bias toward specifying obvious questions.
Solution: Deriving research and business-critical questions should ideally be performed before the extraction of data. Here, deriving questions entails discussions to understand which variables and metadata from a stakeholder can be relevant to address a problem. This is in contrast to requesting all possible data and then formulating questions based on that. Data are usually an asset in a company, and sharing them may not be useful unless the intention is clear. This also relates to the data minimization principle of the
General Data Protection Regulation (GDPR). Data minimization is a principle that states that collected and processed data should not be held or further used unless this is essential for reasons that were clearly stated in advance, in support of data privacy. In GDPR, this is defined as data that are
adequate. To derive research and business-critical questions, workshops where research and business questions are formulated, re-formulated, and prioritized by stakeholders can help. However, it is assumed that stakeholders know what data are available and can help guide discussions towards formulating research and business questions that are within a given scope. Most important is the stakeholders' satisfaction with the formulation of the questions, both for their business and scientific value. In C1, SRL organized workshops between stakeholders to come up with research questions relevant to businesses. These workshops intended to specify questions for research-based innovation. The questions need to be arguable with previous scientific evidence, should, if possible, have quantifiable answers, and should be answerable through prototype development and experimentation. The questions should lead a narrative, but also be mutually independent of one another. Once the questions were agreed on during these workshops, the rest of the half-year was used to co-create software prototypes to answer them. For instance, Toll Customs asked the question: Can we verify that all customs declarations are sent to the statistical bureau of Norway before being archived? This led to the development of the tool Depict, which was used to model and verify this requirement. In C2, SZ and UiO were interested in detecting obstructive sleep apnea from abdominal breathing data. Before the data collection took place, it was very important to define the research questions that needed to be answered. One of the first questions, about the quality of the breathing data and how good it is for clinical decision making, is addressed in [
34]. Similarly, in C3, a research question of interest is the quality of sensor data from machine tools, in particular its completeness and consistency. Addressing this question entails the development of data acquisition systems that ensure at least 99% data completeness and consistency. At the beginning of the project, there was a phase of gaining trust through presentations and refining research questions that could bring new value to industrial partners. For instance, an engineer from IDEKO explained
“We use real-time systems to acquire high frequency vibration data from machine tools, but do not perform data quality checks. This is where we use the help of SINTEF to specify assertions on the data and to verify them in near real-time on data arriving at a frequency of 1Hz.” This need had to be identified before SINTEF could gain access to their REST API to access data.
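To make the idea of near-real-time data-quality assertions concrete, the sketch below shows what such checks could look like in Python. It is purely illustrative: the `Reading` structure, expected sampling rate, and valid value range are assumptions for this example, not SINTEF's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    timestamp: float  # seconds since epoch (hypothetical field name)
    vibration: float  # sensor value (hypothetical field name)

def check_completeness(readings, expected_hz=1.0):
    """Fraction of expected samples actually received in the window."""
    if len(readings) < 2:
        return 0.0
    span = readings[-1].timestamp - readings[0].timestamp
    expected = span * expected_hz + 1  # samples expected over the span
    return min(1.0, len(readings) / expected)

def check_consistency(readings, lo=-50.0, hi=50.0):
    """Fraction of samples whose value lies in an assumed valid range."""
    ok = sum(1 for r in readings if lo <= r.vibration <= hi)
    return ok / len(readings) if readings else 0.0

# Simulated 1 Hz window with one dropped sample and one out-of-range value
window = [Reading(t, 0.1 * t) for t in range(10) if t != 5]
window.append(Reading(10, 999.0))
assert check_completeness(window, expected_hz=1.0) < 1.0  # a sample is missing
assert check_consistency(window) < 1.0                    # an outlier is present
```

In practice, such checks would run over sliding windows of the incoming stream, and any window whose completeness or consistency falls below an agreed threshold (e.g., the 99% KPI mentioned above) would raise an alert.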
Benefits: The benefit of deriving research and business-critical questions first is to agree on topics and spend time on something that is truly beneficial to the stakeholders in co-creation. It also places much less pressure on stakeholders holding data to give it away without seeing immediate benefits.
Related Solutions: P8.
6.12 P12: Minimize Moving Parts
Problem: A co-creation project can have many moving parts, such as team size, partnerships, technology changes, and the evolution of use cases, leading to many sources of uncertainty.
Context: Both short- and long-term co-creation projects can have partners that suffer from employee churn leading to changes in team size, partners joining and leaving consortia, and the evolution of use cases over time. These moving parts can lead to uncertainty in terms of human resources and project objectives. If one partner is more prone to frequent changes, this can also erode trust with other, more stable partners.
Solution: Minimizing moving parts in a project is typically necessary for its stability and longevity. When dialogues are established between specific people, it is best to select compatible people and maintain the dialogue instead of replacing people. Changing personnel in a dialogue can entail a large cost in on-boarding and re-establishing trust. If a company is large enough, it may even be necessary to have at least two employees following up on a project (a bus factor of 2), so that the project stays on track even if one of them leaves the company. Similarly, in terms of use cases and their owners, it is ideal if partners and problem definitions stay stable during the course of the project, with only minor variations. The choice of technological solutions to implement in a project should have a good mix of stable and experimental elements. The experimental aspects with high activity can be carried out in parallel, without affecting the long-term goals of a project hinged on more stable technological solutions. In C1, SRL relied on developing tools based on the stable Eclipse framework. The experimental elements were built as extensible plugins without affecting the base software. Both TITAN [
42] and Depict [
54,
56] were tools developed on the Eclipse platform. Most employees recruited on the team were PhD students or postdoctoral fellows with a work contract of 2–3 years, enabling long-term stability in the project. In C2, a handful of critical members were hired full-time by SZ to ensure sustained development and customer support in the resource-constrained setup of a startup. These included two software developers, one industrial designer, one electronics engineer, and the CEO. The other employees were hired only part-time, and some of them served roles in other organizations. In C3, efforts from partners are bounded by the number of person-months. There are usually two or three members from each partner attending all meetings in the project. The meeting minutes are documented and, since the start of COVID-19, many of the meetings have been recorded. The dialogue and relationships between partners have matured because the same people have conversed over a long period of time.
Benefits: As a project manager in C3 summarized, “Minimizing moving parts brings stability to a collaboration and allows going deeper into solving a problem by a team. It also helps minimize the unpredictable costs from on-boarding new members, learning new and experimental software tools, and dealing with changes in use cases under study.”
Related Solutions: None.
6.13 P13: Anticipate Unavailability of Data
Problem: Researchers are eager to obtain data, perform statistical analysis and machine learning, and generate visualizations to bring valuable insight. However, the data obtained from partners is often not conducive to statistically significant conclusions or adequate for tasks such as machine learning.
Context: When data are collected from an observation, they are not always stored with a clear purpose. Some variables can be observed and their evolution over time stored for potential future use. For instance, data can be in the form of logs from executing a software program. These logs represent daily use, but are not necessarily obtained from a controlled experiment. Using such data for making scientific conclusions may not be easy, as it does not allow a clear comparison between two different scenarios, such as in a randomized controlled trial. Many companies may collect years of such data and expect some value from it. Meanwhile, researchers seek high-quality data amenable to making scientific conclusions with strong statistical significance. The mismatch between the data made available and its expected utility for generating scientific evidence is a problem all parties in a co-creation should be wary of.
Solution: It is important to anticipate which data necessary to generate scientific evidence may be unavailable and plan for it. This can be done by: (1) defining scientific problems that can be addressed satisfactorily with existing streams of data, or (2) specifying and carrying out a new data collection protocol under controlled conditions amenable to high statistical significance. In C1, researchers decided to define scientific problems based on available data. For instance, in a collaboration with Cisco, researchers were provided with daily logs from automated testing. In [
57], SRL researchers used deep learning to help find the most failure-prone test cases in continuous engineering processes at Cisco Systems, without enforcing a new data collection protocol. On the other hand, studying the impact of the mobile game intervention FightHPV [
49], developed in collaboration with the Cancer Registry of Norway, required a stricter data collection protocol, where data about people using the app was collected based on informed consent, using their electronic national ID, also called BankID. Data about how people used the app and what they learned from it were used as part of an observational study comparing the uptake of screening and HPV vaccination before and after the launch of FightHPV. In C2, SZ required data from the Norwegian School of Sport Sciences to create physical and deep learning models to predict respiratory minute ventilation from ribcage movement data. Since the data collection process is cumbersome, the development of the deep learning models was based on maximizing the use of techniques that generalize reasonably well from small data, and on discussing the risks of faulty predictions for unforeseen data [
55]. In C3, sensor data from machine tools such as CNC milling, turning, broaching, and grinding machines are needed to develop tools to validate and improve data quality for Industry 4.0. The sensor data from machine tools are sometimes unavailable due to technical issues such as variable frequencies, and sometimes due to security measures preventing data tampering. Our strategy to address this lack of data is to summarize it based on the top five frequencies and discrete machine states derived at the edge from a high-frequency acquisition of accelerometer data that is itself unavailable. These summary data are secure and contain adequate information for other partners in the team to derive data quality hallmarks.
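As an illustration of this summarization strategy, the sketch below keeps only the top five spectral peaks of an acquisition window. It is a hedged example: the sample rate, window length, and function name are assumptions for illustration, not the project's actual edge pipeline.

```python
import numpy as np

def summarize_window(signal, sample_rate, top_k=5):
    """Reduce a high-frequency window to its top-k dominant frequencies.

    Returns (frequencies_hz, magnitudes) sorted by descending magnitude,
    discarding the raw samples so that only a compact, non-reversible
    summary leaves the machine.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mags = np.abs(spectrum)
    mags[0] = 0.0  # ignore the DC component
    top = np.argsort(mags)[-top_k:][::-1]  # indices of the k largest peaks
    return freqs[top], mags[top]

# One second of synthetic accelerometer data sampled at 1 kHz,
# with energy at 50 Hz and 120 Hz.
rate = 1000
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
freqs, mags = summarize_window(signal, rate)
assert freqs[0] == 50.0 and freqs[1] == 120.0
```

Only the five (frequency, magnitude) pairs per window would be shared with partners, which preserves the information needed for data quality hallmarks without exposing the raw signal.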
Benefits: As a researcher in C3 mentioned, “Anticipating the unavailability of data is very important to manage expectations in a team.” It is also important for thinking creatively about generating value from available data or establishing a data collection protocol very early in the process, in order to obtain statistically significant results.
Related Solutions: None.
6.14 P14: Define Long-term Tasks that Enable Remote Collaboration
Problem: The continuous engineering practice used in industry, with short release cycles, is often also expected from research activities.
Context: Continuous engineering practices in industry are based on rapid releases of product updates, with a customer base and utility that are relatively well-defined compared to a co-creation process involving research. This agile approach is beneficial when goals are clear and tasks are well-defined. However, problems that require long-term research and experimentation are often also expected to be broken down into small, time-bounded tasks, as seen in continuous engineering best practices. This is not always feasible for problems where ideas need to mature from uncertain and unpredictable experiments carried out jointly by researchers and industry practitioners.
Solution: Researchers need to study existing practices, read recent research, and think in isolation to reach mature insights that can help industrial stakeholders think outside the box. Research activities need to be formulated as long-term tasks running in parallel, enabled by remote collaboration to draw on minds and expertise from around the world. Remote collaboration has become even more pronounced during the COVID-19 pandemic, which forced many teams to work remotely and made it the norm. This is especially necessary for members of the collaboration obtaining a PhD degree or undergoing postdoctoral training. On the other hand, industrial partners focus on short-term tasks with a granularity small enough to be completed in a two-week sprint. This makes it difficult for industry to think long-term, and it is hence a win-win situation to work with neutral researchers who can look into topics that require long-term thinking and experimentation. In C1, SRL researchers explored long-term research topics such as the use of deep learning for test case prioritization and selection in continuous integration testing at Cisco Systems [
57]. The researcher was a PhD student whose work helped optimize the nightly build and test runs of Cisco's conferencing systems. In C2, SZ had the long-term goal of developing a mathematical model to predict respiratory flow from ribcage movement data obtained from their Flow sensor. This work required collaboration with the Norwegian School of Sport Sciences to obtain data to verify the validity of a physical model and also to test new techniques in deep learning [
55]. Data acquisition from an exercise spirometer and the Flow sensor developed by SZ was a long-term task performed in parallel and remotely by the sports school. Meanwhile, SZ focused on selling their sensor devices to stakeholders who could build their own applications. In C3, most manufacturing companies have a continual production of parts and have engaged with researchers to use the data from sensors monitoring the manufacturing process. For instance, RENAULT produces 1,500 cylinder heads for their Cleon engine per day. Their CNC machines can generate several hours of high-frequency sensor data while production is ongoing. Researchers from SINTEF use this stream of multivariate sensor data for long-term research, i.e., to compute metrics for data quality to ensure completeness and consistency close to one hundred percent. The data quality research runs in parallel, without interrupting the manufacturing process, and is in turn used to attain the long-term goal of zero defect manufacturing. As a researcher at SINTEF said,
“Long-term tasks facilitate remote work, as we work remotely in Norway and collaborate with partners in France and Spain to acquire relevant data for long-term research.”
Benefits: Defining long-term tasks allows stakeholders to perform activities that require maturity. It also aligns well with the goals of young researchers (PhDs and postdocs) who wish to establish themselves over a period of 2–3 years, not just as problem solvers but also as thinkers and philosophers in their domain. Long-term tasks also facilitate remote work, which is necessary for isolated thinking and reflection, and which became inevitable during the COVID-19 pandemic.
Related Solutions: P4, P7.
7 Anti-patterns for Industry–Academia Collaboration
Anti-patterns are an extension of patterns: they are frequently used but ineffective responses to commonly occurring problems [
10]. They are useful because they describe the causes of failures, which are important to understand so that the corresponding failures are avoided and mitigated rather than repeated. Like patterns, anti-patterns are specified using templates, which supports their reusability. We use the following template:
Synopsis ♦ Context ♦ AntiPattern Solution ♦ Symptoms and Consequences ♦ Refactored Solution ♦ Benefits and Consequences ♦ Related Solutions, illustrated in Figure
2. Anti-patterns are related to a specific
problematic solution, which occurs in a specific
context. They can be patterns that have become problematic over time, or that do not work in a context other than the one for which they were initially proposed. It is important to document how the solution became problematic by incorrectly resolving the underlying problems that exist in the context.
Symptoms are useful for recognizing a specific problematic solution (anti-pattern), and understanding its implications.
Consequences describe the implications of the problematic solution.
A refactored solution is an effective method of resolving the problematic solution. The refactored solution provides
benefits, but it may also have remaining unresolved issues called
consequences. Finally, the refactored solution can be related to other
related solutions.
7.1 AP1: Miss the Forest for the Trees
Synopsis: Focus on individual evaluation parameters and expect the emergence of impact.
Context: The success of an IA collaboration project is typically measured on each side of the collaboration using specific metrics. Such measurements help organizations evaluate the benefits of investing resources into such projects and serve as a compass for future investments of this kind.
AntiPattern Solution: During collaboration, scientists are asked to produce output such as a high number of publications, successful grants, and graduated PhD students. The h-index is often used to measure impact: the largest number h such that h of an author's publications have each been cited at least h times. However, there are many ways to boost the h-index, including self-citation, obliging authors to cite one's papers during a blind review process, or collaborating with established authors, to name a few. Similarly, industry practitioners are interested in sales and profit, as their bonuses are linked to such numbers, and they may overlook the ethical and environmental impact of their products and services.
Symptoms and Consequences: Focusing on parameters that evaluate scientists and industry practitioners individually often takes the focus away from the real impact that could be achieved as a team. Francis Darwin, the son of Charles Darwin, once said, “In science the credit goes to the man who convinces the world, not to the man to whom the idea first occurs.” This quote succinctly summarizes the difference between impact and output. Individual evaluation parameters may serve an individual's career progression, but they are not necessarily a good indicator of impact on society. Boosting individual parameters of academic excellence may overshadow innovative ideas among younger researchers. Many potentially world-changing inventions born in academia end up in the so-called technological valley of death because researchers lack the skills or support to convince funding agencies that their inventions are worth investing in and can change the world. Similarly, many companies die both slow and quick deaths by focusing only on short-term goals, such as sales, while ignoring ideas developed in research labs and startups, or societal trends.
Refactored Solution: High impact is defined by the Cambridge English Dictionary as the ability to withstand great force. Although this refers to materials, the same concept can be applied to scientific research or industrial products. Research that can withstand the scrutiny and testing of the brightest minds outside the echo chambers of confined research communities has the potential for impact. Similarly, products and services offered by companies need to beat their competition in several aspects to achieve real impact. High impact can be achieved when academia and industry can pinpoint the improvement or creation that end users see as a welcome change. For example, in C1, one of the industrial partners, Cisco Systems, was interested in building the highest quality of video conferencing systems, with special emphasis on security for high-end conversations. This drove them to make their testing methods world class, in collaboration with SRL researchers. A number of articles published on testing video conferencing systems increased the trust in the quality of Cisco's products. In another C1 project, a project manager at Toll Customs said, “We call this a high impact project; we see the improvement in our daily practice by using Depict, and we have joint publications showing this.” In C2, impact at the startup phase of SZ translated into increased sensor sales. SZ achieved impact by collaborating with numerous partners and having them purchase sensors and use them in their research. The published results and communication led to increased contact from more individuals, companies, and universities worldwide for sensor sales. SZ collaborated with artists and studios to promote the technology through installations in galleries and public spaces. These activities generated media coverage and direct interaction between SZ's technology and the general public. In C3, it is important for individual partners to have well-defined problems to solve.
In monthly meetings, we constantly re-align the partners by asking them how their work contributes to the key performance indicators (KPIs) of 99% average completeness and consistency in manufacturing data quality. These metrics can be realized individually; however, continuous re-alignment of the partners' metrics raises the quality culture across the entire manufacturing line. As operators are made aware of the quality of the data being acquired, they gain higher confidence in assessing quality. This helps enable the ultimate goal of zero defect manufacturing and the reduction of industrial waste, addressing the UN's sustainability goal 12 on responsible consumption and production.
Benefits: Greater impact is created by focusing on the bigger picture of how co-creation in a team brings real change to society, rather than on individual evaluation metrics.
Related Solutions: P3, AP13.
7.2 AP2: Non-reusable Minimal Viable Solutions
Synopsis: Developing solutions with minimal features for problem owners to validate the feasibility of ideas quickly, but overlooking solution extensibility and sustainability.
Context: It has become a common mantra to develop minimal viable software products in projects to impress and engage stakeholders.
Symptoms and Consequences: Many IA collaboration projects need to be realized in short periods of time, resulting in the development of minimal viable products. These products often end up becoming project-specific. There may be similarities between such projects that run simultaneously, but since they last for a limited time, there is very little time to abstract from the different repositories and develop coherent (and possible more generalized) artifact that could be reused. Therefore, there is a risk of reinventing the wheel for each project in the worst case scenario.
Anti-pattern Solution: A minimal viable product, or MVP, is quickly developed by an academic group to address an immediate industrial need. This is often done with impressive speed by reusing open-source solutions and existing long-standing libraries and tools. The development of an MVP is a well-known concept in the lean startup philosophy and has found its way into academia.
Refactored Solution: Research groups need to spend time developing and maintaining a vision and software artifacts that can stand the test of time. It is important to develop a core engine that can interface with many sources and sinks, can be deployed on many operating platforms, and has features that serve as building blocks for constructing solutions for projects with different requirements and stakeholders. Abstracting what will remain relevant over the long term requires continuous dialog and exchange of ideas between projects that run simultaneously. The goal is to enable abstraction from artifacts such as prototypical source code developed in projects. However, such communication between projects, especially those spread out in time, is also contingent on the clauses of an intellectual property rights agreement (if one exists). It is also necessary to build software artifacts with an architecture that is easy to use and extend. In C1, a core engine for test case optimization was developed and applied to problems presented by several industry partners, including Cisco Systems and ABB Robotics. In C2, SZ developed a skeleton app with an extensible architecture that could be easily adapted and tested for the needs of various stakeholders in a short period of time. A software developer at SZ indicated, “Developing an extensible app saved us so much time when reusing the solution for different applications.” In C3, all software artifacts are developed with extensibility in mind. For instance, SINTEF developed a system for erroneous data repair based on a data pipeline that can be improved with new machine learning models or new components developed by other project partners, such as for explainable AI.
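The idea of a core engine with pluggable sources, sinks, and processing steps can be sketched in a few lines. This is a minimal illustration of the architectural style, not the actual engine from C1 or C3; all class and function names are hypothetical:

```python
from typing import Callable, Iterable, List

class Engine:
    """Minimal core engine: the processing loop lives here, while sources,
    sinks, and processing steps are plugged in per project."""
    def __init__(self, source, sink):
        self.source, self.sink = source, sink
        self.steps: List[Callable[[dict], dict]] = []

    def add_step(self, step: Callable[[dict], dict]) -> "Engine":
        self.steps.append(step)
        return self  # allow chaining

    def run(self) -> None:
        for record in self.source.read():
            for step in self.steps:
                record = step(record)
            self.sink.write(record)

# Project-specific plug-ins: any object with read()/write() fits the engine.
class ListSource:
    def __init__(self, records): self.records = records
    def read(self) -> Iterable[dict]: return iter(self.records)

class ListSink:
    def __init__(self): self.out = []
    def write(self, record: dict) -> None: self.out.append(record)

sink = ListSink()
Engine(ListSource([{"x": 1}, {"x": 2}]), sink) \
    .add_step(lambda r: {**r, "x": r["x"] * 10}) \
    .run()
print(sink.out)  # [{'x': 10}, {'x': 20}]
```

A new project then only contributes its own source, sink, and steps, while the engine and existing building blocks are reused unchanged.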
Benefits: Moving from the idea of a minimal viable product to the concept of developing a minimal extensible engine can greatly improve reuse and applicability to a number of diverse problem domains. This mindset helps a research group position itself as the maintainer of a constantly reused software artifact, giving the group recognition in its community.
Related Solutions: P6, AP3, AP8.
7.3 AP3: Premature Results in Short-term Collaborations
Synopsis: Short-term IA collaborative projects may pose lower risk for industry investments, but inevitably lead to limited practical and scientific impact.
Context: The industry side of IA collaboration typically prefers short-term projects over long-term ones [9, 29], for example, to cater to requirements for a short-horizon investment of resources.
Anti-pattern Solution: A short term may align well with the goals of most industry companies; for instance, it speeds up learning about whether investing in the collaboration is worthwhile. However, short-termism is not always in alignment with the long-term research vision of a project.
Symptoms and Consequences: In such short-term collaborations, research results are often left yearning for scientific abstraction and maturity. A researcher at C1 indicated, “Our students often struggle with publishing the results coming from short-term projects, because they have not reached the maturity required for publication.”
Refactored Solution: Projects aiming to achieve both practical and research impact should aim for a mid-term duration. This gives enough time for the thinking and abstraction process necessary in scientific research, while also allowing the creation of time-tested tools that industry can effectively apply and assimilate. From our experience, both the C1 and C2 collaboration contexts benefited from structuring longer projects into sprints of at most three months, after which they were re-evaluated and adjusted. This was an optimal way to keep the project partners engaged in short bursts of a few months. Scientific maturity was attained through incremental publication and going through a peer-review process every six months or so. The scientific publication process gave us external validity for our chosen direction, while also helping us address questions that were not part of our foresight. This led to more maturity in the scientific aspects of the collaborations. In C3, certain deliverables have very little time to be developed and accepted in the early stages of the project. It is often important to communicate the premature nature of some of these deliverables to reviewers. Moreover, the project is planned in an incremental manner, such that new versions of deliverables are produced every year and the quality and maturity of the project’s results improve with time.
Benefits: Achievement of satisfactory levels of both scientific maturity and practical readiness of the results created in a collaboration.
Related Solutions: P6, AP2.
7.4 AP4: No Skin in the Game
Synopsis: “Free” IA collaboration projects may attract industry partners, but they run the risk of insufficient investment from those partners and overall dissatisfaction.
Context: Funding programs for IA collaboration projects involve academia and industry at varying levels of investment from each partner.
Anti-pattern Solution: In some IA collaboration projects, no investment is required from industry, making such projects “free” for the industry partners. Researchers spend several months identifying a relevant problem and coming up with a solution that can benefit the industry partner. The industry partner, however, remains only marginally involved in the project.
Symptoms and Consequences: The consequences of a partner not having skin in the game are reduced intensity and involvement in the project and a constant feeling of fragility. The partner with more skin in the game has to go out of its way to make sure the partnership holds with partners that have less at stake. The success and longevity of a relationship are also in jeopardy when partners give less and get services for free.
Refactored Solution: Having “skin in the game” means having incurred risk (monetary or otherwise) by being involved in achieving a goal. It is important for partners to have skin in the game in a co-creation project, such that they incur a cost for, for example, leaving the project midway. However, it is difficult to enforce such costs for long-term projects spanning several years, because relationships, trends, and enthusiasm can vary greatly. Hence, such costs should only be associated with short-term projects of three to at most six months. The simplest approach to achieving skin in the game is to introduce “in-cash” investment into a collaboration, so that all parties have a vested interest in the project. However, in-cash investment also has the potential to sour relationships between partners, as trust may be lost and one partner may want to pull the collaboration in a different direction. In C1, SRL established a consortium agreement under which partners had to commit at least one person-year every year to an 8-year project. In C2, partners were required to purchase sensors developed by SZ in order to obtain support. The alternative of lending sensors did not always work well, as some partners had no concrete plan for how to use them. A project manager at SZ noted, “Once we decided to put a price on the sensor, we could clearly see who of our collaborators is truly interested in collaboration.” In C3, most partners are invited into a consortium based on prior experience of working together. Partners who do not meet their commitments often undergo a natural selection process of being excluded from funding opportunities in subsequent projects. Hence, it is of paramount importance for both industry and research to attain a level of satisfaction in the collaboration by delivering what is promised. Skin in the game is ensured through the monitoring of effort spent on deliverables, attendance lists, and the level of participation in meetings.
Benefits: Having skin in the game will change the way partners collaborate, taking joint ownership of the project. It may also mean that there can be intellectual property disputes over the work put into the project.
Related Solutions: P1, P5, P8, AP5.
7.5 AP5: Inequitable Value Creation
Synopsis: Partners optimizing their own KPIs to create value, thus engendering inequitable value creation.
Context: Researchers and industry stakeholders in a project are measured based on KPIs that quantify the value they generate.
Anti-pattern Solution: Value creation for partners is an essential aspect of a research-based co-creation process. Value creation is any process that creates outputs that are more valuable than its inputs. This is the basis of efficiency and productivity, and it is often quantified through KPIs. Value for businesses can be quantified in the form of KPIs such as the number of improved processes, products, sales, and services. Value for researchers can be quantified in the form of KPIs such as publications in journals with a high impact factor and the creation of open-source tools and open datasets for research. Value can also be seen as the enhancement of knowledge or well-being in the workforce of all partners.
Symptoms and Consequences: There can easily be discord over which path creates value for each partner. For instance, researchers may be too focused and isolated while writing scientific publications, neglecting the need for value creation for an industrial partner, which may entail improving a product that is used on a daily basis. Similarly, industry partners may not see the value in a validation study of their products and services, even though it can create more trust in their company among shareholders and customers.
Refactored Solution: Value creation must be discussed with clarity at the very start of a project. Value creation can take place both in the short term of a few months and in the long term over a few years. It must be quantified such that each partner comes out of the project feeling that their inputs resulted in something of value to themselves. There must be an equitable balance in value creation for all partners. For instance, if researchers obtain a high-impact publication, enough effort and time must be spent ensuring that the result also increases sales or leads to improved products and services for industrial partners. Quantifying such creation of value over a common index is a possible way forward. In C1, SRL defined value creation in terms of a balance in the effort invested in the project, along with metrics about the artifacts created by different partners, including reports, research papers, software, and new sales leads. These metrics, seen together, gave the consortium an idea of where to invest more effort to keep value creation equitable. In C2, equitable value creation was measured in terms of the markets made available to SZ through the research collaboration. For instance, the prediction of obstructive sleep apnea by the University of Oslo opened a very large market that helped SZ refine its business strategy. In C3, making value creation equitable entailed finding a balance between creating data quality tools for manufacturing partners and publishing scientific results. The scientific results were essential for research institutes such as SINTEF and TU Darmstadt, while the tools developed during the project would eventually become prototypical components in the product lines of partners in the manufacturing sector, such as IDEKO, PREDICT, and DANOBAT.
Benefits: Equitable value creation in a co-creation project can increase its sustainability and strengthen the cohesion in generated value. For instance, establishing a causal link between high-impact research and increased visibility, sales, and stock valuation for an industry stakeholder can be a mutually acceptable win-win situation. A researcher at C1 said, “Equitable value creation is a must for sustainable collaboration, otherwise, one side of the collaboration will give up.”
Related Solutions: AP4.
7.6 AP6: The Incomplete Tower of Babel
Synopsis: Industry and academia speak different languages, differing especially in their degree of mathematical formalism, which leads to less clarity in co-creation.
Context: Stakeholders do not speak the same language, as in the biblical story of the Tower of Babel, which remained unfinished because the workmen could not understand one another’s language. Industry practitioners and academic researchers are groomed in different environments, from which they inherit different vocabularies with different degrees of formality. However, when practitioners and academics are involved in a coordinated effort to solve a problem together, it becomes important to communicate ideas in the same terminology, in order to avoid confusion and ambiguity, and to make sure that arguments and ideas are conveyed in a clear and convincing way. For instance, from our experience, the term testing for academics is less about physical and real-world testing (which is how industry usually interprets it) and more about generating variations of test cases in a constrained domain. Moreover, what academics call software tools are often only prototypes to practitioners, lacking robustness and many important -ilities [61].
Anti-pattern Solution: Academics like to strip the hype off a technology and specify a well-defined problem in the language of mathematics, while industry practitioners build their careers on technologies that are contemporary and cool. As an SE practitioner in C2 pointed out, “When we explain our daily practice to our academic partner, it is very difficult to use the same language they do, so they can understand us.” In a collaborative project, academics talk to industry practitioners, take notes, and create a mental image of the problem. After that, academics typically isolate themselves and transform the mental model into mathematical terms and formulas. They essentially convert an ill-conditioned problem from the real world into a well-conditioned problem using their preferred mathematical formalism. This formal specification is then used to exchange data and perform experiments to solve the problem that concerns the industry practitioner.
Symptoms and Consequences: The differences in terminology and language between academics and industry practitioners distance the stakeholders over time. A lack of dynamic and business-focused communication between academia and industry often leads to scientific research straying too far away from the changing needs of industry practitioners.
Refactored Solution: Understanding provides the foundation for effective collaboration. Therefore, both parties in a collaboration must make an effort, from its early stages, to understand the terminology used by the other party and stay synchronized. This can happen spontaneously in day-to-day interaction, but it can also be focused on a business purpose through lightning talks on specific topics. Such an approach helps readjust academic language to continually evolving industry needs, while practitioners learn to think clearly in terms of well-defined mathematical formalisms. For example, in C1, SRL researchers gave short talks on topics relevant to the work being done, such as combinatorial testing or multi-objective optimization, along with a presentation of how a real-world problem may be formalized. Practitioners, such as those from ABB Robotics and Cisco Systems, demonstrated how they test product features, which was eventually formalized as a combinatorial interaction testing problem. The important aspect of such knowledge exchange is asking frequent questions and clarifications, instead of assuming what a term means. SRL and its partners spent a large amount of time understanding and defining what testing really meant for each partner. For instance, testing for the Norwegian Toll Customs entailed verification of databases consisting of customs declarations, while testing for ABB Robotics meant the adherence of a painting robot’s operation to a trajectory specification. In C2, there were differences in terminology between machine learning experts and those performing signal processing on the sensor data. For instance, signal processing experts use techniques for transforming signals from the time domain to the frequency domain to obtain different features for reasoning about sensor data, whereas machine learning experts directly use raw time series data and a neural network to obtain outputs of interest with minimal feature engineering.
We bridged the gap between the two schools of thought by comparing results from both approaches as part of the software pipeline, from the early stages of the project. In C3, partners in the manufacturing domain and those in ICT often speak different languages. For instance, SINTEF formulated the problem of erroneous data repair in manufacturing sensor data as a machine learning and SE problem, involving concepts such as data version control, data pipelines, convolutional neural networks, epochs, training and test sets, and feature engineering. Despite the mathematical formulation of the machine learning problem, it was made quite clear to stakeholders in the manufacturing domain how the output of a neural network would contribute to improved data quality.
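The contrast between the two schools of thought can be made concrete in a few lines. The sketch below is purely illustrative (a synthetic signal and made-up sampling parameters, not data from C2):

```python
import cmath
import math
import random

random.seed(0)
N, fs = 256, 256.0  # window length and sampling rate (Hz); illustrative values
# Synthetic "sensor" signal: a 5 Hz sinusoid plus noise.
signal = [math.sin(2 * math.pi * 5 * n / fs) + 0.3 * random.gauss(0, 1)
          for n in range(N)]

# Signal-processing view: a discrete Fourier transform turns the raw window
# into frequency-domain magnitudes, from which hand-crafted features such as
# the dominant frequency are extracted.
def dft_magnitudes(x):
    n_samples = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_samples)
                    for n in range(n_samples)))
            for k in range(n_samples // 2)]

mags = dft_magnitudes(signal)
dominant_hz = max(range(1, len(mags)), key=lambda k: mags[k]) * fs / N  # skip DC bin
print(f"dominant frequency: {dominant_hz:.1f} Hz")  # 5.0 Hz

# Machine-learning view: skip feature engineering and hand the raw window
# straight to a model (e.g., a 1-D convolutional network).
raw_window = signal  # would be fed to the network as-is
```

Running both paths over the same windows, as described above, lets the two groups compare outputs directly instead of debating terminology.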
Benefits: Continuous synchronization between academia and industry to improve mutual understanding of concepts, models, and experimental results can help both academics and practitioners to achieve a higher level of clarity in co-creation.
Related Solutions: P4.
7.7 AP7: Immeasurable Objectives
Synopsis: Objectives serve the purpose of informing the partners of what can be expected from the collaboration. Immeasurable objectives, however, serve no purpose other than to contribute to the discontent of the stakeholders.
Context: Measuring the progress of a project at different stages of an IA collaboration is a good practice that helps reach the project goals faster.
Anti-pattern Solution: When the problem analysis is conducted at too high a level and the requirements of the target solution are collected casually, it is difficult to define clear and measurable objectives. An example of an unclear objective is to develop an effective tool for test case prioritization. The problem with this objective is that it is too vague and unspecific, providing no means to determine when the objective has been met.
Symptoms and Consequences: Ill-conceived objectives nearly always guarantee dissatisfaction with the outcome for the problem owner.
Refactored Solution: It is necessary to analyze the problem to be solved thoroughly, to capture detailed requirements, and to understand the desired outcome. This information should then be translated into an objective in a formalism with well-defined syntax and semantics (e.g., logic or a mathematical function). Objectives need to be specific and measurable. In C1, for instance, we defined the objective as developing a tool that applies test case prioritization with high fault detection as the objective function. The effectiveness of the tool can be measured in terms of the fault detection capability of the prioritized test cases, compared to manual test selection, using historical test data. In C2, we specified an objective as a machine learning problem: given overnight breathing patterns with manual annotations of apnea events, create a machine learning model that automatically predicts sleep apnea from new overnight breathing patterns. The effectiveness of the objective can be measured in the form of prediction accuracy and recall, and represented as a receiver operating characteristic (ROC) curve. As a researcher in C2 observed, “These measurements give clarity and confidence to a collaboration, as they serve as scientific evidence.” In C3, it was necessary to break down the high-level KPIs of achieving 99% data completeness and consistency into smaller, manageable, and measurable goals. SINTEF organized presentations for each partner in the group and produced a synthesis of who does what with the manufacturing data. This information was then used to break down the work into time-bound goals for each partner, to improve data quality and help achieve the goal of 99% completeness and consistency. For instance, SINTEF will develop a module for data profiling and erroneous data repair, INLECOM will develop a module for anomaly detection, PREDICT will develop a module for temporal clustering of manufacturing data, and DNV GL will develop the architecture for data quality as a service.
All these individual efforts have a measurable impact on the KPIs.
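Measurable objectives like the C2 one boil down to standard classification metrics. The sketch below uses made-up per-epoch labels, not measurements from the collaboration:

```python
def accuracy_recall(y_true, y_pred):
    """Accuracy: fraction of correct predictions; recall: fraction of
    true positive epochs that the model detects."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    positives = sum(y_true)
    return correct / len(y_true), (tp / positives if positives else 0.0)

# Illustrative apnea labels per epoch (1 = apnea event); hypothetical data.
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_pred = [0, 0, 1, 1, 0, 0, 0, 1]  # the model misses one event
acc, rec = accuracy_recall(y_true, y_pred)
print(f"accuracy = {acc:.3f}, recall = {rec:.3f}")  # accuracy = 0.875, recall = 0.750
```

An objective stated this way ("at least X% recall on held-out annotated nights") is directly checkable, unlike "develop an effective tool."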
Benefits: Well-defined objectives lead to measurable progress and more satisfaction with the results produced.
Related Solutions: P11.
7.8 AP8: Confound Lab Setup with the Real World
Synopsis: In SE, only research knowledge and prototypes tested in the wild stand a chance of success when deployed to operation.
Context: To prove useful in practice, the results of IA collaborative projects need to be thoroughly tested in their envisioned application scenarios.
Anti-pattern Solution: SE researchers often work to solve practical problems. For example, in our collaboration with Cisco, we used test logs to improve the cost-effectiveness of testing video conferencing systems. In the collaboration with ABB, we used operational data to predict maintenance activities of ABB robots. When designing solutions to such practical problems, researchers may overlook the requirements and constraints of deploying the solution in practice, such as the volume of data produced daily, or they may test the solution in overly simplistic scenarios. Failing to take real requirements into consideration results in limited scalability of the candidate solution. What seemed a promising technology when tested in the lab suddenly cannot be transferred into a working solution in practice.
Symptoms and Consequences: Knowledge developed and tested in the lab does not generalize to the real-world environment.
Refactored Solution: During the early stages of a collaboration, researchers need to understand as much of the domain as possible, to identify the technical and non-technical requirements of the solution sought. Because the requirements form the basis for deriving the working assumptions that define what the target solution should look like, early requirements capture is crucial. Later, during solution testing, the requirements can be used to generate a range of more realistic test scenarios to be checked, to validate the robustness of the systems once deployed in operation. In C1, SRL came close to recreating a real-world setup, given budgetary restrictions. The research lab acquired a video conferencing system from Cisco Systems and a UR3 collaborative robot from Universal Robots to perform testing research. This industrial equipment follows the same standards as the higher-end versions in the partners’ product lines. Experiments performed with this equipment were transferable to a real-world setup in a factory, due to the use of the same standards and software programs. In C2, sensors were tested in low-temperature ice baths, and for waterproofing we applied small pressure, to ensure that their connectivity and thermal sensitivity were understood and managed before deployment to a partner. These experiments were repeated several times to ensure that there were no confounding factors in the measurements. In C3, initial experiments on data quality validation were performed using a publicly available dataset on CNC milling.
However, demonstrating the algorithms for data quality validation and repair on this dataset led to obtaining access to IDEKO’s live CNC machine, with real data from production operating 24/7.
Benefits: Research knowledge and prototypes, when deployed in practice, scale to the complexity of the real-world environment. An SE practitioner in C1 said, “We seamlessly deployed the tool in our test framework, largely because the tool was developed and tested by working with real video conferencing system and data.”
Related Solutions: AP2.
7.9 AP9: Sunk Cost Fallacy in Public Research Grants
Synopsis: Investments in IA collaboration projects should not be contingent on previously invested resources, but on clear signs of the collaboration’s sustainability and its ability to create industrial and societal impact in the longer term.
Context: Academia aims to obtain long-term public grants (3–10 years) from national research funding agencies or the European Research Council for financial stability.
Anti-pattern Solution: Academic researchers create consortia with industrial partners and apply for funding from national and European research councils. These grants are meant to support long-term value creation and knowledge transfer from research labs to industry. The research councils, however, often tend to reinvest in the same people, companies, and research groups over several years or sometimes decades.
Symptoms and Consequences: The model of supporting businesses and academia with public grants has been applied for decades in several countries, with reasonable success in terms of exploitable intellectual property and improved training of young professionals prepared for industry. However, many research programs continue to be funded simply because they have received funding previously. The funding is often obtained by the same person, who grows into a leadership role. This prevents new and innovative ideas from getting enough attention and funding to develop, because the priorities, vision, and tone are set by the leader. Moreover, there is little accountability in terms of industrial and societal impact that is financially sustainable without public grant support.
Refactored Solution: Research groups should address industrial and societal impact, and its financial sustainability, during a long cycle of funding. This can involve taking risks such as spinning off a business with private investments to demonstrate adequate skin in the game from various stakeholders, innovation through patents, and job creation. This effort should be recognized by public grant bodies and used as evidence for a new round of research funding. In C1, SRL made one attempt at spinning off a software testing startup based on the technology developed through the research collaborations. However, due to a lack of resources, the initiative did not continue. A researcher in C1 who led the spin-off initiative reflected that
“While eventually we did not manage to get the startup off the ground due to the lack of resources, through the venture with investors we proved the innovation potential underlying the technology, which was followed by some seed money.” In C2, maintaining financial stability required SZ to pivot from a consumer product company to a consulting firm that white-labels its sensor product for larger actors. SZ did not actively seek public grants due to the large effort involved and the relatively low chances of success. It is also important to note that the creation of business entities requires that people with business, marketing, sales, and design acumen meet scientific researchers, and that such a meeting place is provided. In C1, SRL established the Simula Garage, an incubator facilitating such interactions. In C3, the ongoing project is a follow-up of another EU project, MC-Suite
that was smaller than InterQ in terms of budget and the number of partners. InterQ had to mitigate the perception of financing the same partners as in MC-Suite by involving new partners that it had not worked with earlier. The addition of new partners, such as TU Darmstadt, INLECOM, and DNV GL, has brought many new ideas into the project that were not foreseen by its older members.
Benefits: Nudging use-inspired research and co-creation towards financial sustainability helps minimize the dependence of research groups on purely public funding. Public grant bodies, on the other hand, can diversify their investments and avoid the sunk cost fallacy.
Related Solutions: AP11.
7.10 AP10: Adhering to Traditional Reward Mechanisms
Synopsis: Adherence to traditional reward mechanisms may suit individual partners, but only adherence to bilateral reward mechanisms can bring about successful IA research co-creation.
Context: All partners in a co-creation project are driven by traditional reward systems internalized from years of operation in specific contexts. These reward mechanisms are followed to generate activity and quantifiable numbers that help individuals advance in their careers.
Anti-pattern Solution: Industry partners are rewarded for product quality, the number of customers acquired, and customer satisfaction, while researchers are rewarded for publications in journals with a high impact factor. In C1, for instance, one metric of interest for SRL researchers was publications in high-impact journals and top conferences, while for the industry partner Cisco the main metric has been maintaining high product quality through rigorous testing. Similarly, in C2, the main reward mechanism for SZ was high-volume sales of its sensors and simplification of customer support, while for its academic partners it was high-quality publications.
Symptoms and Consequences: Mismatches in reward mechanisms can isolate partners, as each partner tries to optimize the metrics that interest its own organization.
Refactored Solution: Each partner is the quintessential frog in its own well unless it understands the reward mechanisms of the others. For instance, it is important for a research organization to realize that companies can only thrive financially when they sell their products and services. One solution is to have researchers work part-time in a company to understand its reward mechanisms. Similarly, it is also worthwhile for an industry practitioner to spend some time in a research group and understand the complexity of publishing high-quality scientific articles and their benefits for society. In C1, we had an industrial PhD student from ABB Robotics who spent some time in a research group and published several papers. In addition, he facilitated the political decisions to incorporate research artifacts produced in the collaboration for testing ABB’s painting robots. A researcher at SRL pointed out that “This factor greatly streamlined the collaboration between SRL and ABB robotics.” Similarly, in C2, the collaboration between SZ and the University of Oslo was better aligned after the University of Oslo included SZ in research grant proposals as a partner and a vendor, helping increase sales for SZ. In C3, reward mechanisms are defined as KPIs in the consortium agreement, before the commencement of the project. These KPIs are goals that all partners in the project agree to and see as feasible to attain, despite the reward systems within their own organizations. These KPIs are often specified to be simple, easy to achieve, and open to interpretation. For instance, in InterQ, a KPI is to achieve 99% data completeness in acquiring manufacturing data, and what is most relevant to the project is demonstrating this goal.
Benefits: Matching reward mechanisms can make it easy to develop a win-win situation for collaborators in a project. A collaboration can flourish if the partners understand each other’s reward mechanisms and the need for them in society. Ideally, companies that can gain an edge from research will see increased sales of their products and services, while researchers can create high impact from their research work.
Related Solutions: P3, P8, AP13.
7.11 AP11: Arranged Marriage
Synopsis: Long-term IA collaboration projects need to welcome change in different aspects of the collaboration (leadership, resources) as an opportunity to improve defective parts and co-create value.
Context: National and European councils that fund scientific research set criteria that require the establishment of long-term co-creation projects between diverse stakeholders, many of whom may never have worked together in the past. It is in the best political and financial interests of the research councils to ensure that these long-term projects succeed, very much like arranged marriages in India.
Anti-Pattern Solution: Whenever a governmental research council decides to allocate funds to a project involving academic and industry partners, a consortium agreement is signed that outlines how time will be spent by each partner and how the intellectual results will be exploited. It also outlines how conflicts will be managed and mitigated.
Symptoms and Consequences: The stakeholder responsible for running the project is often asked to keep the funding bodies and research council happy and satisfied about the ongoing collaboration. This can lead to phrasing the results and outcomes of a project with a steady streak of positivity. The non-engagement of some partners and poor outcomes are not presented with full clarity, since it is in nobody's interest to do so, especially when the money comes from public funding with no real ownership. Reviewers appointed to evaluate projects see their role reduced to going through a checklist, as they do not really have "skin in the game." Elected politicians may sometimes be seen as in charge of setting priorities and allocating grants; however, their own terms are often shorter than the length of long-term co-creation projects. It has become very common to smooth the communication of results to hide bigger issues underlying a project. This often leads to a tremendous loss of resources and public taxpayer money over several years. It also leads to sub-optimal, forced relationships between stakeholders in projects that lack the level of synergy necessary to stand the test of time and benefit society.
Refactored Solution: Negative results should be given as much importance as positive results in co-creation projects. When partnerships do not reach a mutually beneficial agreement, it should be easy to replace partners in long-running projects. Intellectual property agreements should ensure that partners are fairly compensated for their contributions in case they are replaced; this may be seen as analogous to a divorce in a marriage. Similarly, changes of personnel and leadership should be considered in research projects when conflicts arise frequently and are not resolved. Such changes should be treated like changes of leadership in businesses, where a board elects a CEO. All in all, the funding bodies should learn to embrace both positive and negative results as their own and see the value in honesty. In C1, SRL had to part ways with Norwegian Toll Customs and FMC Technologies midway through the project, as the companies' strategies did not align with the goals of the project; they did not see value in spending more time and effort on the project after a few years. This was communicated to the Norwegian Research Council, and a new partner, the Cancer Registry of Norway, was added to the project. In C2, most public grants were secured by the University of Oslo, and the collaboration with SZ lasted for short periods of time, whenever it was relevant. Here, the person of contact or the leader of the project varied depending on who was best suited to manage the collaboration. In C3, the criteria require partners from at least three European countries in a project, which can be seen as a peace project that fosters collaboration while countries get to know each other despite cultural differences. A good approach to a successful collaboration is to have a solid foundation of partners who have worked with each other, and then add new partners to the project.
For instance, SINTEF, IDEKO, DANOBAT, and TEKNIKER as one group, and Renault, Predict, and Comau as another, have previously worked together, while INLECOM, DNVGL, and TU Darmstadt are newcomers to the consortium. Despite the strong collaborative foundation, some of the new members have had reservations about the collaboration, due to potential conflicts of interest in intellectual property and in business partnerships. We mitigate the effect of such reservations by creating separate groups of industrial partners with conflicts of interest, along with neutral researchers who ensure that project results are shared equally by both parties.
Benefits: There are many benefits to being flexible with agreements in long-term collaboration projects. When there is a lack of synergy, partners can leave and be replaced. As a project manager at SZ reflected, "It is in the best interest of all parties to say when things are not working, instead of hiding it, so that we can focus energy on what is working." Similarly, the leadership of a publicly funded project should be mutable, as in businesses. This can inspire honesty in projects and hope for change when projects are not running optimally and need a fresh injection of energy. This flexibility of course comes at a cost that needs to be clearly quantified before major structural changes are made. The funding bodies should take an active role in evaluating alternate paths when they see that projects are not bringing the benefits to society they initially hoped for.
Related Solutions: P1, AP4, AP9.
7.12 AP12: Repackaging Ideas
Synopsis: Stakeholders repackage old ideas and concepts into terminology developed for new trends and hype.
Context: New technologies appear on the Gartner hype cycle every year and are much discussed in the news and media. Funding agencies prepare call texts based on what is up and coming. Technologies move quickly on the hype cycle, and hence stakeholders, in both academia and industry, find it necessary to repackage something they have been working on for years to fit the demands and pressure of new trends and terminology.
Anti-Pattern Solution: Industrial partners want to stay relevant by keeping abreast of and using the latest technological innovations. In an IA co-creation process, industry partners often rely on researchers to repackage what has been done for years using a dated technology into a new technology, mainly because industry lacks the resources to experiment with new technology itself. For instance, industry would like to experiment with NoSQL databases instead of relational databases, to scale to more data and users. Researchers, on the other hand, repackage their old concepts with new terminology. For instance, as an industry practitioner in C1 said, "Constraint programming has been around for a couple of decades, but in contemporary times a preferred term would be symbolic AI to stay relevant and possibly increase a chance of obtaining funding. But in essence, if constraint programming did not work in practice because of scalability issues, neither will symbolic AI."
Symptoms and Consequences: Repackaging old ideas into new terms and technologies creates the following issues: (a) new technologies may not be relevant to the problem at hand, (b) the use of a new technology may be only superficial if its real advantages are not understood, (c) new technologies and terms may not stand the test of time and may die out too early, and (d) a lot of money and person-hours can be spent repackaging and reselling old ideas in a new framework.
Refactored Solution: It is often important to embrace the emergence of a new technological trend, as it is a culmination of what has become possible with, for instance, growing computational power and storage. Both academia and industry want to stay relevant and up to date in their respective domains. Instead of repackaging old ideas using new terminology and technologies, we suggest that the stakeholders first identify the core problem in a mathematical formalism that is independent of new trends and technology. This helps the team think clearly without being blinded by hype, and it is also where old and new concepts have many aspects in common. The implementation of a solution, however, can use new technologies and terminology. In C1, researchers in SRL stripped a complex problem down to mathematical symbols and statements to understand its real complexity. For instance, the problem of test selection and prioritization for video conferencing software at Cisco was first specified as a learning problem [37], followed by the choice of technologies, such as TensorFlow or PyTorch, for machine learning at a larger scale. In C2, the problem of predicting respiratory minute ventilation from ribcage movement data was initially approached by creating a biophysical model of respiration. This had its limitations, as ribcage movement measurements varied from person to person. It was therefore necessary to address the same problem with contemporary deep learning technology: scientists in the co-creation project specified it as a deep learning problem based on data collected from different people. The deep learning model was eventually trained and tested using modern machine learning frameworks, such as TensorFlow and PyTorch, with better results. In C3, the idea of controlling the manufacturing process using observational data is not new. However, deep learning models are trendy, and hence the idea of using deep neural networks was repackaged for tasks such as improving data quality from faulty sensors.
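To illustrate what "specifying test selection and prioritization as a learning problem" can look like in its simplest form, the sketch below ranks tests by their smoothed historical failure rate. This is a hypothetical toy formulation, not the actual model from the cited work, which would use richer features (test duration, code churn, coverage):

```python
from collections import defaultdict


class FailureRatePrioritizer:
    """Toy test prioritizer: rank tests by observed failure rate.

    A stand-in illustrating the core learning formulation; class and
    method names are illustrative, not from the cited system.
    """

    def __init__(self):
        self.runs = defaultdict(int)   # test id -> number of executions
        self.fails = defaultdict(int)  # test id -> number of failures

    def observe(self, test_id, failed):
        """Record one execution outcome of a test."""
        self.runs[test_id] += 1
        if failed:
            self.fails[test_id] += 1

    def score(self, test_id):
        """Smoothed failure rate; unseen tests get a neutral 0.5."""
        return (self.fails[test_id] + 1) / (self.runs[test_id] + 2)

    def prioritize(self, test_ids):
        """Order tests so that the most failure-prone run first."""
        return sorted(test_ids, key=self.score, reverse=True)
```

Note how the problem statement (learn a score from execution history, then sort) is independent of any particular framework; TensorFlow or PyTorch only enter once a richer model replaces the simple frequency estimate.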
Benefits: Simplifying a problem to its bare-bones mathematical formulation makes it agnostic to trends, newfangled terminology, and technology. This brings clarity to the co-creation process, and all stakeholders align on the core problem instead of focusing on finding a new wrapper as a solution. Once the problem is well defined, stakeholders can make an informed decision on which technology best solves it.
Related Solutions: AP6.
7.13 AP13: McNamara Fallacy
Synopsis: When decisions are made in a co-creation process, only quantifiable results, outcomes, and observations matter.
Context: A co-creation process between researchers and practitioners needs to be evaluated on a regular basis by a grant-giving body. For a project spanning several years, such evaluation often takes place annually, midway, and at the end.
Anti-Pattern Solution: The evaluation of a co-creation project is often performed using selected metrics. These include the number of publications, citation indices, and the number and monetary value of grants won by researchers, while industry practitioners are evaluated on sales figures and, in the case of consultancy firms, the monetary value of projects acquired.
Symptoms and Consequences: Robert McNamara earned an MBA from Harvard University in 1939. He was the first president of the Ford Motor Company from outside the Ford family, and he eventually became the US Secretary of Defense during the Vietnam war. During his time at Ford, McNamara was known to select data points and ruthlessly optimize efficiency, costs, and quality. He brought the same approach to the Vietnam war, where his metric for success was body count. This was a poor measure of how the war was progressing, because it reduced a deeply human process to a mere figure, a mistake now known as the McNamara fallacy. Similarly, co-creation between researchers and industry practitioners is a deeply human process spanning months to years. A researcher in C2 reflected, "Measuring outcomes based on quantifiable metrics make us forget about the social and human process behind the success or failure, which is wrong, because this process can be incredibly insightful and valuable to learn from." Very little time is spent on understanding the human experience of running a project. A sole focus on academic productivity based on metrics has been associated with poorer physical health, increased burnout, and reduced productivity [25].
Refactored Solution: The human process of co-creation needs to be recognized for its beauty and depth, and also for the costs incurred. The process should be documented with a narrative, because it is valuable and insightful to know that behind technological innovations there are humans with flaws and challenges. Writing and documenting the process helps externalize the non-quantifiable aspects that lead to quantifiable outcomes, such as a publication or a software artifact. For instance, in C2, there was a need to create awareness about improving physical health through a focus on breathing using the Flow sensor developed by SZ. This was achieved through simpler popular-science communication in a press note, which led to increased awareness about breathing as a metric that we often ignore in sports. In C3, gaining trust between research institutes and commercial partners in the manufacturing industry is of paramount importance, because researchers are seen as neutral between two or more companies competing in the same sector. One important thing that is not quantifiable is demonstrating that researchers can successfully balance the trade-off between the openness needed for research and protecting the intellectual property and interests of individual commercial partners in the project.
Benefits: Focusing not only on quantifiable results lays emphasis on the often challenging human process that results in a technological innovation. Knowledge of the human process can help improve co-creation and enhance the state of flow among collaboration participants, which can drastically mitigate burnout. The management of stakeholders by neutral parties such as research institutes is never quantified, but the trust developed during the project helps such institutes be invited to new collaborations in the future.
Related Solutions: P3, P8.
7.14 AP14: Not Invented Here
Synopsis: Collaboration partners show bias against internalizing the knowledge that originates from a different field of expertise.
Context: Collaborative projects between industry and academia are developed with an idea that each side of the collaboration has an expertise required for addressing the project challenges, but not possessed by the other side. For example, in C1, SRL researchers had a unique expertise in combinatorial test optimization, which Cisco engineers did not have, and which was needed for optimizing the testing of video-conferencing systems at Cisco. On the other hand, Cisco possessed video-conferencing systems, and had a deep knowledge of the challenges and constraints for testing such systems.
Anti-Pattern Solution: In a typical collaborative project, industry and academia initially meet to discuss industry practice and identify the challenges that industry is looking to solve. The discussions lead to a set of requirements for the solution to be developed. Afterwards, the two teams part ways and start working on the project with sparse interaction and limited opportunity to update each other on the direction of their work. A few months later, academia is ready to have industry deploy the research prototype they have developed in the lab. However, industry avoids using the prototype, believing that if it has not been developed in-house, it is not as valuable.
Symptoms and Consequences: Industry's bias against external knowledge develops as a consequence of having no insight into the knowledge development process, no understanding of the underlying research concepts, and no access to decision making about technical choices during prototype development. In C1, in the example of Cisco video-conferencing system testing, the initial research prototype developed by SRL researchers met a similar fate, because there were technological incompatibilities between the prototype and the testing toolchain into which it should have been integrated. Other reasons for the "not invented here syndrome" include intellectual property concerns, costly absorption due to a steep learning curve of the core research concepts, or simply the risks associated with concepts unproven in the real world.
Refactored Solution: It is critical to establish co-creation as a form of collaboration in which industry and academia engage in continuous interaction during all stages, from problem definition to solution deployment. Co-creation builds mutual trust and partnership between the collaborating parties and leaves no room for the notion of "external" knowledge: there is only one body of knowledge, equally owned by everyone involved in the project. For example, in C1, co-creation was established between SRL researchers and ABB engineers through several strategies. Researchers spent a lot of time at the industry site, working side by side with ABB's engineers. An industrial PhD student, employed by ABB and supervised by an SRL researcher, helped catalyze co-creation, as this person was able to connect knowledge from both teams and make them realize the value of each other's expertise. In C2, one challenge for the startup SZ was to demonstrate that its invention was cheaper and more accurate than existing solutions. The UiO research group on obstructive sleep apnea tested the Flow sensor against several other low-cost sensors [34] and demonstrated that it could indeed be used for clinical studies. This external validation in a clinical study helped overcome the not-invented-here bias and positioned SZ's Flow sensor for medical health applications, going beyond the initial focus on sports. In C3, it was not easy for the research institutes to obtain access to manufacturers' data for AI research, as the benefits of using AI, although advocated initially, were not immediately apparent. Access to a data stream required several months of presentations and convincing, supported by experiments on publicly available datasets. Yet the most useful application of the manufacturing sensor data turned out to be data profiling, and the AI models were only a bonus. The use of AI is not very widespread in manufacturing, and it is hard for industry actors to pause and comprehend its benefits when production runs 24 hours a day. Renault, for instance, mills 1,500 cylinder heads per day in its Valladolid factory in Spain and has to see how AI benefits its process while in operation. Making an effort to understand how another partner's expertise can improve one's own processes is key to mitigating the not-invented-here syndrome.
Benefits: More efficient use of time and resources, less reinvention and duplication. There is no negative attitude towards using the knowledge developed in the collaboration project. Instead, there is a strong sense of ownership of such knowledge, as it was created by a joint effort from industry and academia. A researcher in C1 mentioned that “At the beginning of collaboration, we often see a disinterest of practitioners for the ideas we present. But once we establish trust and start interacting frequently, this bias completely disappears.”
Related Solutions: P4, P5, P7, AP6.