Open Access. Published by De Gruyter, January 6, 2025. Licensed under CC BY 4.0.

Information Circularity Assistance based on extreme data

Utilizing Artificial Intelligence, Scenario-Technique and Digital Twins to solve challenges of product creation for Circular Economy

Informationszirkularitäts-Assistenz auf der Basis extremer Daten
Einsatz von künstlicher Intelligenz, Szenario-Technik und digitalen Zwillingen zur Lösung von Herausforderungen der Produktentstehung für die Kreislaufwirtschaft
  • Iris Graessler

    Prof. Dr.-Ing. Iris Graessler studied mechanical engineering at the RWTH Aachen University, where she received her doctorate degree in 1999. In 2003 she qualified as a professor at the RWTH Aachen University due to her habilitation thesis entitled “Development of Configurable Adaptive Mechatronic Systems”. After thirteen years as a manager at Robert Bosch GmbH in the fields of product development, production systems and change management, she has held the Chair for Product Creation at the Heinz Nixdorf Institute of the University of Paderborn since 2013. Her research focuses on strategic product planning, especially Scenario-Technique, resilient requirements engineering, systems engineering as well as digitalization and virtualization. The emphasis lies on adaptive configurable mechatronic systems and cyber-physical systems. Graessler commits herself to the VDI (Association of German Engineers) as a member of the Executive Board of VDI/VDE Society for Measurement and Automation Technology (GMA), acting as Chairwoman of the Advisory Board and of the Department 3 “Digitalization and Virtualization”. Among other roles, she is Member of the Scientific Society for Product Development (WiGeP) and coordinator of the Priority Program SPP 2443 “Hybrid Decision Support in Product Creation” by the German Research Foundation (DFG).

  • Michael Weyrich

    Professor Michael Weyrich is head of the Institute of Industrial Automation and Software Engineering at the University of Stuttgart. He researches and teaches in the area of industrial information technology for automation. His research interests include intelligent automation components, autonomous systems, and automation safety. He studied at the Universities of Saarbrücken, Bochum and London and received his PhD from RWTH Aachen. He holds an honorary doctorate from the Ukrainian University TUDon. Before joining the university, he worked in the multinational industry for 10 years, including 4 years in Asia. He is Chairman of the VDI/VDE Society for Measurement and Automation Technology and a member of the VDE Executive Board. He holds several positions, e.g. head of the VDI/VDE Technical Committee on testing networked systems, director of the Innovation Campus Mobility, industry chair of the International Federation of Control (IFAC) for “Computers for Control”, member of the program committee in the IEEE ETFA conferences as well as member of the editorial board of the journal ATP edition.

  • Jens Pottebaum

    Dr.-Ing. Jens Pottebaum has been Senior Researcher at the chair for Product Creation at the Heinz Nixdorf Institute at Paderborn University since 2015. He studied Mechanical Engineering and Computer Science at Paderborn University. In 2011, he defended his PhD on “Optimization of application oriented learning by knowledge identification” with distinction. His research is focused on information based learning cycles in the context of Product Creation. With his background in both Mechanical Engineering and Computer Science, he researches and teaches on “Digital and Virtual Product Creation”. He plays a coordinating role in national and European research associations such as RepAIR, ANYWHERE, CREXDATA and Decide4ECO as well as the DFG Priority Program SPP 2443 “Hybrid Decision Support in Product Creation”.

  • Simon Kamm

    Simon Kamm, M.Sc., studied Electrical Engineering and Information Technology at the University of Stuttgart. Since 2020, he has been a research assistant at the Institute of Industrial Automation and Software Engineering at the University of Stuttgart. His research focusses on machine learning based on heterogeneous data sources and types.


Abstract

This paper presents the concept of Information Circularity Assistance, which provides decision support in the early stages of product creation for Circular Economy. Engineers in strategic product planning need to proactively predict the quantity, quality, and timing of secondary materials and returned components. For example, products with high recycled content will only be economically sustainable if the material is actually available in the future product life. Our assumption is that Information Circularity Assistance enables decision makers to incorporate insights from extreme data – high-volume, high-velocity, heterogeneous and distributed data from the product life – into product creation through intelligent Digital Twins. Artificial Intelligence can help to derive sustainable actions in favor of circular products by processing extreme data and enriching it with expert knowledge. The research contributes in three key dimensions. First, a comprehensive literature review is conducted. This review covers concepts of intelligence in Scenario-Technique for strategic product planning, Digital Twin-based analysis of extreme data and relevant technologies from Data Science and Artificial Intelligence. In all areas, the state of the art and emerging trends are identified. Second, the study identifies information needs along the steps of the Scenario-Technique and information offerings based on Digital Twins. The concept of Information Circularity Assistance results from the coupling of these demands and offerings, extending the Scenario-Technique beyond traditional expert-based methods. Third, we extend existing Digital Twin methods used in circularity and discuss the deployment of Data Science and Artificial Intelligence algorithms within the product creation process. Our approach uses extreme data to provide a strategic advantage in optimizing product life cycle planning, which is illustrated by two sample applications.
The aim is to provide Information Circularity Assistance that will support experienced product planners, developers, and decision makers in the future.

Zusammenfassung

In diesem Papier wird das Konzept der Informationszirkularitäts-Assistenz vorgestellt, das in den frühen Phasen der Produktentstehung für die Kreislaufwirtschaft Entscheidungshilfen bietet. Ingenieure in der strategischen Produktplanung müssen proaktiv die Menge, die Qualität und den Zeitpunkt von Sekundärmaterialien und zurückgegebenen Komponenten vorhersagen. So sind beispielsweise Produkte mit hohem Recyclinganteil nur dann wirtschaftlich nachhaltig, wenn das Material im zukünftigen Produktleben tatsächlich verfügbar ist. Unsere Annahme ist, dass die Informationszirkularitäts-Assistenz es Entscheidungsträgern ermöglicht, Erkenntnisse aus extremen Daten – hochvolumige, hochschnelle, heterogene und verteilte Daten aus dem Produktlebenszyklus – durch intelligente digitale Zwillinge in die Produktentstehung einfließen zu lassen. Künstliche Intelligenz kann dabei helfen, nachhaltige Maßnahmen zugunsten zirkulärer Produkte abzuleiten, indem sie extreme Daten verarbeitet und mit Expertenwissen anreichert. Die Forschung trägt in drei wesentlichen Dimensionen bei. Zunächst wird eine systematische Literaturrecherche durchgeführt. Diese umfasst Konzepte der Intelligenz in der Szenario-Technik für die strategische Produktplanung, die auf digitalen Zwillingen basierende Analyse extremer Daten und relevante Technologien aus den Bereichen Data Science und Künstliche Intelligenz. In allen Bereichen werden der Stand der Technik und aufkommende Trends ermittelt. Zweitens identifiziert die Studie den Informationsbedarf entlang der Schritte der Szenario-Technik und Informationsangebote auf der Grundlage von digitalen Zwillingen. Das Konzept der Informationszirkularitäts-Assistenz ergibt sich aus der Verknüpfung dieser Bedarfe und Angebote. Es erweitert die Szenario-Technik über traditionelle expertenbasierte Methoden hinaus. 
Drittens erweitern wir bestehende Methoden des Digitalen Zwillings, die in der Zirkularität eingesetzt werden, und diskutieren den Einsatz von Algorithmen der Data Science und der Künstlichen Intelligenz innerhalb des Produktentstehungsprozesses. Unser Ansatz nutzt extreme Daten, um einen strategischen Vorteil bei der Optimierung der Produktlebenszyklusplanung zu erzielen, was durch zwei Beispielanwendungen veranschaulicht wird. Ziel ist es, eine Informationszirkularitäts-Assistenz bereitzustellen, die in Zukunft erfahrene Produktplaner, Entwickler und Entscheidungsträger unterstützt.

1 Introduction and motivation

The development of circular mechatronic products, such as domestic appliances and power tools, requires decisions based on long-term considerations. These considerations should include aspects such as reuse, repurpose, repair, refurbish, remanufacture, and recycling [1]. Established methods for product planning and concurrent engineering have limitations for several reasons. For example, future recycling of products and their materials will no longer represent the “end of life”, but the beginning of the next phase in the product’s life [2], [3]. The quality of recyclate depends on the actual operating conditions such as stress and contamination and is therefore difficult to predict – and difficult to determine [4], [5], [6]. Quantities of available secondary materials are difficult to predict due to varying lifetimes and decommissioning rates, even in cradle-to-cradle models (cf. [7]). Business models and ecosystems of material and product manufacturers as well as recycling companies are volatile, but strategic product planning is needed for portfolio decisions that span decades (e.g. see [8]). During this time lag, separation and recycling technologies are being developed, creating capabilities that are initially unknown at the time of product planning and development [9], [10].

The Scenario-Technique allows the systematic assessment of possible futures [11]. Based on influencing factors such as technology development, an advanced understanding of future opportunities is generated. However, this process requires data [12]. In order to assess the life cycle of a product and its entire context (cf. multi-cycle view in [1]), a vast array of data is required. Digital Twins are powerful tools that enhance decision-making processes by utilizing such extreme data, providing a novel foundation for decision-making in product planning and engineering [13], [14]. Today, Digital Twins facilitate the aggregation and organization of data. In the future, they must be designed in a manner that enables data analysis for new products or product generations.

The fields of Data Science and Artificial Intelligence indicate that the analysis capabilities required for Information Circularity Assistance (ICA) are becoming available. The objective is to shift the paradigm from “we try to use what we have for strategic planning” to “we know what we need and how to ensure we get it in the future”. Thus, the paper combines two research questions:

  1. What gaps need to be addressed to enable the use of Digital Twins in foresighted strategic product planning?

  2. Which Data Science and Artificial Intelligence technologies seem appropriate to analyze relevant data types to support specific requirements in foresighted strategic product planning?

A systematic literature survey is presented in order to answer the first question (see chapter 2). Based on this deep understanding of the research landscape, the synthesis of Scenario-Technique and Digital Twins is proposed in chapter 3. The approach is twofold: Criteria for matching Scenario-Technique demands and Digital Twin offerings are elaborated, and relevant Data Science and Artificial Intelligence technologies are systematically identified. A matrix is presented that correlates types of data and demands derived from the Scenario-Technique. For each match, the relevant Data Science and Artificial Intelligence technologies are identified. To facilitate the discussion of results in chapter 4, two exemplary products are utilized as illustrative examples: a context-aware pill dispenser and a robot vacuum cleaner, both long-lasting, mechatronic, and cyber-physical systems. Conclusions and outlook are presented in chapter 5. The approach is designed to support strategic planners, innovation managers, and engineers in the early phases of product creation to take responsibility for robust future decisions toward circular products.

2 A systematic literature survey

In order to answer the question of gaps that need to be closed to enable the use of Digital Twins in foresighted strategic product planning, the first research step is to systematically analyze the relevant landscape. To this end, a systematic literature analysis is conducted following the PRISMA approach as per [15]. Topic search (TS) terms are applied to the databases of Scopus and Web of Science, with results cross-checked against those returned by Google Scholar. TS1 refers to key questions and indicators for decision making, as detailed in Section 2.1. Here, a focus is set on available approaches that support strategic planning with the intention of designing circular products. TS2 is dedicated to the Scenario-Technique for strategic product planning with a focus on data-based approaches (Section 2.2). Complementary to that, TS3 examines the available approaches on Digital Twin-based analysis of extreme data (Section 2.3). These sections prepare for the identification of relevant Data Science and Artificial Intelligence technologies, which are then elaborated in three distinct categories (Section 2.4).

2.1 Key questions and indicators for decision-making

A wide range of circularity indicators are discussed in the literature [16], [17] and examined for their informative value in individual case studies [18], from enterprise down to product-related indicators referred to as nano-level [17]. Key questions to be answered include:

  1. What proportion of recycled material should the production system be designed for to be able to realize defined product characteristics?

  2. How shall product features be designed to withstand the load spectrum with the available material mix?

  3. How can it be ensured that a product can still be operated in an eco-efficient manner in 20 years?

A German study on the reuse of electric/electronic components in washing machines indicates that a differentiated analysis of indicators is necessary for a meaningful sustainability assessment [19]. Furthermore, the approaches of Dynamic Material Flow Analysis (dMFA) and Life Cycle Assessment (LCA) can be combined (cf. approach by [20]): For a given material mix, how must existing processes and equipment be calibrated to enable manufacturing? What bandwidth must be provided for in the production system? Which process parameters must be adjustable, and in what range? One challenge in this regard is the unpredictable product quality of recycled products [21]. This illustrates the challenges in the very early phases of product creation concerning the product and production system: it has been demonstrated that sufficiently precise forecasting models can only be calculated for a few years [22] or depend on a number of assumptions, including the frequency of reuse [19].
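
The forward calculation behind such dMFA-based forecasts can be sketched in a few lines. The following toy example (all figures, and the choice of a normal lifetime distribution, are assumptions for illustration) estimates the mass of secondary material returned per year from past unit sales:

```python
import math

def secondary_material_forecast(sales, mass_per_unit, mean_life, sd_life, horizon):
    """Forecast the mass of material returned per year from past unit sales,
    assuming a normally distributed product lifetime (a common dMFA choice)."""
    returns = [0.0] * horizon
    for year_sold, units in enumerate(sales):
        for year in range(horizon):
            age = year - year_sold
            if age <= 0:
                continue
            # probability that a unit sold in year_sold is decommissioned at this age
            p = (math.exp(-0.5 * ((age - mean_life) / sd_life) ** 2)
                 / (sd_life * math.sqrt(2 * math.pi)))
            returns[year] += units * mass_per_unit * p
    return returns

# Hypothetical figures: five years of unit sales, 2 kg of plastic per appliance
sales = [1000, 1200, 1100, 1300, 1250]
flows = secondary_material_forecast(sales, mass_per_unit=2.0,
                                    mean_life=8, sd_life=2, horizon=15)
print([round(f, 1) for f in flows])
```

Even this simple model makes the core difficulty visible: the forecast quality hinges entirely on the assumed lifetime distribution, which in practice varies with operating conditions and reuse frequency.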

A systematic literature analysis based on PRISMA [15] led to 37 results (Scopus 18, Web of Science 19), which were reduced to 7 articles that deal more specifically with the use of quantitative or qualitative data. There are no approaches or statements on how to combine quantitative data to make qualitative long-term statements. An additional analysis of the literature confirms that the few approaches addressing the early phases start from business models or product concepts but then provide purely qualitative statements. Reviews found in this analysis support that quantitative circularity metrics are currently primarily used in the late phases of development [23] and are inadequate for fully representing the complexity of the Circular Economy [24]. One example is the Circularity Evaluation Tool (CCET) [25]: The circularity potential is evaluated subjectively based on parameters that in turn have been abstracted from design guidelines [26]. The assessment is based on product types and does not take into account any available data on circularity indicators. The analysis of circular trends is initiated as the first phase in the Strategic Planning Decision Framework oriented to Circular Business Models (SPDF-CBM) [27], yet it is not subject to a systematic elaboration. Models for evaluation, which are refined through qualitative statements, reduce forecasting errors, but assume valid data and only allow statements about events in the near future [28], [29]. Moreover, only positive scenarios are considered, and the quality of recyclate is not taken into account [30], [31]. Based on the conducted literature review, no approach combines available data for quantifiable short-term statements with qualitative assessment schemes for long-term forecasts.

2.2 Scenario-Technique for strategic product planning

The Scenario-Technique supports the systematic use of trends in the multi-criteria evaluation of strategic options. Decisions in favor of Circular Economy business models require such considerations with continuous updates, calling for agile Scenario-Technique approaches [11]. Algorithms like cross-impact analysis [32], consistency assessment [33], and Intuitive Logics (e.g. [34]) are applied, only partially including quantitative probabilities. In consistency-based Scenario-Technique, the following artifacts are generated and correlated:

  1. Influencing factors: Identification and selection of relevant factors, e. g., in relation to product type, application domain/market, technologies, and material circularity

  2. Influence matrix with the active and passive sum of interactions [35] (e. g., in the system grid [33], possibly with time reference/dynamics [36], or impulsivity [11])

  3. Key factors as a subset of all influencing factors (e. g. [37])

  4. Projections of “multiple futures” [11], [38]: Access to expert knowledge and distributed data sources from statistical databases with an evaluation of the respective data quality

  5. Consistency assessment with numerical scales (see, e. g. [39]) and consistency matrix

  6. Raw scenarios, selected according to diversity, stability, and relevance [36], [40]: technical evaluation of projections, which must be explainable

  7. Scenarios with consequences that are generated in a comprehensible and credible manner with uncertainty to support decisions in terms of lead strategy and alternative strategies [11].

Algorithms partially supported by Scenario-Technique tools and data-based concepts like the Integrated Scenario-Data Model (ISDM) presented in [12] are used to generate these artifacts, for example, to evaluate indirect influences, to include influence strengths or to take input-output relationships into account [40]. Branch-and-bound [41] or evolutionary algorithms are used in scenario building [42]. Fuzzy approaches exist for scenario building as well as for consistency and influence assessment (like [43]). A considerable effort is caused by consistency assessment [39], [44], [45]. It depends to a large extent on personal experience in the application and is critical with regard to the quality of the scenario results [45], [46]. Here, initial studies show potential in the application of Data Science and Artificial Intelligence in consistency-based Scenario-Technique [47] and in scenario-based foresight in general [48]. Examples are described both in the form of data-based approaches with neural networks and through integration into a higher-level knowledge base [39]. Web mining and text mining are also used in the preparatory steps of the Scenario-Technique to identify and aggregate relevant topics and derive influencing factors and projections [49]. All selection rules used are heuristic, require expert knowledge to use, and have a critical effect on the quality of the resulting scenarios. Within the individual process models, there are breaks between the artifacts [46], [50]. The resulting loss of information makes the interdependencies difficult for the user to understand [46], [50], [51]. Only experts can estimate causalities based on implicit knowledge. In terms of procedure and artifacts (see list above), the Scenario-Technique offers a basis for generating circularity-related long-term statements through forecasts of the consequences of strategic decisions.
However, to date, there is no approach for deriving projections from data-based short-term statements and relating these to qualitatively described long-term forecasts in terms of influence and consistency.
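
To make the role of the consistency assessment (artifacts 5 and 6 above) concrete, the following toy sketch enumerates projection bundles and ranks them by summed pairwise consistency. The factor names, the 1-to-5 scale, and all matrix values are invented for illustration:

```python
from itertools import product

# Hypothetical key factors, each with two projections (index 0 or 1)
factors = ["recyclate quality", "take-back rate", "separation technology"]

# Pairwise consistency values (1 = total inconsistency ... 5 = strong mutual
# support), indexed by (factor_i, projection_i, factor_j, projection_j)
consistency = {
    (0, 0, 1, 0): 5, (0, 0, 1, 1): 2, (0, 1, 1, 0): 1, (0, 1, 1, 1): 4,
    (0, 0, 2, 0): 4, (0, 0, 2, 1): 2, (0, 1, 2, 0): 2, (0, 1, 2, 1): 5,
    (1, 0, 2, 0): 4, (1, 0, 2, 1): 3, (1, 1, 2, 0): 2, (1, 1, 2, 1): 4,
}

def bundle_consistency(bundle):
    """Sum pairwise consistency over all factor pairs of one projection bundle."""
    total = 0
    for i in range(len(bundle)):
        for j in range(i + 1, len(bundle)):
            total += consistency[(i, bundle[i], j, bundle[j])]
    return total

# Exhaustive enumeration of all projection bundles
bundles = sorted(product([0, 1], repeat=len(factors)),
                 key=bundle_consistency, reverse=True)
for b in bundles[:2]:  # the two most consistent raw scenarios
    print(b, bundle_consistency(b))
```

For realistic numbers of key factors, the exhaustive enumeration is replaced by branch-and-bound [41] or evolutionary algorithms [42], and the selection of raw scenarios additionally considers diversity, stability, and relevance.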

2.3 Digital Twin-based analysis of extreme data

In order to derive these projections and relate them to long-term statements, it is necessary to map extreme data that occurs during the product life cycle and to create the corresponding data layers. Digital Twins are often used to map data for an asset (product, production system, etc.) with the aim of providing a digital representation of the asset [52], [53]. The basis of Digital Twins are models and their relationships. Beyond a static image of the asset, which must be synchronized with reality, three further properties must be included according to [54], [55]:

  1. The models and their relationships are synchronized with the asset so that the static image of the asset is always replicated.

  2. There must be active data acquisition from the asset to the Digital Twin so that the dynamic processes are also captured.

  3. The existence of an executable model (i. e., a simulation model) is necessary to expand the static image with the replication of the dynamic behavior of the asset.
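
A minimal sketch of how these three properties might appear in code; the class, attribute names, and the trivial wear model are hypothetical and not taken from the cited works:

```python
import time

class DigitalTwin:
    """Toy Digital Twin combining a synchronized static model (property 1),
    active data acquisition (property 2) and an executable model (property 3)."""

    def __init__(self, asset_id, static_model):
        self.asset_id = asset_id
        self.static_model = dict(static_model)  # e.g. geometry, material, config
        self.history = []                       # acquired operating data

    def synchronize(self, changes):
        """Property 1: keep the static image in step with the real asset."""
        self.static_model.update(changes)

    def acquire(self, sample):
        """Property 2: active data acquisition from the asset."""
        self.history.append({"t": time.time(), **sample})

    def simulate_wear(self, load, hours):
        """Property 3: executable model - here a trivial linear wear estimate."""
        wear_rate = self.static_model.get("wear_per_load_hour", 0.001)
        return load * hours * wear_rate

twin = DigitalTwin("vacuum-robot-001",
                   {"material": "recycled ABS", "wear_per_load_hour": 0.002})
twin.acquire({"motor_current_a": 1.4})
twin.synchronize({"firmware": "2.1"})
print(twin.simulate_wear(load=0.8, hours=100))
```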

The Digital Twin has already been investigated and further developed in preliminary works as a basis for the application of Data Science and Artificial Intelligence [56], [57]. A Digital Twin, as described above, has a data acquisition interface that is responsible for acquiring the data from often heterogeneous sources. A workflow for the acquisition of data from heterogeneous data sources was proposed in [58] and is illustrated in Figure 1. First, communication must be established as the basis for data collection. Then, the data must be integrated into an appropriate software environment to perform the following steps. The data is validated, outliers can be detected, and the dimensionality of the data can be reduced, depending on the application and the amount of data. Finally, the data is semantically annotated to ensure consistency and the ability to align data from heterogeneous data sources.

Figure 1: 
Exemplary workflow for the data acquisition from heterogeneous data sources, following [58].
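
A toy version of the workflow of Figure 1, with the field names, the outlier threshold, and the annotation vocabulary as assumptions for illustration:

```python
import statistics

def acquire(raw_records):
    """Toy data-acquisition pipeline in the spirit of [58]: validate,
    detect outliers, reduce dimensionality, then annotate semantically."""
    # 1. Validation: drop records with missing temperature readings
    valid = [r for r in raw_records if r.get("temp_c") is not None]

    # 2. Outlier detection: discard readings beyond 3 standard deviations
    temps = [r["temp_c"] for r in valid]
    mu, sd = statistics.mean(temps), statistics.pstdev(temps)
    kept = [r for r in valid if sd == 0 or abs(r["temp_c"] - mu) <= 3 * sd]

    # 3. Dimensionality reduction: keep only the fields needed downstream
    reduced = [{"temp_c": r["temp_c"]} for r in kept]

    # 4. Semantic annotation: attach unit and quantity kind (hypothetical vocabulary)
    return [{"value": r["temp_c"], "unit": "degC",
             "quantity": "Temperature"} for r in reduced]

records = [{"temp_c": 21.5, "noise": 3}, {"temp_c": None}, {"temp_c": 22.0, "noise": 1}]
print(acquire(records))
```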

However, the existing definitions and applications of the Digital Twin mainly target the production system or the product during design or production [59], [60], so that often only one phase of the product life cycle is considered. Yet a lot of data is generated for a product during its life cycle, e.g., during manufacturing, usage, condition monitoring, or recycling. The Digital Twin shall allow access to data throughout the life cycle, as described in the WiGeP position paper for the Digital Twin [61]. To enable long-term statements, the Digital Twin of a product must be able to represent this product over its whole life cycle. According to [62], a reliable Digital Twin must evolve with the changing characteristics of reality. These changing characteristics are often unknown when the Digital Twin is initially set up and can also include technological leaps (for example, concerning materials, material-process combinations, automated treatment, condition testing for recycled materials, and simulation models). A literature review [63] concluded that the integration of heterogeneous data structures in Digital Twins will be both a great opportunity and a major challenge in the future.

2.4 Identification of relevant data science and AI approaches

As previously stated, a variety of data is generated throughout the life cycle of a product, resulting in extreme data. Different Data Science and Artificial Intelligence methods exist to capture this data and information and to gain knowledge from it. The objective of this section is to describe potential technologies that may be applied in conjunction with Digital Twins to derive meaningful insights from the data for the Scenario-Technique. This chapter aims to provide an overview of technologies rather than a comprehensive list of approaches. Further possibilities for enhancing the Digital Twin with Machine Learning algorithms are discussed in several publications, such as [64].

Knowledge Graph components will be used with extreme data and the Digital Twin. To derive insights from the Knowledge Graph, Graph Neural Networks or graph embedding algorithms for data analysis are used according to [65], [66]. By incorporating a Knowledge Graph, relationships within extreme data can be mapped, leading to Linked Data. However, querying a Knowledge Graph becomes complex for large graphs. Therefore, queries based on Natural Language Processing (NLP) can help to reduce this complexity. Large Language Models (LLMs) such as GPT-4 [67] or BERT [68] can understand natural language queries. A domain-specific customization is required to further translate this natural language into precise queries to the Knowledge Graph, enabling efficient exploration and querying. In [69], the application of LLMs for the automation domain and the interaction of these models with graphs and ontologies have already been investigated. This related work shall serve as a basis to enable natural language queries by the users to improve interaction for the extreme Scenario-Technique.
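
The following sketch illustrates the principle with a hand-written pattern matcher standing in for the LLM-based translation step; the triples, node names, and question template are invented for illustration:

```python
# Toy Knowledge Graph as a set of (subject, predicate, object) triples
triples = {
    ("WashingMachine-A", "contains", "Motor-17"),
    ("Motor-17", "madeOf", "RecycledCopper"),
    ("WashingMachine-A", "madeOf", "RecycledABS"),
    ("Motor-17", "condition", "reusable"),
}

def query(subject=None, predicate=None, obj=None):
    """Triple-pattern query: None acts as a wildcard (SPARQL-style matching)."""
    return [(s, p, o) for (s, p, o) in triples
            if subject in (None, s) and predicate in (None, p) and obj in (None, o)]

def answer(question):
    """Stand-in for an LLM translating natural language into a triple pattern
    - here a hard-wired rule for questions of the form 'What is X made of?'."""
    words = question.rstrip("?").split()
    if words[:2] == ["What", "is"] and words[-2:] == ["made", "of"]:
        return [o for (_, _, o) in query(subject=words[2], predicate="madeOf")]
    return []

print(answer("What is Motor-17 made of?"))  # -> ['RecycledCopper']
```

In the envisaged ICA, the hard-wired rule would be replaced by a domain-customized LLM that emits graph queries, while the wildcard matching corresponds to the query interface of the Knowledge Graph store.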

Table 1 gives an overview of the mentioned AI approaches, which are considered suitable for use within the Information Circularity Assistance to provide a consistent information view across the life cycle of a product and to make this information available to the Scenario-Technique.

Table 1:

Overview of AI approaches for the usage within the Information Circularity Assistance.

Source AI approach Aspects of usage within the Information Circularity Assistance Analyzable data type/structure Technical information (framework used, etc.)
[65] Knowledge Graph enhancing the Digital Twin Structuring extreme data of the Digital Twin in a Knowledge Graph Open, data needs to be integrated into the graph structure Often used tools for Knowledge Graph creation: Neo4j, Ontotext
[66] Knowledge Graph embeddings Analyzing Graph Data and retrieving additional information from it Graph Data Libraries: AmpliGraph, PyKEEN, TorchKGE, Pykg2vec
[69] Large Language Model Natural language interface (human-machine interface) Textual Data Hugging Face: high- or low-level interface available, AllenNLP library, Fairseq (by Facebook AI Research), OpenAI playground
[70] Graph Neural Networks Provide predictive information based on available graph (and historical information) Graph Data Realized with typical neural network frameworks (e.g. PyTorch or Keras)

A Knowledge Graph is constituted by a knowledge base, which can be realized by an ontology [71]. A number of studies have concentrated on the construction of appropriate ontologies for the product life cycle. These offer the potential for the sharing and reuse of collected information in a uniform semantic description, integrating complex expert knowledge, and applying automatic reasoning [72]. In [73], the ontology is divided into several key entities for this purpose, including product, process, material, and properties. For example, the recycling of materials in products can also be modeled by this approach. In [74], an ontology was created with a focus on Additive Manufacturing. Its objective is to cover the entire product life cycle, thereby providing the opportunity to support the design of new products using the available data. In contrast, [75] presents an ontology-based End-of-Life model that considers the possible reuse of individual product components. This could, for example, result in the products being offered again as refurbished ones. Table 2 provides a summary of the discussed ontologies. Starting from those ontologies, which already exist for different areas of product circularity, a new ICA base ontology shall be created.
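
How such an ontology supports automatic reasoning can be hinted at with a toy class hierarchy in the spirit of the key entities of [73]; all class and instance names below are hypothetical:

```python
# Toy ontology: subclass hierarchy over the key entities product/process/material
subclass_of = {
    "RecycledMaterial": "Material",
    "Material": "Thing",
    "Product": "Thing",
    "Process": "Thing",
}

# Assertional facts linking hypothetical instances to the ontology
facts = [
    ("PillDispenser", "instanceOf", "Product"),
    ("RecycledPET", "instanceOf", "RecycledMaterial"),
    ("PillDispenser", "hasMaterial", "RecycledPET"),
    ("InjectionMoulding", "instanceOf", "Process"),
    ("InjectionMoulding", "produces", "PillDispenser"),
]

def classes_of(instance):
    """Toy automatic reasoning: resolve the full class hierarchy of an instance."""
    direct = [c for (s, p, c) in facts if s == instance and p == "instanceOf"]
    result = []
    for cls in direct:
        while cls is not None:
            result.append(cls)
            cls = subclass_of.get(cls)
    return result

# RecycledPET is inferred to be a Material via the subclass hierarchy
print(classes_of("RecycledPET"))  # -> ['RecycledMaterial', 'Material', 'Thing']
```

Production-grade ontologies express the same idea in OWL (e.g. edited in Protégé and checked by reasoners such as Pellet or HermiT, as listed in Table 2), where the reasoning is far more expressive than this subclass walk.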

Table 2:

Overview of exemplary ontologies in the field of product life cycle for mechatronic products.

Source Ontology name Aspects of product circularity covered Product/asset/domain realized for Technical information (framework used, etc.)
[66] “Basic ontology for cyber-physical production systems” Describing a production system with a focus on reconfigurability Manufacturing domain Ontology Design Patterns (ODP) created based on industry standards or existing ODPs re-used

SW tool: Protégé
[73] “Ontology for products and processes” Sustainability in manufacturing LCA of a bicycle SW tool: Protégé

Reasoner: pellet
[74] Additive Manufacturing Ontology (AMO) PLC for Additive Manufacturing Dentistry product manufacturing with PLC data handling SW tool: Protégé
[75] Ontology-based model of EoL product Remanufacturing Disassembly process for retrieving the target components of a worm gearbox SW tool: Protégé

Reasoner: HermiT OWL reasoner

Ontology learning approaches shall be used to further enhance these ontologies with the occurrence of extreme data, which is accessible in unstructured or semi-structured formats. As stated in [76], a variety of ontology learning approaches may be employed depending on the type of data to learn the correlations for the ontology. This may entail either the creation of a new ontology from scratch or the refinement of an existing one. For instance, linguistic approaches that employ natural language processing [77], [78], [79], [80] are frequently utilized for unstructured data to facilitate pre-processing, term/concept extraction, or relation extraction. Statistical methods (for example, contrastive analysis or hierarchical clustering) [81], [82] are frequently employed for semi-structured data to extract terms, concepts, and relations within the data set. Inductive logic programming [83], [84] can be utilized as a third learning method for axiom formation. To integrate the extreme data over the product life cycle, existing ontologies need to be refined during the life cycle, since not all properties and relationships can be modeled at the beginning. Therefore, linguistic and statistical ontology learning methods shall adapt the ontologies according to the occurring extreme data. The introduced ontology learning approaches and the concrete examples are grouped and listed in Table 3.
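
The statistical route can be illustrated with a naive single-link agglomerative clustering of invented term vectors (e.g. document co-occurrence counts); the merge order then suggests a taxonomy candidate:

```python
import math

# Hypothetical term vectors (e.g. co-occurrence counts over four documents)
terms = {
    "motor":   [5, 1, 0, 0],
    "gearbox": [4, 2, 0, 0],
    "ABS":     [0, 0, 6, 1],
    "PET":     [0, 1, 5, 2],
}

def cosine(a, b):
    """Cosine similarity between two term vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def agglomerate(vectors):
    """Naive single-link agglomerative clustering: repeatedly merge the two
    most similar clusters; the merge order yields a taxonomy candidate."""
    clusters = [[t] for t in vectors]
    merges = []
    while len(clusters) > 1:
        i, j = max(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ab: max(cosine(vectors[x], vectors[y])
                               for x in clusters[ab[0]] for y in clusters[ab[1]]))
        merges.append((clusters[i], clusters[j]))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] \
                   + [clusters[i] + clusters[j]]
    return merges

for left, right in agglomerate(terms):
    print(left, "+", right)
```

With these invented vectors, the mechanical terms and the material terms are grouped first, mirroring how taxonomy learning approaches such as [82] derive candidate class hierarchies from semi-structured data.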

Table 3:

Overview of ontology learning approaches to enhance existing ontologies.

Source Learning approach Approach group Suitable product circularity data Technical information (SW tool, algorithm, etc.)
[77] Auto-generated concept maps from aforementioned contents Linguistic approach Textual Data, e.g.: maintenance information, textual description of properties Text extraction: Apache POI API

NLP operations: Stanford CoreNLP

Visualisation: IHMC CMap tool
[78] Hybrid approach to discover lexico-semantic relations and limit to the identification of hypernym/hyponym relations Linguistic approach Textual Data, e.g.: maintenance information, textual description of properties Text extraction: Apache Tika

Sentence detection: OpenNLP

NLP operations: Stanford CoreNLP API

Serial data store: GATE

Feature extraction: JAPE, VSMs
[79] Single neural network approach to translate natural language definition into descriptive logic Linguistic approach Textual Data, e.g.: maintenance information, textual description of properties Single neural network via a Seq2Seq model (or recurrent encoder-decoder scheme): pointer network
[81] Domain ontology learning enhanced by optimized relation instances Statistical methods Extract taxonomies from semi-structured data Similarity measure: cosine-similarity

Learning method: hypertext-induced topic search with K-means based clustering
[82] Expressive taxonomy learning Statistical methods Extract taxonomies from semi-structured data Clustering: hierarchical clustering

Grouping entities: Axiom mining method

Taxonomy learning: injection of axioms into the clustering tree
[83] Inductive logic programming-based method that induces rules for extracting instances of various entity classes Inductive logic programming Extract entities (ontology instances) from text data Identifying candidates: domain-independent linguistic patterns

Similarity measure: WordNet semantic similarity measure
[84] Logic-based computational method for the automated induction of fuzzy ontology axioms Inductive logic programming Extract ontology axioms from text data Retrieval engine: SoftFACTs

Algorithm: SoftFOIL (based on FOIL)
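The statistical route from Table 3 can be illustrated with a minimal sketch: candidate terms are extracted from semi-structured maintenance records, represented as document co-occurrence vectors, and grouped by cosine similarity. The record contents, the term-extraction heuristic, and the similarity threshold are all illustrative assumptions, not part of the cited approaches.

```python
# Minimal sketch of statistical ontology learning: group terms from
# maintenance records by co-occurrence similarity. All inputs are invented.
from math import sqrt

records = [
    "bearing wear detected during gearbox inspection",
    "gearbox oil leak, bearing replaced",
    "battery voltage low, charger fault",
    "charger fault after battery swap",
]

# 1. Term extraction: here simply all words longer than 4 characters.
vocab = sorted({w for r in records for w in r.replace(",", "").split() if len(w) > 4})

# 2. Term-by-document occurrence vector.
def vector(term):
    return [1 if term in r else 0 for r in records]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# 3. Greedy grouping: a term joins an existing cluster if similar enough.
clusters: list[list[str]] = []
for term in vocab:
    for cluster in clusters:
        if cosine(vector(term), vector(cluster[0])) >= 0.5:
            cluster.append(term)
            break
    else:
        clusters.append([term])

for c in clusters:
    print(c)
```

A real ontology-learning pipeline would replace the word-length heuristic with proper term extraction and the greedy pass with hierarchical clustering, but the grouping principle is the same.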

In general, the impact of circularity (for example, quantities or quality of recyclate) must be addressed in order to enable the implementation of hybrid decision support. To employ Data Science and AI technologies, a combined modeling of the data is essential. The data pertaining to Digital Twins and Knowledge Graphs must be integrated and semantically enhanced through the utilization of ontologies. Existing ontologies for specific phases of the product life cycle can be aligned for this purpose and subsequently refined and augmented through ontology learning processes. This modeling of the extreme data serves to reinforce the Scenario-Technique.

The Digital Twin is founded upon a comprehensive set of data that encompasses the entirety of the product life cycle. This data set includes a variety of indicators that facilitate reuse, repurposing, repair, refurbishment, remanufacturing, and recycling. It may comprise both real data and synthetic data pertaining to the product’s life cycle. The synthetic data cloud is generated based on future scenarios, which in turn build on the analysis and extrapolation of historical data. As previously discussed, a comprehensive relevance analysis and prediction synthesis is essential, using Data Science and Artificial Intelligence-based algorithms.
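How synthetic life-cycle data points could be derived from historical observations can be sketched as follows; the recyclate-share series, the linear trend model, and the ±10 % scenario spread are simplifying assumptions for illustration only.

```python
# Hedged sketch: synthetic data points extrapolated from a historical
# recyclate-share series under three assumed scenario factors.
years = [2020, 2021, 2022, 2023]
recyclate_share = [0.12, 0.15, 0.17, 0.21]  # invented historical values

# Fit a simple linear trend via the closed-form least-squares solution.
n = len(years)
mx = sum(years) / n
my = sum(recyclate_share) / n
num = sum((x - mx) * (y - my) for x, y in zip(years, recyclate_share))
den = sum((x - mx) ** 2 for x in years)
slope = num / den
intercept = my - slope * mx

def project(year, factor=1.0):
    """Trend projection, scaled by a scenario factor (e.g. 0.9 pessimistic)."""
    return (intercept + slope * year) * factor

synthetic = {
    "baseline":    [round(project(y), 3) for y in range(2024, 2027)],
    "optimistic":  [round(project(y, 1.1), 3) for y in range(2024, 2027)],
    "pessimistic": [round(project(y, 0.9), 3) for y in range(2024, 2027)],
}
print(synthetic)
```

In the envisioned Information Circularity Assistance, this naive extrapolation would be replaced by scenario-driven Data Science models, but the role of the synthetic data cloud is the same: plausible future data points per scenario.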

Concurrently, the Scenario-Technique is to be developed into an advanced methodology, designated the Extreme Scenario Technique. This evolution incorporates insights derived from the Digital Twin analysis and projection synthesis, which is necessary to ensure the effective delivery of pertinent information. The Extreme Scenario Technique is specifically designed to address decision scenarios related to circular products and production systems. Following the Data-Information-Knowledge-Wisdom (DIKW) pyramid, the wealth of extreme data within the product life cycle is extended to facilitate actions in product planning and requirements engineering for future products, in line with the principles of the V-model.

3 Combination of Scenario-Technique and Digital Twin

While existing work in the field of the Digital Twin has made significant progress, the handling of extreme data and the integration of data layers for the Scenario-Technique within the Digital Twin remain open challenges. The combination of the Scenario-Technique and the Digital Twin shows promise in addressing the challenges identified in the literature. The following two sections present the foundational concepts derived from the two existing fields, namely Scenario-Technique and Digital Twin, as illustrated in Figure 2. These concepts were previously discussed in chapter 2.

Figure 2: 
Overview of existing fields in Digital Twin and Scenario-Technique.

The Digital Twin with its background and existing works has the potential to acquire, handle, and process extreme data for different applications. The Scenario-Technique brings established tools, procedures that facilitate agility, and an integrated scenario data model. The criteria from the literature identified in chapter 2 are aligned with the demands of the Scenario-Technique and the offerings of the Digital Twin (Section 3.1). As illustrated in Figure 2, it is postulated that the two fields influence each other and necessitate adjustments in the other field, such as

  1. Digital Twins offer opportunities that shall cover demands within the Scenario-Technique. By matching offerings and demands, value is given to Digital Twin data and efforts are reduced in the Scenario-Technique. Matching is prepared in Section 3.1 by deriving criteria from both the demand and offering sides, analyzing the semantics of both data domains.

  2. Extending this matching from Scenario-Technique to Digital Twins, extreme data shall become interpretable through Digital Twin designs. The interconnection of Digital Twins with extreme data is described in Section 3.2. On that basis, the inference of needs for future Digital Twin designs to improve and realize an Information Circularity Assistance in the future becomes feasible.

Criteria are identified to evaluate existing Data Science and Artificial Intelligence technologies in Section 3.1. In Section 3.2, the categorization from Section 2.4 is adopted to prioritize relevant technologies as a basis for the identification of use cases.

3.1 Criteria for matching Scenario-Technique demands and Digital Twin offerings

As previously stated, the core of the Scenario-Technique entails a procedure to systematically derive scenarios from influencing factors. This method requires the availability of extensive data and expert knowledge, as outlined in Section 2.1. Descriptors are derived from influence factors and projections are identified for each descriptor. The resulting scenarios are the product of bundling projections [11]. Throughout the process, analysis and synthesis are performed. For instance, influence factors are checked for interdependencies, and projections are analyzed for consistency. The ISDM [12] was designed to provide comprehensive support for all steps of the process, encompassing both the establishment of connections to pertinent data sources and the management of data within the context of a Scenario-Technique project. These intermediate steps represent the demands that the Scenario-Technique places on the Digital Twin. For information circularity, the complexity of the data demanded to perform the Scenario-Technique increases. These demands can be viewed as the interface between the Scenario-Technique and the Digital Twin, as illustrated in Figure 2. The Digital Twin presents further opportunities for enhancement, which shall enrich the Scenario-Technique. Building upon this foundation, five use cases for Data Science and Artificial Intelligence have been identified, which refer to different Scenario-Technique artifacts identified in Section 2.2. These use cases comprise the columns in Figure 3:

  1. Inference of relevant influence factors: The identification of influence factors requires considerable effort to describe future influences on a product in Circular Economy. In the original Scenario-Technique approach, both elaboration and checking for relevance require experts to contribute their knowledge, typically in workshop settings. Previous approaches that aim to reduce efforts subsume the reuse of influence factors from previous projects or generic sets that are relevant across application domains. Even with these approaches, the analysis of relevance in a specific Scenario-Technique project remains a challenge. Linked Data and domain ontologies in combination with a semantic representation of Information Quality seem promising approaches to support such inference.

  2. Correlation-based prioritization: The selection of influence factors, or descriptors, constitutes a significant foundation for the resulting scenarios. The prioritization needs to ensure that the data processed throughout Scenario-Technique steps is manageable and that the resulting scenarios are comprehensive for decision makers. At the same time, these scenarios must be representative of the entire domain of interest (like product creation and CE, combined with technologies, industries, markets, etc.). Due to the future-oriented perspective, only indications are possible anyhow, so correlation-based AI approaches are applicable.

  3. Projection synthesis: Per descriptor, projections need to be described indicating how this descriptor will evolve in the future. As previously stated in Section 2.1, for CE, a combination of qualitative (e.g., “political obstacles on global trade will {increase, remain, decrease}”) and quantitative projections (e.g., “recyclate material will be available in {2, 5, 10} years”) should be employed wherever feasible. The formulation of such projections necessitates a profound understanding of the pertinent domains, which may vary across Scenario-Technique projects. The synthesis of projections can be achieved through the utilization of databases such as Statista, trend radars accessible in natural language, and simulations or simulation results.

  4. Reasoning on consistency: Assuming projections are available, consistency analysis of such projections requires an interpretation of the semantic interdependencies across all descriptors. For example, recycled material may be technically available in the near term, but economically affected by the evolution of global trade. Consistency analysis must therefore be based on a multi-domain understanding of all influence dimensions. For many domains, such as materials, technologies, or markets, domain ontologies are available as a promising basis also for consistency reasoning in the Scenario-Technique.

  5. Inference of consequences: When bundling projections into scenarios, a final consideration is how useful the resulting scenarios are for decision making. A typical approach is to aim for extreme scenarios, such as “best” and “worst” scenarios, to cover the full range of possible futures. This is a simplification compared to the actual objective of this step: The creation of actionable scenarios, i.e., scenarios that support decisions for or against solution alternatives. Therefore, understanding the consequences that may be implied by certain projections and their combinations should be considered. Inferences based on correlations in the sense of “what-if” tasks can help in the final selection of scenarios (i.e., projection bundles).
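The interplay of steps (3) to (5) above can be illustrated with a minimal sketch: projections per descriptor, a pairwise consistency rating, and exhaustive bundling into candidate scenarios. The two descriptors, their projections, and all consistency values are invented for illustration and carry no empirical meaning.

```python
# Toy consistency analysis and projection bundling as used in the
# Scenario-Technique; all descriptors and ratings are assumptions.
from itertools import product, combinations

projections = {
    "global trade obstacles": ["increase", "decrease"],
    "recyclate availability": ["in 2 years", "in 10 years"],
}

# Consistency on a 1 (inconsistent) .. 5 (highly consistent) scale.
consistency = {
    ("increase", "in 2 years"): 2,
    ("increase", "in 10 years"): 5,
    ("decrease", "in 2 years"): 5,
    ("decrease", "in 10 years"): 2,
}

def score(bundle):
    """Sum of pairwise consistency; bundles with any rating < 3 are rejected."""
    ratings = [consistency[p] for p in combinations(bundle, 2)]
    return sum(ratings) if min(ratings) >= 3 else None

scenarios = []
for bundle in product(*projections.values()):
    s = score(bundle)
    if s is not None:
        scenarios.append((s, bundle))

scenarios.sort(reverse=True)  # most consistent projection bundles first
for s, bundle in scenarios:
    print(s, bundle)
```

With many descriptors, the exhaustive `product` enumeration becomes infeasible, which is exactly where the AI-supported reasoning discussed in use cases (4) and (5) comes in.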

Figure 3: 
Steps from the Scenario-Technique and data categories in Digital Twins spanning use cases for extreme data which could be interconnected in the Information Circularity Assistance.

The life cycle of a product generates a huge amount of (extreme) data from different phases, e.g. planning and engineering, realization including production, operation/use, or service (especially maintenance, repair, and overhaul). In the literature, there are different groupings of product life cycle phases, e.g. grouping according to the beginning, middle, and end of life [61] or dividing into design, development, manufacturing, operation, and disposal [85]. The Industrial Digital Twin Association and the Asset Administration Shell also discuss the possibility of incorporating product life cycle data. However, there is currently no clear definition or specification.

All of the proposed groupings lead to different categories of data that must be provided by the Digital Twin to achieve proper information circularity. This paper is based on the gPLC model [1], with phases of planning, engineering, realization including production, operation and service delivery, as well as material and information circularity. Despite all possible variants, it is necessary to provide a categorization of the data, including an exemplary description, in order to subsequently identify relevant Data Science and Artificial Intelligence methods. The categories are named and examples for these data types are given. This provides an exemplary view of the expected extreme data that needs to be captured, managed, and processed within a Digital Twin that interacts with the previously described Scenario-Technique. These data categories form the rows in Figure 3:

  1. Planning Data of various types, such as data related to project timelines, resource allocation, and production schedules from the integration with planning software: These help optimize plans for sustainability and compliance. Digital Twins can also maintain a digital record of quality assurance processes, compliance documentation, and certifications earned by the product or its components. Such a centralized repository ensures traceability and facilitates audits or regulatory reporting. In addition, specifications for the new product, including sustainability requirements (e.g. percentage of recycled materials) can be used. This includes documents such as the system and component specification or the recycling rate.

  2. Engineering Data: The Digital Twin can be integrated with various product development tools such as computer-aided design (CAD) software, product life cycle management (PLM) systems, and simulation tools. By connecting to these systems, it can access detailed engineering data such as design specifications, prototypes, simulation results, and design changes throughout the product development process. These models are part of the digital asset management component of the Digital Twin. For example, simulation models can provide insight into the environmental impact of different design choices, manufacturing processes, and supply chain decisions. Digital Twins can also perform virtual life cycle assessments to evaluate the environmental impact of the product from raw material extraction to end-of-life disposal. By integrating LCA methodologies into the Digital Twin framework, factors such as carbon footprint, water use, and ecological footprint can be quantified.

  3. Realization/Production Data: Data generated during the realization and production of such a system. Examples include supply chain data, manufacturing process data, or quality metrics. Digital Twins can provide real-time visibility into the entire supply chain network, including suppliers, manufacturers, distributors, and retailers. By integrating with enterprise resource planning (ERP) systems, Digital Twins can access data about supplier performance, inventory levels, lead times, and production schedules. This visibility enables organizations to identify bottlenecks, optimize inventory levels, and mitigate supply chain risks. Digital Twins can also help to ensure regulatory compliance throughout the production life cycle by maintaining comprehensive records of production processes, materials used, and environmental impact. By integrating with regulatory databases and compliance management systems, Digital Twins can automate compliance monitoring, generate audit reports, and ensure adherence to regulatory requirements.

  4. Operation Data: Gathering data from the operational phase is often difficult, resulting in sparse and therefore extreme data. Digital Twins can monitor product usage in real-time by integrating with embedded sensors and IoT devices. These sensors collect data on operational status, performance metrics, usage patterns, and environmental conditions during use. By analyzing this data, Digital Twins provide insights into how the product is being used, including usage patterns, frequency of use, and operational efficiency. In addition, Digital Twins can capture feedback and customer reviews throughout the product life cycle, including user interactions, service requests, and product reviews. By aggregating and analyzing this feedback, organizations gain valuable insights into customer preferences, pain points, and satisfaction levels. As a result, they can improve product usability, address customer concerns, and increase overall customer satisfaction.

  5. Circularity Data: Additional data occurs at the end of a product’s life when the product or parts of it may be reused, remanufactured, refurbished, etc. Circularity data can include life cycle data by collecting data on the environmental impact of the product throughout its life cycle. A Digital Twin can facilitate the final life cycle assessment. This includes data on energy consumption, greenhouse gas emissions, water use, and waste generation. By analyzing this data, organizations can identify opportunities to improve the product’s environmental performance, reduce resource consumption, and minimize waste generation through Circular Economy practices.
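One possible minimal representation of such phase-specific records inside a Digital Twin is sketched below. The class and field names are illustrative assumptions made for this sketch, not a normative model such as the Asset Administration Shell.

```python
# Hedged sketch: life cycle records grouped by phase in a simple
# Digital Twin container. All identifiers and payloads are invented.
from dataclasses import dataclass, field

@dataclass
class LifeCycleRecord:
    phase: str     # "planning", "engineering", "production", "operation", "circularity"
    source: str    # e.g. "PLM", "ERP", "IoT sensor", "LCA tool"
    payload: dict = field(default_factory=dict)

@dataclass
class DigitalTwin:
    asset_id: str
    records: list = field(default_factory=list)

    def add(self, record: LifeCycleRecord):
        self.records.append(record)

    def by_phase(self, phase: str):
        """Return all records of one life cycle phase."""
        return [r for r in self.records if r.phase == phase]

twin = DigitalTwin("device-0001")
twin.add(LifeCycleRecord("planning", "PLM", {"recycled_material_target": 0.3}))
twin.add(LifeCycleRecord("operation", "IoT sensor", {"cell_temperature_C": 23.4}))
print(len(twin.by_phase("operation")))  # -> 1
```

The point of the sketch is the phase-indexed access: each Scenario-Technique use case can query exactly the data category it needs, as laid out in the rows of Figure 3.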

In conclusion, a huge amount of (extreme) data is generated during the product life cycle. Examples of occurring data across identified data groups are described, which will be used in the future to enrich the Scenario-Technique. Based on this overview and a concrete product, it is possible to identify opportunities that can already be realized based on existing data and a Digital Twin. However, it also clearly shows the need for future Digital Twin designs to handle this information circularity and not just cover single data groups. Figure 3 gives an overview of the aspects discussed in this chapter with the described data groups that are stored and modeled in the Digital Twin, the occurring extreme data, and the steps of the adapted new Scenario-Technique. The appropriate Data Science and Artificial Intelligence technologies serve as a kind of interface between the fields, which will be discussed in more detail in the following chapter.

As visualized, both fields (Scenario-Technique and the Digital Twin) need to be linked. Inputs from the Digital Twin improve the Scenario-Technique and then create new “data demands” for the Digital Twin, so there is a loop of adjustment between them. From the other side, the Scenario-Technique, with its adapted steps for the information circularity, also creates new possibilities for the Digital Twin within the Circular Economy. Appropriate data science and Artificial Intelligence technologies act as an interface or as a step to enrich the information transferred between the two fields.

3.2 Interconnecting the Digital Twin with extreme data

This section discusses the identified criteria for matching Scenario-Technique demands and Digital Twin offerings from Section 3.1 by introducing suitable Data Science and Artificial Intelligence technologies from the literature (Section 2.4) to enable the interconnection of the two fields (Scenario-Technique and Digital Twins), as visualized in Figure 3. Depending on the identified use cases within the Scenario-Technique (1–5 from Section 3.1) and the data category (A to E from Section 3.1), different Data Science and Artificial Intelligence technologies are suitable. In addition, different technologies may be suitable for a specific use case across different data groups or vice versa for a specific data group across all use cases. This will be discussed below.

Extensive expert knowledge is required to derive influence factors. Therefore, approaches for the reuse of influence factors are developed. For further automation and enrichment, the information from previous projects or the generic sets across application domains should be modeled in a computer-readable format. Linked Data in general (especially graph-based approaches as discussed above) as well as domain ontologies are approaches to formalize the influence factors and to allow the modelling of complex interactions between them. The prioritization of the influence factors is crucial for the subsequent scenario generation. For this step, correlation-based approaches including mathematical measures are a suitable choice. These approaches can be used to identify groups of correlated influence factors and to remove potentially redundant or uncorrelated factors. Experts can be involved in this step, and the correlation-based approaches can support them by enriching the previously built semantic representation.

Different types of data may be suitable for projection synthesis. Retrieving relevant data from databases can be one step, where natural language processing can help to find matching database entries for selected descriptors. Trend radars may also be analyzed by natural language processing, or by using LLMs for a semantically enriched understanding of the reports and their conversion into an appropriate format to match the information to the descriptors. Another source can be simulation models, where machine learning models can be used to increase the accuracy of the simulation model by adding additional parameters. In addition, machine learning can be used to build surrogate models to reduce processing time, allowing for faster evaluation of the indicators.

During the consistency analysis, the semantic dependencies between the descriptors must be interpreted. Available domain ontologies with appropriate reasoning algorithms are promising approaches.
As a final step in the Scenario-Technique, consequences must be derived. Based on the (consistent) projections and other defined effects, concrete scenarios are derived to support decision making. Machine Learning models such as decision trees can be a helpful technology to support the final selection of scenarios here.

On the other hand, different data is generated throughout the product life cycle. The data changes during the life cycle, leading to different approaches to analyze them. During the “beginning of life” (BoL), in our grouping the planning and engineering phase, the data often comes in the form of linked data/information. Different materials with different properties are linked to a final product with certain properties. This can be modeled within an information model and enriched with semantics, so semantic approaches may be more appropriate in these phases. The “middle of life” (MoL), in our case the data from production and during operation, often generates numerical data in the form of vectors (e.g. data from the production process, production parameters, environmental data during operation). This type of data is suitable for analysis using machine learning techniques, which are designed to analyze numerical data of different types and formats (e.g. image data, time-series data, text data, and parameter values) [86]. At the “end of life” (EoL), circularity data is generated. This data is a mixture of the two groups described above. Products and their composition may generate information that needs to be considered as linked data, leading to semantic approaches such as ontologies. In addition, tests can be performed on the products that generate data that can be analyzed by machine learning models (for example, optical inspection which is analyzed by neural networks to determine the status of the product for reuse).

This evaluation leads to an intersection of approaches, as shown in Table 4. The rows are the retrieved data groups and the columns are the steps of the Scenario-Technique. Combining suitable Data Science and Artificial Intelligence technologies for the different groups, the final table is generated. Within the Machine Learning approaches, Knowledge Graph embeddings and Graph Neural Networks as discussed previously are also included.

Table 4:

Selection of DS/AI technologies suitable for data groups and Scenario-Technique use cases for the Circular Economy.

(1) Inference of relevant influence factors (2) Correlation-based prioritization (3) Projection synthesis (4) Reasoning on consistency (5) Inference of consequences
(A) Planning data LD, O CAI, ML, O LD, O, ML, LLM LD, O LD, O, ML
(B) Engineering data LD, O CAI, ML, O LD, O, ML, LLM LD, O LD, O, ML
(C) Production data LD, O, ML CAI, ML ML, LLM O, ML ML
(D) Operation data LD, O, ML CAI, ML ML, LLM O, ML ML
(E) Circularity data LD, O CAI, ML, O LD, O, ML, LLM LD, O, ML LD, O, ML
  1. Key: LD, Linked Data; O, ontology-based approaches; ML, machine learning; CAI, correlation-based AI and statistics; LLM, Large Language Models.

Based on this initial evaluation and assignment of approaches, some specific cells of the table will be discussed here. In addition, the application of technologies to specific scenarios will be part of the following chapter.

  1. Prioritization & planning data: Within the prioritization step, planning data can be used with Linked Data and ontology techniques. More specifically, the built-in semantics allow, for example, relevance rating or basic graph analytics, such as centrality measures, to provide meaningful insights for the prioritization step. In addition, data availability can be checked against the ontology to identify missing data. This information can either be used to collect that data or to reduce the priority.

  2. Projection synthesis & production data: Projection synthesis requires data access and calculations, for example to assess data quality and make qualitative statements. This can be done by machine learning models based on production data that can, for example, provide probabilities for product performance metrics depending on the materials used and their quality (for example, percentage of recycled materials used). Large Language Models can help to create qualitative statements from the set of predictions generated by a machine learning model.

  3. Consequences & circularity data: In order to draw conclusions, explainability is necessary. Therefore, comparisons and analogies are made, which can be enriched with machine learning based predictions based on the given product life cycle data. Semantic dependencies need to be understood, so Linked Data approaches are suitable to identify issues and effects within the provided data.
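The graph analytics mentioned for the prioritization of planning data (item 1 above) can be illustrated with a minimal degree-centrality sketch over a Linked Data style graph; the planning artefacts and their links are invented for illustration.

```python
# Hedged sketch for cell (A)/(2) of Table 4: planning data as a small
# linked-data graph, with degree centrality as a prioritization signal.
from collections import Counter

edges = [
    ("recycling-rate spec", "component spec"),
    ("recycling-rate spec", "production schedule"),
    ("recycling-rate spec", "compliance doc"),
    ("production schedule", "resource allocation"),
]

# Degree centrality: count how many links touch each node.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

ranked = degree.most_common()
print(ranked[0])  # the most connected planning artefact
```

Highly connected artefacts are candidates for high priority; conversely, nodes for which the ontology expects links that are missing would indicate data gaps, as described above.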

4 Discussion based on sample applications

The concept of Information Circularity Assistance based on Scenario-Technique, Digital Twins and extreme data is derived by conceptually integrating indications primarily available in the literature. In order to discuss the validity and, based on this, the feasibility in a tangible way, two exemplary products are presented: a context aware pill dispenser and a robot vacuum cleaner, both shown in Figure 4. These products are used as case studies in previous work that provides the background for this paper. While the pill dispenser is accompanied by Digital Twin research results [87], the vacuum cleaner brings in results from Scenario-Technique research. This chapter discusses the application of the previously introduced data science and Artificial Intelligence techniques to the products, and shows the preliminary work of the partners, which serves as a baseline for future developments.

Figure 4: 
Two exemplary products used to discuss validity and feasibility: (a) context aware pill dispenser and (b) robot vacuum cleaner.

Approaching the discussion from the data side with Digital Twins, the pill dispenser [88], [89] is designed to assist elderly patients in medication processing. These systems are relatively new on the market, so it is crucial to discuss and identify different scenarios, which can be done using the Scenario-Technique. Two different use cases of the ICA are shown in Figure 5:

  1. Assuming Digital Twins are already in operation, their data can be used to derive future scenarios in strategic planning;

  2. vice versa, the specification of a Digital Twin to be instantiated for each single device should be done in a way that it serves use cases of AI (here: LLM).

Figure 5: 
Practical example of utilizing the ICA in two different use cases: (1) deriving future scenarios by using Digital Twin data in strategic planning, and (2) designing a Digital Twin specification in product engineering.

As an example of these use cases applied to the pill dispenser, many differences among “users” must be considered, especially in the medical domain due to a variety of personal behaviors and physical health states. An LLM can be used to identify relevant scenarios for elderly people where assistance with medication management is required. In the future, this will be done using the Information Circularity Assistance. Existing work can serve as a baseline for handling and preparing the data within a Digital Twin. The Digital Twin covers the current information of the device and handles existing (simulation) models of the device. This enables additional use cases, such as predictive maintenance for the pill dispenser. A possibly faulty dispensing unit can be replaced so that users do not find themselves in a critical situation without access to their medication. In addition, simulations can be run to predict, for example, the temperature in each cell, as this can be a critical point for pills. The Digital Twin can be further enhanced by a context awareness module. In [88], [89], such an approach for context awareness is proposed, including system evaluation and context analysis. This can be extended in the future for more semantic expressiveness, e.g. via the function-behaviour-structure ontology of design [90], which supports the use of data from the planning and engineering phase, as discussed in Table 4. Context analysis includes the environment, the user, and the contextual systems. A context middleware can consume preprocessed data to generate a Knowledge Graph as one realization approach for Linked Data. Graph analytics and Machine Learning models are foreseen in the concept. In [58], this approach is extended by building a more general machine learning approach on top of the context model. This approach could generate relevant information for the Scenario-Technique by handling planning and engineering data in an ontology and via a Linked Data approach.
In addition, operational data is captured, handled, and can be analyzed by graph analytics for further enrichment. The existing work needs to be extended for information circularity and can be further enriched by applying LLMs as discussed above for easier interaction during the Scenario-Technique phase.

Complementing this perspective with use cases in the context of Scenario-Technique, the robot vacuum cleaner is a Cyber-Physical System that is bundled with appropriate infrastructure to streamline communication for configuration, vacuuming dust, and charging its battery. The background in this field is taken from experiments utilizing Data Science and Artificial Intelligence in Requirements Engineering (RE). These experiments were conducted to extract and to classify requirements by learning from user feedback ([91] cf. [92]). The concept includes, among other algorithms, support vector machines and BERT transformer models. It is transferable to the classification of influence factors, shifting the use case from Requirements Engineering to Scenario-Technique. When prioritizing influence factors, a similar approach applies to usage-related factors: In the Requirements Engineering approach, user feedback is classified in terms of functional and non-functional requirements as well as main characteristics according to VDI/VDE 2206. For prioritization, the classifier is configured according to the identified influence factors. Using ML-based Natural Language Processing (NLP) approaches, user feedback for the robot (or beyond, covering household devices in general) can be used to assess the importance for future end users. This provides an indication for prioritization. Additionally, such user feedback can be used as a proxy when circularity data is not available. Based on circular economy and life cycle assessment ontologies, user feedback can be classified. Consequences are indicated using correlation-based Artificial Intelligence and Machine Learning, inferring the correlation between ratings and end-of-life statements.
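The cited experiments use support vector machines and BERT transformer models; as a toy stand-in, the classification principle can be sketched with a minimal bag-of-words Naive Bayes classifier. All training texts and influence-factor labels below are invented and serve only to show how user feedback could be mapped to influence factors.

```python
# Toy sketch: classify user feedback into assumed influence-factor
# classes with a Laplace-smoothed Naive Bayes over bags of words.
from collections import Counter, defaultdict
from math import log

train = [
    ("battery dies after one year", "battery lifetime"),
    ("battery no longer holds charge", "battery lifetime"),
    ("spare brushes are impossible to find", "spare part availability"),
    ("no replacement filter available anywhere", "spare part availability"),
]

word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for text, label in train:
    class_counts[label] += 1
    for w in text.split():
        word_counts[label][w] += 1
        vocab.add(w)

def classify(text):
    best, best_score = None, float("-inf")
    for label in class_counts:
        # log prior + Laplace-smoothed log likelihood per word
        score = log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("battery drains quickly"))  # -> battery lifetime
```

A production setup would of course rely on the transformer models mentioned above; the sketch only illustrates the mapping from free-text feedback to influence-factor classes used for prioritization.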

The previous examples take a more system-oriented view from the product side. In addition, many approaches apply one or more of the aforementioned Data Science and Artificial Intelligence technologies to specific problems within the industrial automation domain, such as applying machine learning to operational data to predict the current state or a possible failure of a device, e.g. [93], [94]. This can help to condense the huge amount of operational data accumulated over a life cycle into meaningful indicator values instead of storing all possible data. Such approaches can be used in the future to enrich the available information and thus also improve the proposed Information Circularity Assistance. Extending the scope to Large Language Models (LLMs): these models, currently a major trend delivering outstanding results in various applications, can help to generate meaningful projection synthesis statements. In a recent work by Xia et al. [69], a novel framework for integrating LLMs with a Digital Twin system is presented. This framework supports informed decisions based on the available data of a Digital Twin for a modular production system. We see this work as a first proof of concept that LLMs can support decision-making based on Digital Twin data. In future work, the idea will be transferred to support decision-making within the Scenario-Technique, especially in the projection synthesis step. Assuming that extreme data can be handled within the Digital Twin by using ontologies and Linked Data technologies, perhaps with some enrichment by machine learning models, the LLM can help to retrieve relevant information from it. This facilitates user interaction tremendously, since the user does not have to worry about complex querying or understanding the huge amount of data: users will interact with the Information Circularity Assistance in natural language through the LLM, which provides the relevant data and information needed for informed decisions.
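A minimal sketch of such an interaction, in the spirit of [69]: a natural-language question is grounded in a Digital Twin snapshot before being passed to an LLM. The twin fields are invented for illustration, and `call_llm` is a placeholder for any chat-completion client, not a real API.

```python
import json

# Invented Digital Twin snapshot for a single component.
twin_snapshot = {
    "component": "battery_pack",
    "charge_cycles": 812,
    "rated_cycles": 1000,
    "last_capacity_check": "83 % of nominal",
}

def build_prompt(question, snapshot):
    """Ground the user's natural-language question in twin data so the
    LLM answers from retrieved facts rather than from memory."""
    return (
        "You are an assistant for circularity decisions.\n"
        "Digital Twin data:\n"
        f"{json.dumps(snapshot, indent=2)}\n\n"
        f"Question: {question}\n"
        "Answer using only the data above."
    )

prompt = build_prompt("Is the battery a candidate for reuse?", twin_snapshot)
# response = call_llm(prompt)  # placeholder: plug in an actual LLM client here
```

The retrieval step (which snapshot to embed) is the part that ontologies and Linked Data queries would supply in the Information Circularity Assistance; the LLM only verbalizes and reasons over the retrieved facts.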

5 Conclusion and outlook

In this paper, we have conducted a systematic literature review to explore key questions and indicators relevant to decision-making in creating circular products. Our contributions aim at extending the Scenario-Technique for handling extreme data in strategic product planning. In addition, we have introduced the use of Digital Twins to generate future data situations with the intention of realizing Information Circularity Assistance in the area of product creation. To meet the data demands of our research, extreme data processing based on Digital Twins was proposed. We identified different categories of Artificial Intelligence algorithms, taking into account the diverse nature of data in the various use cases of the Scenario-Technique and the product life cycle. These categories include graph-based approaches that effectively model relationships within extreme data across the product life cycle. We also introduced an ontology-based approach that extends these graphs to capture nuanced semantics. Machine learning models, commonly used for operational data, play a key role in extracting meaningful insights while reducing data volume and gaining valuable knowledge. Correlation-based approaches and statistical methods are proposed to mathematically describe specific behaviors, which are particularly important for prioritizing potential influence factors. Our research also reveals the promising capabilities of Large Language Models: they provide a natural language interface for users to generate projection syntheses or to query data modelled within a Digital Twin, and show significant potential for extracting insights from textual data. In the future, the presented approaches will be incrementally evaluated to form the basis of the Information Circularity Assistance.
The components outlined here constitute a preliminary concept that will require further refinement to articulate clear interfaces and to address the specific data requirements associated with each component.


Corresponding author: Jens Pottebaum, Paderborn University, Heinz Nixdorf Institute – Chair for Product Creation, Fürstenallee 11, 33102 Paderborn, Germany, E-mail:

About the authors

Iris Graessler

Prof. Dr.-Ing. Iris Graessler studied mechanical engineering at the RWTH Aachen University, where she received her doctorate degree in 1999. In 2003 she qualified as a professor at the RWTH Aachen University due to her habilitation thesis entitled “Development of Configurable Adaptive Mechatronic Systems”. After thirteen years as a manager at Robert Bosch GmbH in the fields of product development, production systems and change management, she has held the Chair for Product Creation at the Heinz Nixdorf Institute of the University of Paderborn since 2013. Her research focuses on strategic product planning, especially Scenario-Technique, resilient requirements engineering, systems engineering as well as digitalization and virtualization. The emphasis lies on adaptive configurable mechatronic systems and cyber-physical systems. Graessler commits herself to the VDI (Association of German Engineers) as a member of the Executive Board of VDI/VDE Society for Measurement and Automation Technology (GMA), acting as Chairwoman of the Advisory Board and of the Department 3 “Digitalization and Virtualization”. Among other roles, she is Member of the Scientific Society for Product Development (WiGeP) and coordinator of the Priority Program SPP 2443 “Hybrid Decision Support in Product Creation” by the German Research Foundation (DFG).

Michael Weyrich

Professor Michael Weyrich is head of the Institute of Industrial Automation and Software Engineering at the University of Stuttgart. He researches and teaches in the area of industrial information technology for automation. His research interests include intelligent automation components, autonomous systems, and automation safety. He studied at the Universities of Saarbrücken, Bochum and London and received his PhD from RWTH Aachen. He holds an honorary doctorate from the Ukrainian University TUDon. Before joining the university, he worked in the multinational industry for 10 years, including 4 years in Asia. He is Chairman of the VDI/VDE Society for Measurement and Automation Technology and a member of the VDE Executive Board. He holds several positions, e.g. head of the VDI/VDE Technical Committee on testing networked systems, director of the Innovation Campus Mobility, industry chair of the International Federation of Control (IFAC) for “Computers for Control”, member of the program committee in the IEEE ETFA conferences as well as member of the editorial board of the journal ATP edition.

Jens Pottebaum

Dr.-Ing. Jens Pottebaum has been Senior Researcher at the chair for Product Creation at the Heinz Nixdorf Institute at Paderborn University since 2015. He studied Mechanical Engineering and Computer Science at Paderborn University. In 2011, he defended his PhD on “Optimization of application oriented learning by knowledge identification” with distinction. His research is focused on information based learning cycles in the context of Product Creation. With his background in both Mechanical Engineering and Computer Science, he researches and teaches on “Digital and Virtual Product Creation”. He plays a coordinating role in national and European research associations such as RepAIR, ANYWHERE, CREXDATA and Decide4ECO as well as the DFG Priority Program SPP 2443 “Hybrid Decision Support in Product Creation”.

Simon Kamm

Simon Kamm, M.Sc., studied Electrical Engineering and Information Technology at the University of Stuttgart. Since 2020, he has been a research assistant at the Institute of Industrial Automation and Software Engineering at the University of Stuttgart. His research focusses on machine learning based on heterogeneous data sources and types.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: Iris Graessler and Michael Weyrich served as principal investigators, leading the conceptualization of the study. Jens Pottebaum and Simon Kamm contributed to the conceptualization, provided writing – original draft, and were responsible for visualization within the study. All authors participated in writing – review & editing.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The authors state no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

References

[1] I. Gräßler and J. Pottebaum, “Generic product lifecycle model: a holistic and adaptable approach for multi-disciplinary product–service systems,” Appl. Sci., vol. 11, no. 10, p. 4516, 2021. https://doi.org/10.3390/app11104516.

[2] R. Aydin and F. Badurdeen, “Sustainable product line design considering a multi-lifecycle approach,” Resour., Conserv. Recycl., vol. 149, pp. 727–737, 2019. https://doi.org/10.1016/j.resconrec.2019.06.014.

[3] R. Schikiera, et al., Nachhaltigkeit in der Industrie: Digitalisierung schafft Transparenz für die Kreislaufwirtschaft 06/2023 [Online], Düsseldorf, 2023. Available at: https://www.vdi-nachrichten.com/shop/nachhaltigkeit-in-der-industrie/ [accessed: Feb. 9, 2024].

[4] E. Iacovidou, A. P. Velenturf, and P. Purnell, “Quality of resources: a typology for supporting transitions towards resource efficiency using the single-use plastic bottle as an example,” Sci. Total Environ., vol. 647, pp. 441–448, 2019. https://doi.org/10.1016/j.scitotenv.2018.07.344.

[5] K. Ishii, C. F. Eubanks, and P. Di Marco, “Design for product retirement and material life-cycle,” Mater. Des., vol. 15, no. 4, pp. 225–233, 1994. https://doi.org/10.1016/0261-3069(94)90007-8.

[6] C. J. Velte, A. Wilfahrt, R. Müller, and R. Steinhilper, “Complexity in a life cycle perspective,” in 24th CIRP Conference on Life Cycle Engineering, 2017, pp. 104–109. https://doi.org/10.1016/j.procir.2016.11.253.

[7] H. Desing, G. Braun, and R. Hischier, “Resource pressure – a circular design method,” Resour., Conserv. Recycl., vol. 164, no. 3, 2021, Art. no. 105179. https://doi.org/10.1016/j.resconrec.2020.105179.

[8] V. Schulze, et al., Update-Factory für ein industrielles Produkt-Update: Ein Beitrag zur Kreislaufwirtschaft, Aachen, Garbsen, Wissenschaftliche Gesellschaft für Produktionstechnik (WGP)/Wissenschaftliche Gesellschaft für Produktentwicklung e.V. (WiGeP), 2021. Available at: https://wgp.de/wp-content/uploads/03_Impulspaper_WBK_2021-09_ES_WEB.pdf [accessed: Sep. 25, 2024].

[9] R. Stark, H. Grosser, B. Beckmann-Dobrev, and S. Kind, “Advanced technologies in life cycle engineering,” Proc. CIRP, vol. 22, pp. 3–14, 2014. https://doi.org/10.1016/j.procir.2014.07.118.

[10] C. J. Velte and R. Steinhilper, “Complexity in a circular economy: a need for rethinking complexity management strategies,” in Proceedings of the World Congress on Engineering, 2016.

[11] I. Gräßler, P. Scholle, and H. Thiele, “Scenario-technique,” in Integrated Design Engineering: Interdisciplinary and Holistic Product Development, S. Vajna, Ed., Basel, Springer International Publishing, 2020, pp. 615–645. https://doi.org/10.1007/978-3-030-19357-7_20.

[12] I. Gräßler, J. Pottebaum, and P. Scholle, “Integrated process and data model for agile strategic planning,” in 11th International Workshop on Integrated Design Engineering, Magdeburg, 2017.

[13] J. Mangers, M. Amne Elahi, and P. Plapper, “Digital twin of end-of-life process-chains for a circular economy adapted product design – a case study on PET bottles,” J. Cleaner Prod., vol. 382, 2023, Art. no. 135287. https://doi.org/10.1016/j.jclepro.2022.135287.

[14] C. Mouflih, R. Gaha, A. Durupt, M. Bosch-Mauchand, K. Martinsen, and B. Eynard, “Decision support framework using knowledge based digital twin for sustainable product development and end of life,” Proc. Des. Soc., vol. 3, pp. 1157–1166, 2023. https://doi.org/10.1017/pds.2023.116.

[15] M. J. Page, et al., “The PRISMA 2020 statement: an updated guideline for reporting systematic reviews,” Syst. Rev., vol. 10, no. 1, p. 89, 2021. https://doi.org/10.1186/s13643-021-01626-4.

[16] B. Corona, L. Shen, D. Reike, J. Rosales Carreón, and E. Worrell, “Towards sustainable development through the circular economy—a review and critical assessment on current circularity metrics,” Resour., Conserv. Recycl., vol. 151, 2019, Art. no. 104498. https://doi.org/10.1016/j.resconrec.2019.104498.

[17] C. T. de Oliveira, T. E. T. Dantas, and S. R. Soares, “Nano and micro level circular economy indicators: assisting decision-makers in circularity assessments,” Sustain. Prod. Consum., vol. 26, pp. 455–468, 2021. https://doi.org/10.1016/j.spc.2020.11.024.

[18] L. Rigamonti and E. Mancini, “Life cycle assessment and circularity indicators,” Int. J. Life Cycle Assess., vol. 26, no. 10, pp. 1937–1942, 2021. https://doi.org/10.1007/s11367-021-01966-2.

[19] S. Boldoczki, A. Thorenz, and A. Tuma, “Does increased circularity lead to environmental sustainability? The case of washing machine reuse in Germany,” J. Ind. Ecol., vol. 25, no. 4, pp. 864–876, 2021. https://doi.org/10.1111/jiec.13104.

[20] M. Niero and P. P. Kalbar, “Coupling material circularity indicators and life cycle based indicators: a proposal to advance the assessment of circular economy strategies at the product level,” Resour., Conserv. Recycl., vol. 140, pp. 305–312, 2019. https://doi.org/10.1016/j.resconrec.2018.10.002.

[21] A. Mishra, P. Verma, and M. K. Tiwari, “A circularity-based quality assessment tool to classify the core for recovery businesses,” Int. J. Prod. Res., vol. 60, no. 19, pp. 5835–5853, 2022. https://doi.org/10.1080/00207543.2021.1973135.

[22] T. E. Graedel, “Material flow analysis from origin to evolution,” Environ. Sci. Technol., vol. 53, no. 21, pp. 12188–12196, 2019. https://doi.org/10.1021/acs.est.9b03413.

[23] M. Kravchenko, D. C. Pigosso, and T. C. McAloone, “Towards the ex-ante sustainability screening of circular economy initiatives in manufacturing companies: consolidation of leading sustainability-related performance indicators,” J. Cleaner Prod., vol. 241, 2019, Art. no. 118318. https://doi.org/10.1016/j.jclepro.2019.118318.

[24] M. Saidani, B. Yannou, Y. Leroy, and F. Cluzel, “How to assess product performance in the circular economy? Proposed requirements for the design of a circularity measurement framework,” Recycling, vol. 2, no. 1, p. 6, 2017. https://doi.org/10.3390/recycling2010006.

[25] J. Kamp Albæk, S. Shahbazi, T. C. McAloone, and D. C. A. Pigosso, “Circularity evaluation of alternative concepts during early product design and development,” Sustainability, vol. 12, no. 22, p. 9353, 2020. https://doi.org/10.3390/su12229353.

[26] S. Shahbazi and A. K. Jönbrink, “Design guidelines to develop circular products: action research on nordic industry,” Sustainability, vol. 12, no. 9, p. 3679, 2020. https://doi.org/10.3390/su12093679.

[27] F. N. Puglieri, et al., “Strategic planning oriented to circular business models: a decision framework to promote sustainable development,” Bus. Strategy Environ., vol. 31, no. 7, pp. 3254–3273, 2022. https://doi.org/10.1002/bse.3074.

[28] V. B. Moreto, G. d. S. Rolim, B. G. Zacarin, A. P. Vanin, L. M. de Souza, and R. R. Latado, “Agrometeorological models for forecasting the qualitative attributes of “Valência” oranges,” Theor. Appl. Climatol., vol. 130, nos. 3–4, pp. 847–864, 2017. https://doi.org/10.1007/s00704-016-1920-9.

[29] Z. K. Avdeeva, E. A. Grebenyuk, and S. V. Kovriga, “Cognitive modelling-driven time series forecasting for predicting target indicators in non-stationary processes,” IFAC-PapersOnLine, vol. 54, no. 13, pp. 91–96, 2021. https://doi.org/10.1016/j.ifacol.2021.10.425.

[30] X. Wu, X. Shi, Y. Li, and X. Gong, “Estimation of annual routine maintenance cost for highway tunnels,” Adv. Civ. Eng., vol. 2022, 2022. https://doi.org/10.1155/2022/5374461.

[31] M. Ostermann, et al., “Integrating prospective scenarios in life cycle engineering: case study of lightweight structures,” Energies, vol. 16, no. 8, p. 3371, 2023. https://doi.org/10.3390/en16083371.

[32] W. Weimer-Jehle, Cross-Impact Balances (CIB) for Scenario Analysis: Fundamentals and Implementation, Cham, Springer Nature Switzerland, 2023. https://doi.org/10.1007/978-3-031-27230-1.

[33] U. v. Reibnitz, Szenario-Technik: Instrumente für die unternehmerische und persönliche Erfolgsplanung, 2nd ed. Wiesbaden, Gabler, 1992. https://doi.org/10.1007/978-3-663-15720-5.

[34] P. Wack, “Scenarios: shooting the rapids,” Harv. Bus. Rev., vol. 63, no. 6, pp. 139–150, 1985.

[35] G. J. B. Probst and P. Gomez, Vernetztes Denken: Ganzheitliches Führen in der Praxis, 2nd ed. Wiesbaden, Gabler Verlag, 1991. https://doi.org/10.1007/978-3-322-89072-6.

[36] J. Gausemeier, A. Fink, and O. Schlake, “Scenario-management: planning and leading with scenarios,” in Futures Research Quarterly, 1996.

[37] V. Linss and A. Fried, “The ADVIAN® classification — a new classification approach for the rating of impact factors,” Technol. Forecast. Soc. Change, vol. 77, no. 1, pp. 110–119, 2010. https://doi.org/10.1016/j.techfore.2009.05.002.

[38] J. Gausemeier and C. Plass, Zukunftsorientierte Unternehmensgestaltung: Strategien, Geschäftsprozesse und IT-Systeme für die Produktion von morgen, 2nd ed. München, Carl Hanser Verlag, 2014. https://doi.org/10.1007/978-3-446-43842-2.

[39] E. J. Dönitz, Effizientere Szenariotechnik durch teilautomatische Generierung von Konsistenzmatrizen: Empirie, Konzeption, Fuzzy- und Neuro-Fuzzy-Ansätze, Wiesbaden, Gabler Verlag, 2009. https://doi.org/10.1007/978-3-8349-8218-6.

[40] M. Mißler-Behr, Methoden der Szenarioanalyse, Wiesbaden, Dt. Univ.-Verl., 1993. https://doi.org/10.1007/978-3-663-14585-1.

[41] R. Feldmann and N. Sensen, “Efficient algorithms for the consistency analysis in scenario projects,” Fachbereich Mathematik-Informatik, Universität Gesamthochschule Paderborn, Paderborn, Tech. Rep., 1997.

[42] V. Grienitz and A.-M. Schmidt, “Weiterentwicklung der Konsistenzanalyse auf Basis evolutionärer Strategien für die Entwicklung von Markt- und Umfeldszenarien,” in HNI-Verlagsschriftenreihe, Vorausschau und Technologieplanung, vol. 265, J. Gausemeier, Ed., Paderborn, HNI, 2009, pp. 409–433.

[43] F. Kratzberg, Fuzzy-Szenario-Management: Verarbeitung von Unbestimmtheit im strategischen Management, 1st ed. Göttingen, Sierke, 2009.

[44] F. Marthaler, J. W. Gesk, A. Siebe, and A. Albers, “An explorative approach to deriving future scenarios: a first comparison of the consistency matrix-based and the catalog-based approach to generating future scenarios,” Proc. CIRP, vol. 91, pp. 883–892, 2020. https://doi.org/10.1016/j.procir.2020.02.245.

[45] S. Langkau, et al., “A stepwise approach for scenario-based inventory modelling for prospective LCA (SIMPL),” Int. J. Life Cycle Assess., vol. 28, no. 9, pp. 1169–1193, 2023. https://doi.org/10.1007/s11367-023-02175-9.

[46] D. Mietzner and G. Reger, “Advantages and disadvantages of scenario approaches for strategic foresight,” Int. J. Technol. Intell. Plan., vol. 1, no. 2, pp. 220–239, 2005. https://doi.org/10.1504/IJTIP.2005.006516.

[47] I. Graessler, A. M. Tusek, H. Thiele, D. Preuß, B. Grewe, and M. Hieb, “Literature study on the potential of Artificial Intelligence in Scenario-Technique,” in Proceedings of the XXXIII ISPIM Innovation Conference “Innovating in a Digital World”, Copenhagen, Denmark, 2022.

[48] P. Ködding, K. Ellermann, C. Koldewey, and R. Dumitrescu, “Scenario-based foresight in the age of digitalization and Artificial Intelligence – identification and analysis of existing use cases,” Proc. CIRP, vol. 119, pp. 740–745, 2023. https://doi.org/10.1016/j.procir.2023.01.015.

[49] V. Kayser and E. Shala, “Scenario development using web mining for outlining technology futures,” Technol. Forecast. Soc. Change, vol. 156, 2020, Art. no. 120086. https://doi.org/10.1016/j.techfore.2020.120086.

[50] D. Mietzner, Strategische Vorausschau und Szenarioanalysen: Methodenevaluation und neue Ansätze, 1st ed. Wiesbaden, Gabler, 2009. https://doi.org/10.1007/978-3-8349-8382-4_1.

[51] E. Tapinos, “Scenario planning at business unit level,” Futures, vol. 47, pp. 17–27, 2013. https://doi.org/10.1016/j.futures.2012.11.009.

[52] A. Fuller, Z. Fan, C. Day, and C. Barlow, “Digital twin: enabling technologies, challenges and open research,” IEEE Access, vol. 8, pp. 108952–108971, 2020. https://doi.org/10.1109/ACCESS.2020.2998358.

[53] F. Tao, H. Zhang, A. Liu, and A. Y. C. Nee, “Digital twin in industry: state-of-the-art,” IEEE Trans. Ind. Inform., vol. 15, no. 4, pp. 2405–2415, 2018. https://doi.org/10.1109/TII.2018.2873186.

[54] D. Dittler, et al., “Digitaler Zwilling für eine modulare Offshore-Plattform: Effizienzsteigerung grüner Power-to-X-Produktionsprozesse,” ATP Magazin, vol. 64, nos. 6–7, pp. 72–80, 2022. https://doi.org/10.17560/atp.v63i6-7.2606.

[55] D. Braun, M. Riedhammer, N. Jazdi, W. Schloegl, and M. Weyrich, “A methodology for the detection of functional relations of mechatronic components and assemblies in brownfield systems,” Proc. CIRP, vol. 107, pp. 119–124, 2022. https://doi.org/10.1016/j.procir.2022.04.020.

[56] N. Jazdi, B. A. Talkhestani, B. Maschler, and M. Weyrich, “Realization of AI-enhanced industrial automation systems using intelligent digital twins,” Proc. CIRP, vol. 97, pp. 396–400, 2021. https://doi.org/10.1016/j.procir.2020.05.257.

[57] Z. Huang, Y. Shen, J. Li, M. Fey, and C. Brecher, “A survey on AI-driven digital twins in industry 4.0: smart manufacturing and advanced robotics,” Sensors, vol. 21, no. 19, p. 6340, 2021. https://doi.org/10.3390/s21196340.

[58] S. Kamm, N. Sahlab, N. Jazdi, and M. Weyrich, “A concept for dynamic and robust machine learning with context modeling for heterogeneous manufacturing data,” Proc. CIRP, vol. 118, pp. 354–359, 2023. https://doi.org/10.1016/j.procir.2023.06.061.

[59] B. Ashtari Talkhestani, et al., “An architecture of an intelligent digital twin in a cyber-physical production system,” at-Automatisierungstechnik, vol. 67, no. 9, pp. 762–782, 2019. https://doi.org/10.1515/auto-2019-0039.

[60] E. Negri, L. Fumagalli, and M. Macchi, “A review of the roles of digital twin in CPS-based production systems,” Procedia Manuf., vol. 11, pp. 939–948, 2017. https://doi.org/10.1016/j.promfg.2017.07.198.

[61] R. Stark, et al., “WiGeP Positionspapier – Digitaler Zwilling,” [Online], 2020. Available at: http://www.wigep.de/fileadmin/Positions-_und_Impulspapiere/Positionspapier_Digitaler_Zwilling.pdf. https://doi.org/10.3139/104.112311.

[62] A. Barni, A. Fontana, S. Menato, M. Sorlini, and L. Canetta, “Exploiting the Digital Twin in the assessment and optimization of sustainability performances,” in 2018 International Conference on Intelligent Systems (IS), 2018. https://doi.org/10.1109/IS.2018.8710554.

[63] S. Mihai, et al., “Digital twins: a survey on enabling technologies, challenges, trends and future prospects,” IEEE Commun. Surv. Tutor., vol. 24, no. 4, pp. 2255–2291, 2022. https://doi.org/10.1109/COMST.2022.3208773.

[64] M. M. Rathore, S. A. Shah, D. Shukla, E. Bentafat, and S. Bakiras, “The role of AI, machine learning, and big data in digital twinning: a systematic literature review, challenges, and opportunities,” IEEE Access, vol. 9, pp. 32030–32052, 2021. https://doi.org/10.1109/access.2021.3060863.

[65] N. Sahlab, S. Kamm, T. Müller, N. Jazdi, and M. Weyrich, “Knowledge graphs as enhancers of intelligent digital twins,” in 2021 4th IEEE International Conference on Industrial Cyber-Physical Systems (ICPS), 2021, pp. 19–24. https://doi.org/10.1109/ICPS49255.2021.9468219.

[66] T. Müller, et al., “Context-enriched modeling using knowledge graphs for intelligent Digital Twins of production systems,” in 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), 2022, pp. 1–8. https://doi.org/10.1109/ETFA52439.2022.9921615.

[67] OpenAI, GPT-4 Technical Report, arXiv, 2023. https://doi.org/10.48550/arXiv.2303.08774.

[68] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018. https://doi.org/10.48550/arXiv.1810.04805.

[69] Y. Xia, M. Shenoy, N. Jazdi, and M. Weyrich, “Towards autonomous system: flexible modular production system enhanced with large language model agents,” in 2023 IEEE 28th International Conference on Emerging Technologies and Factory Automation (ETFA), 2023, pp. 1–8. https://doi.org/10.1109/ETFA54631.2023.10275362.

[70] S. J. Ali, G. Guizzardi, and D. Bork, “Enabling representation learning in ontology-driven conceptual modeling using graph neural networks,” 2023, pp. 278–294. https://doi.org/10.1007/978-3-031-34560-9_17.

[71] V. Ryen, A. Soylu, and D. Roman, “Building semantic knowledge graphs from (semi-)structured data: a review,” Future Internet, vol. 14, no. 5, p. 129, 2022. https://doi.org/10.3390/fi14050129.

[72] M. Mohammed, A. Romli, and R. Mohamed, “Using ontology to enhance decision-making for product sustainability in smart manufacturing,” in 2021 International Conference on Intelligent Technology, System and Service for Internet of Everything (ITSS-IoE), 2021. https://doi.org/10.1109/ITSS-IoE53029.2021.9615289.

[73] M. Borsato, “Bridging the gap between product lifecycle management and sustainability in manufacturing through ontology building,” Comput. Ind., vol. 65, no. 2, pp. 258–269, 2014. https://doi.org/10.1016/j.compind.2013.11.003.

[74] M. Mohd Ali, R. Rai, J. N. Otte, and B. Smith, “A product life cycle ontology for additive manufacturing,” Comput. Ind., vol. 105, pp. 191–203, 2019. https://doi.org/10.1016/j.compind.2018.12.007.

[75] Y. Hu, C. Liu, M. Zhang, Y. Lu, Y. Jia, and Y. Xu, “An ontology-based product modelling method for smart remanufacturing,” in 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE), 2023. https://doi.org/10.1109/CASE56687.2023.10260547.

[76] M. N. Asim, M. Wasim, M. U. G. Khan, W. Mahmood, and H. M. Abbasi, “A survey of ontology learning techniques and applications,” Database, vol. 2018, 2018, Art. no. bay101. https://doi.org/10.1093/database/bay101.

[77] T. Atapattu, K. Falkner, and N. Falkner, “A comprehensive text analysis of lecture slides to generate concept maps,” Comput. Educ., vol. 115, pp. 96–113, 2017. https://doi.org/10.1016/j.compedu.2017.08.001.

[78] J. Petit, J.-C. Boisson, and F. Rousseaux, “Discovering cultural conceptual structures from texts for ontology generation,” in 2017 4th International Conference on Control, Decision and Information Technologies (CoDIT), 2017, pp. 225–229. https://doi.org/10.1109/CoDIT.2017.8102595.

[79] G. Petrucci, M. Rospocher, and C. Ghidini, “Expressive ontology learning as neural machine translation,” J. Web Semant., vol. 52, pp. 66–82, 2018. https://doi.org/10.1016/j.websem.2018.10.002.

[80] S. Sen, J. Tao, and A. V. Deokar, “On the role of ontologies in information extraction,” in Reshaping Society through Analytics, Collaboration, and Decision Support: Role of Business Intelligence and Social Media, vol. 18, 2015, pp. 115–133. https://doi.org/10.1007/978-3-319-11575-7_8.

[81] L. Xiao, C. Ruan, A. Yang, J. Zhang, and J. Hu, “Domain ontology learning enhanced by optimized relation instance in dbpedia,” in Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), 2016, pp. 1452–1456.

[82] A. Zouaq and F. Martel, “What is the schema of your knowledge graph? leveraging knowledge graph embeddings and clustering for expressive taxonomy learning,” in Proceedings of the International Workshop on Semantic Big Data, 2020, pp. 1–6. https://doi.org/10.1145/3391274.3393637.

[83] R. Lima, et al., “An inductive logic programming-based approach for ontology population from the web,” in Database and Expert Systems Applications: 24th International Conference, DEXA 2013, Prague, Czech Republic, August 26–29, 2013, Proceedings, Part I, 2013, pp. 319–326. https://doi.org/10.1007/978-3-642-40285-2_28.

[84] F. A. Lisi and U. Straccia, “A logic-based computational method for the automated induction of fuzzy ontology axioms,” Fundam. Inform., vol. 124, no. 4, pp. 503–519, 2013. https://doi.org/10.3233/FI-2013-846.

[85] V. Stegmaier, T. Eberhardt, W. Schaaf, N. Jazdi, M. Weyrich, and A. Verl, “Literature review and model proposal on the machine life cycle in industrial automation from different perspectives,” Proc. CIRP, vol. 120, pp. 690–695, 2023. https://doi.org/10.1016/j.procir.2023.09.060.

[86] S. Kamm, S. S. Veekati, T. Müller, N. Jazdi, and M. Weyrich, “A survey on machine learning based analysis of heterogeneous data in industrial automation,” Comput. Ind., vol. 149, 2023, Art. no. 103930. https://doi.org/10.1016/j.compind.2023.103930.

[87] N. Sahlab, D. Braun, T. Jung, N. Jazdi, and M. Weyrich, “A tier-based model for realizing context-awareness of Digital Twins,” in 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), 2021, pp. 1–4. https://doi.org/10.1109/ETFA45728.2021.9613408.

[88] N. Sahlab, N. Jazdi, and M. Weyrich, “Dynamic context modeling for cyber-physical systems applied to a pill dispenser,” in 2020 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), 2020, pp. 1435–1438. https://doi.org/10.1109/ETFA46521.2020.9211876.

[89] N. Sahlab, N. Jazdi, and M. Weyrich, “An approach for context-aware cyber-physical automation systems,” IFAC-PapersOnLine, vol. 54, no. 4, pp. 171–176, 2021. https://doi.org/10.1016/j.ifacol.2021.10.029.

[90] J. S. Gero and U. Kannengiesser, “The function-behaviour-structure ontology of design,” in An Anthology of Theories and Models of Design: Philosophy, Approaches and Empirical Explorations, Springer, 2014, pp. 263–283. https://doi.org/10.1007/978-1-4471-6338-1_13.

[91] I. Gräßler and D. Preuß, “Anwendbarkeit von Requirement Mining in Benutzerrezensionen für die Entwicklung mechatronischer Produkte im B2C-Markt,” in Digital-Fachtagung VDI Mechatronik 2021, Mar. 24–25, 2021.

[92] I. Gräßler, C. Oleff, and D. Preuß, “Proactive management of requirement changes in the development of complex technical systems,” Appl. Sci., vol. 12, no. 4, p. 1874, 2022. https://doi.org/10.3390/app12041874.

[93] S. Kamm, K. Sharma, N. Jazdi, and M. Weyrich, “A hybrid modelling approach for parameter estimation of analytical reflection models in the failure analysis process of semiconductors,” in 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), 2021, pp. 417–422. https://doi.org/10.1109/CASE49439.2021.9551454.

[94] K. Sharma, S. Kamm, K. M. Barón, and I. Kallfass, “Characterization of online junction temperature of the SiC power MOSFET by combination of four TSEPs using neural network,” in 2022 24th European Conference on Power Electronics and Applications (EPE’22 ECCE Europe), 2022, pp. 1–8.

Received: 2024-02-29
Accepted: 2024-09-11
Published Online: 2025-01-06
Published in Print: 2025-01-29

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
