
    Markus Stumptner

    Allocation of resources to improve security is crucial when we consider people’s safety on transport systems. We show how a systems engineering methodology can be used to link business intelligence and railway specifics toward better value for money. A model is proposed to determine the probability of success in service management. The forecasting model is a basic Markov chain. A use case demonstrates a way to align statistical data (crime at stations) with the probability of investment into resources (people, security measures, time).
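    To make the forecasting idea concrete, the following is a minimal sketch (not the paper's actual model) of a two-state Markov chain whose stationary distribution gives the long-run probability of "secure" operation at a station. The states, transition probabilities and their link to an investment level are hypothetical; in practice they would be derived from station crime statistics.

```python
# Minimal sketch: long-run probability of the "secure" state in a two-state
# Markov chain. Transition probabilities below are hypothetical; higher
# investment could be modelled by lowering the secure -> incident probability.
import numpy as np

def steady_state(transition_matrix):
    """Return the stationary distribution of a Markov chain."""
    eigvals, eigvecs = np.linalg.eig(transition_matrix.T)
    # Take the eigenvector for the eigenvalue closest to 1 and normalise it.
    stationary = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    return stationary / stationary.sum()

# States: 0 = "secure", 1 = "incident".
P = np.array([[0.95, 0.05],
              [0.60, 0.40]])

print("P(secure) in the long run:", steady_state(P)[0])
```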
    Traditionally the integration of data from multiple sources is done on an ad-hoc basis for each analysis scenario and application. This is an approach that is inflexible, incurs high costs, and leads to “silos” that prevent sharing data across different agencies or tasks. A standard approach to tackling this problem is to design a common ontology and to construct source descriptions which specify mappings between the sources and the ontology. Modeling the semantics of data manually requires huge human cost and expertise, making an automatic method of semantic modeling desired. Automatic semantic model has been gaining attention in data integration [5], federated data query [14] and knowledge graph construction [6]. This paper proposes an service-oriented architecture to create a correct semantic model, including annotating training data, training the machine learning model, and predict an accurate semantic model for new data source. Moreover, a holistic process for automatic semantic modeling is presented. By the usage of ASMaaS, historical semantic annotations for training machine learning model used in automatic semantic modeling can be shared, reducing costs of human resources from users. By specifying a well defined interface, users are able to have access to automatic semantic modeling process at any time, from anywhere. In addition, users must not be concerned with machine learning technologies and pipeline used in automatic semantic modeling, focusing mainly on the business itself.
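    As an illustration of the annotate/train/predict workflow described above, the following is a small, hypothetical service facade. The class and method names are not the paper's actual ASMaaS interface; they only sketch how shared annotations could feed a learned semantic-model predictor.

```python
# Illustrative sketch only: a minimal service facade mirroring the workflow
# (annotate training data, train a model, predict a semantic model for a new
# data source). Names are hypothetical, not the ASMaaS API from the paper.
from dataclasses import dataclass, field

@dataclass
class SemanticModelingService:
    annotations: list = field(default_factory=list)  # shared historical annotations
    model: object = None

    def annotate(self, source_columns, ontology_labels):
        """Store user-provided mappings from source columns to ontology terms."""
        self.annotations.append((source_columns, ontology_labels))

    def train(self, learner):
        """Train a machine learning model on all shared annotations."""
        self.model = learner(self.annotations)

    def predict(self, new_source_columns):
        """Predict a semantic model (column -> ontology term) for a new source."""
        if self.model is None:
            raise RuntimeError("train() must be called before predict()")
        return self.model(new_source_columns)
```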
    ... George, Adelaide; John Gero, Sydney; Greg Gibbon, Newcastle; Achim Hoffmann, Sydney; Ray Jarvis, Melbourne; Cara MacNish, Perth; Michael ... Level of Interchangeability Embedded in a Finite Constraint Satisfaction Problem Affects the Performance of Search, Amy M. Beckwith ...
    This chapter contains sections titled: Introduction, Behavior Diagrams, Behavior Modeling in UML, Consistent Extension, Consistent Refinement, Consistent Specialization, Conclusion, References
    Research in the area of process analytics has become useful in providing new insights into patient care and supporting decision making. In order to reach the full potential of process analytics, we must look into further details and address the challenges of applying it to real-world scenarios, which are often represented by complex systems. One area that has not been explored thoroughly is the ability to identify how processes relate to each other in a network of processes. Different, often overlapping information about someone or something is typically kept in different domains, and there is rarely a chance to link all these pieces of information together and access them directly. Having access to the relations between processes and seeing an overall picture of the network of processes helps to better understand a complex system and to further analyse it in order to make better and more informed decisions. However, attempts to link these processes into a network have led to challenges which have not been resolved yet. The contribution is a detailed use case that highlights existing challenges in discovering patient journey processes, together with two experiments with preliminary results on addressing some of the identified challenges. The first experiment investigated compliance checking of clinical processes against guidelines and the second investigated the matching of event labels with an existing processable collection of health terms. The results of both experiments showed that further research and tool development are required to increase the automation of compliance checking and improve the accuracy of event and term matching.
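    The second experiment's label-matching idea can be sketched as follows; the health terms, the string-similarity measure and the threshold are hypothetical placeholders, not the terminology or matcher actually used in the study.

```python
# Hedged sketch: matching a free-text event label against a collection of
# health terms using simple string similarity. Terms and threshold are
# illustrative assumptions only.
from difflib import SequenceMatcher

health_terms = ["blood pressure measurement", "X-ray imaging", "discharge summary"]

def best_match(event_label, terms, threshold=0.6):
    """Return the most similar term, or None if nothing passes the threshold."""
    scored = [(SequenceMatcher(None, event_label.lower(), t.lower()).ratio(), t)
              for t in terms]
    score, term = max(scored)
    return term if score >= threshold else None

print(best_match("xray imaging", health_terms))   # -> "X-ray imaging"
```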
    The University of South Australia is investigating the highly complex integration of information systems within the CRC for Integrated Engineering Asset Management (CIEAM) and is developing techniques and tools to simplify the data exchange between enterprise-critical systems. Recent research outcomes include a service-oriented integration architecture, an integration toolbox and a service interface for the integration of asset management information systems. In this paper we also propose an extension at the architecture level for enabling secure data exchange between multiple systems. It allows secure information sharing internally within an enterprise as well as externally with business partners, and supports the current trend towards an open service-oriented environment by protecting critical asset management data on a shared communication infrastructure.
    Visualization of complex information becomes increasingly important in the era of digitalization to convey information in a human-readable way and support decision making. Although many visualization tools are available, they are often treated as black boxes and are inflexible in customizing visualizations and in supporting the user in navigating through data and different types of visualizations. This paper presents the semantics of VizDSL, a domain-specific modelling language for highly interactive visualizations. VizDSL allows the user to model the navigation on a conceptual level. The semantics are specified by mapping VizDSL concepts to Place Chart Nets, which can be used for model validation, simulation and automated tool creation. The mappings are platform independent and can be used by a developer to implement the language.
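    To give a feel for what simulating such a net-based semantics looks like, here is a toy Petri-net-style sketch of a single interaction (select an element, show a detail view). It is not VizDSL's actual mapping to Place Chart Nets; the place and transition names are invented for illustration.

```python
# Toy net: tokens mark which view is currently shown; transitions model
# navigation steps. Names are hypothetical, purely for illustration.
places = {"overview_shown": 1, "detail_shown": 0}   # token counts
transitions = {
    "select_element": (["overview_shown"], ["detail_shown"]),
    "go_back":        (["detail_shown"],  ["overview_shown"]),
}

def fire(name):
    """Fire a transition if all of its input places hold a token."""
    inputs, outputs = transitions[name]
    if all(places[p] > 0 for p in inputs):
        for p in inputs:
            places[p] -= 1
        for p in outputs:
            places[p] += 1
        return True
    return False

fire("select_element")
print(places)   # {'overview_shown': 0, 'detail_shown': 1}
```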
    Information collection and exchange between different domains is challenging due to the complex nature of modern enterprise systems. Visualization is used to increase understanding and communicability; however, creating visualizations of complex information poses significant challenges in itself, especially for non-IT users. As a solution to this problem, we have created VizDSL, a graphical domain-specific language for modelling and executing interactive visualizations. In this demonstration, we show how VizDSL can be used to model and execute interactive visualizations of complex data specifications from a variety of data sources, including schema and instance data, without requiring any significant programming expertise. We demonstrate how VizDSL can be used to create a variety of visualization types as part of a single visualization process. Conference participants will benefit from attending this demonstration by gaining a better understanding of how a model-driven approach to visualization could be applied to their own information domains and potential use cases.
    During the past decade, much effort has been invested in developing standards to overcome data and software interoperability barriers in the oil and gas industry. Whereas syntactical integration is no longer a problem, semantic integration still remains an open challenge. To overcome this problem, standards provide more and more complex structures to capture the semantics of different domains. As standards become more expressive, they may become so complex that their adoption is inherently difficult, which may even lead to inconsistencies in specifications. This paper addresses these issues through simplification and verification of correctness properties on an interoperability standard. While the simplification has been achieved by means of multi-level modeling, the accuracy of an industry model has been addressed by verifying correctness properties using the object-oriented rule-based system Flora-2. Transformation and verification rules were evaluated and their performance was enhanced by applying algorithmic improvements. The outcomes show that the approach reduces the complexity of specifications and supports verification from a software engineering point of view, simplifying the adoption of standards.
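    As a rough illustration of what "verifying a correctness property" means here, the following Python sketch checks one simple property, that instances only use attributes their class declares. The paper formulates such rules in Flora-2; the model content below is hypothetical.

```python
# Illustrative sketch in Python (the paper uses Flora-2 rules): flag objects
# that use attributes not declared by their class. All content is invented.
classes = {"Well": {"depth", "operator"}, "Pump": {"capacity"}}
objects = [
    {"class": "Well", "attrs": {"depth": 2400, "operator": "ACME"}},
    {"class": "Pump", "attrs": {"capacity": 50, "colour": "red"}},  # violates the rule
]

def undeclared_attributes(obj):
    """Return attributes used by obj that its class does not declare."""
    return set(obj["attrs"]) - classes[obj["class"]]

for obj in objects:
    extra = undeclared_attributes(obj)
    if extra:
        print(f"{obj['class']} instance uses undeclared attributes: {extra}")
```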
    Process models are widely used in organisations and can easily become large and complex. In the context of business process management, views are a useful technique to reduce complexity by providing only those process fragments that are relevant for a particular stakeholder. A key challenge in view management is the handling of changes that are performed concurrently by different stakeholders. Since the views may refer to the same process, the performed changes may affect the same region of a business process and cause a conflict. Many approaches have been proposed for resolving conflicts in a post-analysis phase after all changes have been applied. They can become costly when dealing with multiple changes that lead to multiple conflicts which cannot be resolved automatically and require an additional negotiation phase between stakeholders. In this paper we propose a framework for the on-the-fly resolution of conflicts between changes that have been performed on views and their underlying reference process. In contrast to existing approaches, this framework applies behaviour consistency rules for business processes which consider the execution semantics and can be checked efficiently on the structure of processes, without generating all possible execution traces or keeping track of change operations.
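    The general flavour of a structurally checkable behaviour consistency rule can be sketched as follows: a view change is flagged if it reverses an ordering required by the reference process. This is a simplified stand-in, not the paper's actual rule set, and the task names are hypothetical.

```python
# Hedged sketch: detect a change that inverts a required ordering pair,
# checked purely on the process structure (no execution traces needed).
reference_order = {("register", "assess"), ("assess", "approve")}

def violates_order(changed_order, required=reference_order):
    """A change is inconsistent if it reverses any required ordering pair."""
    return any((b, a) in changed_order for (a, b) in required)

view_change_1 = {("register", "assess"), ("assess", "notify")}   # consistent
view_change_2 = {("approve", "assess")}                          # reverses a required pair

print(violates_order(view_change_1))   # False
print(violates_order(view_change_2))   # True
```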
    The ability to predict drug activity from molecular structure is an important field of research both in academia and in the pharmaceutical industry. Raw 3D structure data is not in a form suitable for identifying properties using machine learning, so it must be reconfigured into descriptor sets that continue to encapsulate important structural properties of the molecule. In this study, a large number of small-molecule structures, obtained from publicly available databases, was used to generate a set of molecular descriptors that can be used with machine learning to predict drug activity. The descriptors were for the most part simple graph strings representing chains of connected atoms. With atom counts averaging seventy, a dataset of just over one million molecules resulted in a very large set of simple graph strings of lengths two to twelve atoms. Elimination of duplicates and reverse strings, together with feature reduction techniques, was applied to reduce the path count to about three thousand, which was viable for machine learning. Training data from twenty-six data sets was used to build decision tree classifiers using J48 and Random Forest. Forty-three thousand molecules from the NCI HIV dataset were used with the descriptor set to generate decision tree models with good accuracy. A similar algorithm was used to extract ring structures in the molecules. Inclusion of thirteen ring structure descriptors increased the accuracy of prediction.
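    The core descriptor idea, enumerating chains of connected atoms as strings and keeping one canonical form per string/reverse-string pair, can be sketched as below. The toy molecular graph is invented and much smaller than the structures used in the study; it is not the paper's implementation.

```python
# Minimal sketch of graph-string descriptors: enumerate simple paths of
# connected atoms (the paper uses lengths two to twelve; shorter here) and
# eliminate reverse duplicates by keeping the lexicographically smaller form.
bonds = {"C1": ["C2"], "C2": ["C1", "O1"], "O1": ["C2"]}   # toy connectivity
element = {"C1": "C", "C2": "C", "O1": "O"}

def paths(max_len):
    found = set()
    def walk(path):
        if 2 <= len(path) <= max_len:
            label = "".join(element[a] for a in path)
            found.add(min(label, label[::-1]))   # drop reverse duplicates
        if len(path) < max_len:
            for nxt in bonds[path[-1]]:
                if nxt not in path:              # simple paths only
                    walk(path + [nxt])
    for atom in bonds:
        walk([atom])
    return found

print(sorted(paths(3)))   # ['CC', 'CCO', 'CO']
```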
    ABSTRACT In the summer of 1956, John McCarthy organized the famous Dartmouth Conference which is now commonly viewed as the founding event for the field of Artificial Intelligence. During the last 50 years, AI has seen a tremendous development and is now a well-established ...
    ABSTRACT The integration of behaviour has long been a major goal for research into information systems integration. Its greater complexity compared to the well understood integration of structure means that it must be considered from a highly abstract (conceptual) level, so the person responsible for the integration can focus on the integration rather than on implementation details. This paper extends previous work which defined ways to identify correspondences between processes and offered integration patterns arising from these correspondences. It investigates the integration of heterogeneous activity specifications and addresses the problem of defining activity specifications in the integrated process. As a result, the user can instantiate the integration patterns automatically and need not deal with parameter bindings.
    Relationship matching is a key procedure in the process of transforming structured data sources, such as relational databases and spreadsheets, into a common data model. The matching task refers to the automatic identification of correspondences between the relationships of source columns and the relationships of the common data model. Numerous techniques have been developed for this purpose. However, work is missing on recognizing relationship types between entities at the instance level of information obtained from data sources and on resolving ambiguities. In this paper, we develop a method for resolving ambiguous relationship types between entity instances in structured data. The proposed method can be used as a standalone matching technique or to complement existing relationship matching techniques for data sources. The result of an evaluation on a large real-world data set demonstrated the high accuracy of our approach (>80%).
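    The disambiguation problem can be illustrated with the toy sketch below, which narrows the candidate relationship types allowed by the data model using simple instance-level evidence. The entity types, candidate relationships and textual cue are hypothetical and much simpler than the method evaluated in the paper.

```python
# Illustrative sketch only: choose a relationship type for a pair of entity
# instances by combining schema-level candidates with instance-level cues.
allowed = {
    ("Person", "Organisation"): ["worksFor", "founded"],
    ("Organisation", "City"):   ["locatedIn"],
}

def resolve(subject_type, object_type, instance_text):
    candidates = allowed.get((subject_type, object_type), [])
    if len(candidates) <= 1:
        return candidates[0] if candidates else None
    # Instance-level disambiguation via a simple textual cue (toy heuristic).
    if "founder" in instance_text.lower():
        return "founded"
    return "worksFor"

print(resolve("Person", "Organisation", "Jane Doe, founder of ACME Pty Ltd"))
```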
    As ontologies become more prevalent for information management, the need to manage the ontologies increases. Multiple organizations within a domain often combine to work on specific projects. When separate organizations come together to communicate, an alignment of terminology and semantics is required. Ontology creation is often carried out privately by these individual organizations to represent their view of the domain. This creates problems with alignment and integration, making it necessary to consider how much each ontology should influence the current decision to be made. To assist with determining influence, a trust-based approach on authors and their ontologies provides a mechanism for ranking reasoning results. A representation of the authors and the individual resources they provide for the merged ontology becomes necessary. The authors are then weighted by trust, and the trust for the resources they provide to the ontology is calculated. This is then used to assist the integration process, allowing an evolutionary trust model to calculate the level of credibility of resources. Once the integration is complete, semantic agreement between ontologies allows for the revision of the authors' trust.
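    The ranking step can be sketched as follows: each contributed resource inherits the trust assigned to its author, and conflicting contributions are ranked by that credibility. Author names, trust values and axioms below are hypothetical, not data or formulas from the paper.

```python
# Hedged sketch: rank contributed resources by the trust of their authors.
author_trust = {"org_a": 0.9, "org_b": 0.4}

resources = [
    {"axiom": "Pump subClassOf Equipment", "author": "org_a"},
    {"axiom": "Pump subClassOf Vehicle",   "author": "org_b"},  # conflicting view
]

def credibility(resource):
    """Credibility of a resource, inherited from the author who contributed it."""
    return author_trust[resource["author"]]

for r in sorted(resources, key=credibility, reverse=True):
    print(f"{credibility(r):.1f}  {r['axiom']}")
```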
    Software processes are used for organizing work in software development projects. In order to use them for a number of projects they are described in a generic way. Since software development is highly individual, they have to be particularized (i.e., instantiated) in turn for becoming applicable in projects. A number of instantiation approaches have been proposed in recent years, but
    ABSTRACT Service composition is a recent field that has seen a flurry of different approaches proposed towards the goal of flexible, distributed, heterogeneous interoperation of software systems, usually based on the expectation that such systems must be derived from higher-level models rather than be coded at a low level. In practice, achieving service interoperability nonetheless continues to require significant modelling effort at multiple levels, and existing formal approaches typically require the analysis of the global space of joint executions of interacting services. Based on our earlier work on providing locally checkable consistency rules for guaranteeing the behavioral consistency of inheritance hierarchies, we propose a model-driven approach for creating consistent service orchestrations. We represent service execution and interaction with a high-level model in terms of Petri-net based Behavior diagrams, provide formal criteria for service consistency that can be checked in terms of local model properties, and give a design methodology for developing services that are guaranteed to be interoperable.
    ABSTRACT It is common in organizational contexts and in law to apply a decision-scope approach to decision making. Higher organization levels set a decision scope within which lower organization levels may operate. In case of conflict, the regulations and rules of a higher level (e.g. the European Union) take precedence over those of a lower level (e.g. a member state). This approach can also be beneficially applied to the specialization of the most important kind of business rules in information systems, action rules. Such rules define under which conditions certain actions may, must not, or need to be taken. Applying the decision-scope approach to business process modeling based on BPMN means that business rules should not be buried in decision tasks but be made explicit at the flow level. This requires a rethinking of the current BPMN modeling paradigm in that several aspects of conditional flow that have so far been modeled jointly are captured separately: (a) the potential ordering of tasks, (b) the conditions under which a task may or may not be invoked, and (c) the conditions under which a particular task needs to be invoked. Rather than re-defining BPMN as such, an appropriate extension may be provided on top of BPMN and mapped to standard BPMN primitives. Applying the decision-scope approach to active data warehousing, where analysis rules express actionable knowledge, requires considering two alternative hierarchies: (1) hierarchies of sets of points at the same granularity and (2) the roll-up hierarchy of points in multi-dimensional space. This paper presents the decision-scope approach, outlines how it complements the inheritance and specialization approaches typically followed in object-oriented systems, and introduces consistency rules for business rule specialization as well as auto-correction rules, which rectify an inconsistent lower-level business rule such that it becomes consistent with higher-level business rules.
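    A small sketch of the consistency-and-auto-correction idea: a higher level fixes a decision scope (here, the maximum refund that may be approved), a lower-level action rule is checked against it, and an inconsistent rule is clamped back into the permitted scope. The scenario and limits are hypothetical, not the paper's formal rules.

```python
# Hedged sketch: decision-scope consistency check with a simple auto-correction.
HIGHER_LEVEL_MAX_REFUND = 1000   # decision scope set by the higher level

def check_and_correct(lower_level_max_refund):
    """Return (is_consistent, corrected_limit) for a lower-level action rule."""
    if lower_level_max_refund <= HIGHER_LEVEL_MAX_REFUND:
        return True, lower_level_max_refund
    # Auto-correction: tighten the lower-level rule to fit the permitted scope.
    return False, HIGHER_LEVEL_MAX_REFUND

print(check_and_correct(500))    # (True, 500)   - within the decision scope
print(check_and_correct(2000))   # (False, 1000) - corrected to the scope limit
```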

    And 490 more