
    Mohamed Elsharkawi

    Nowadays, manufacturers are shifting from a traditional product-centric business paradigm to a service-centric one by offering products that are accompanied by services, which is known as Product-Service Systems (PSSs). PSS customization entails configuring products with varying degrees of differentiation to meet the needs of various customers. This is combined with service customization, in which configured products are expanded by customers to include smart IoT devices (e.g., sensors) to improve product usage and facilitate the transition to smart connected products. The concept of PSS customization is gaining significant interest; however, there are still numerous challenges that must be addressed when designing and offering customized PSSs, such as choosing the optimum types of sensors to install on products and their adequate locations during the service customization process. In this paper, we propose a data warehouse-based recommender system that collects and analyzes large v...
    Clustering large spatial databases is an important problem, which tries to find the densely populated regions in a spatial area to be used in data mining, knowledge discovery, or efficient information retrieval. However, most algorithms have ignored the fact that physical obstacles such as rivers, lakes, and highways exist in the real world and could thus affect the result of the clustering. In this paper, we propose CPO, an efficient clustering technique to solve the problem of clustering in the presence of obstacles. The proposed algorithm divides the spatial area into rectangular cells. Each cell is associated with statistical information used to label the cell as dense or non-dense. It also labels each cell as obstructed (i.e. intersects any obstacle) or non-obstructed. For each obstructed cell, the algorithm finds a number of non-obstructed sub-cells. Then it finds the dense regions of non-obstructed cells or sub-cells by a breadth-first search as the required clusters with a center to each region.
    In this paper, we propose an efficient clustering technique to solve the problem of clustering in the presence of obstacles. The proposed algorithm divides the spatial area into rectangular cells. Each cell is associated with statistical information that enables us to label the cell as dense or non-dense. We also label each cell as obstructed (i.e. intersects any obstacle) or non-obstructed. Then the algorithm finds the regions (clusters) of connected, dense, non-obstructed cells. Finally, the algorithm finds a center for each such region and returns those centers as centers of the relatively dense regions (clusters) in the spatial area.
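    The cell-based scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: cell size, the density threshold, axis-aligned rectangular obstacles, and all function names are assumptions made for the example.

```python
from collections import deque

def cluster_grid(points, obstacles, cell=1.0, min_pts=2):
    """Sketch: hash points into square cells, mark cells dense/obstructed,
    then BFS over connected dense, non-obstructed cells and return one
    centroid per connected region."""
    cells = {}
    for x, y in points:
        cells.setdefault((int(x // cell), int(y // cell)), []).append((x, y))

    def obstructed(c):
        # A cell is obstructed if it intersects any axis-aligned obstacle rect.
        cx0, cy0 = c[0] * cell, c[1] * cell
        cx1, cy1 = cx0 + cell, cy0 + cell
        return any(cx0 < ox1 and ox0 < cx1 and cy0 < oy1 and oy0 < cy1
                   for ox0, oy0, ox1, oy1 in obstacles)

    dense = {c for c, pts in cells.items()
             if len(pts) >= min_pts and not obstructed(c)}
    seen, centers = set(), []
    for start in dense:
        if start in seen:
            continue
        region, queue = [], deque([start])
        seen.add(start)
        while queue:  # breadth-first search over the 4-neighborhood
            c = queue.popleft()
            region.extend(cells[c])
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                n = (c[0] + dx, c[1] + dy)
                if n in dense and n not in seen:
                    seen.add(n)
                    queue.append(n)
        xs, ys = zip(*region)
        centers.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centers
```

    Note that this sketch simply discards obstructed cells rather than splitting them into non-obstructed sub-cells as the full CPO algorithm does.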
    There is great interest in the web services paradigm nowadays. One of the most important problems related to it is the automatic composition of web services. Several frameworks have been proposed to achieve this goal. The most recent and richest framework (model) is the Colombo model. However, even for experienced developers, working with Colombo formalisms is low-level, very complex, and time-consuming. We propose to use UML (Unified Modeling Language) to model services and service composition in Colombo. By using UML, the web service developer deals with high-level graphical UML models, avoiding the difficulties of working with the low-level and complex details of Colombo. To be able to use the Colombo automatic composition algorithm, we propose to represent Colombo by a set of related XML document types that can be a base for a Colombo language. Moreover, we propose transformation rules between UML and the proposed Colombo XML documents. Ne...
    Product-service systems (PSSs) are being revolutionized into smart, connected products, which changes the industrial and technological landscape and unlocks unprecedented opportunities. The intelligence that smart, connected products embed paves the way for more sophisticated data gathering and analytics capabilities, ushering in tandem a new era of smarter supply and production chains, smarter production processes, and even end-to-end connected manufacturing ecosystems. This vision imposes a new technology stack to support smart, connected products and services. In previous work, we introduced a novel customization PSS lifecycle methodology with underpinning technological solutions that enable collaborative on-demand PSS customization, which supports companies in evolving their product-service offerings by transforming them into smart, connected products. This is enabled by the lifecycle through formalized knowledge-intensive structures and associated IT tools that provide the basis for actionable production “intelligence” and a move toward more fact-based manufacturing decisions. This paper contributes a recommendation framework that supports the different processes of the PSS lifecycle by analysing and identifying the recommendation capabilities needed to support and accelerate different lifecycle processes, while accommodating different stakeholders’ perspectives. The paper analyses the challenges and opportunities of the identified recommendation capabilities, drawing a roadmap for R&D in this direction.
    Two-phase locking (2PL) is the concurrency control mechanism used in most commercial database systems. In 2PL, for a transaction to access a data item, it has to hold the appropriate lock (read or write) on the data item by issuing a lock request. While the way transactions set their lock requests and the way the requests are granted certainly affect a system’s performance, such aspects have not received much attention in the literature. In this paper, a general transaction-processing model is proposed. In this model, a transaction is comprised of a number of stages, and in each stage the transaction can request to lock one or more data items. Methods for granting transaction requests and scheduling policies for granting blocked transactions are also proposed. A comprehensive simulation model is developed, from which the performance of 2PL with our proposals is evaluated. Results indicate that performance models in which transactions request locks on an item-by-item basis and use first-come-first-served (FCFS) scheduling in granting blocked transactions underestimate the performance of 2PL. The performance of 2PL can be greatly improved if locks are requested in stages as dictated by the application. A scheduling policy that uses global information and/or schedules blocked transactions dynamically shows a better performance than the default FCFS. Keywords: Two-phase locking; First-come-first-served; Concurrency control; Transaction-processing model
    Abstract—Inspired by the great success of information
    Given a set of moving object trajectories, we show how to cluster them using a k-means clustering approach. Our proposed clustering algorithm is competitive with k-means clustering because it specifies the value of “k” based on the segment slopes of the moving object trajectories. The advantage of this approach is that it overcomes the known drawbacks of the k-means algorithm, namely the dependence on the number of clusters (k) and the dependence on the initial choice of the clusters’ centroids, by using segment slope as a heuristic to determine the number of clusters for the k-means algorithm. In addition, we use a standard quality measure (the silhouette coefficient) to measure the efficiency of our proposed approach. Finally, we present experimental results on both real and synthetic data that show the performance and accuracy of our proposed technique. Keywords: Moving Object Database (MOD), clustering moving objects, k-means clustering algorithm.
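    One plausible reading of the slope heuristic above is to derive k from how many distinct direction bins the trajectory segments fall into, rather than fixing k up front. The following sketch is an assumption for illustration; the bin count, the undirected-angle treatment, and the function name are not taken from the paper.

```python
import math

def estimate_k_from_slopes(trajectories, bins=8):
    """Estimate a cluster count k by quantizing the slope (direction) of
    every trajectory segment into one of `bins` undirected-angle buckets
    and counting how many buckets are actually used."""
    used = set()
    for traj in trajectories:
        for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
            # Undirected slope: angles that differ by pi describe the same line.
            angle = math.atan2(y1 - y0, x1 - x0) % math.pi
            used.add(int(angle / (math.pi / bins)) % bins)
    return max(1, len(used))
```

    The returned value could then seed an ordinary k-means run, removing the need to guess k manually.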
    The nested transaction model was introduced to satisfy the requirements of advanced database applications. Moreover, it is currently the basic transaction model for new databases like workflow systems, mobile databases, and object-relational databases. Though there are several performance evaluation studies of different concurrency control mechanisms in nested transactions, the effects of transaction parameters on the overall system performance have not received any attention. In this paper, we study the effects of transaction characteristics on system performance. We developed a detailed simulation model and conducted several experiments to measure the impact of transaction characteristics on performance. First, the effect of the number of leaves on the performance of nested transactions is investigated under different shaping parameters. Also, the effects of the depth of the transaction tree on system performance are investigated.
    In the last decade, Moving Object Databases (MODs
    As the number of provenance-aware organizations increases, particularly in scientific workflow domains, sharing provenance data becomes a necessity. Meanwhile, scientists wish to share their scientific results without sacrificing privacy, neither directly through illegal authorizations nor indirectly through illegal inferences. Nevertheless, current workflow provenance sanitization approaches do not address the disclosure of sensitive information through inferences. In this paper, we propose a comprehensive workflow provenance sanitization approach called ProvS that maximizes both graph utility and privacy with respect to the influence of various workflow constraints. Experimental results show the effectiveness of ProvS through testing it on a graph-based system implementation.
    Coalescing is a data restructuring operation applicable to temporal databases. It merges the timestamps of adjacent or overlapping tuples that have identical attribute values. The likelihood that a temporal query employs coalescing is very high. However, coalescing is an expensive and time-consuming operation. In this paper we present a novel temporal relational model through which coalescing becomes quite simple. The basic idea is to augment each time-varying attribute in a temporal relation with two additional attributes that trace changes in the values of the corresponding time-varying attribute. One attribute traces changes in values with respect to each individual instance (i.e. tuples having the same key value), while the other traces changes in values globally for all instances (i.e. all tuples in the temporal relation). Using these tracing attributes, coalescing can be easily implemented through a quite simple join-free group-by query. The coalescing query is fully p...
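    The tracing-attribute idea can be illustrated in a few lines of Python standing in for the relational query. This is a sketch under assumptions: rows are (key, value, start, end) tuples sorted by (key, start) with non-decreasing ends within a run, and the counter `vc` plays the role of one tracing attribute so that coalescing reduces to a group-by taking min(start)/max(end) per group.

```python
from itertools import groupby

def coalesce(rows):
    """Tag each row with a counter that bumps whenever the key or value
    changes, or a temporal gap appears; then group on (key, value, counter)
    and merge each group's timestamps."""
    tagged, vc, prev = [], 0, None
    for key, value, start, end in rows:
        if prev is None or prev[0] != key or prev[1] != value or start > prev[2]:
            vc += 1  # the tracing attribute: a new run begins here
        tagged.append((key, value, vc, start, end))
        prev = (key, value, end)
    out = []
    for (key, value, _), grp in groupby(tagged, key=lambda r: r[:3]):
        grp = list(grp)
        out.append((key, value,
                    min(g[3] for g in grp),   # earliest start in the run
                    max(g[4] for g in grp)))  # latest end in the run
    return out
```

    In the paper's relational setting the same effect is achieved declaratively: the tracing attributes make the runs explicit, so a join-free GROUP BY with MIN/MAX aggregates performs the merge.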
    With the rapidly increasing popularity of XML as a model for data representation and exchange on the Internet, there is a lot of interest in efficient storage of data that conforms to a labeled-tree data model. In this paper, we propose a novel structure that is capable of evaluating XML queries without re-parsing the XML document. Unlike existing approaches to store XML documents, the proposed structure stores only paths of an XML document. Therefore, the proposed structure is superior as the structure size is reduced. We present algorithms for evaluation of XPath queries on documents stored in the new structure. The paper also presents algorithms that update the structure taking into consideration both value and schema updates on XML document.
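    The path-only storage idea can be illustrated with the standard library's XML parser. This is an assumed simplification, not the paper's actual structure: it keeps root-to-node paths paired with text values, so a simple absolute path query can be answered without re-parsing the document.

```python
import xml.etree.ElementTree as ET

def extract_paths(xml_text):
    """Parse once and store the document as (path, text) pairs."""
    root = ET.fromstring(xml_text)
    paths = []
    def walk(node, prefix):
        path = prefix + "/" + node.tag
        if node.text and node.text.strip():
            paths.append((path, node.text.strip()))
        for child in node:
            walk(child, path)
    walk(root, "")
    return paths

def query(paths, path):
    """Answer an absolute XPath-like query over the stored paths,
    with no further parsing of the original document."""
    return [v for p, v in paths if p == path]
```

    Value updates would modify the stored pairs in place, while schema updates would insert or delete paths; the paper's structure handles both cases.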
    Today, portable devices such as mobile phones, laptops, personal digital assistants (PDAs), and many other mobile devices are ubiquitous. Along with the rapid advances in positioning and wireless technologies, moving object position information has become easier to acquire. This availability of location information has triggered the need for clustering and classifying location information to extract useful knowledge from it and to discover hidden patterns in moving objects' motion behaviors. Many existing algorithms have studied clustering as an analysis technique to find data distribution patterns. In this paper we consider the clustering problem applied to moving object trajectory data. We propose a “time-based” clustering algorithm that adapts the k-means algorithm for trajectory data. We present two techniques: an exact and an approximate technique. Besides, we present experimental results on both synthesized and real data that show both the performance and accuracy of our propos...
    XML has become the prime standard for data exchange on the Web and a uniform data model for data integration. Since it solves the problem of differing data representations, the challenge of securing data or making certain data invisible is as important as making the data available in an efficient manner. The disclosure problem is still considered an open problem, especially in XML. To guarantee secure XML documents we need to develop appropriate protection techniques to control the inference process and protect sensitive data from reaching unauthorized users. In this paper we discuss the problem of protecting XML data at the logical level, specifically solving the disclosure problem. The objective is to prevent unauthorized users from inferring sensitive information through the data they are authorized to access (the results of previous queries), integrity constraints, and inferences. In most existing access control approaches the XML elements specified by access policies are either accessible or inaccessible accord...
    Many applications, such as recommender systems, benefit from discovering and analyzing the topics of interest of social network users. In this paper, we propose a methodology for building a topic hierarchy tree for user interests solicited from multiple knowledge bases. Different tree levels indicate different degrees of abstraction, where topics are at the higher nodes and subtopics at the children nodes, i.e. leaf nodes are at the lowest level of abstraction. For each node, we aim to generate a diverse list of keywords that we call XWords lists, which contain words from which we can infer the node's topic(s); we call these words Topic Indicating Words (TIWs). These TIWs are used for topic identification of users' posts. To build the hierarchy we explore some of the available knowledge bases, namely WordNet, Wikipedia, and Directory Mozilla (DMoz), and integrate topics from those knowledge bases to build a complete topic tree for users' interests.
    Sentence compression is the process of removing words or phrases from a sentence in a manner that would abbreviate the sentence while conserving its original meaning. This work introduces a model for sentence compression based on dependency graph clustering. The main idea of this work is to cluster related dependency graph nodes in a single chunk, and to then remove the chunk which has the least significant effect on a sentence. The proposed model does not require any training parallel corpus. Instead, it uses the grammatical structure graph of the sentence itself to find which parts should be removed. The paper also presents the results of an experiment in which the proposed work was compared to a recent supervised technique and was found to perform better.
    Manufacturers today compete to offer not only products, but products accompanied by services, which are referred to as product-service systems (PSSs). PSS mass customization is defined as the production of products and services to meet the needs of individual customers with near-mass-production efficiency. In the context of the PSS mass customization environment, customers are overwhelmed by a plethora of previously customized PSS variants. As a result, finding a PSS variant that is precisely aligned with the customer’s needs is a cognitive task that customers will be unable to manage effectively. In this paper, we propose a hybrid knowledge-based recommender system that assists customers in selecting previously customized PSS variants from a wide range of available ones. The recommender system (RS) utilizes ontologies for capturing customer requirements, as well as product-service and production-related knowledge. The RS follows a hybrid recommendation approach, in which the proble...
    In this paper, we address the problem of task redundancy in crowdsourcing systems while providing a methodology to decrease the overall effort required to accomplish a crowdsourcing task. Typical task assignment systems assign tasks to a fixed number of crowd workers, while tasks vary in difficulty between easy and hard; easy tasks need fewer task assignments than hard tasks. We present TRR, a task redundancy reducer that assigns tasks to crowd workers over several work iterations and adaptively estimates how many workers are needed for each iteration for Boolean and classification task types. TRR stops assigning tasks to crowd workers upon detecting convergence between workers’ opinions, which in turn reduces the cost and time invested to answer a task. TRR supports Boolean, classification, and rating task types, taking into consideration both crowdsourcing task assignment schemes: anonymous and non-anonymous worker task assignments. The paper includes experimental results from simulation experiments on crowdsourced datasets.
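    The iterative stop-on-convergence idea for the Boolean case can be sketched as follows. The batch size, vote margin, worker cap, and function names here are illustrative assumptions, not TRR's actual parameters or estimator.

```python
def trr_boolean(ask_worker, batch=3, margin=3, max_workers=15):
    """Assign a Boolean task to workers in small batches and stop as soon
    as one answer leads by `margin` votes, instead of always paying for a
    fixed number of assignments. `ask_worker` returns one worker's answer."""
    yes = no = 0
    while yes + no < max_workers:
        for _ in range(batch):
            if ask_worker():
                yes += 1
            else:
                no += 1
        if abs(yes - no) >= margin:  # workers' opinions have converged
            break
    return yes > no, yes + no  # (decided answer, workers actually used)
```

    An easy task where workers agree terminates after the first batch, while a contentious task keeps drawing workers up to the cap, which is the cost-saving behavior the abstract describes.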
    Fake news propagation in online social networks (OSNs) is one of the critical societal threats nowadays, directing attention to fake news mitigation and intervention techniques. Typical mitigation techniques focus on initiating news mitigation campaigns targeting a specific set of users when the infected set of users is known, or targeting the entire network when the infected set of users is unknown. Contemporary mitigation techniques assume the campaign users’ acceptance to share mitigation news (MN); however, in reality, user behavior is different. This paper focuses on devising a generic mitigation framework, where the social crowd can be employed to combat the influence of fake news in OSNs when the infected set of users is undefined. The framework is composed of three major phases: facts discovery, facts searching, and community recommendation. Mitigation news circulation is accomplished by recruiting a set of social crowd users (news propagators) who are likely t...
    Social networks can be modeled as attributed networks whose nodes represent users, edges represent relationships among users (e.g. friendship/follow), and attribute vectors hold properties of nodes and/or edges. In this paper, we consider friend recommendation based on interest-based communities generated from topic-based attributed social networks (TbASN). In our model, an attribute vector is not just a container for explicit user profile data stored in the social network's dataset, but rather holds topic vectors derived by analyzing the implicit interests of users, aggregated from their posts on the social network (e.g. tweets on Twitter, posts on Facebook). In our framework, topics of interest are represented as a hierarchy of topics (topics/subtopics), forming hierarchical interest-based communities. Users within each interest-based community are clustered according to their profile features (age, location, education, etc.). Those clusters are later used in recommendations, where recommendations target members of the same cluster to guarantee the quality and coherence of recommendations. In addition, we propose a recommendation selection approach to handle the large number of recommended candidates. The main advantage of the proposed approach is that it considers multiple criteria for candidate selection, including the number of common communities, the resemblance in basic features, as well as network proximity. In addition to recommending friends with similar interests, frequent pattern mining is used to discover frequently occurring interests to be used in recommending communities for users to join. Although our approach is generic and can be applied to most existing social networks, we used Twitter as our target social network.
    Universal quantifier queries on a recursive relation are defined as queries that retrieve pairs of records (r1, r2) from the same relation such that the second record of each pair intersects with the first record in a set of required attributes. Such queries remain interesting, especially today with the appearance of many applications that need them. There are many databases that include scholarly data, like DBLP; though those databases include information about papers, conferences, authors, and much other useful information, they don't correlate authors with similar interests or research directions. Moreover, the current explosion in the amount of data has driven the need for new techniques and technologies, as traditional database techniques are no longer adequate to manage, store, and query such large amounts of data. Thus, the cloud emerged as a solution for several big data problems, and using clusters of commodity machines turns out to be an optimal solution. Recently, there has been considerable interest in designing new algorithms that use inverted indexes to efficiently answer different types of queries over big data. In this paper, we present a new technique for evaluating universal quantifier queries on a recursive relation over large scholarly datasets using the popular MapReduce framework and an inverted index. In addition, we present experimental results that show the performance of our proposed technique on the well-known scholarly DBLP dataset.
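    The inverted-index idea behind such pair queries can be shown on a single machine (the paper distributes the same computation with MapReduce). This sketch is an assumption for illustration: records map an id to a set of values, such as an author's venues, and every posting list of length at least two yields candidate pairs that share a value.

```python
from itertools import combinations

def shared_value_pairs(records):
    """Build value -> {record ids} postings, then emit every unordered pair
    of ids that co-occur in some posting list. This avoids comparing all
    O(n^2) record pairs directly."""
    index = {}
    for rid, values in records.items():
        for v in values:
            index.setdefault(v, set()).add(rid)
    pairs = set()
    for rid_set in index.values():
        for a, b in combinations(sorted(rid_set), 2):
            pairs.add((a, b))
    return pairs
```

    In a MapReduce setting, the index-building loop corresponds to the map phase (emit (value, id)) and the pair emission to the reduce phase over each posting list; enforcing intersection on all required attributes would add a filter over the candidate pairs.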
    In this paper, we study problems in processing queries against temporal object-oriented databases (TOODBs). Solutions are given for the following problems: (1) A problem due to object migration. In object-oriented databases, an update to some instance variables of an object may force the object to “migrate” to a new class. Due to this migration, an object's history in a TOODB is distributed among several classes. Given a temporal query, we need to determine which classes in the schema should be searched to answer the query. (2) Problems due to the rich semantics of the model, specifically problems in creating versions of shared and default-valued instance variables and problems associated with complex instance variables. (3) Problems due to schema evolution. When the schema is changed, we have to keep the old schema in order to answer temporal queries. (4) Problems due to updating the history. In historical and temporal databases, it is possible to correct the history information w...
    Clustering large spatial databases is an important problem, which tries to find the densely populated regions in a spatial area to be used in data mining, knowledge discovery, or efficient information retrieval. However, most algorithms ignore the fact that physical obstacles such as rivers, lakes, and highways exist in the real world and can thus affect the result of the clustering. In this paper, we propose CPO, an efficient clustering technique that solves the problem of clustering in the presence of obstacles. The proposed algorithm divides the spatial area into rectangular cells. Each cell is associated with statistical information used to label the cell as dense or non-dense. It also labels each cell as obstructed (i.e., it intersects an obstacle) or non-obstructed. For each obstructed cell, the algorithm finds a number of non-obstructed sub-cells. It then finds the dense regions of non-obstructed cells or sub-cells by a breadth-first search, returning them as the required clusters, each with a center.
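    The core grid-and-BFS step can be sketched as follows. This is a minimal sketch of the cell-based idea only; the obstacle labeling and sub-cell splitting that distinguish CPO are omitted, and the function name, cell size, and density threshold are illustrative assumptions.

```python
from collections import deque

def grid_bfs_clusters(points, cell_size, density_threshold):
    """Bin points into square cells, mark cells holding at least
    `density_threshold` points as dense, then grow clusters of
    4-connected dense cells with a breadth-first search."""
    # Bin each point into a cell keyed by integer grid coordinates.
    cells = {}
    for x, y in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells.setdefault(key, []).append((x, y))

    dense = {k for k, pts in cells.items() if len(pts) >= density_threshold}
    clusters, visited = [], set()
    for start in dense:
        if start in visited:
            continue
        # BFS over adjacent dense cells collects one dense region.
        region, queue = [], deque([start])
        visited.add(start)
        while queue:
            cx, cy = queue.popleft()
            region.append((cx, cy))
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in dense and nb not in visited:
                    visited.add(nb)
                    queue.append(nb)
        # Report the region's member points and their centroid as the center.
        pts = [p for c in region for p in cells[c]]
        center = (sum(p[0] for p in pts) / len(pts),
                  sum(p[1] for p in pts) / len(pts))
        clusters.append({"points": pts, "center": center})
    return clusters
```

    In the full algorithm, obstructed cells would first be split into non-obstructed sub-cells so that a cluster never spans an obstacle.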
    The nested transaction model was introduced to satisfy the requirements of advanced database applications. Moreover, it is currently the basic transaction model for newer database systems such as workflow systems, mobile databases, and object-relational databases. Although there are several performance evaluation studies of different concurrency control mechanisms for nested transactions, the effects of transaction parameters on overall system performance have not received any attention. In this paper, we study the effects of transaction characteristics on system performance. We developed a detailed simulation model and conducted several experiments to measure this impact. First, the effect of the number of leaves on the performance of nested transactions is investigated under different shaping parameters. Then, the effects of the depth of the transaction tree on system performance are investigated.