Perturbation of the normal function of cell signaling pathways often leads to disease. Precise identification and investigation of perturbed signaling pathways is one of the factors that help in understanding disease mechanisms. Pathway analysis methods have been developed to identify perturbed signaling pathways under given conditions. Among these methods, some consider pathway topologies in their analysis and are referred to as topology-based methods. Most topology-based methods use simple graph-based models to incorporate topology into their analysis, which has some limitations. We describe a new Pathway Analysis method using Petri net (PAPet) that uses a Petri net to model signaling pathways, and we propose an algorithm to measure the perturbation of a given pathway under a given condition. Modeling with Petri nets has several advantages and can overcome the shortcomings of simple graph-based models. We illustrate the capabilities of the proposed method using sensitivity, prioritization, mean reciprocal rank, and false-positive rate metrics on 36 real datasets from various diseases. The results of comparing PAPet with five pathway analysis methods (FoPA, PADOG, GSEA, CePa and SPIA) show that PAPet provides the best compromise across all metrics. In addition, applying the methods to gene expression profiles of normal and Pancreatic Ductal Adenocarcinoma (PDAC) samples shows that PAPet achieves the best rank among the compared methods in finding the pathways previously reported for PDAC. The PAPet method is available at https://github.com/fmansoori/PAPET.
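To make the Petri net idea concrete, here is a minimal sketch, not the authors' actual algorithm: places hold tokens proportional to expression values, transitions propagate tokens along the pathway, and perturbation is scored as the change in final marking between disease and control runs. All names and the scoring rule are hypothetical.

    # A minimal sketch (not PAPet itself): places carry tokens proportional to
    # gene expression, transitions move tokens from inputs to outputs, and the
    # perturbation score is the total marking difference between two runs.

    def fire(marking, transitions, steps=10):
        """Propagate tokens: each enabled transition consumes from inputs, produces to outputs."""
        m = dict(marking)
        for _ in range(steps):
            for inputs, outputs in transitions:
                if all(m.get(p, 0) > 0 for p in inputs):     # transition enabled
                    for p in inputs:
                        m[p] -= 1
                    for p in outputs:
                        m[p] = m.get(p, 0) + 1
        return m

    def perturbation_score(control_marking, disease_marking, transitions):
        c = fire(control_marking, transitions)
        d = fire(disease_marking, transitions)
        return sum(abs(d.get(p, 0) - c.get(p, 0)) for p in set(c) | set(d))

    # toy pathway: A activates B, B activates C
    transitions = [(("A",), ("B",)), (("B",), ("C",))]
    print(perturbation_score({"A": 2}, {"A": 5}, transitions))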
XML is a markup language that is becoming the standard format for information representation and data exchange. A major purpose of XML is the explicit representation of the logical structure of a document. Much research has been performed to exploit the logical structure of documents in information retrieval in order to precisely extract the user's information need from large collections of XML documents. In this paper, we describe an XML information retrieval weighting scheme that tries to find the most relevant elements in XML documents in response to a user query. We present this weighting model for information retrieval systems that utilize plausible inferences to infer the relevance of elements in XML documents. We also add to this model the Dempster-Shafer theory of evidence to express the uncertainty in plausible inferences, and the Dempster-Shafer rule of combination to combine evidence derived from different inferences. Keywords: Dempster-Shafer theory, plausible inferences, XML information retrieval.
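As a worked illustration of the combination step, the following sketch implements Dempster's rule over the two-element frame {relevant, not relevant}; the mass values are made up, and the encoding of evidence as frozensets is an assumption, not the paper's implementation.

    # Dempster's rule of combination: m(A) = sum over B∩C=A of m1(B)*m2(C),
    # normalized by 1 - K, where K is the mass assigned to conflicting pairs.

    from itertools import product

    def combine(m1, m2):
        combined, conflict = {}, 0.0
        for (b, mb), (c, mc) in product(m1.items(), m2.items()):
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc                 # mass lost to conflict
        k = 1.0 - conflict
        return {a: v / k for a, v in combined.items()}

    R, N, RN = frozenset("R"), frozenset("N"), frozenset("RN")
    m1 = {R: 0.6, RN: 0.4}          # evidence from one plausible inference
    m2 = {R: 0.5, N: 0.2, RN: 0.3}  # evidence from another inference
    print(combine(m1, m2))          # belief in relevance after fusion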
Extraction and normalization of temporal expressions are essential for many NLP tasks. While considerable effort has been put into this task over the last few years, most of the research has been conducted on English, and only a few systems have been developed for other languages. In this paper, we present ParsTime, a tagger for temporal expressions in Persian (Farsi) documents. ParsTime is a rule-based system that extracts and normalizes Persian temporal expressions according to the TIMEX3 annotation standard. Our experimental results show that ParsTime can identify temporal expressions in Persian texts with an F1-score of 0.89. As an additional contribution, we make our code available to the research community.
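The rule-based approach can be sketched as a list of (regex, normalizer) pairs; the two rules below are illustrative toys, not the actual ParsTime grammar, and a real normalizer would also convert Solar Hijri dates, which this sketch skips.

    # A toy rule-based temporal tagger in the spirit of TIMEX3 annotation.
    # Each rule maps a regular expression to a normalizer producing a
    # TIMEX3-style type and value. Patterns here are hypothetical.

    import re

    RULES = [
        # "1395/05/21" -> ISO-like value (calendar conversion omitted)
        (re.compile(r"\b(\d{4})/(\d{1,2})/(\d{1,2})\b"),
         lambda m: {"type": "DATE",
                    "value": f"{m[1]}-{int(m[2]):02d}-{int(m[3]):02d}"}),
        # the word "امروز" ("today") -> a deictic DATE
        (re.compile("امروز"),
         lambda m: {"type": "DATE", "value": "PRESENT_REF"}),
    ]

    def tag(text):
        timexes = []
        for pattern, normalize in RULES:
            for m in pattern.finditer(text):
                timexes.append({"text": m.group(0), **normalize(m)})
        return timexes

    print(tag("جلسه امروز یا 1395/05/21 برگزار می‌شود"))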
With the fast development of online social networks, a large number of their members are involved in more than one social network. Finding the most influential users is one of the interesting social network analysis tasks. The influence maximization (IM) problem aims to select a minimum set of users who maximize the influence spread on the underlying network. Most previous research focuses on a single social network, whereas in the real world users join multiple social networks; thus, influence can spread through common users across multiple networks. Besides, the existing approaches, including simulation-based, proxy-based and sketch-based ones, suffer from issues of scalability, efficiency and feasibility due to the way they explore networks and compute influence diffusion. Moreover, previous algorithms employ several heuristics to capture network topology for IM, but these methods lose information during network exploration because of their pruning strategies. In this paper, a new research direction is presented for studying the IM problem on interconnected networks. The proposed approach employs deep learning techniques to learn feature vectors of network nodes while preserving both local and global structural information. To the best of our knowledge, network embedding has not yet been used to solve the IM problem. Indeed, our algorithm leverages deep learning techniques for feature engineering to extract the information relevant to the IM problem for single and interconnected networks. Moreover, we prove that the proposed objective is monotone and submodular; thus, a near-optimal solution with a provable approximation guarantee is obtained by the proposed greedy approach. The experimental results on two interconnected networks, DBLP and Twitter-Foursquare, illustrate the efficiency of the proposed algorithm in comparison to state-of-the-art IM algorithms. We also conduct experiments on the NetHEPT dataset to evaluate the performance of the proposed approach on single networks.
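Because the objective is monotone and submodular, the classic greedy algorithm gives the usual (1 - 1/e) guarantee; the sketch below assumes a hypothetical embedding-based influence proxy (nodes covered by cosine similarity to some seed), standing in for the paper's learned estimator.

    # Greedy seed selection for a monotone submodular influence function.
    # The influence() proxy below is a made-up stand-in, not the paper's model.

    import numpy as np

    def influence(seeds, emb, threshold=0.8):
        """Hypothetical proxy: count nodes within cosine similarity of any seed."""
        if not seeds:
            return 0
        sims = emb @ emb[list(seeds)].T          # cosine sims (rows unit-normalized)
        return int((sims.max(axis=1) >= threshold).sum())

    def greedy_im(emb, k):
        seeds = set()
        for _ in range(k):                        # classic (1 - 1/e) greedy loop
            gains = {v: influence(seeds | {v}, emb) - influence(seeds, emb)
                     for v in range(len(emb)) if v not in seeds}
            seeds.add(max(gains, key=gains.get))
        return seeds

    emb = np.random.randn(50, 16)
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # unit-normalize rows
    print(greedy_im(emb, k=3))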
Recently, tree structures have become a popular way of storing huge amounts of data. Clustering these data can facilitate different operations such as storage, retrieval, rule extraction and processing. In this paper, we propose a novel heuristic algorithm for clustering tree-structured data, called TreeCluster. This algorithm maintains a representative tree for each cluster. It differs significantly from traditional methods based on computing tree edit distance: TreeCluster compares each input tree T only with the representative trees of the clusters, which allows a significant reduction in running time. We show the efficiency of TreeCluster in terms of time complexity. Furthermore, we empirically evaluate the effectiveness and accuracy of the TreeCluster algorithm in comparison with previous works. Our experimental results show that TreeCluster improves cluster quality measures such as intra-cluster similarity, inter-cluster similarity, and the Dunn and Davies-Bouldin (DB) indices.
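The key saving, comparing each input tree only with cluster representatives, can be sketched as follows; the recursive label-overlap score is a hypothetical stand-in for the paper's scoring function.

    # Assign a tree to the cluster whose representative it matches best.
    # Trees are (label, [children]) tuples; overlap() is a toy similarity.

    def overlap(t1, t2):
        """Toy similarity: matching root labels plus positional child overlap."""
        (l1, c1), (l2, c2) = t1, t2
        score = 1 if l1 == l2 else 0
        for a, b in zip(c1, c2):                 # naive positional child matching
            score += overlap(a, b)
        return score

    def assign(tree, representatives):
        """Compare the input tree only with cluster representatives, not all trees."""
        return max(range(len(representatives)),
                   key=lambda i: overlap(tree, representatives[i]))

    reps = [("html", [("body", [])]), ("rss", [("channel", [])])]
    print(assign(("html", [("body", [("div", [])])]), reps))   # -> 0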
A thorough analysis of people's sentiment about a business, an event or an individual is necessary for business development, event analysis and popularity assessment. Social networks are rich sources of user opinions about people, events and products. Sentiment analysis over user comments and messages on microblogs is an interesting field of data mining and natural language processing (NLP). Different techniques and algorithms have recently been developed for sentiment analysis on Twitter, and classification-based and pure NLP-based methods behave differently in predicting sentiment orientation. In this study, we combined the results of classic classifiers and NLP-based methods to propose a new approach to Twitter sentiment analysis. The proposed method uses a fuzzy measure to determine the importance of each classifier in the final decision. Fuzzy measures are used with the Choquet fuzzy integral for fusing the c...
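For readers unfamiliar with the fusion step, this is a compact Choquet integral computation; the three classifier names, their scores and the fuzzy measure values are all made up for illustration.

    # Discrete Choquet integral: sort scores ascending and weight each
    # increment by the fuzzy measure of the classifiers still "above" it.

    def choquet(scores, mu):
        """scores: {classifier: score in [0,1]}; mu: fuzzy measure on frozensets."""
        items = sorted(scores.items(), key=lambda kv: kv[1])
        total, prev = 0.0, 0.0
        remaining = set(scores)
        for name, s in items:
            total += (s - prev) * mu[frozenset(remaining)]
            prev = s
            remaining.remove(name)
        return total

    mu = {frozenset({"svm", "nb", "lex"}): 1.0,
          frozenset({"svm", "nb"}): 0.7, frozenset({"svm", "lex"}): 0.8,
          frozenset({"nb", "lex"}): 0.5, frozenset({"svm"}): 0.4,
          frozenset({"nb"}): 0.3, frozenset({"lex"}): 0.35}
    print(choquet({"svm": 0.9, "nb": 0.6, "lex": 0.7}, mu))   # -> 0.76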
Protein complexes play a dominant role in cellular organization and function. Prediction of protein complexes from the network of physical interactions between proteins (PPI networks) has thus become an important research area. Recently, many computational approaches have been developed to identify these complexes, and various performance assessment measures have been proposed for evaluating their efficiency. However, there are many inconsistencies in the definitions and usage of these measures across the literature. To address this issue, we have gathered and presented the most important performance evaluation measures and developed a tool, named CompEvaluator, to critically assess protein complex prediction methods. The tool and documentation are publicly available at https://sourceforge.net/projects/compevaluator/files/.
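One measure commonly used in this literature (among the several such tools cover) is neighbourhood affinity; a sketch of precision and recall based on it, with toy complexes, is given below.

    # A predicted complex P matches a reference complex C when
    # NA(P, C) = |P ∩ C|^2 / (|P| * |C|) exceeds a threshold.

    def neighbourhood_affinity(p, c):
        inter = len(p & c)
        return inter * inter / (len(p) * len(c))

    def precision_recall(predicted, reference, threshold=0.25):
        matched_p = sum(any(neighbourhood_affinity(p, c) > threshold
                            for c in reference) for p in predicted)
        matched_r = sum(any(neighbourhood_affinity(p, c) > threshold
                            for p in predicted) for c in reference)
        return matched_p / len(predicted), matched_r / len(reference)

    predicted = [{"A", "B", "C"}, {"D", "E"}]
    reference = [{"A", "B", "C", "D"}, {"F", "G"}]
    print(precision_recall(predicted, reference))   # -> (0.5, 0.5)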
A large number of internet users share their knowledge and opinions in online social networks such as forums and weblogs. This fact has attracted many researchers from different fields to study online social networks. Persian is one of the dominant languages of the Middle East and the official language of Iran, Afghanistan and Tajikistan, so a large number of Persian speakers are active in online social networks. Despite this, very few studies exist about Persian social networks. In this paper we study the characteristics of Persian bloggers based on a new collection, named irBlogs. The collection contains nearly 5 million posts and the network of more than 560,000 Persian bloggers, which assures the reliability of the results of this study. Among the analyzed characteristics are: the similarities and differences between formal Persian and the language style used by Persian bloggers, the interests of the bloggers, and the impact of other web resources on the Persian blogosphere. Our analysis shows that IT, sports, society, culture and politics are the main interests of Persian bloggers. Also, analysis of the links shared by Persian bloggers shows that news agencies, knowledge bases and other social networks have a great impact on Persian bloggers, and that they are interested in sharing multimedia content.
Blogs are one of the main types of user-generated content on the web and are growing rapidly in number. The characteristics of blogs require the development of specialized search methods tuned for the blogosphere. In this paper, we focus on blog retrieval, which aims at ranking blogs with respect to their recurrent relevance to a user's topic. Although different blog retrieval algorithms have already been proposed, few of them consider temporal properties of the input queries. Therefore, we propose an efficient approach to improving relevant blog retrieval using the temporal properties of queries. First, the time sensitivity of each query is automatically computed for different time intervals based on an initially retrieved set of relevant posts. Then a temporal score is calculated for each blog, and finally all blogs are ranked based on their temporal and content relevance to the input query. Experimental analysis and comparison of the proposed method are carried out usin...
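The final ranking step described above can be sketched as a convex mix of content and temporal scores, with the temporal weight scaled by the query's estimated time sensitivity; the mixing parameter and all scores below are hypothetical, not the paper's formula.

    # Rank blogs by a blend of content relevance and temporal score,
    # where a more time-sensitive query shifts weight toward the temporal score.

    def rank_blogs(blogs, time_sensitivity, lam=0.5):
        """blogs: {name: (content_score, temporal_score)}, scores in [0, 1]."""
        w = lam * time_sensitivity            # temporal weight grows with sensitivity
        return sorted(blogs,
                      key=lambda b: (1 - w) * blogs[b][0] + w * blogs[b][1],
                      reverse=True)

    blogs = {"blogA": (0.9, 0.2), "blogB": (0.6, 0.9)}
    print(rank_blogs(blogs, time_sensitivity=0.8))   # the time-sensitive query favours blogB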
Highlights: A query term re-weighting method to reformulate textual queries is proposed. Our approach is a local query modification method. We use the information carried by the top documents in relation to each other. Query term re-weighting can be applied to short queries too. Queries that use a general vocabulary set show the least improvement.
Pseudo-relevance feedback is the basis of a category of automatic query modification techniques. Pseudo-relevance feedback methods assume the initially retrieved set of documents to be relevant; they then use these documents to extract more relevant terms for the query or to re-weight the user's original query. In this paper, we propose a straightforward yet effective pseudo-relevance feedback method for detecting the more informative query terms and re-weighting them. The query-by-query analysis of our results indicates that our method is capable of identifying the most important keywords even in short queries. Our main idea is that some of the top documents may contain a context closer to the user's information need than the others. Therefore, re-examining the similarity of those top documents and weighting this set based on their context can help in identifying and re-weighting informative query terms. Our experimental results on standard English and Persian test collections show that our method improves retrieval performance, in terms of MAP, by up to 7% over traditional query term re-weighting methods.
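A minimal sketch of the idea, under assumptions: feedback documents are weighted by their similarity to the centroid of the top-retrieved set, and query term weights are recomputed from that weighted set. The exact formulas differ from the paper's.

    # Pseudo-relevance feedback with document weighting: documents closer to
    # the centroid of the top set contribute more to query term re-weighting.

    from collections import Counter

    def reweight(query_terms, top_docs):
        """top_docs: list of token lists from the initially retrieved documents."""
        vecs = [Counter(d) for d in top_docs]
        centroid = Counter()
        for v in vecs:
            centroid.update(v)
        def sim(v):                      # a document's weight = dot product with centroid
            return sum(v[t] * centroid[t] for t in v)
        doc_w = [sim(v) for v in vecs]
        return {t: sum(w * v[t] for w, v in zip(doc_w, vecs))
                for t in query_terms}

    docs = [["persian", "blog", "retrieval"], ["blog", "search", "blog"]]
    print(reweight(["blog", "retrieval"], docs))   # "blog" gains the higher weight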
Opinion leaders are influential people who are able to shape the minds and thoughts of other people in their society. Finding opinion leaders is an important task in various domains ranging from marketing to politics. In this paper, a new effective algorithm for finding opinion leaders in a given domain of an online social network is introduced. The proposed algorithm, named OLFinder, detects the main topics of discussion in a given domain, calculates a competency score and a popularity score for each user in that domain, computes from these two scores a probability of being an opinion leader in the domain, and finally ranks the users of the social network by that probability. Our experimental results show that OLFinder outperforms other methods based on precision-recall, average precision and P@N measures.
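Schematically, the last two steps might look like the following; the product-based fusion of competency and popularity is an assumed stand-in, since the paper's probability model is not reproduced here.

    # Rank users by a probability derived from competency and popularity.
    # The joint score and normalization below are illustrative assumptions.

    def opinion_leader_ranking(users):
        """users: {name: (competency, popularity)}, both normalized to [0, 1]."""
        prob = {u: c * p for u, (c, p) in users.items()}   # joint score as proxy
        z = sum(prob.values()) or 1.0
        return sorted(((v / z, u) for u, v in prob.items()), reverse=True)

    users = {"u1": (0.9, 0.4), "u2": (0.7, 0.8), "u3": (0.3, 0.9)}
    print(opinion_leader_ranking(users))   # u2 ranks first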
Accepted as a full paper but not published because the registration fee was not paid.
In this paper, we present our approach to the author ranking subtask, which is a part of the author-profiling task in RepLab 2014. In this subtask, systems are expected to detect influential authors and opinion makers on Twitter. The systems' output, for a given domain, must be a ranked list of authors according to their probability of being an influential author or opinion maker. Our system utilizes a Time-sensitive Voting algorithm, which is based on the hypothesis that influential authors tweet actively about topics of their interest. In this method, hot topics of each domain are extracted and a time-sensitive voting algorithm ranks the authors on their respective topics.
Storing all database data in memory is an idea that researchers have been studying since the mid-1980s, when RAM prices decreased while capacities increased. Main Memory Database systems (MMDB) are an efficient solution for storing all database data in main physical memory. Conventional database systems, such as relational databases, are optimized for disk I/O operations, whereas memory-resident databases use different optimization mechanisms to organize and cluster data in memory efficiently and minimize disk I/O. In this empirical study we compared the performance of main memory databases with conventional database systems. For this purpose we chose SQL Server and PERST, which are a relational disk-based database and an object-oriented main memory database, respectively. We then evaluated them based on the TPC-B benchmark and include our experimental results in this paper to assess their behavior from different aspects. Our results show that main memory databases can handle huge amoun...
Highlights: A combination of click-through data and classifiers in the learning-to-rank task. Proposition of click-through features based on the concept of click-through data. Classifier construction and classifier fusion on top of click-through features.
Ranking, as a key functionality of Web search engines, is a user-centric process. However, click-through data, the source of users' implicit feedback, is not included in almost all of the datasets published for the ranking task. This limitation is also observable in the majority of benchmark datasets prepared for learning to rank, a new and promising trend in the information retrieval literature. In this paper, inspired by the click-through data concept, the notion of click-through features is introduced. Click-through features can be derived from a given primitive dataset even in the absence of click-through data in the utilized benchmark dataset. These features fall into three categories, related to the users' queries, the results of searches, or the clicks of users. Using click-through features, a novel learning-to-rank algorithm is proposed in this research. Taking into account informativeness measures such as MAP, NDCG, Information Gain and OneR, the proposed algorithm first generates a classifier for each category of click-through features. Thereafter, these classifiers are fused together using exponential ordered weighted averaging (OWA) operators. Experimental results obtained from numerous investigations on the WCL2R and LETOR4.0 benchmark datasets demonstrate that the proposed method can substantially outperform well-known ranking methods in the presence of explicit click-through data, based on MAP and NDCG criteria. Specifically, the improvement is more noticeable at the top of ranked lists, which usually attract users' attention more than other parts of these lists. This improvement on the WCL2R dataset is about 20.25% for [email protected] and 5.68% for [email protected] in comparison with SVMRank, a well-known learning-to-rank algorithm. CF-Rank can also obtain higher or comparable performance with baseline methods even in the absence of explicit click-through data in the utilized primitive datasets. In this regard, the proposed method achieved an improvement of about 2.7% in MAP on the LETOR4.0 dataset compared to the AdaRank-NDCG algorithm.
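The fusion step uses exponential OWA weights, which decay geometrically over the sorted classifier scores; the sketch below uses the standard optimistic exponential OWA weight vector with a made-up alpha and made-up per-category scores.

    # Optimistic exponential OWA: weights alpha, alpha(1-alpha), ...,
    # (1-alpha)^(n-1) applied to the scores sorted in descending order,
    # so the fused score emphasizes the strongest classifier votes.

    def exponential_owa(scores, alpha=0.5):
        s = sorted(scores, reverse=True)
        n = len(s)
        weights = [alpha * (1 - alpha) ** i for i in range(n - 1)]
        weights.append((1 - alpha) ** (n - 1))   # remainder so weights sum to 1
        return sum(w * x for w, x in zip(weights, s))

    # scores of the three per-category classifiers for one query-document pair
    print(exponential_owa([0.9, 0.4, 0.7]))   # -> 0.725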
The problem of discovering association rules in large databases is considered. An encoding method for converting large databases into smaller ones is proposed. Significant efficiency is obtained by applying some modified known algorithms on our proposed database layout. In addition, a new algorithm based on the proposed encoding method is introduced. Using some properties of numbers our database
There are large amounts of geo-spatial data consolidated in different domains such as economic geography, environmental studies, banking, retailing, etc. There is a vital need to extract the valuable implicit knowledge that lies inside these huge collections of data. Recently, spatial data mining has attracted much attention, as the automated discovery of patterns from geo-spatial databases has become inevitable. The aspects of
Thermoplastic segmented copolyester elastomer consisting essentially of a multiplicity of recurring short chain ester units and long chain ester units joined through ester linkages, said short chain ester units amounting to 15 to 75 percent by weight of said copolyester and being derived from dicarboxylic acid such as an aromatic acid, e.g., terephthalic acid or a mixture of terephthalic acid and isophthalic acid, and an organic diol such as butanediol, and said long chain ester units amounting to 25 to 85 percent by weight of said copolyester and being derived from dicarboxylic acid such as an aromatic acid, e.g., terephthalic acid, or a mixture of terephthalic and isophthalic acids, and a long chain glycol such as polytetramethylene ether glycol, said copolyester having a melt index of less than 150 and a melting point of at least 90 °C, modified with 0.75 to 20 parts by weight, per 100 parts by weight of elastomer, of a multi-functional carboxylic compound taken from the group consisting of aromatic and aliphatic anhydrides having at least two anhydride groups. The modified elastomer possesses improved adhesion particularly at high temperatures and under high applied stress. A useful adhesive composition comprises (A) 1 to 99 percent by weight of the segmented copolyester elastomer, and (B) 99 to 1 percent by weight of a compatible low molecular weight thermoplastic resin. The adhesive composition can contain stabilizers as well as other ingredients.
A conceptual graph is a graph in which nodes are concepts and edges indicate the relationships between them. The creation of conceptual graphs is a hot topic in the area of knowledge discovery. Natural Language Processing (NLP) based conceptual graph creation is one of the efficient but costly methods in the field of information extraction. Compared to NLP based methods, Statistical
... Mining Approach, by MohammadReza EffatParvar, Mehdi EffatParvar, and Maseud Rahgozar. Only a citation fragment of this entry is available: Fathzadeh, R., Mokhtari, V., Shahri, A.: Coaching with Expert System implemented in Robocup Soccer Coach Simulation. In: Bredenfeld, A., et al. (eds.) RoboCup 2005.
IEEE 802.11 is a widely used standard for the MAC and PHY layers of WLANs. Unfortunately, the access methods offered in this standard cannot support QoS (quality of service) for real-time traffic. One parameter which is important in providing QoS is jitter (variance in packet delays). The majority of the proposed mechanisms for enabling DCF (distributed coordination function) with QoS capabilities does
Mining frequent tree patterns has many practical applications in areas such as XML document mining, Web mining, bioinformatics, network routing and so on. Most of the previous works used an apriori-based approach for candidate generation and frequency ...
Tree structures have gained popularity for storing data from different domains such as XML documents, bioinformatics and so on. Clustering these data can facilitate different operations. In this paper, we propose TreeCluster, a novel heuristic algorithm for clustering tree-structured data. This algorithm maintains a representative tree for each cluster. For each input tree T, TreeCluster computes the composition of T with each of the clusters; T is assigned to the cluster whose composed tree gains the best score. After a tree is added to a cluster, the representative tree of that cluster is updated. We evaluate the accuracy of the TreeCluster algorithm in comparison to previous works.
Large amounts of spatially referenced data have been aggregated in various application domains such as Geographic Information Systems (GIS), environmental studies, banking and retailing, which motivates the highly demanding field of spatial data mining. Many optimization problems have been solved effectively by algorithms inspired by the foraging behavior of ant colonies. In this paper we propose a novel algorithm for
In this paper we present a method of document representation called Rich Document Representation (RDR) for building XML retrieval engines with high specificity. RDR is a form of document representation that utilizes single words, phrases, logical terms and logical statements to represent documents. The vector space model is used to compute index term weights and the similarity between each element and the query. This system participated in INEX 2006 and was tested with the Content Only queries of the given collection. The results were very weak, but a failure analysis revealed that this was caused by an error in document processing which produced inconsistent IDs and caused a mismatch between the IDs assigned to document elements such as single terms, phrases and logical terms. However, a similar experiment on the INEX 2004 collection yielded very good precision on the high-specificity task with the s3e123 quantization.
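The weighting and matching described here can be sketched with plain tf-idf and cosine similarity; in RDR the "terms" would also include phrases and logical terms, which this toy treats as opaque strings.

    # Bare-bones vector space scoring: tf-idf term weights per element,
    # cosine similarity against the query vector.

    import math
    from collections import Counter

    def tfidf_vectors(elements):
        df = Counter(t for e in elements for t in set(e))
        n = len(elements)
        return [{t: tf * math.log(n / df[t]) for t, tf in Counter(e).items()}
                for e in elements]

    def cosine(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        norm = (math.sqrt(sum(x * x for x in u.values()))
                * math.sqrt(sum(x * x for x in v.values())))
        return dot / norm if norm else 0.0

    elements = [["xml", "retrieval", "engine"], ["xml", "schema"], ["retrieval", "model"]]
    vecs = tfidf_vectors(elements)
    query = {"retrieval": 1.0, "xml": 1.0}
    print(max(range(len(vecs)), key=lambda i: cosine(query, vecs[i])))   # -> 0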
The DCF (distributed coordination function) access method of the IEEE 802.11 standard cannot support QoS (quality of service) for real-time traffic. Knowing that the use of multimedia applications over WLANs is increasing, it seems essential to overcome this problem. There are many methods for enabling DCF with service differentiation and QoS. The difficulty in the majority of these methods is unfair bandwidth
IEEE 802.11 is a widely used standard for the MAC and PHY layers of WLANs. Unfortunately, the access methods offered in this standard cannot support QoS (quality of service) for real-time traffic. It seems that the access methods employed in this standard cause high variations in delay, or jitter, and also waste bandwidth due to collisions. In this paper, we
Protein–protein interactions (PPIs) are important for understanding the cellular mechanisms of biological functions, but the reliability of PPIs extracted by high-throughput assays is known to be low. To address this, many current methods use multiple sources of evidence to compute reliability scores for such PPIs. However, they often combine the evidence without taking into account the uncertainty of the evidence values, potential dependencies between the information sources used, and missing values from some information sources. We propose to formulate the task of scoring PPIs using multiple information sources as a multi-criteria decision making problem that can be solved using data fusion to model potential interactions between the multiple information sources. Using data fusion, the amount of contribution from each information source can be proportioned accordingly to systematically score the reliability of PPIs. Our experimental results showed that th...
Allocating data fragments is an important issue in distributed database (DDB) systems. In this paper, we improve the effectiveness of the current NNA algorithm using a fuzzy inference engine. Results indicate that our fuzzy-based NNA algorithm leads to a 5% gain in some system performance metrics. This algorithm, providing a data clustering mechanism, which
This paper addresses the problem of determining the optimal location at which to place a fragment (object) in a distributed non-replicated database. The proposed algorithm takes into consideration a changing environment with changing access patterns. This paper contributes by allocating data fragments to their optimal location in a distributed network based on the access patterns for each fragment. The mechanism for achieving
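A toy version of access-pattern-driven placement: a fragment migrates to whichever site exceeds a hypothetical access threshold. The real algorithm also adapts to changing patterns over time, which this sketch omits.

    # Minimal access-counting migration: once a remote site's access count
    # reaches the threshold, the fragment moves there and counting restarts.

    from collections import Counter

    class Fragment:
        def __init__(self, home):
            self.home = home                       # site currently holding the fragment
            self.accesses = Counter()              # per-site access counts

        def access(self, site, threshold=5):
            self.accesses[site] += 1
            if site != self.home and self.accesses[site] >= threshold:
                self.home = site                   # migrate to the hotter site
                self.accesses.clear()              # restart counting after a move

    f = Fragment(home="site1")
    for _ in range(5):
        f.access("site2")
    print(f.home)                                  # -> site2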
Accurate migration of data fragments in distributed database systems, known as dynamic fragment allocation, plays an important role in improving distributed database performance. Several algorithms, each of which shows different performance under various conditions, have been proposed to improve dynamic fragment allocation in distributed database systems. In this paper, we propose a novel algorithm which is
