Due to the rapidly growing size of databases, storage is considered one of the obstacles to database performance. Current hard disk media rely on traditional technology with only incremental changes and cannot keep up with database workloads. Flash memory, on the other hand, promises faster performance for database applications and has become the significant persistent data storage medium for mobile and embedded devices. Flash memory-based development platforms offer the flexibility to support a wide range of hardware and software implementations across the cost-performance trade-off scale. In this paper we review flash memory technology and trends in flash memory databases. We examine the role of flash memory in small databases for small businesses and in database security, and we present its advantages and weaknesses. Utilizing this technology requires an efficient flash memory-based database system, which remains one of the open challenges.
Database systems have become an extremely important part of daily life, because they process huge amounts of data every day and are relied upon in a wide range of fields such as banking, personnel, medical, and communication systems. Sensitive data is stored in databases; in other words, the data an attacker looks for is stored in a database, so protecting this data is a critical issue, and that concern is the central theme of database security. Database security ensures the secrecy, integrity, and availability of stored data. There are many security techniques and mechanisms, which differ in reliability, requirements, cost, speed, policy, and so on; a technique that is the best fit for one setting may be totally insufficient for another. Multilevel security is a good solution: systems protected by multiple levels of security are more secure than others, because, if one or some of the techniques or mecha...
IoT describes a new world of billions of objects that intelligently communicate and interact with each other. One of the important areas in this field is a new paradigm, the Social Internet of Things (SIoT), which combines social networks with the IoT. SIoT imitates social networks between humans and objects: objects, like humans, are considered intelligent and social. They create their own social network to achieve their common goals, such as improving functionality, performance, and efficiency and satisfying their required services. Our article's primary purpose is to present a comprehensive review of the SIoT system and to analyze and evaluate recent work done in this area. Our study therefore concentrates on the main components of the SIoT (architecture, relation management, trust management, web services, and information), its features, parameters, and challenges. To gather enough information for better analysis, we have reviewed the articles published between 20...
Biodata are rich in information. Knowing the properties of a biological sequence can be valuable in analyzing data and drawing appropriate conclusions. This research applied a naturalistic methodology to investigate the structural properties of biological sequences (i.e., DNA), in the field of motif finding. Two new motif properties were discovered, named identical neighbors and adjacent neighbors. The analysis was done under different background frequencies and motif models, using distinctive real data sets of varied sizes, and demonstrated the strong presence of both properties. Exploiting these properties is a significant step toward developing powerful algorithms in molecular biology.
Abstract Cloud consumers need services that, in addition to meeting their business requirements, provide them with a certain level of quality of service (QoS). Cloud providers, on the other hand, want to sell services on terms matching their own preferences. In this situation, cloud computing service negotiation (CCSN) can be used to establish an agreement between trading parties with conflicting preferences. CCSN allows the consumer and provider to negotiate automatically over the issues that are important to both trading parties, aiming to provide maximum utility for the parties in the shortest possible time. In this paper, we propose a CCSN which provides simple or composite services to consumers. We introduce strategies that negotiators can choose from and carry out simulations to compare their performance. Analysis of the simulation results shows that our recommended strategy is more efficient than the others in terms of negotiator utility and the number of rounds spent on negotiation to reach an agreement. The contributions of the proposed CCSN can be summarized as follows: (1) design and simulation of a new negotiation strategy that aims to maximize utility for both trading parties and speed up agreement, and (2) a process for aggregating the results of negotiations on simple task requirements to ensure end-to-end composite service requirements.
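To make the negotiation mechanics concrete, here is a minimal sketch of a time-dependent concession strategy, a common baseline in automated bilateral negotiation. The function names, parameters, and the midpoint-agreement rule are illustrative assumptions, not the strategy the paper actually proposes.

```python
# Illustrative time-dependent concession for bilateral negotiation.
# All names and parameters are hypothetical, not the paper's CCSN strategy.

def concede(t, deadline, initial, reserve, beta=1.0):
    """Offer at round t: moves from `initial` toward `reserve`.

    beta < 1 concedes slowly (Boulware), beta > 1 concedes quickly
    (Conceder); beta == 1 is a linear concession.
    """
    frac = min(t / deadline, 1.0) ** (1.0 / beta)
    return initial + frac * (reserve - initial)

def negotiate(buyer, seller, deadline):
    """Alternate offers until one side's offer crosses the other's.

    buyer/seller are (initial, reserve) price pairs; the buyer's offer
    rises over time while the seller's falls.
    """
    for t in range(deadline + 1):
        b = concede(t, deadline, *buyer)
        s = concede(t, deadline, *seller)
        if b >= s:                      # offers cross: agree at midpoint
            return (b + s) / 2, t
    return None, deadline               # no agreement before deadline
```

With linear concession, a buyer moving from 10 toward 50 and a seller from 60 toward 30 over 20 rounds cross at round 15; utility-versus-rounds comparisons like the one in the abstract sweep `beta` and the deadline.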
Abstract The data grid is emerging as the main infrastructure for large-scale data-intensive applications such as high-energy physics and bioinformatics. Deploying such infrastructures allows users of a grid site to access a large amount of distributed data. Data replication is a key issue in a data grid and should be applied intelligently, because it reduces data access time and bandwidth consumption for each grid site. In this paper, we introduce a new dynamic data replication algorithm named Popular Groups of Files Replication (PGFR). Our algorithm is based on the assumption that users in a virtual organization have similar interests in groups of files. Based on this assumption and the file access history, PGFR builds a connectivity graph to recognize groups of dependent files at each grid site and replicates the most popular groups of files to each site, thus increasing local availability. We used the OptorSim simulator to evaluate the efficiency of PGFR. The simulation results show that PGFR achieves better performance than the existing algorithms, minimizing mean job execution time and bandwidth consumption while avoiding unnecessary replication.
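The grouping idea behind PGFR can be sketched in a few lines: files accessed by the same job are linked in a graph, connected components form groups of dependent files, and the groups with the most accesses are replicated first. The thresholds and names below are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of co-access grouping: link files that appear together
# in a job's access set, take connected components as file groups, and
# rank groups by total access count. Illustrative, not PGFR's exact rules.
from collections import Counter, defaultdict

def popular_groups(job_file_sets, top_k=1):
    adj = defaultdict(set)   # co-access graph: file -> linked files
    hits = Counter()         # per-file access counts
    for files in job_file_sets:
        hits.update(files)
        files = list(files)
        for i in range(len(files)):
            for j in range(i + 1, len(files)):
                adj[files[i]].add(files[j])
                adj[files[j]].add(files[i])
    # Connected components = groups of dependent files.
    seen, groups = set(), []
    for f in hits:
        if f in seen:
            continue
        stack, comp = [f], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        groups.append(comp)
    # Replicate the groups with the highest total access counts first.
    groups.sort(key=lambda g: -sum(hits[f] for f in g))
    return groups[:top_k]
```

A site would feed its access history in as `job_file_sets` and pre-fetch every file in the returned groups, which is what raises local availability.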
Recent years have seen a great deal of attention paid to cloud manufacturing. Generally, in cloud manufacturing, capabilities and manufacturing resources distributed across different geographical places are virtualized and encapsulated into manufacturing cloud services. The literature confirms that applying queuing theory to optimize service selection and scheduling load balancing (SOSL) while taking logistics into account is still scarce and an open issue for the practical implementation of cloud manufacturing. This motivates our attempt to present a cloud manufacturing queuing system (CMfgQS) together with a load balancing heuristic algorithm based on task process times (LBPT), among the first studies in this research area. A novel optimization model, formulated as mixed-integer linear programming, is developed by implementing both CMfgQS and LBPT. Due to the natural complexity of the proposed problem, this study applies a genetic algorithm to solve the optimization model on large instances. Finally, the computational results confirm the effectiveness of the proposed model as well as the performance of the employed heuristic algorithm.
ABSTRACT Recently, there has been a great deal of attention paid to Cloud Manufacturing (CMfg) as a new service-oriented manufacturing paradigm. To integrate activities and services through a CMfg, Service Load balancing and Transportation Optimisation (SLTO) are two major issues underpinning the success of CMfg. Based on this motivation, this study presents a new queuing network for parallel scheduling of multiple processes and customer orders. Another main contribution of this paper is a new heuristic algorithm based on the process times of the orders' tasks (LBPT) to solve the proposed problem. To formulate it, a novel multi-objective mathematical model as a Mixed Integer Linear Programming (MILP) is developed. Accordingly, this study employs multi-choice multi-objective goal programming with a utility function to model the introduced SLTO problem. To better solve the problem, a Particle Swarm Optimisation (PSO) algorithm is developed. Finally, a comparative study with different analyses across four scenarios demonstrates improvements over the benchmark scenario of 6.1% in the sum of process and transportation costs, 10.6% in the sum of process and transportation times, and 48.6% in service load disparity.
The cloud computing paradigm is an important Internet service for sharing and providing resources in a cost-efficient way. Modeling a cloud system is not an easy task because of the complexity and large scale of such systems. Cloud reliability can be improved by modeling the various aspects of cloud systems, including scheduling, service time, wait time, and hardware and software failures. The aim of this study is to survey research on modeling cloud computing with queuing systems in order to identify where more emphasis should be placed in both current and future research. To that end, we investigated articles published in journals and conferences between 2008 and January 2017. A systematic mapping study combined with a systematic literature review was performed to find the related literature, and 71 articles were selected as primary studies and classified by focus, research type, and contribution type. We classified queuing-theory-based modeling techniques for cloud computing into seven categories by focus area: (1) performance, (2) quality of service, (3) workflow scheduling, (4) energy savings, (5) resource management, (6) priority-based servicing, and (7) reliability. The majority of the primary articles focus on performance (37%); 15% focus on resource management, 14% on quality of service, 13% on workflow scheduling, 13% on energy savings, 4% on priority-based servicing for requests, and 4% on reliability. This work summarizes and classifies the research conducted on applying queuing theory to the modeling of cloud computing (AQTMCC), providing a good starting point for further research in this area.
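A small concrete instance of the kind of queuing model the survey covers: treating a cluster of identical servers as an M/M/c queue and computing the probability that an arriving request must wait, via the standard Erlang C formula. Treating a cloud cluster this way is a simplifying assumption; the surveyed papers use many richer models.

```python
# M/M/c queue metrics via the Erlang C formula: Poisson arrivals,
# exponential service, c identical servers. A textbook model, offered
# here only as an illustration of queuing-based cloud modeling.
import math

def erlang_c(arrival_rate, service_rate, servers):
    """P(an arriving request must wait); requires utilization < 1."""
    a = arrival_rate / service_rate          # offered load (Erlangs)
    rho = a / servers                        # per-server utilization
    assert rho < 1, "queue is unstable"
    summed = sum(a**k / math.factorial(k) for k in range(servers))
    top = a**servers / (math.factorial(servers) * (1 - rho))
    return top / (summed + top)

def mean_wait(arrival_rate, service_rate, servers):
    """Mean time a request spends queued (excluding service time)."""
    pw = erlang_c(arrival_rate, service_rate, servers)
    return pw / (servers * service_rate - arrival_rate)
```

With one server this reduces to the M/M/1 results (waiting probability equals utilization), a quick sanity check when extending the model to failures or priorities.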
Cloud manufacturing (CMfg) is a new manufacturing paradigm over computer networks that aims to use distributed resources in the form of manufacturing capabilities, hardware, and software. Modern technologies such as cloud computing, the Internet of Things (IoT), service orientation, and radio-frequency identification (RFID) play a key role in CMfg. In CMfg, all resources needed for manufacturing, such as hardware, software, and manufacturing capabilities, are virtualized, and services are provided by manufacturing resources. In this paper, the key characteristics, concepts, challenges, open issues, and future trends of cloud manufacturing are presented to direct future research. Accordingly, five directions of advances in CMfg are introduced and the articles are reviewed and analyzed in five categories: (1) studies focused on the architecture and platform design of CMfg; (2) studies concentrating on resource description and encapsulation; (3) studies focused on service selection and composition; (4) studies aimed at resource allocation and service scheduling; and (5) studies aimed at service searching and matching. The article also provides a development diagram for CMfg as a roadmap for future research opportunities and practice.
The Sybil attack is one of the well-known dangerous attacks against wireless sensor networks, in which a malicious node attempts to propagate several fabricated identities. This attack significantly affects routing protocols and many network operations, including voting and data aggregation. Node mobility in mobile wireless sensor networks makes it problematic to employ Sybil node detection algorithms designed for static wireless sensor networks, such as node-positioning, RSSI-based, and neighbour-cooperative algorithms. This paper proposes a dynamic, lightweight, and efficient algorithm to detect Sybil nodes in mobile wireless sensor networks. In the proposed algorithm, observer nodes exploit neighbouring information gathered during different time periods to detect Sybil nodes. The proposed algorithm is implemented in the J-SIM simulator and its performance is compared with other existing algorithms through a set of experiments. Simulation results indicate that the proposed algorithm ...
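One way an observer can exploit per-period neighbour information is to flag identities whose presence pattern across observation periods is identical: fabricated identities ride on one physical radio, so they tend to appear and disappear together. The criterion and threshold below are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative sketch: group identities by their presence pattern across
# observation periods; more than one identity with the same pattern is
# flagged as a likely Sybil group. Threshold/criterion are assumptions.
from collections import defaultdict

def flag_sybils(periods, min_periods=3):
    """periods: list of sets, the identities heard in each time period."""
    pattern = defaultdict(list)
    for t, heard in enumerate(periods):
        for ident in heard:
            pattern[ident].append(t)
    groups = defaultdict(set)
    for ident, ts in pattern.items():
        if len(ts) >= min_periods:       # require enough observations
            groups[tuple(ts)].add(ident)
    return [g for g in groups.values() if len(g) > 1]
```

Real deployments would compare patterns with some tolerance for packet loss rather than demanding exact equality; exact matching keeps the sketch short.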
Abstract Matrix Factorization (MF) has proven to be an effective approach for building a successful recommender system. However, most current MF-based recommenders cannot achieve high prediction accuracy due to the sparseness of the user–item matrix in collaborative filtering models, and they suffer from scalability issues when applied to large-scale real-world tasks. To tackle these issues, a social regularization method called TrustANLF is proposed, which incorporates users' social trust information in a nonnegative matrix factorization framework. The proposed method integrates trust statements as an additional information source along with rating values into the recommendation model to deal with the data sparsity and cold-start issues. Moreover, the alternating direction optimization method is used to solve the trust-based nonnegative MF model in order to improve convergence speed and reduce computational and memory costs. To evaluate the effectiveness of the proposed method, several experiments are performed on three real-world datasets. The results demonstrate significant improvements of the proposed method over several state-of-the-art methods.
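The core idea, trust as a regularizer on the user factors, can be shown in a toy form: factor the rating matrix as R ≈ U·Vᵀ while pulling each user's latent vector toward those of the users they trust. The plain SGD with clamping below is a deliberate simplification; TrustANLF's actual alternating-direction nonnegative formulation is more involved, and every name and hyperparameter here is an illustrative assumption.

```python
# Toy trust-regularized matrix factorization (SGD + nonnegativity clamp).
# Simplified stand-in for the idea only, not TrustANLF's ADMM solver.
import random

def train(ratings, trust, n_users, n_items, k=2, lr=0.02,
          reg=0.05, trust_reg=0.1, epochs=300, seed=0):
    rnd = random.Random(seed)
    U = [[rnd.uniform(0, 1) for _ in range(k)] for _ in range(n_users)]
    V = [[rnd.uniform(0, 1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:           # (user, item, rating) triples
            err = r - sum(U[u][f] * V[i][f] for f in range(k))
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                pull = 0.0                # trust term: pull toward the
                if trust.get(u):          # mean of trusted users' factors
                    mean = sum(U[v][f] for v in trust[u]) / len(trust[u])
                    pull = trust_reg * (uf - mean)
                U[u][f] += lr * (err * vf - reg * uf - pull)
                V[i][f] += lr * (err * uf - reg * vf)
                U[u][f] = max(U[u][f], 0.0)   # keep factors nonnegative
                V[i][f] = max(V[i][f], 0.0)
    return U, V

def predict(U, V, u, i):
    return sum(a * b for a, b in zip(U[u], V[i]))
```

The `pull` term is what lets a cold-start user with few ratings inherit preferences from trusted neighbours instead of staying at an arbitrary initialization.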
Abstract Artificial Bee Colony (ABC) is an effective swarm optimization method featuring higher global search ability, fewer control parameters, and easier implementation compared to other population-based optimization methods. Although ABC works well at exploration, its main drawback is poor exploitation, which slows convergence in some cases. In this paper, an efficient ABC-based optimization method is proposed to deal with high-dimensional optimization tasks. The proposed method makes two modifications to the original ABC in order to improve its performance. First, it employs a chaos system to generate initial individuals that are fully diversified in the search space, and a chaos-based search method is used to find new solutions during the ABC search process to enhance the exploitation capability of the algorithm and avoid premature convergence. Second, it incorporates a new search mechanism to improve the exploration ability of ABC. Experimental results on benchmark functions reveal the superiority of the proposed method over state-of-the-art methods.
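Chaotic initialization of the kind described above is often done with a logistic map, whose iterates spread over [0, 1] without clustering; they are then scaled into the search bounds to seed the population. The map and its parameters below are the usual textbook choices, assumed here rather than taken from the paper.

```python
# Seed a population with logistic-map iterates scaled into the bounds.
# x0 and mu are common textbook values; mu = 4 gives fully chaotic
# behaviour on [0, 1] (avoid x0 at fixed points like 0, 0.25, 0.5, 0.75).
def chaotic_population(pop_size, dim, bounds, x0=0.7, mu=4.0):
    lo, hi = bounds
    x = x0
    population = []
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)        # logistic map iteration
            ind.append(lo + x * (hi - lo))
        population.append(ind)
    return population
```

In the hybrid ABC, the same map can drive the local chaos-based search around the best-so-far solution during the exploitation phase.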
Abstract Clustering is a major approach in data mining and machine learning and has been successful in many real-world applications. Density peaks clustering (DPC) is a recently published method that uses an intuitive criterion to cluster data objects efficiently and effectively. However, DPC and most of its improvements suffer from several shortcomings. The method considers only the global structure of data, which leads to missing many clusters. The cut-off distance affects the local density values and is calculated in different ways depending on the size of the dataset, which can influence the quality of clustering. In addition, the original label assignment can cause a "chain reaction": if a wrong label is assigned to a data point, many more wrong labels may subsequently be assigned to other points. In this paper, a density peaks clustering method called DPC-DLP is proposed. The proposed method employs the idea of k-nearest neighbors to compute the global cut-off parameter and the local density of each point. Moreover, it uses graph-based label propagation to assign labels to the remaining points and form the final clusters. The proposed label propagation can effectively assign true labels to data instances located in border and overlapping regions. The method is applicable in several settings: to make it practical for image clustering, the local structure is used to obtain a low-dimensional space, and it considers label space correlation to be effective on gene expression problems. Several experiments on both synthetic and real-world datasets demonstrate that in most cases the proposed method outperforms state-of-the-art methods.
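The kNN-based local density mentioned above can be illustrated simply: instead of counting points inside a global cut-off radius, each point's density is driven by the distances to its k nearest neighbours. The Gaussian weighting below is one common choice, used here as an assumption rather than the paper's exact formula.

```python
# Local density from k nearest neighbours: average a Gaussian kernel of
# the k smallest pairwise distances. Brute-force O(n^2), fine for a demo.
import math

def knn_local_density(points, k=2):
    densities = []
    for i, p in enumerate(points):
        ds = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        densities.append(sum(math.exp(-d * d) for d in ds[:k]) / k)
    return densities
```

Points in a tight neighbourhood get density near 1 while isolated points get density near 0, so density peaks stand out even in clusters of very different sizes, which is exactly the failure mode of a single global cut-off.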
Abstract The features of cloud computing services have created a significant trend of organizations opting for these services. Many consumers who want a certain service and many providers who supply those services form a competitive market. Consumers compete with each other to get a service with the desired functional and nonfunctional requirements, and providers compete to sell their services to the consumers. Since the market is dynamic and competitive and the requirements of consumers and providers conflict, automated negotiation mechanisms are essential for reaching agreements and providing maximum utility for both trading parties. To the best of our knowledge, no comprehensive study and review of cloud computing service negotiation (CCSN) exists as of now. In this article, we study CCSN models and investigate the frameworks, techniques, protocols, and strategies proposed in CCSN research. We pose several questions whose answers may help researchers design more efficient models in future work, explain and evaluate valuable innovations to find those answers, and use statistics to identify defects that must be addressed. The contributions can be summarized as follows: (1) reviewing CCSN works to understand open issues and challenges in providing negotiation systems in cloud environments, (2) identifying the characteristics of an efficient CCSN that can provide maximum utility for the trading parties, and (3) pointing out the defects of current works to help researchers improve CCSN systems in future research.
Abstract With the abundance of information produced by users on items (e.g., purchase or rating histories), recommender systems are a major ingredient of online systems such as e-stores and service providers. Recommendation algorithms use information available from user–item interactions and their contextual data to provide a list of potential items for each user. These algorithms are built on similarity between users and/or items (e.g., a user is likely to purchase the same items as his/her most similar users). In this work, we introduce a novel time-aware recommendation algorithm based on identifying overlapping community structure among users. Users' interests may change over time, and accurately modeling dynamic user preferences is a challenging issue in designing efficient personalized recommendation systems. Moreover, the user–item interaction network is often highly sparse in real systems, causing many recommenders to fail to provide accurate predictions. The proposed overlapping community structure among the users helps minimize these sparsity effects. We apply the proposed algorithm to two real-world benchmark datasets and show that it overcomes these challenges, achieving better precision than a number of state-of-the-art recommendation methods.
The next-generation electrical power grid, known as the smart grid (SG), requires a communication infrastructure to gather data generated by smart sensors and household appliances. Depending on quality of service (QoS) requirements, this data is classified into event-driven (ED) and fixed-scheduling (FS) traffic and buffered in separate queues in smart meters. Because of its operational importance, ED traffic is time-sensitive: its packets must be transmitted within a given maximum latency. In this paper, considering the QoS requirements of ED and FS traffic, we propose a two-stage wireless SG traffic scheduling model, from which we develop an SG traffic scheduling algorithm. In the first stage, the delay requirements of ED traffic are satisfied by allocating SG bandwidth to the ED queues in smart meters. In the second stage, the remaining bandwidth is allocated to the FS traffic in smart meters so as to maximize a weighted utility measure. Numerical results demonstrate the effectiveness of the proposed model in satisfying latency requirements and allocating bandwidth efficiently.
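The two-stage split can be sketched as a simple allocation: serve each meter's delay-driven ED demand first, then divide what remains among the FS queues in proportion to their utility weights. The proportional rule stands in for the paper's weighted-utility maximization and all the names and rates are hypothetical.

```python
# Illustrative two-stage split of a fixed bandwidth budget B:
#   stage 1 - ED queues get the rate their delay bound demands;
#   stage 2 - FS queues share the remainder by utility weight.
# Proportional sharing is an assumed simplification of the paper's
# weighted-utility optimization.
def schedule(total_bw, ed_demands, fs_weights):
    ed_alloc = list(ed_demands)                 # stage 1: delay-driven
    assert sum(ed_alloc) <= total_bw, "ED demand exceeds capacity"
    rest = total_bw - sum(ed_alloc)
    w = sum(fs_weights)
    fs_alloc = [rest * wi / w for wi in fs_weights]   # stage 2
    return ed_alloc, fs_alloc
```

Giving ED traffic strict priority in stage 1 is what guarantees the latency bound; FS traffic only ever sees the leftover capacity.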
The Internet is an important part of modern life that makes many tasks easy. It is often depicted as a network of networks, as it is not a single physical entity; it interconnects millions of networks and links hundreds of thousands of computers around the world. As the Internet develops, the question of how it should be governed becomes more complex. This paper sets out a deeper discussion of the status of the Internet in the Kurdistan Region of Iraq and illustrates how the government controls the private companies that provide Internet access. These companies receive the Internet from several countries, namely Iraq, Iran, Turkey, and others such as Azerbaijan; consequently, any problem in the Internet of these countries results in Internet disconnections in Kurdistan. Due to the lack of Internet filtering, many websites have caused social issues such as child abuse and pornography. This research also...
Cloud computing has become popular due to its attractive features, and the load on the cloud is increasing tremendously with the development of new applications. Load balancing is an important part of the cloud computing environment: it ensures that all devices or processors perform the same amount of work in the same amount of time. Different models and algorithms for load balancing in cloud computing have been developed with the aim of making cloud resources accessible to end users with ease and convenience. In this paper, we provide a structured and comprehensive overview of research on load balancing algorithms in cloud computing, surveying the state-of-the-art load balancing tools and techniques over the period 2004-2015. We group existing approaches to load balancing, and with this categorization we provide an easy and concise view of the underlying model adopted by each approach.
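As a concrete instance of one classic family the survey covers, here is least-connections dispatch: each request goes to the server currently handling the fewest active requests. This is one well-known baseline among the surveyed techniques, not the survey's own proposal; server names and the alphabetical tie-break are illustrative.

```python
# Least-connections load balancing: route each request to the server
# with the fewest active requests, breaking ties by server name.
def least_connections(active):
    """active: dict mapping server name -> current active request count."""
    return min(active, key=lambda s: (active[s], s))

def dispatch(requests, servers):
    """Simulate dispatching `requests` arrivals that never complete."""
    active = {s: 0 for s in servers}
    for _ in range(requests):
        active[least_connections(active)] += 1
    return active
```

With identical servers and no completions this degenerates to round-robin; its advantage over round-robin appears when request service times differ, since slow servers accumulate connections and receive less new work.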