In the present Internet age, data accumulates at a dramatic pace. This huge accumulation of data has no relevance unless it provides useful information pertaining to the interests of the organization. The real challenge, however, lies in hiding sensitive information in order to preserve privacy. Attribute reduction therefore becomes an important aspect of handling such huge databases: eliminating superfluous or redundant data enables sensitive rules to be hidden efficiently before the data is disclosed to the public. In this paper we propose a privacy-preserving model to hide sensitive fuzzy association rules. Our model uses two processes, a pre-process and a post-process, to mine fuzzified association rules and to hide the sensitive ones. Experimental results demonstrate the viability of the proposed approach.
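The abstract does not spell out the pre-/post-process, but the core notion of a *fuzzified* association rule can be illustrated. The sketch below is an assumption-laden toy, not the authors' algorithm: quantitative attribute values are mapped to membership degrees with a triangular membership function, and the fuzzy support of an itemset is the average of the per-record minimum membership.

```python
# Illustrative sketch only: fuzzy support over fuzzified records.
# The membership shapes and itemset encoding are assumptions, not the paper's.

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_support(records, items):
    """Fuzzy support of an itemset: mean of the min membership per record."""
    total = sum(min(rec[i] for i in items) for rec in records)
    return total / len(records)

# Each record maps a fuzzy item (attribute, label) to a membership degree.
records = [
    {("age", "young"): triangular(25, 18, 25, 35), ("income", "high"): 0.2},
    {("age", "young"): triangular(40, 18, 25, 35), ("income", "high"): 0.9},
]
print(fuzzy_support(records, [("age", "young"), ("income", "high")]))
```

Hiding a sensitive rule would then amount to perturbing memberships until the rule's fuzzy support or confidence drops below the mining thresholds.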
Neutrosophy is the study of neutralities, an extension of discussing the truth of opinions. Neutrosophic logic can be applied to any field to provide a solution to the indeterminacy problem. Much real-world data suffers from inconsistency, indeterminacy and incompleteness. Fuzzy sets provide a solution for uncertainty, and intuitionistic fuzzy sets handle incomplete information, but both concepts fail to handle indeterminate information. To handle this complicated situation, researchers require a powerful mathematical tool, namely neutrosophic sets, a generalisation of fuzzy and intuitionistic fuzzy sets. Neutrosophic sets provide a solution for both incomplete and indeterminate information, using three degrees of membership: truth, indeterminacy and falsity. Boolean values are obtained from the three membership degrees by the cut-relation method. Data items that contrast with other objects in their qualities are outliers. The weighted-density outlier detection method based on rough entropy calculates weights for each object and attribute; from the obtained weights, a threshold value is fixed to determine outliers. Experimental analysis of the proposed method has been carried out on a neutrosophic movie dataset to detect outliers, and the method is compared with existing methods to demonstrate its performance.
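The cut-relation step can be sketched in a few lines. The threshold triple below is an assumption for illustration (the abstract does not give the paper's cut values): an element's (truth, indeterminacy, falsity) triple is mapped to a Boolean by requiring high enough truth and low enough indeterminacy and falsity.

```python
# Hedged sketch of a (t, i, f)-cut on a neutrosophic membership triple.
# The thresholds 0.5/0.5/0.5 are illustrative defaults, not the paper's values.

def neutrosophic_cut(T, I, F, t=0.5, i=0.5, f=0.5):
    """An element passes the cut if T >= t, I <= i and F <= f."""
    return T >= t and I <= i and F <= f

# A toy movie record with high truth and low indeterminacy/falsity passes;
# one with low truth and high indeterminacy does not.
print(neutrosophic_cut(0.8, 0.2, 0.1))
print(neutrosophic_cut(0.4, 0.6, 0.3))
```

The resulting Boolean table is what downstream rough-set machinery (equivalence classes, entropy weights) then operates on.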
Handover decision-based establishment of an access point plays a vital role in LTE networks. Making a proper handover decision requires implementing various techniques, and a proper decision gives a seamless network connection to a fast-moving node. Billions of mobile nodes need connectivity to a suitable access point with a balanced cell load and adequate signal strength, which is possible only with a successful handover. However, handovers can result in the ping-pong effect, channel fading and an increased call-drop rate. These effects can be avoided by frequently choosing a proper neighbouring access point within each time slot. A node that moves rapidly from one location to another needs a continuous network connection, and the techniques previously used in LTE networks are not very effective. Hence, we propose a novel Dynamic Magnetic Force Optimization (DMFO) technique for choosing the access point appropriately, together with fuzzy logic for a successful handover. The implemented system model eliminates the negative impacts of previously existing algorithms, and comparison results are presented, showing that the proposed model is a better solution for making handover decisions.
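The DMFO step is not detailed in the abstract, but the fuzzy-logic half of the decision can be sketched. This is a minimal assumption-based toy, not the paper's controller: candidate access points are scored by a single Mamdani-style rule ("hand over if target signal is high AND cell load is low"), and the strongest-firing candidate wins.

```python
# Hedged sketch of a one-rule fuzzy handover decision.
# Membership shapes, inputs on [0, 1], and the rule itself are assumptions.

def low(x):
    """Membership of x in 'low' on [0, 1]."""
    return max(0.0, min(1.0, (0.5 - x) / 0.5))

def high(x):
    """Membership of x in 'high' on [0, 1]."""
    return max(0.0, min(1.0, (x - 0.5) / 0.5))

def handover_score(signal, load):
    """Rule firing strength: hand over if signal is high AND load is low."""
    return min(high(signal), low(load))

def choose_ap(candidates):
    """Pick the candidate access point with the strongest fired rule."""
    return max(candidates, key=lambda ap: handover_score(ap["signal"], ap["load"]))

aps = [{"id": "AP1", "signal": 0.9, "load": 0.2},
       {"id": "AP2", "signal": 0.7, "load": 0.6}]
print(choose_ap(aps)["id"])
```

Requiring the winning score to also exceed the current cell's score by a margin is the usual hysteresis trick for suppressing ping-pong handovers.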
Master data is acquired from many legacy systems and heterogeneous sources; hence it is liable to be duplicated, which affects the quality of master data services such as CRUD operations and other business services. The data-deduplication process is one of the most important requirements for building comprehensive master data management (MDM). Several custom and hybrid models are available for data-quality improvement, but they cannot resolve all quality issues, such as data duplication. This article describes how duplication happens and what measures prevent duplicate data. The proposed solution is a pragmatic, scalable approach to identifying and eliminating duplicate data from the repository, based on a heuristic pruning algorithm with a cosine similarity measure. This algorithm yields better time complexity than the plain cosine algorithm. The approach has two steps: the first is deduplication of the existing data, and the second is preventing duplicate data from entering the real-time system. With the proposed solution, data quality was observed to improve by close to 90% as part of the deduplication process.
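The general shape of cosine-based deduplication with a pruning heuristic can be sketched as follows. This is an illustrative stand-in, not the article's algorithm: records are tokenized into sets, and a pair is skipped without computing similarity when the size mismatch alone caps the cosine below the threshold (for binary vectors, cosine ≤ √(min/max of the set sizes)).

```python
# Hedged sketch: dedup by cosine similarity over token sets, with a
# size-ratio pruning heuristic. Tokenization and threshold are assumptions.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two binary token sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / sqrt(len(a) * len(b))

def dedupe(records, threshold=0.8):
    """Keep each record unless it is cosine-similar to an already-kept one."""
    kept, kept_tokens = [], []
    for rec in records:
        toks = set(rec.lower().split())
        is_dup = False
        for kt in kept_tokens:
            lo, hi = sorted((len(toks), len(kt)))
            if hi and sqrt(lo / hi) < threshold:
                continue  # pruned: similarity can never reach the threshold
            if cosine(toks, kt) >= threshold:
                is_dup = True
                break
        if not is_dup:
            kept.append(rec)
            kept_tokens.append(toks)
    return kept

records = ["John Smith 12 Main St", "john smith 12 main st", "Jane Doe 9 Oak Ave"]
print(dedupe(records))
```

The pruning test costs O(1) per pair, so in datasets where record lengths vary widely most candidate pairs never reach the set-intersection step.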
Multimodal imaging techniques applied to the same organ help in obtaining both anatomical and functional details of a particular body part, and can help doctors diagnose a disease cost-effectively. In this paper, a hybrid approach using transfer learning and the discrete wavelet transform is used to fuse multimodal medical images. As access to medical data is limited, transfer learning is used for feature extraction and to save training time; the features are fused using a pre-trained VGG19 model. The Discrete Wavelet Transform decomposes the multimodal images into different sub-bands, and in the last phase the Inverse Wavelet Transform reconstructs a fused image from the four bands generated. The proposed model is evaluated on Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) datasets. Experimental results show that the proposed approach performs better than other approaches, and the significance of the obtained fused image is measured using qualitative metrics.
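The wavelet side of such a pipeline can be sketched end to end; the VGG19 feature-extraction stage is omitted here, and the fusion rules (average the approximation band, keep the larger-magnitude detail coefficient) are common defaults assumed for illustration, not necessarily the paper's choice.

```python
# Hedged sketch: one-level 2D Haar DWT, per-band fusion, inverse transform.
# Pure-Python stand-in for a wavelet library; images are even-sized 2D lists.

def haar2d(img):
    """One-level 2D Haar DWT -> (LL, LH, HL, HH) sub-bands."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]; LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]; HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2*i][2*j], img[2*i][2*j+1]
            c, d = img[2*i+1][2*j], img[2*i+1][2*j+1]
            LL[i][j] = (a + b + c + d) / 2
            LH[i][j] = (a + b - c - d) / 2
            HL[i][j] = (a - b + c - d) / 2
            HH[i][j] = (a - b - c + d) / 2
    return LL, LH, HL, HH

def ihaar2d(bands):
    """Inverse of haar2d: rebuild the image from the four sub-bands."""
    LL, LH, HL, HH = bands
    h, w = len(LL), len(LL[0])
    img = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2*i][2*j]     = (ll + lh + hl + hh) / 2
            img[2*i][2*j+1]   = (ll + lh - hl - hh) / 2
            img[2*i+1][2*j]   = (ll - lh + hl - hh) / 2
            img[2*i+1][2*j+1] = (ll - lh - hl + hh) / 2
    return img

def fuse(img1, img2):
    """Fuse two images: average the LL band, max-magnitude for detail bands."""
    b1, b2 = haar2d(img1), haar2d(img2)
    fused = []
    for k, (x, y) in enumerate(zip(b1, b2)):
        if k == 0:  # approximation band: average
            fused.append([[(u + v) / 2 for u, v in zip(r, s)]
                          for r, s in zip(x, y)])
        else:       # detail bands: keep the larger-magnitude coefficient
            fused.append([[u if abs(u) >= abs(v) else v for u, v in zip(r, s)]
                          for r, s in zip(x, y)])
    return ihaar2d(tuple(fused))

print(fuse([[1.0, 2.0], [3.0, 4.0]], [[1.0, 2.0], [3.0, 4.0]]))
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the transform pair and fusion rules are consistent.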
Rough set theory is a powerful mathematical model used to handle the impreciseness and ambiguity of data. Many existing multigranulation rough set models were derived from the multigranulation decision-theoretic rough set framework. Multigranulation rough set theory is desirable in many practical applications, such as high-dimensional knowledge discovery, distributed information systems, and multisource data processing. So far, research on multigranulation rough sets has been carried out only for feature extraction and selection, data reduction, decision rules, and pattern extraction. The proposed approach focuses instead on anomaly detection in qualitative data with multiple granules. Approximations of the dataset are derived through a multi-equivalence relation, and then a rough-set-based entropy measure with the weighted-density method is applied to every object and attribute. For detecting outliers, a threshold value is fixed based on the estimated weights. The performance of the algorithm is evaluated and compared with existing outlier-detection algorithms; the breast cancer, chess, and car evaluation datasets from the UCI repository are used to demonstrate its efficiency and performance.
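The abstract does not give the exact entropy and weighting formulas, but the general shape of an entropy-weighted density score over qualitative data can be sketched. Everything below is an illustrative assumption: each attribute induces equivalence classes, the attribute is weighted by the Shannon entropy of its partition, and each object accumulates weighted class density, with the lowest-scoring objects flagged as outlier candidates.

```python
# Hedged sketch of a weighted-density outlier score on categorical data.
# The entropy weight and density formulas are placeholders, not the paper's.
from math import log2

def equivalence_classes(data, attr):
    """Indices of objects grouped by their value on one attribute."""
    classes = {}
    for idx, row in enumerate(data):
        classes.setdefault(row[attr], []).append(idx)
    return list(classes.values())

def entropy(classes, n):
    """Shannon entropy of the partition induced by one attribute."""
    return -sum((len(c) / n) * log2(len(c) / n) for c in classes)

def outlier_scores(data):
    """Low score = low weighted density = outlier candidate."""
    n = len(data)
    scores = [0.0] * n
    for a in range(len(data[0])):
        classes = equivalence_classes(data, a)
        w = entropy(classes, n)  # assumed weight: more entropy, more weight
        for c in classes:
            density = len(c) / n
            for idx in c:
                scores[idx] += w * density
    return scores

data = [("red", "round"), ("red", "round"), ("red", "round"), ("blue", "square")]
scores = outlier_scores(data)
print(scores.index(min(scores)))
```

A threshold on these scores (e.g. a fraction of the mean score) would then separate outliers from the rest, mirroring the threshold-fixation step described above.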
Multimodal image fusion combines information from multiple modalities to generate a composite image containing complementary information. It is challenging due to the heterogeneous nature of the data, misalignment, nonlinear relationships between the inputs, and incomplete data during the fusion process. In recent years, several attention mechanisms have been introduced to enhance the performance of deep-learning models, yet little literature is available on multimodal image fusion using attention mechanisms. This paper studies and analyzes the latest deep-learning approaches, including attention mechanisms, for multimodal image fusion. As a result of this study, a graphical taxonomy based on the different image modalities, fusion strategies, fusion levels, and metrics for fusion tasks is put forth. The focus is on multimodal image-fusion frameworks that have deep-learning techniques as their core methodology. The paper also sheds light on the challenges and future research directions in this field, the application domains, and the benchmark datasets used for multimodal fusion tasks. It thereby contributes to research on multimodal image fusion and can help researchers select a suitable methodology for their applications.
Big Data analytics is the main focus in all industries today. It is no overstatement that an enterprise not using Big Data analytics will stray and be uncompetitive against its Big Data-enabled rivals. Big Data analytics enables businesses to take proactive measures and create a competitive edge in their industry by surfacing business insights from past data and trends. The main aim of this review article is to give a quick view of the cutting-edge, state-of-the-art work being done with Big Data analytics across industries. Since there is overwhelming interest from academicians, researchers and practitioners, this review refreshes and emphasizes how Big Data analytics can be adopted with the available technologies, frameworks, methods and models to exploit its value.
The rough set philosophy is based on the idea that some information is associated with each object of the universe. The set of all objects under consideration in a particular discussion is the universal set, so objects of the universe need to be classified based on the indiscernibility relation (equivalence relation) among them. From the viewpoint of granular computing, the classical rough set model is built on a single granulation, carried out via an equivalence relation defined over a universal set. It has been extended to the multigranulation rough set model, in which set approximations are defined using multiple equivalence relations on the universe simultaneously. In many real-life scenarios, however, an information system establishes relations between different universes. This motivates extending multigranulation rough sets on a single universal set to multigranulation rough sets on two universal sets. In this paper, we define multigranulation rough sets for two universal sets U and V, and we study algebraic properties that are interesting in the theory of multigranulation rough sets. This helps in describing and solving real-life problems more accurately.
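The two-universe construction can be sketched concretely. The sketch below follows the standard optimistic multigranulation pattern and is an illustration, not necessarily the paper's exact formulation: each granulation is a relation mapping an object u in U to a subset of V, the optimistic lower approximation of a concept X ⊆ V collects the u whose image lies inside X under *some* relation, and its dual upper approximation requires *every* relation's image to intersect X.

```python
# Hedged sketch: optimistic multigranulation approximations over two
# universes U and V. Relations are dicts u -> set of related v's.

def optimistic_lower(relations, U, X):
    """u belongs if SOME relation maps u entirely inside X."""
    return {u for u in U if any(rel[u] <= X for rel in relations)}

def optimistic_upper(relations, U, X):
    """Dual: u belongs if EVERY relation's image of u intersects X."""
    return {u for u in U if all(rel[u] & X for rel in relations)}

U = {"u1", "u2", "u3"}
X = {"v1", "v2"}  # target concept in the second universe V
R1 = {"u1": {"v1"}, "u2": {"v2", "v3"}, "u3": {"v3"}}
R2 = {"u1": {"v1", "v3"}, "u2": {"v2"}, "u3": {"v3"}}
print(sorted(optimistic_lower([R1, R2], U, X)))
print(sorted(optimistic_upper([R1, R2], U, X)))
```

With a single relation both definitions collapse to the ordinary rough set on two universes, which is the expected boundary case of the algebraic properties studied here.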