A multi-channel wireless EEG (electroencephalogram) acquisition and recording system is developed in this work. The system includes an EEG sensing and transmission unit and a digital processing circuit. The former is composed of pre-amplifiers, filters, and gain amplifiers. The kernel of the latter is a micro-controller unit (MCU, TI MSP430), which converts the EEG signals into digital form and performs the digital filtering. By means of a Bluetooth communication module, the digitized signals are sent to a back-end device such as a PC or PDA. Thus, the patient's EEG signal can be observed and stored without long cables, so the analogue distortion caused by long-distance transmission is reduced significantly. Furthermore, an integrated classification method, consisting of the non-linear energy operator (NLEO), an autoregressive (AR) model, and the bisecting k-means algorithm, is also proposed to perform off-line EEG clustering at the back-end. ...
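The non-linear energy operator mentioned in the abstract can be sketched in a few lines; this is the standard Teager-Kaiser form, and the sample signal below is illustrative, not actual EEG data.

```python
# Sketch of the non-linear energy operator (NLEO) used as a feature
# extractor for signal segments: psi[n] = x[n]^2 - x[n-1] * x[n+1].
def nleo(x):
    """Apply the Teager-Kaiser non-linear energy operator to a 1-D signal."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# A unit-amplitude oscillation yields a constant energy output.
signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
print(nleo(signal))  # → [1.0, 1.0, 1.0, 1.0]
```

The operator responds to both amplitude and frequency changes, which is why it is popular as a cheap pre-processing step before model fitting and clustering.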
Currently, many vendors offer data loss prevention solutions, but these must cope with the challenges of the modern world, such as the access control problem and the social network analysis problem, to better protect the increasing volume of data. To handle this task, DLP systems have to apply modern statistical algorithms to data protection. In this paper, we provide an overview of contemporary machine learning algorithms that can help DLP systems better detect and analyze data.
Data mining and knowledge discovery in databases have been attracting a significant amount of research, industry, and media attention of late. There is an urgent need for a new generation of computational theories and tools to assist researchers in extracting useful information from the rapidly growing volumes of digital data.
Data Leakage or Loss Prevention (DLP), also known as information leak prevention (ILP) or information protection and control (IPC), is technology developed to prevent data from leaking out intentionally or accidentally. Data loss prevention systems differ from conventional security controls such as firewalls or intrusion detection systems (IDS) in terms of dedication and proactivity: conventional security controls pay less attention to the actual content of the data. Although there are some academic DLP studies in the literature, there are very few studies on industrial solutions. This study established a DLP system for the Social Security Institution (Sosyal Güvenlik Kurumu, SGK) of Turkey. SGK is one of Turkey's biggest institutions, with 28,000 employees. Installation methods and the problems encountered were recorded objectively, and important points about implementing industrial DLP systems in large institutions are highlighted.
Data mining plays an important role on the internet; together with computer technology, it makes it easy to collect information from related data sets. The method used in this paper is the decision tree algorithm, applied here to classify data elements under a set of constraints; we use this method to suppress data and thereby secure it. We extend earlier work on microdata suppression (1) to prevent not only probabilistic but also decision tree classification based inference, and (2) to handle not only single but also multiple confidential data value suppression while reducing side-effects. The paper aims to enhance data classification and data generalization. It shows how data is secured using 'generalization', provides efficiency in data generalization, and discusses some of the major challenges in deciding what kind of data should be suppressed. We consider the following privacy problem: a data ho...
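The generalization idea discussed above can be illustrated with a minimal sketch: specific attribute values are replaced by coarser categories so that a classifier or an observer cannot recover confidential detail. The hierarchy, field names, and record below are hypothetical, not from the paper.

```python
# Illustrative sketch of data generalization: exact values are coarsened
# so inference attacks on the suppressed record become harder.
def generalize_age(age):
    """Map an exact age to a 10-year bucket, e.g. 34 -> '30-39'."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

def generalize_zip(zipcode, keep=3):
    """Keep only the leading digits of a ZIP code, masking the rest."""
    return zipcode[:keep] + "*" * (len(zipcode) - keep)

record = {"age": 34, "zip": "47906"}          # hypothetical microdata record
anonymized = {"age": generalize_age(record["age"]),
              "zip": generalize_zip(record["zip"])}
print(anonymized)  # → {'age': '30-39', 'zip': '479**'}
```

Full suppression (dropping the value entirely) is the limiting case of this coarsening; the paper's contribution concerns choosing which values to suppress so that decision-tree inference is also blocked.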
Fair trade is at a crossroads. This concept, which defines itself as an alternative to conventional trade, was born around fifty years ago. In the late 1980s, non-profit fair trade organizations began labelling fair products to facilitate their entry into large-scale distribution. Sales figures are encouraging in North America, but a decline has appeared in some pioneering European countries. This article presents the fair trade concept and its evolution, as well as the debate on the introduction of fair products into large-scale distribution. Data on price and shelf spacing were collected in the coffee departments of supermarkets in Paris and its close suburbs. The results of the discriminant analysis show that fair trade is the first factor explaining the differences between stores, and that store strategy is related to the retail chain. Keywords: fair trade, large-scale distribution, data analysis
Internet and Web technology is starting to penetrate many aspects of our daily life. Its importance as a medium for business transactions will grow exponentially over the next years. In terms of market volume, the B2B area will be the most interesting one, and it is also where the new technology will lead to drastic changes.
The paper proposes a novel method for extremely fast inverse kinematics computation, suitable for fast-moving manipulators and their path planning, and for the animation of anthropomorphic limbs. In a preprocessing phase, the workspace of the robot is decomposed into small cells, and data sets of joint angle vectors (configurations) and hand positions/orientations (postures) are generated randomly in each cell.
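The cell-decomposition idea can be sketched for a planar 2-link arm: offline, random joint configurations are binned by the workspace cell their hand position falls into; online, inverse kinematics reduces to a table lookup plus a nearest-posture refinement. The link lengths, cell size, and sample count below are assumptions for illustration, not the paper's parameters.

```python
import math
import random
from collections import defaultdict

# Assumed planar 2-link arm with unit link lengths and 0.25-wide cells.
L1, L2, CELL = 1.0, 1.0, 0.25

def fk(t1, t2):
    """Forward kinematics: hand position of the 2-link arm."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def cell_of(x, y):
    """Index of the workspace cell containing (x, y)."""
    return (int(x // CELL), int(y // CELL))

# Preprocessing phase: random configurations indexed by workspace cell.
random.seed(0)
table = defaultdict(list)
for _ in range(5000):
    t1 = random.uniform(-math.pi, math.pi)
    t2 = random.uniform(-math.pi, math.pi)
    table[cell_of(*fk(t1, t2))].append((t1, t2))

def ik_lookup(x, y):
    """Inverse kinematics by lookup: nearest stored posture in the target cell."""
    candidates = table.get(cell_of(x, y), [])
    if not candidates:
        raise ValueError("no stored configuration for this cell")
    return min(candidates,
               key=lambda c: (fk(*c)[0] - x) ** 2 + (fk(*c)[1] - y) ** 2)

sol = ik_lookup(1.2, 0.8)
print(fk(*sol))  # hand position close to (1.2, 0.8)
```

The accuracy is bounded by the cell size and sample density, which is why the actual method generates data per cell rather than globally.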
Within the security scope, authentication is considered a core mechanism for controlling access to any system. The password is one of the most significant mechanisms for distinguishing the authorized user from others. However, it faces many problems, such as spoofing and the man-in-the-middle attack (MitMA). When an unauthorized user has obtained the correct password, that user is able to access the data and change the previous password, causing significant loss of effort and cost. Similarly, a hacker who does not have the password may try to penetrate the system by predicting a set of words. In fact, both authorized users and hackers sometimes input a wrong password, but an authorized user may have only one or two wrong characters, while the hacker inputs a wholly wrong password. The aim of this paper is to establish an algorithm named "Confidence Range". The main task of this algorithm is to monitor all password-related activities, in terms of time, error, and style, for the authorized user in order to recognize any suspicious activity. For that reason, a unique EPSB, "Electronic Personal Synthesis Behavior", is generated for the authorized user by applying the confidence range algorithm.
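The error component of this idea can be sketched with an edit-distance test: a wrong password that differs from the real one by only a character or two is plausibly a typo by the legitimate user, while a mostly-wrong password looks like a guessing attempt. The threshold and function names below are assumptions for illustration, not the paper's algorithm.

```python
# Sketch: distinguish a likely typo from a guessing attempt by the
# Levenshtein distance between the entered and the actual password.
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    # First row/column: distance from the empty prefix.
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                       # deletion
                          d[i][j - 1] + 1,                       # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[len(a)][len(b)]

def classify_attempt(entered, actual, threshold=2):
    """Hypothetical rule: within `threshold` edits counts as a likely typo."""
    return "likely typo" if edit_distance(entered, actual) <= threshold else "suspicious"

print(classify_attempt("passw0rd", "password"))  # → likely typo
print(classify_attempt("letmein1", "password"))  # → suspicious
```

A real system would never store the plaintext password for such a comparison; the paper's contribution is combining this kind of error signal with timing and style into the EPSB profile.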
Multi-dimensional data classification is an important and challenging problem in many astro-particle experiments. Neural networks have proved to be versatile and robust for multi-dimensional data classification. In this article we study the separation of gamma rays from hadrons for the MAGIC experiment. Two neural networks have been used for the classification task: one is a Multi-Layer Perceptron based on supervised learning, and the other is a Self-Organising Map (SOM), which is based on an unsupervised learning technique. The results are shown, and possible ways of combining these networks are proposed to yield better and faster classification results.
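A minimal self-organising map can be sketched in NumPy to show the unsupervised side of the comparison; the two-cluster toy data, map size, and schedules below are illustrative assumptions, not MAGIC telescope data or the article's configuration.

```python
import numpy as np

# Minimal 1-D self-organising map on toy two-cluster data.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.3, (50, 2)),   # cluster around (-2, -2)
                       rng.normal(2, 0.3, (50, 2))])   # cluster around (2, 2)
weights = rng.normal(0, 0.1, (4, 2))                   # 4 map nodes in a line

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                        # decaying learning rate
    sigma = max(1.0 * (1 - epoch / 20), 0.3)           # shrinking neighbourhood
    for x in rng.permutation(data):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
        grid_dist = np.abs(np.arange(4) - bmu)                # distance on the map
        h = np.exp(-grid_dist ** 2 / (2 * sigma ** 2))[:, None]
        weights += lr * h * (x - weights)              # pull BMU and neighbours

# After training, the ends of the map should sit near the two clusters.
print(weights.round(1))
```

Since the SOM is trained without labels, a labelled calibration step is still needed before its nodes can be read as gamma/hadron decisions, which is one motivation for combining it with the supervised MLP.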
This paper concerns the thesaurus approach, which is part of the theory of the leitmotif. This method is mainly used in philology and aesthetics and involves text corpus creation and text indexing by keywords. We can assume that the founder of this approach in Russia was Korney Chukovsky. Chukovsky's method makes it possible to perform bibliographic and semantic information retrieval, as well as various data classifications.
The thesaurus approach in literary studies, which is part of the theory of the leitmotif, is considered. The founder of this approach in Russia was K. I. Chukovsky. The approach includes the formation of an information corpus and the indexing of texts with evaluative keywords characterizing an author and his works. Chukovsky's method makes it possible to perform author-based, documentary, and lexicographic searches, as well as to carry out various kinds of classification operations.
Data clustering is one of the most essential, common and interesting tasks for classifying patterns in areas such as data mining, pattern recognition, and artificial intelligence. The objective of data clustering is to group similar entities. Many different clustering techniques are available for applications of different natures. Data clustering techniques are categorized into two types: partitioning procedures and hierarchical procedures. Hierarchical clustering creates a hierarchy of clusters resembling a tree, and its results are shown as a dendrogram. Partitioning methods make various partitions of the objects and evaluate them by some criterion. In this paper, we present a critical review of several papers and identify strengths and weaknesses of the different clustering techniques. The purpose of this overview is to compare and evaluate the clustering techniques and find their pros and cons. The comparison points to the more promising approaches for future research in data clustering.
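The partitioning family can be illustrated with the textbook k-means loop: alternate between assigning each point to its nearest centroid and recomputing centroids as cluster means. The toy 1-D data and k below are illustrative assumptions.

```python
import random

# Minimal sketch of a partitioning method (k-means) on toy 1-D data.
def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initialise from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assignment step
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]  # update step
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.8, 10.0, 10.2]
print(kmeans(data, 2))  # → approximately [1.0, 10.0]
```

Hierarchical procedures differ in that they never fix k up front: divisive methods split the data top-down and agglomerative methods merge points bottom-up, producing the dendrogram mentioned above.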
Structural health monitoring is a problem which can be addressed at many levels. One of the more promising approaches used in damage assessment problems is based on pattern recognition. The idea is to extract features from the data that characterize only the normal condition and to use them as a template or reference. During structural monitoring, data are measured, and the appropriate features are extracted and compared (in some sense) to the reference. Any significant deviation from the reference is considered signal novelty or damage. In this paper, the corpus of symbolic data analysis (SDA) techniques is applied, on the one hand, to classify different structural behaviors and, on the other hand, to compare any structural behavior to the previous classification when new data become available. For this purpose, both raw information (acceleration measurements) and processed information (modal data) are used for feature extraction. Several SDA techniques are applied for data classification: hierarchy-divisive methods, dynamic clustering, and hierarchy-agglomerative schemes. Results from experimental tests performed on a railway bridge in France are presented to show the efficiency of the described methodology. The results show that the SDA methods are effective in classifying and discriminating structural modifications, whether using the vibration data or the modal parameters. In general, both the hierarchy-divisive and dynamic cloud methods produce better results than the hierarchy-agglomerative method, and modal data give more robust results than raw measurement data.
A decision-in decision-out fusion architecture can be used to fuse the outputs of multiple classifiers from different diagnostic sources. In this paper, Dempster-Shafer Theory (DST) is used to fuse classification results of breast cancer data from two different sources: gene-expression patterns in peripheral blood cells and Fine-Needle Aspirate Cytology (FNAc) data. Classification for each individual source is done by a Support Vector Machine (SVM) with linear, polynomial and Radial Basis Function (RBF) kernels. The output beliefs of the classifiers for both data sources are combined to arrive at one final decision. Dynamic uncertainty assessment is based on class differentiation of the breast cancer. Experimental results show that the proposed breast cancer data fusion methodology outperforms single classification models.
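The combination step can be sketched with Dempster's rule for two basic belief assignments over a two-class frame; the mass values and class names below are illustrative, not results from the paper.

```python
# Sketch of Dempster's rule of combination for two mass functions.
# Focal elements are frozensets of hypotheses from the frame of discernment.
def combine(m1, m2):
    """Combine two basic belief assignments; normalises out the conflict."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb          # mass assigned to the empty set
    return {k: v / (1 - conflict) for k, v in combined.items()}

theta = frozenset({"benign", "malignant"})   # full frame = ignorance
m_gene = {frozenset({"malignant"}): 0.6, theta: 0.4}  # e.g. gene-expression SVM
m_fnac = {frozenset({"malignant"}): 0.7, theta: 0.3}  # e.g. FNAc SVM
fused = combine(m_gene, m_fnac)
print(fused[frozenset({"malignant"})])  # belief in malignancy ≈ 0.88
```

Two moderately confident classifiers thus reinforce each other into a stronger joint belief, which is the effect the fusion architecture exploits; the dynamic uncertainty assessment in the paper governs how much mass each classifier leaves on the full frame.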
With huge amounts of biomedical data being generated every day, manually extracting statistical information about the chemicals mentioned in such huge databases is tedious and time-consuming. Our system is designed mainly for naive users and aims to automate data collection and knowledge extraction from the chemical literature in a user-friendly and efficient way on the Hadoop platform. The system downloads abstracts related to the disease of interest from the PubMed database. The text of the abstracts is then extensively parsed for chemicals such as protein/gene names and chemical compound names, which are classified into different classes. This analysis would prove helpful in various biomedical and pharmaceutical industries. The extraction of important information is done using the LingPipe API, which is given a training dataset and classifies the extracted bioentities into their respective classes. The system, being deployed on the Hadoop platform, provides a scala...