Document Clustering With Map Reduce Using Hadoop Framework
Volume: 3 Issue: 1
ISSN: 2321-8169
409 - 413
_______________________________________________________________________________________________
M. Ramakrishna Murty
Department of IT,
GMRIT, Rajam, AP, India
msatishmtech@gmail.com

Department of CSE,
GMRIT, Rajam, AP, India
ramakrishna.malla@gmail.com
Abstract: Big data refers to collections of data sets so enormous and complex that they are difficult to process and analyse with normal database management tools or traditional data processing applications. Big data poses many challenges; the main problem is storing the data and retrieving it efficiently through search engines. Document data is also growing rapidly in the era of the internet, and analysing it is important for many applications. Document clustering is one of the key techniques for analysing document data, with applications such as organizing large document collections, finding similar documents, recommendation systems, duplicate content detection, and search optimization. This work is motivated by the recognition of the need for efficient retrieval of data from massive data repositories through search engines. It focuses mainly on clustering collections of documents in an efficient manner using MapReduce.

Keywords: Document Clustering, Map-Reduce, Hadoop, Document pre-processing
__________________________________________________*****_________________________________________________
I. INTRODUCTION
Document clustering is the application of cluster analysis to textual documents. It has many applications, such as organizing large document collections, finding similar documents, recommendation systems, duplicate content detection, and search optimization, and it has been considered for use in a number of different areas of text mining and information retrieval. Many search engines are used for information retrieval, but the main challenge a search engine faces is presenting results that are relevant to the user. Even though many knowledge discovery tools exist to filter, order, classify, or cluster search results, users still have to make an extra effort to find the required document. To address this, web-mining-based data mining techniques can be combined: the web documents in each cluster can be pre-processed and clustered with MapReduce.
Nowadays a huge amount of data is produced by social networking websites, e-commerce websites, and many organizations. Analysing this data is a tedious task for any organization, and conventional database management techniques may not be sufficient for it; this is why big data came into existence.
Big data is a collection of data sets that are too enormous and complex to manipulate or cross-examine with standard algorithms or techniques. It has become a very popular term used to describe the exponential growth and availability of both structured and unstructured data. The boundary between big data and small data is relative: what counts as small data for one organization can be big data for another.
The challenges of big data can be categorized into three groups. 1. Volume: a big volume of data is gathered, and it keeps growing daily. 2. Variety: the data comes in all sorts of varieties and is largely unorganized; it includes audio, video, images, text messages, emails, documents, books, log files, public and private records, and transactions, so big data contains both structured and unstructured data. 3. Velocity: data arrives continuously and at high speed, and must be captured and processed quickly. Big data is really challenging precisely because of this variety, which ranges from structured data, such as the transactions we make, calculate, and store daily, to unstructured data such as audio files and multimedia presentations.
The underlying Hadoop Distributed File System
(HDFS) utilized by the Hadoop framework is targeted at
providing high throughput at the cost of increased latency.
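As a small illustration of reading a file through this file system, the sketch below uses the standard Hadoop FileSystem API. It is a minimal example under our own assumptions: the path /data/docs/doc1.txt is a hypothetical placeholder, and the configuration is assumed to point at a running HDFS.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();    // picks up core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);        // connects to the configured file system
            Path path = new Path("/data/docs/doc1.txt"); // hypothetical document path
            try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(fs.open(path)))) {
                String line;
                while ((line = reader.readLine()) != null) { // stream the file line by line
                    System.out.println(line);
                }
            }
        }
    }

Because HDFS streams large blocks sequentially, reads like this achieve high throughput even though the latency of any single access is higher than on a local disk.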
B. Stemming
Stemming is the process of reducing modified words to their stem, base, or root form, which is generally a written word form. Affixes such as -ing and -er are stripped from a word to obtain its stem; the term is used with slightly different meanings in the literature. A stemming algorithm removes derivational and inflectional endings in order to reduce word forms to a common stem. In the stemming algorithm used here, suffixes and prefixes are eliminated according to the conditions under which the stemming procedure is applied.
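The paper does not spell out its exact stemming rules, so the following is only a minimal sketch of rule-based suffix stripping in the spirit described above; a production system would typically use a full Porter-style stemmer instead.

    public class SimpleStemmer {
        // A few common inflectional suffixes, checked longest first.
        private static final String[] SUFFIXES = {"ingly", "edly", "ing", "ed", "er", "es", "s"};

        public static String stem(String word) {
            String w = word.toLowerCase();
            for (String suffix : SUFFIXES) {
                // Only strip when a non-trivial stem remains.
                if (w.endsWith(suffix) && w.length() - suffix.length() >= 3) {
                    return w.substring(0, w.length() - suffix.length());
                }
            }
            return w;
        }

        public static void main(String[] args) {
            System.out.println(stem("Clustering")); // prints "cluster"
            System.out.println(stem("documents"));  // prints "document"
        }
    }

Such a stemmer maps "cluster", "clusters", and "clustering" to the same token, which shrinks the vocabulary before the weighting step.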
C. Clustering
Clustering is the grouping of similar objects. A cluster is a collection of data objects that are similar to one another and dissimilar to the objects in other clusters. A cluster of data objects can be treated collectively as one group and so may be considered a form of data compression. Objects within a cluster have high similarity to one another but are dissimilar to objects in the other clusters. The clustering principle is to maximize the intra-class similarity and minimize the inter-class similarity.
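For the K-means algorithm used later in this work, this principle corresponds to minimizing the within-cluster sum of squared distances (a standard formulation, restated here for reference):

    \min_{C_1,\dots,C_k} \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2,
    \qquad \mu_i = \frac{1}{|C_i|} \sum_{x \in C_i} x

where C_i is the i-th cluster and \mu_i its centroid (mean).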
III. MAP REDUCE
MapReduce is a data-parallel paradigm with message passing. It is a pipelined procedure with two phases, a map phase and a reduce phase, and a higher-level abstraction: programmers need only specify what the mapper and the reducer should do. MapReduce is a software framework for easily writing programs that process huge amounts of data in parallel on large clusters of commodity hardware in a fault-tolerant, reliable manner. A MapReduce job usually splits the input data into independent portions which are processed by the map tasks in parallel. The framework sorts the outputs coming from the maps and gives them to the reduce tasks as input. Typically both the input and the output of the job are stored in the file system. MapReduce operates entirely on <key, value> pairs: the framework sees the input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the output of the job. The map function is applied to every input key-value pair and generates intermediate key-value pairs. These intermediate <key, value> pairs are sorted and grouped by key. The reduce function is applied to the sorted and grouped intermediate key values, and emits the resultant key-values.
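To make this <key, value> flow concrete, the classic word-count job is sketched below in Hadoop's Java MapReduce API; this generic example is ours, not taken from the paper. The mapper emits <word, 1> for every token, the framework sorts and groups the pairs by word, and the reducer sums the counts.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCount {
        // Map: (byte offset, line of text) -> (word, 1) for each token in the line.
        public static class TokenizerMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reduce: (word, [1, 1, ...]) -> (word, total count).
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }
    }

A driver would then configure a Job with these two classes, the input path, and the output path, and submit it to the cluster.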
A. Map/Reduce Algorithm:
Mapper: each map task reads the current k centers, compares every input document with each center, and emits the pair <nearest center, document>.
Reducer: each reduce task receives one center together with all the documents assigned to it, averages those documents, and emits the new center.
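A minimal sketch of one such iteration in Hadoop's Java MapReduce API follows. It is only an illustration under assumptions not stated in the paper: documents are assumed to be stored one per line as docId<TAB>comma-separated weights, the current centers are assumed to sit in an HDFS side file whose path is passed as kmeans.centers.path, and the helper names parseVector and cosine are our own.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class KMeansIteration {

        // Parse "0.12,0.0,0.87,..." into a dense vector.
        static double[] parseVector(String csv) {
            String[] parts = csv.split(",");
            double[] v = new double[parts.length];
            for (int i = 0; i < parts.length; i++) v[i] = Double.parseDouble(parts[i]);
            return v;
        }

        // Cosine similarity between two term-weight vectors.
        static double cosine(double[] a, double[] b) {
            double dot = 0, na = 0, nb = 0;
            for (int i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
                na += a[i] * a[i];
                nb += b[i] * b[i];
            }
            return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
        }

        // Map: emit <index of the nearest center, document vector>.
        public static class AssignMapper extends Mapper<LongWritable, Text, IntWritable, Text> {
            private final List<double[]> centers = new ArrayList<>();

            @Override
            protected void setup(Context context) throws IOException {
                // Read the current centers, one vector per line, from a side file in HDFS.
                Configuration conf = context.getConfiguration();
                Path path = new Path(conf.get("kmeans.centers.path"));
                FileSystem fs = FileSystem.get(conf);
                try (BufferedReader r = new BufferedReader(new InputStreamReader(fs.open(path)))) {
                    String line;
                    while ((line = r.readLine()) != null) centers.add(parseVector(line));
                }
            }

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split("\t");  // docId \t vector
                double[] doc = parseVector(fields[1]);
                int best = 0;
                double bestSim = -1;
                for (int i = 0; i < centers.size(); i++) {       // pick the most similar center
                    double sim = cosine(doc, centers.get(i));
                    if (sim > bestSim) { bestSim = sim; best = i; }
                }
                context.write(new IntWritable(best), new Text(fields[1]));
            }
        }

        // Reduce: average all document vectors bound to a center to obtain the new center.
        public static class RecomputeReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
            @Override
            protected void reduce(IntWritable key, Iterable<Text> values, Context context)
                    throws IOException, InterruptedException {
                double[] sum = null;   // initialized on the first value; reduce always gets >= 1
                long n = 0;
                for (Text t : values) {
                    double[] v = parseVector(t.toString());
                    if (sum == null) sum = new double[v.length];
                    for (int i = 0; i < v.length; i++) sum[i] += v[i];
                    n++;
                }
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < sum.length; i++) {
                    if (i > 0) sb.append(',');
                    sb.append(sum[i] / n);                       // arithmetic mean per dimension
                }
                context.write(key, new Text(sb.toString()));
            }
        }
    }

A driver would run this job once per iteration, replacing the centers file with the reducer output, until the centers stop moving.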
B. K-means Clustering:
K-means clustering chooses k initial points and marks each as the center of one of the k sets. Then, for every item in the data set, it determines which of the k centers the item is closest to. It then finds the new center of each set by averaging the points that were assigned to it. With this new set of centers (centroids), it repeats the process until convergence is reached.
The implementation of document clustering on MapReduce accepts two input directories: one is the documents directory holding the document vectors, and the other is the centers directory holding the current k centers. In the map phase, each document is compared with every center, and the map function emits a record containing the entire document's data and its chosen k-center. The Reduce function receives a k-center and all documents which are bound to this k-center; it calculates a new k-center and puts the new k-center in the centers directory. To evaluate the distance between any two documents, we use the cosine similarity metric over term frequencies, and we use the arithmetic average to calculate the new k-center. Note that the contents of the documents directory do not change during the process.
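For reference, the cosine similarity between the term-weight vectors d_1 and d_2 of two documents is the standard

    \cos(d_1, d_2) = \frac{d_1 \cdot d_2}{\lVert d_1 \rVert \, \lVert d_2 \rVert}
                   = \frac{\sum_i w_{i,1}\, w_{i,2}}{\sqrt{\sum_i w_{i,1}^2}\,\sqrt{\sum_i w_{i,2}^2}}

where w_{i,j} is the weight of term i in document j. A value close to 1 means the two documents are very similar, so the distance can be taken as 1 minus this value.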
Document clustering for a large collection can be efficiently implemented with MapReduce. Initially we perform pre-processing on the dataset. In the document pre-processing stage, we design a new iterative algorithm to calculate the tf-idf weight of each term on MapReduce, in order to evaluate how important a term is to a document in a corpus. Then K-means clustering is implemented on MapReduce to partition all documents into k clusters, in which each document belongs to the cluster with the nearest mean. The map and reduce functions run on distributed nodes in parallel: each map operation can be processed independently on each node, and all the operations can be performed in parallel. In the map phase, the master node partitions the input into sub-problems and distributes them to the worker nodes.
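The tf-idf weight referred to above is, in its standard form (the paper does not spell out its exact variant):

    \mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \cdot \log\frac{N}{\mathrm{df}(t)}

where tf(t, d) is the frequency of term t in document d, N is the number of documents in the corpus, and df(t) is the number of documents that contain t. Terms that occur often in one document but rarely in the corpus therefore receive the highest weights.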
In the sample run shown in the accompanying screenshot, the input to the algorithm is 7 documents.
V. CONCLUSION AND FUTURE WORK
The main focus of this work is document clustering using the Hadoop framework. Stopword elimination and stemming are used to pre-process the input documents, find the keywords of every document, and turn them into a vector space model for document clustering. We used an iterative algorithm to calculate the tf-idf weight on MapReduce in order to evaluate how important a term is to a document in a corpus. For the clustering itself, the K-means procedure was used in a distributed environment. We used a MapReduce algorithm for efficiently computing pairwise document similarity in large document collections. The map and reduce functions run on distributed nodes in parallel: each map operation can be processed independently on each node, and all the operations can be performed in parallel.
Hadoop with Map/Reduce motivates the need to propose new algorithms for existing applications that so far have had only sequential algorithms. In future work we intend to improve the scalability of the algorithm and to cluster documents with different versions of the K-means algorithm, such as incremental K-means, bisecting K-means, and fuzzy c-means, for more efficient document clustering results.