The rapid growth of the World Wide Web in recent times has given the concept of web crawling remarkable significance. The voluminous amounts of web documents swarming the web pose huge challenges to web search engines, making their results less relevant to users. The abundance of duplicate and near-duplicate web documents creates additional overheads for search engines, critically affecting their performance and quality. The detection of duplicate and near-duplicate web pages has long been recognized as a problem in the web crawling research community, and it is an important requirement for search engines to provide users with relevant results for their queries on the first page, free of duplicate and redundant results. In this paper, we present a novel and efficient approach for the detection of near-duplicate web pages in web crawling. Detection of near-duplicate web pages is carried out before the crawled web pages are stored in repositories. First, keywords are extracted from the crawled pages, and the similarity score between two pages is calculated based on the extracted keywords. Documents with similarity scores greater than a threshold value are considered near duplicates. This detection results in reduced repository memory and improved search engine quality.
Index Terms— Web Crawling, Generic Crawling, Focused Crawling, Web Mining, Search Indexing, Web Parsing, Keyword Extraction, Page Ranking, Similarity Score Calculation
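The abstract outlines a three-step pipeline: extract keywords from each crawled page, compute a pairwise similarity score over the extracted keywords, and flag pairs whose score exceeds a threshold as near duplicates before they are stored in the repository. The sketch below illustrates that pipeline under stated assumptions; the stopword-based keyword extractor, the Jaccard-style weighted-overlap score, and the 0.8 threshold are illustrative choices, not the authors' exact formulation.

```python
# Minimal sketch of keyword-based near-duplicate detection (assumed details,
# not the paper's exact method): extract keywords, score pairwise similarity,
# and flag pairs above a threshold.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "for"}

def extract_keywords(text: str) -> Counter:
    """Tokenize a page's text and keep non-stopword terms with their frequencies."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)

def similarity_score(kw_a: Counter, kw_b: Counter) -> float:
    """Weighted keyword overlap (assumed Jaccard-style measure)."""
    shared = sum((kw_a & kw_b).values())  # min frequency over common keywords
    total = sum((kw_a | kw_b).values())   # max frequency over all keywords
    return shared / total if total else 0.0

def is_near_duplicate(page_a: str, page_b: str, threshold: float = 0.8) -> bool:
    """Flag the pair as near duplicates when the score exceeds the threshold."""
    return similarity_score(extract_keywords(page_a), extract_keywords(page_b)) > threshold

if __name__ == "__main__":
    p1 = "Web crawling collects web documents for the search engine repository."
    p2 = "Web crawling gathers web documents for the search engine repository."
    print(is_near_duplicate(p1, p2))  # True: the pages differ by a single word
```

In a crawler, such a check would run on each newly fetched page against candidate pages already in the repository, so that near duplicates are discarded before storage rather than after indexing.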