- At present working as Assistant Professor in the Department of Computer Science & Engineering (CSE), Babu Banarasi Das Northern India Institute of Technology (BBDNIIT), Lucknow, UP, India. The role includes teaching and guiding students.
Research Interests: Computer Science, Visualization, Information Visualization, Visual Analytics, Coronaviruses, Data Visualization, Data Analytics, Tableau, Big Data Analytics, Big Data / Analytics / Data Mining, Dashboards, Tableau Software, Learning Analytics Dashboards, Coronavirus COVID-19, COVID-19 Epidemic, COVID-19 Pandemic, Economic Effects of COVID-19, and Tableau Dashboards
In today's scenario, technological intelligence is a highly sought-after commodity, even in traffic-based systems. These intelligent systems help not only in traffic monitoring but also in commuter safety, law enforcement, and commercial applications. The proposed Saudi Arabia vehicle license plate recognition system is split into three major parts: first, extraction of the license plate region; second, segmentation of the plate characters; and last, recognition of each character. The task is quite challenging due to the diversity of plate formats and the non-uniform outdoor illumination conditions during image collection. In this paper, recognition of the license plates is achieved by implementing a Learning Vector Quantization (LVQ) artificial neural network. The results are assessed by their completeness on Saudi Arabia vehicle license plate character recognition, and the proposed technique has yielded encouraging results.
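The recognition stage can be sketched with a minimal LVQ1 implementation in plain Python; the function names, learning rate, and data layout below are illustrative assumptions, not taken from the paper:

```python
def lvq_train(samples, labels, prototypes, proto_labels,
              lr=0.1, epochs=20):
    """LVQ1 training sketch: pull the nearest prototype toward a
    sample of the same class, push it away otherwise."""
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # nearest prototype by squared Euclidean distance
            j = min(range(len(prototypes)),
                    key=lambda k: sum((a - b) ** 2
                                      for a, b in zip(prototypes[k], x)))
            sign = 1.0 if proto_labels[j] == y else -1.0
            prototypes[j] = [p + sign * lr * (a - p)
                             for p, a in zip(prototypes[j], x)]
    return prototypes

def lvq_classify(x, prototypes, proto_labels):
    """Assign x the label of its nearest prototype."""
    j = min(range(len(prototypes)),
            key=lambda k: sum((a - b) ** 2
                              for a, b in zip(prototypes[k], x)))
    return proto_labels[j]
```

In a plate-recognition setting the samples would be character feature vectors and the prototype labels the candidate characters.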
Data mining techniques have the ability to discover hidden patterns and correlations among objects in medical data. Many areas adopt data mining techniques, such as marketing, stock trading, and the healthcare sector. The healthcare industry produces gigantic quantities of data that hold complex information about patients and their medical conditions. Data mining has enormous potential to use healthcare data more effectively and efficiently to predict various kinds of disease. In the present-day healthcare industry, heart disease is a term that refers to a large number of health conditions related to the heart, including sudden conditions that directly affect cardiac function. In this paper we use the ROCK algorithm because it uses the Jaccard coefficient, rather than distance measures, to find the similarity between data points or documents when forming clusters.
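As a sketch of the similarity measure the abstract highlights, the Jaccard coefficient and ROCK-style link counts can be computed as follows (function names and the threshold parameter are illustrative, not from the paper):

```python
def jaccard(a, b):
    """Jaccard coefficient between two sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def neighbors(points, theta):
    """ROCK-style neighbor sets: two points are neighbors when their
    Jaccard similarity reaches the threshold theta."""
    return {i: {j for j in range(len(points))
                if i != j and jaccard(points[i], points[j]) >= theta}
            for i in range(len(points))}

def links(points, theta):
    """link(p, q) = number of common neighbors; ROCK merges the
    clusters that maximize a goodness measure based on these links."""
    nb = neighbors(points, theta)
    return {(i, j): len(nb[i] & nb[j])
            for i in range(len(points)) for j in range(i + 1, len(points))}
```

Counting shared neighbors instead of raw distance is what makes ROCK robust on categorical medical attributes.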
At present, a huge number of research articles are available on the World Wide Web in every domain. A research scholar explores research papers to find the appropriate information, which takes considerable time and effort. In this scenario, a researcher needs a way to search for related work starting from a given research article. In the present paper, a method of knowledge extraction from a collection of research articles is presented to evolve a research paper recommendation system (RPRS), which generates recommendations for research articles based on the researcher's choice. The RPRS accumulates the knowledge extracted from the pertinent research articles in the form of a semantic tree. It stores all the textual sub-parts with their counts in nodes, arranged by type so that the leaf nodes store words with their probabilities, the next layer holds sentences with their counts, and above that the abstract. A Bayesian network is applied to construct a likelihood model that extracts the pertinent information from the knowledge tree to construct the recommendation, and each word is scored through its TF-IDF value.
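The word-scoring step can be sketched as a plain TF-IDF computation; this is a hypothetical minimal version, and the paper's exact weighting scheme may differ:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score each word in each document by TF-IDF.

    TF is the raw count of the word in the document; IDF is
    log(N / df), where df is the number of documents containing
    the word and N is the total number of documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # count each word once per document
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return scores
```

Words that appear often in one article but rarely across the collection get the highest scores, which is why TF-IDF is a natural fit for ranking recommendation candidates.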
Research Interests: Recommender Systems, Bayesian Network Classifiers, Bayesian Networks, Knowledge Extraction (Data Mining, Rough Sets, Neural Networks), Text Classification, Text Clustering, Texture Classification, Text Mining, Document Clustering, Data Mining, Context Aware Recommender Systems, Knowledge Extraction, Bayesian Network, and Bayesian Belief Networks
As there is an enormous amount of online research material available, finding pertinent information for specific purposes has become a tedious chore. There is therefore a need for a research paper recommendation system to help research scholars find relevant papers of interest. Many paper recommendation systems are available; most of them depend on paper collections, references, user profiles, or mind maps, information that is generally not easily available. The majority of prevailing recommender systems are based on collaborative filtering, which relies on other users' preferences. Content-based methods, on the other hand, use information about an item itself to make a recommendation. In this paper, we present a research paper recommendation method that is based on a single paper. Our method uses a content-based recommendation approach that employs information extraction and text categorization. It performs profile learning using a naive Bayesian text classifier and generates recommendations on the basis of an individual's preferences.
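The profile-learning step can be sketched with a multinomial naive Bayes classifier with Laplace smoothing; the class and method names below are illustrative assumptions, not the paper's implementation:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Multinomial naive Bayes over tokenized documents, with
    add-one (Laplace) smoothing for unseen words."""

    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)   # per-class word counts
        self.class_counts = Counter(labels)       # class priors
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc)
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, doc):
        best, best_lp = None, float("-inf")
        total_docs = sum(self.class_counts.values())
        for label, n_docs in self.class_counts.items():
            counts = self.word_counts[label]
            total = sum(counts.values())
            lp = math.log(n_docs / total_docs)    # log prior
            for w in doc:                          # log likelihoods
                lp += math.log((counts[w] + 1) / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

Trained on papers a researcher marked relevant versus irrelevant, the classifier scores new papers against that learned profile.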
Research Interests: Information Extraction, Recommendations (Social Computing), Knowledge Extraction (Data Mining, Rough Sets, Neural Networks), Text Classification, Recommendation, Data Mining, Deep Web, Texture Classification, Recommendation Systems, Online Information Extraction, Data Extraction, Learning Profile, and Profile Learning
The gigantic growth of information on the Internet makes discovering information challenging and time consuming. We are surrounded by a plethora of data in the form of blogs, papers, reviews, and comments on different websites. Recommender systems offer a solution to this situation by automatically capturing user interests and recommending related information the user may also find relevant. The purpose of developing recommender systems is to reduce information overload by retrieving the most pertinent knowledge and services from an enormous amount of data, thereby providing personalized services. The most vital feature of a recommender system is its ability to "guess" a user's preferences and interests by examining the behavior of this user and/or the behavior of other users to produce personalized recommendations. Several research works have been done in this area, but nothing consolidated has been reviewed. In this paper, we present a brief summary of the shortcomings of available recommender systems, and we attempt to characterize these shortcomings in order to develop a new method that addresses them.
Research Interests: Recommender Systems, Semantic Similarity, Information Extraction, Media Stereotyping, Data Mining, Deep Web, Stereotyping, Concept Mining, Context Aware Recommender Systems, Music Recommender Systems, AI, Content-Based Filtering, Ontology, and Semantic Similarity Calculation
Soft computing approaches have different capabilities in error optimization for controlling complex system parameters. They provide learning and decision-making support from relevant datasets or from other experts' review experiences, and they can handle a variety of environmental and stability-related uncertainties. This paper explains different soft computing approaches, viz., genetic algorithms and fuzzy logic, through the results of several error-optimization control case studies. Conventional error-optimization control relies on mathematical models that define the dynamic control. Conventional controllers are often inferior to intelligent controllers due to their lack of comprehensibility, and the results show that intelligent controllers provide better control of errors than conventional controllers. Hybridization of techniques, such as fuzzy logic with genetic algorithms, provides better optimization control for designing and developing intelligent systems.
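As an illustration of the genetic-algorithm side of these approaches, a minimal real-coded GA minimizing an error function might look like this; the population size, operators, and rates are illustrative defaults, not taken from the surveyed case studies:

```python
import random

def genetic_minimize(error_fn, bounds, pop_size=30, gens=60,
                     mut_rate=0.2, seed=0):
    """Real-coded GA sketch: truncation selection, averaging
    crossover, and Gaussian mutation, minimizing error_fn.

    bounds: list of (low, high) per parameter."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=error_fn)
        elite = pop[:pop_size // 2]                  # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # crossover
            if rng.random() < mut_rate:                  # mutation
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i]
                               + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=error_fn)
```

In a controller-tuning setting, error_fn would measure the deviation of the controlled system from its setpoint for a given parameter vector.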
Research Interests: Fuzzy Logic, Genetic Algorithms, Neural Networks, Artificial Neural Networks, Fuzzy Logic Programming, Fuzzy Logic and Its Application, Genetic and Evolutionary Algorithms, Soft Computing Approaches, and Genetic Algorithms and Evolutionary Computing
Recommender systems are widely seen as an effective means to combat information overload, as they help us narrow down the number of items to choose from and assist us in making better decisions at a lower transaction cost. Hence, recommender systems have become omnipresent in e-commerce and are increasingly used in services in other domains, both online and offline, where the number of items exceeds our capacity to consider them all individually. Research paper recommender systems are software applications that help individual users discover the research papers most relevant to their needs. These systems use filtering techniques to create recommendations, categorized mainly into collaborative filtering, content-based techniques, and hybrid algorithms. In addition, they assist in decision making by providing both personalized and non-personalized product information, summarizing community opinion, searching research papers, and providing community critiques. As a result, recommender systems have been shown to improve decision making.
Research Interests: Information Retrieval, Recommender Systems, Music Information Retrieval, Tagging Technologies, Collaborative Filtering, Tagging, Context Aware Recommender Systems, Content-Based and Collaborative Filtering, E-Commerce, Electronic Commerce, Collaborative Filtering and Knowledge Management, and Recommendation Systems (RS)
In the last twelve years, the number of web users has increased sharply, leading to intense advancement in web services and growth of usage data at higher rates. The purpose of a recommender system is to generate meaningful recommendations to a collection of users for items or products that might interest them. Recommender systems differ in the way they analyze these data sources to develop notions of affinity between users and items, which can be used to identify well-matched pairs. Recommender system technology aims to help users find items that match their personal interests, and it has been used successfully in e-commerce applications to deal proficiently with problems related to information overload. In this paper, we present an extensive survey of six existing recommendation approaches. Collaborative filtering systems analyze historical interactions alone, while content-based filtering systems are based on profile attributes; hybrid techniques attempt to combine both of these designs; demographic-based recommender systems aim to categorize the user based on personal attributes and make recommendations based on demographic classes; knowledge-based recommendation attempts to suggest objects based on inferences about a user's needs and preferences; and utility-based recommender systems make recommendations based on the computation of the utility of each item for the user. We identified 60 research papers on recommender systems published between 1971 and 2014. Only a few of these papers had an influence on research paper recommender systems in practice. We also recognized a lack of authority and long-term research interest in the field: 78% of the authors published no more than one paper on research paper recommender systems, and there was little cooperation among different co-author groups.
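The collaborative filtering category above can be sketched with a minimal user-based predictor; the data layout and function names are illustrative, and real systems add rating normalization and neighborhood selection:

```python
import math

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_rating(ratings, user, item):
    """User-based CF: predict a rating as the similarity-weighted
    average of other users' ratings for the item.

    ratings: {user: {item: rating}}."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None
```

Content-based filtering would replace the user-user similarity with item-feature similarity against the target user's profile.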
Research Interests: Collaborative Filtering, Knowledge-Based Systems, Recommender System, Recommendation Systems, Utility-Based Recommendation, Hybrid Methods, Knowledge Sources, Recommendation Systems (RS), Content-Based Multimedia Recommendation Systems, Utility-Based Methods, Content-Based Methods, and Demographic-Based Methods
Big data is a name used ubiquitously nowadays in the distributed paradigm on the web. As the name points out, it refers to collections of very large data sets, in petabytes, exabytes, and beyond, along with the systems and algorithms used to analyze this enormous data. Hadoop has proven to be the go-to technology for processing enormous data sets. MapReduce is a well-known solution for computations that need one pass to complete, but it is not efficient for use cases that need multi-pass computations: the job output between every stage has to be stored in the file system before the next stage can begin, so the method is slowed by disk input/output operations and replication. Additionally, the Hadoop ecosystem doesn't have every component needed to complete a big data use case; to run an iterative job, you have to stitch together a sequence of MapReduce jobs and execute them in order, each with high latency and each depending on the completion of the previous stage. Apache Spark is one of the most widely used open-source processing engines for big data, with rich language-integrated APIs and an extensive range of libraries. It is a general framework for distributed computing that offers high performance for both batch and interactive processing. In this paper we present a close-up view of Apache Spark, its features, and working with Spark on Hadoop. In a nutshell, we discuss Resilient Distributed Datasets (RDDs), RDD operations, features, and limitations. Spark can be used alongside MapReduce in the same Hadoop cluster or on its own as a processing framework. The paper closes with a comparative analysis between Spark and Hadoop MapReduce.
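The contrast drawn above (per-stage disk writes in MapReduce versus in-memory chained transformations in Spark) can be illustrated with a toy lazy-evaluation class in plain Python. This is not the real Spark API, only a sketch of the lazy transformation/action split of RDDs:

```python
class MiniRDD:
    """Toy in-memory 'RDD': transformations (map, filter) are
    recorded lazily and only executed when an action (collect) is
    called, so intermediate results stay in memory instead of being
    written to disk between stages as in MapReduce."""

    def __init__(self, data, ops=None):
        self.data = data
        self.ops = ops or []          # deferred transformation pipeline

    def map(self, f):
        return MiniRDD(self.data, self.ops + [("map", f)])

    def filter(self, f):
        return MiniRDD(self.data, self.ops + [("filter", f)])

    def collect(self):
        out = self.data
        for kind, f in self.ops:      # run the whole chain in one pass
            out = [f(x) for x in out] if kind == "map" \
                  else [x for x in out if f(x)]
        return out
```

A chained pipeline such as `MiniRDD(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0).collect()` materializes nothing until `collect`, which is the property that makes iterative algorithms cheap in Spark.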
Big data brings novel technology, skills, and processes to your information architecture and to the people who design, operate, and use it. Big data describes a holistic information management approach that incorporates and integrates numerous new types of data and data management alongside conventional data. Hadoop is an open-source software framework licensed under the Apache Software Foundation that supports data-intensive applications running on huge grids and clusters, offering scalable, reliable, distributed computing. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. In this paper, we discuss a taxonomy for big data and the components of Hadoop technology. Ultimately, big data technologies are necessary for providing more accurate analysis, which may lead to more concrete decision making, resulting in greater operational efficiency, cost reduction, and reduced risk for the business.
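The MapReduce component of Hadoop can be illustrated with the classic word-count example, simulated here in plain Python in the Hadoop-streaming style; the real framework distributes these map, shuffle, and reduce phases across the cluster:

```python
from collections import defaultdict

def mapper(line):
    """Map step: emit a (word, 1) pair for every word in the line."""
    return [(w.lower(), 1) for w in line.split()]

def shuffle(pairs):
    """Group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reducer(groups):
    """Reduce step: sum the counts emitted for each word."""
    return {k: sum(vs) for k, vs in groups.items()}

def word_count(lines):
    pairs = [p for line in lines for p in mapper(line)]
    return reducer(shuffle(pairs))
```

Because each mapper sees only its own lines and each reducer only its own keys, the computation parallelizes across machines with local storage, which is the design point the abstract describes.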
Research Interests: Visualization, Information Visualization, Data Visualization, Hadoop, Big Data, YARN, Big Data Analytics, Hama, Apache Hadoop, MapReduce and Hadoop, NoSQL, Storage Infrastructure, Big Data Technologies, Big Data Applications, JobTracker, Avro, and Data Domains
In order to survive and stay ahead in today's competitive world, companies are stretched to their limits in search of organizational skills and technologies. Among these, Supply Chain Management (SCM) and Enterprise Resource Planning (ERP) are the two most commonly used terms. The most important factors here are to improve the speed of production, at the least cost and with greater efficiency, in order to stay in and survive the competition in the present globalized economic scenario. There is a serious need for integration of information across the supply chains and proper planning of enterprise resources. On one hand, the supply chains will enhance the efficiency of movement of various inputs; on the other hand, ERP will improve the overall efficiency of resources to bring down the cost of production and operations. Once a firm is able to achieve the best quality output at the least cost, it can attract more consumers to its products. This research paper throws light on how the integration of SCM (Supply Chain Management) and ERP (Enterprise Resource Planning) would help a company achieve greater competitive advantage, with the case examples of Cadbury and Nokia.
In the present globalized marketing environment, marketers have a cherished dream of capturing a sizeable market share compared to their competitors. Particularly in the Indian context, there are innumerable opportunities for the FMCG (Fast Moving Consumer Goods) industry, which has high potential to garner rich returns for various companies. The researchers of this paper have focused on how to gain the benefit of the latest IT technologies in retail supply chains, with reference to C-class towns in Tamil Nadu. The main emphasis of this paper is to identify the strategies followed by various FMCG companies to build close customer relationships. To this end, a study was conducted to understand the role of IT usage in improving the performance of retail supply chains. This paper is exploratory and descriptive in nature; much of the information about the existing supply chains was extracted through interviews with the retailers and wholesalers of FMCG majors. Originality/value: We have studied how the latest developments in IT are being used by the FMCG sector in India, particularly in supply chain management. For this study we selected some C-class towns in Tamil Nadu, using discussions and observation, focusing mainly on companies such as Dabur and HLL.
In the age of digital networks, every high-efficiency, high-profit activity has to harmonize with the internet. Business behaviors and activities are the precursors to achieving high efficiency and high profit; consequently, each business behavior and activity has to be adjusted to integrate with the internet. Business extension and promotion activities carried out over the internet are generally called electronic commerce (e-commerce). The quality of web-based customer service is the capability of a firm's website to provide individual heed and attention, and in today's scenario personalization has become a vital business problem in various e-commerce applications, ranging over dynamic web content presentations. In our paper, an iterative technique partitions the customers by directly combining transactional data of various consumers, which yields distinct customer behavior for each group, and the best customers are identified by applying the IE (Iterative Evolution), ID (Iterative Diminution), and II (Iterative Intermingle) algorithms. The quality of clustering is improved via Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). In this paper these two algorithms are compared, and it is found that the iterative technique combined with PSO outperforms the ACO variants. Finally, clustering quality is superior, response time improves, and both accuracy and efficiency increase relative to cost.
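The PSO step can be sketched as follows; here the objective is a generic function of a parameter vector, standing in for the cluster-quality measure the paper optimizes, and the coefficients and swarm size are illustrative defaults:

```python
import random

def pso_minimize(objective, bounds, n_particles=20, iters=100, seed=1):
    """Minimal Particle Swarm Optimization: each particle tracks its
    personal best and is pulled toward both it and the global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:         # and possibly global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For clustering, the parameter vector would encode candidate cluster centers and the objective a compactness measure such as the Davies-Bouldin index.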
Research Interests: Clustering and Classification Methods, Ant Colony Optimization, Particle Swarm Optimization, Clustering, Meta-Heuristics (Tabu Search, Genetic Algorithms, Simulated Annealing, Ant Colony, Particle Swarm Optimization), Preprocessing, Data Preprocessing, E-Commerce, Electronic Commerce and E-Business, and the Davies-Bouldin Index
The proposed work is inspired by an experiment that uses expert judgment to estimate cost on the basis of previous project results. In this paper the estimator can use analogical strategies as well as algorithmic strategies, as desired. The proposed method is divided into two phases. The first phase computes the probability of each selected factor using an ant colony system. The second phase combines the values of these factors to calculate the cost overhead for the project using a Bayesian belief network. Once this overhead is computed, productivity is directly calculated, which can be converted into effort and cost. Our computation gives a cost overhead that depends on various factors. To date, the Ant Colony Optimization algorithm has provided solutions for problems that have multiple solutions where the user is interested in the best one; it provides a proper heuristic for the problem and computes the best possible solution, expressing solutions in terms of probability, i.e., the most likely solution and the best solution. It was first introduced for the Traveling Salesman Problem, for finding the minimum-cost path. We have mapped our problem onto a simple graph by using a questionnaire, which yields the minimum-length path, that is, the path with minimum deviation from the nominal project for each factor, and the proposed technique has produced encouraging results.
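The ACO minimum-cost-path idea referenced above can be sketched on a small weighted graph; the graph, parameters, and function names here are illustrative, not the paper's factor graph:

```python
import random

def aco_shortest_path(graph, start, goal, n_ants=20, iters=50,
                      evap=0.5, seed=2):
    """Minimal ant colony optimization for a minimum-cost path.

    graph: {node: {neighbor: edge_cost}}; assumed acyclic so every
    walk terminates. Ants choose edges proportionally to pheromone
    divided by cost; pheromone evaporates, then good paths deposit."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone
    best_path, best_cost = None, float("inf")
    for _ in range(iters):
        paths = []
        for _ in range(n_ants):
            node, path, cost = start, [start], 0.0
            while node != goal and graph.get(node):
                nbrs = list(graph[node])
                weights = [tau[(node, v)] / graph[node][v] for v in nbrs]
                nxt = rng.choices(nbrs, weights=weights)[0]
                cost += graph[node][nxt]
                path.append(nxt)
                node = nxt
            if node == goal:
                paths.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        for e in tau:                         # evaporation
            tau[e] *= (1 - evap)
        for path, cost in paths:              # deposit, cheaper = more
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += 1.0 / cost
    return best_path, best_cost
```

In the paper's setting, the edge costs would come from the questionnaire-derived deviations of each factor from the nominal project.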