Sanjay Singh
  • Department of Information & Communication Technology,
    Manipal Institute of Technology, Manipal-576104, India
  • +91-9611497808
Multicarrier transmission systems such as Orthogonal Frequency Division Multiplexing (OFDM) are a promising technique for high bit rate transmission in wireless communication systems. OFDM is a spectrally efficient modulation technique that can achieve high-speed data transmission over multipath fading channels without the need for powerful equalization techniques. A major drawback of OFDM is the high Peak-to-Average Power Ratio (PAPR) of the transmit signal, which can significantly impact the performance of the power amplifier. In this paper, we compare the PAPR reduction performance of Golay-coded and Reed-Muller-coded OFDM signals. Our simulations show that Golay-coded OFDM achieves better PAPR reduction than Reed-Muller-coded OFDM. Moreover, the code configurations of the Golay and Reed-Muller codes that yield the optimum PAPR reduction have been identified.
Multicarrier transmission systems such as Orthogonal Frequency Division Multiplexing (OFDM) are a promising technique for high bit rate transmission in wireless communication systems. OFDM is a spectrally efficient modulation technique that can achieve high-speed data transmission over multipath fading channels without the need for powerful equalization techniques. However, the price paid for this high spectral efficiency and less intensive equalization is low power efficiency. OFDM signals are very sensitive to nonlinear effects due to their high Peak-to-Average Power Ratio (PAPR), which leads to power inefficiency in the RF section of the transmitter. This paper investigates the effect of PAPR reduction on the performance parameters of a multicarrier communication system. The performance parameters considered are the power consumption of the Power Amplifier (PA) and Digital-to-Analog Converter (DAC), power amplifier efficiency, SNR of the DAC and BER performance of the system. From our analysis it is ...
With the social media boom in today's world, we see people constantly uploading photos of themselves along with their friends and family on various social media platforms such as Facebook, Instagram, Twitter, Google+, etc. What if they want to see all the photos in a categorized form, such as photos with a particular person? In this paper, we extend the concept of Multiview Face Detection using Convolutional Neural Networks (CNN) used by Farfade et al. by providing a tagging system for the detected faces. For face detection, we use the Deep Dense Face Detector, which uses a single model based on deep convolutional neural networks. All the detected faces are recognized using the Local Binary Patterns Histograms (LBPH) method. Precision, recall, and F-measure are the parameters used to measure the performance of the algorithm. An accuracy of 85% is achieved for tagging the faces which are successfully detected.
The chroot system call implemented in Unix-like (or *nix) OSes changes a process's view of the file structure, for the calling process and its children, by changing their root directory. It was intended as an administrative tool and not a security one, and the Linux implementation follows the Portable Operating System Interface (POSIX) standards. However, it is used extensively as a security tool. This difference between the intended use and the actual use of chroot in the Linux implementation has resulted in some features being labeled as security vulnerabilities. These vulnerabilities could allow malicious users to completely circumvent the security aspect of chroot. The methods used in this paper remove the cause of those vulnerabilities, resulting in a more secure construct. Some of those causes are: not changing the Current Working Directory (CWD), not closing file descriptors, and allowing the mounting of file systems inside the newly created environment. In this paper we address these specific issues by modifying the system calls in the system call table and, more generally, present a solution with a good design. The proposed solution aims to improve the design of chroot when used as a security construct.
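The user-space hardening sequence that avoids the CWD and file-descriptor escapes mentioned above can be sketched in a few lines. This is an illustrative sketch, not the paper's kernel-level modification: the function names are hypothetical, and os.chroot requires root privileges.

```python
import os

def chroot_hardening_steps():
    """Ordered steps of a safer chroot, for discussion without needing root."""
    return ["chdir(new_root)", "close inherited fds", "chroot('.')"]

def chroot_securely(new_root):
    """Illustrative sketch (hypothetical helper, requires root): change into
    the new root *before* calling chroot, and drop inherited descriptors that
    could still reference directories outside it."""
    os.chdir(new_root)                            # fix the CWD first
    os.closerange(3, os.sysconf("SC_OPEN_MAX"))   # close inherited fds
    os.chroot(".")                                # re-root at the current dir
```

The ordering is the whole point: calling chroot before chdir leaves the CWD outside the new root, which is one of the escape routes the paper targets.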
A quadrotor Micro Aerial Vehicle (MAV) is designed to navigate a track using a neural network approach to identify the direction of the path from a stream of monocular images received from a downward-facing camera mounted on the vehicle. Current autonomous MAVs mainly employ computer vision techniques based on image processing and feature tracking for vision-based navigation tasks. These require expensive onboard computation and can introduce latency in the real-time system when working with low-powered computers. By using a supervised image classifier, we shift the costly computational task of training a neural network to classify the direction of the track to an offboard computer. We make use of the learned weights obtained after training to perform simple mathematical operations to predict the class of the image on the onboard computer. We compare the computer vision based tracking approach with the proposed approach to navigate a track using a quadrotor and show that the processing rat...
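The onboard inference described above — applying learned weights with simple matrix operations — can be sketched roughly as follows. The layer sizes, random weights, and class labels are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

# Hypothetical "learned" weights for a one-hidden-layer classifier mapping a
# flattened 8x8 grayscale frame to a track direction. In the paper's setting
# these would come from offboard training, not a random generator.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((64, 16)), np.zeros(16)   # input -> hidden
W2, b2 = rng.standard_normal((16, 3)), np.zeros(3)     # hidden -> 3 classes
CLASSES = ["left", "straight", "right"]

def predict_direction(frame):
    """Cheap onboard inference: two matrix multiplies and a ReLU."""
    x = frame.reshape(-1) / 255.0        # flatten and normalise pixels
    h = np.maximum(0.0, x @ W1 + b1)     # hidden layer with ReLU
    logits = h @ W2 + b2
    return CLASSES[int(np.argmax(logits))]
```

The point of the design is that only these few multiply-adds run on the low-powered onboard computer; all gradient computation stays offboard.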
Finding regions of interest is a challenge when traveling in an unfamiliar area. Users traveling in an unfamiliar region may like to travel in a direction which includes places that are interesting to them. In this paper we propose a method of finding nearby regions for potential Points of Interest (POI) (e.g., sightseeing places and commercial centers) while traveling along an undefined path. A continuous algorithm is proposed to address these challenges. Conceptually, the algorithm searches for nearby spatial objects (POIs or geo-tagged tweets). Distance and density are the two factors used to progress as well as stop the search. The search space is constrained using density and distance thresholds along with an adjustment factor that tunes the relative importance of the two. The performance of the continuous algorithm is measured through experiments conducted on spatial data. The experimental results show that the algorithm retrieves all the POIs along an unfamiliar path in real time.
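The distance- and density-bounded search idea can be sketched as below. The scoring rule, parameter names, and thresholds here are illustrative assumptions, not the paper's exact formulation.

```python
import math

def continuous_poi_search(position, objects, d_max=5.0, density_min=0.5):
    """Sketch of a nearby-object search bounded by distance and density.
    `objects` are (x, y) points (POIs or geo-tagged tweets). Candidates beyond
    d_max are pruned; the search stops when the running density (objects
    retrieved per unit distance) drops below density_min."""
    px, py = position
    nearby = sorted((math.hypot(x - px, y - py), (x, y)) for x, y in objects)
    nearby = [(d, obj) for d, obj in nearby if d <= d_max]  # distance bound
    results = []
    for count, (d, obj) in enumerate(nearby, start=1):
        density = count / max(d, 1e-9)     # objects found per unit distance
        if density < density_min:          # density threshold stops the search
            break
        results.append(obj)
    return results
```

An adjustment factor weighing distance against density, as the paper describes, would slot into the stopping condition; it is omitted here for brevity.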
In Artificial Intelligence, games are among the most challenging and widely explored fields. A language game is one such type of open-world game, in which the meaning of a word or phrase plays an important role. Playing and solving such a game depends on the player's ability to find the solution, which in turn depends on the richness of the player's cultural background in understanding and answering the question. This paper presents a challenging game about Indian mythology, which requires knowledge covering a broad range of sources to be stored; this provides the knowledge background for generating candidate solutions. The primary motivation is to create a system that processes the query and the clues provided, by finding the hidden association between the query and the clues within the knowledge repository, and generates a list of multiple candidate solutions. For retrieving the unique solution, the multiple candidate solutions must be rank...
Drivers of vehicles encounter various risks, both external and internal, during their operation of a vehicle on the road. One such risk is the proximity of the vehicle with respect to another vehicle or an obstacle. This IoT based project examines a solution which directs and assists the driver in maintaining a specified safe distance between vehicles on roads, to avoid unsafe conditions leading to accidents. In the event of a lack of attention or temporary distraction on the part of the driver, the system is designed to alert the driver to refocus on the road. The project will attempt to examine the possible risk factors both qualitatively and quantitatively in order to incorporate data-based features into the system. Evaluation of the risk factors will be done through the study of published causes of road accidents in India. The results shall be in the form of a sensor graph which can be used to design the system and offer a standalone solution. The sensor data in the form of sens...
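A "specified safe distance" of the kind such a system enforces is commonly approximated with the standard stopping-distance formula; the reaction time and deceleration values below are illustrative assumptions, not the project's calibrated parameters.

```python
def safe_distance(speed_mps, reaction_s=1.5, decel_mps2=6.0):
    """Approximate stopping distance: reaction distance plus braking distance,
    d = v * t_reaction + v^2 / (2a). Parameter values are assumptions."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def proximity_alert(measured_gap_m, speed_mps):
    """Raise an alert when the sensed gap falls below the safe distance."""
    return measured_gap_m < safe_distance(speed_mps)
```

At 20 m/s (72 km/h) with these assumed values, the safe gap works out to roughly 63 m, so a sensed gap of 10 m would trigger the alert.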
Face recognition has gained great importance in recent years due to the increasing demands of real-world applications. Modality-independent face recognition, also known as heterogeneous face recognition, is useful in many applications. Here, modality refers to the different lighting scenarios in which the picture of the subject is taken. Modality-independent face recognition addresses the issues of low illumination and of discrepancies between modalities, that is, matching one image against another image of the same subject captured from two different sources. To address this issue we have made use of the concept of mutual information. The mutual information between input and gallery images has been used to find the appropriate match between multi-modal images. The idea here is to find the appropriate match between a sketch and a photograph of the same subject. The proposed methodology is useful in crime investigation, law enforcement and surveillance.
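The matching criterion described above — mutual information between an input image and each gallery image — is typically computed from a joint intensity histogram. A minimal sketch follows; the bin count is an illustrative choice, and the paper's exact estimator may differ.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images:
    MI = sum p(a,b) * log( p(a,b) / (p(a) p(b)) )."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)    # marginals
    nz = pxy > 0                                 # avoid log(0)
    outer = px[:, None] * py[None, :]
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / outer[nz])))
```

For matching, the gallery image with the highest mutual information against the probe (e.g., a sketch against photographs) would be declared the match.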
In Non-cooperative Game Theory, a Nash Equilibrium can be computed by finding the best response strategy for each player. However, this problem cannot be solved deterministically in polynomial time. For some finite games, there might be more than one pure strategy equilibrium. In such cases, the optimal set of solutions gives the game equilibria. Evolutionary Algorithms, and specifically Genetic Algorithms, based on the Pareto dominance used in multi-objective optimization do not incorporate Nash dominance and the extent of dominance in finding the equilibria. Many pairs of solutions do not dominate each other under the generative relations of Pareto dominance and Nash Ascendancy. In this paper a fitness function based on the generative relation of Nash Ascendancy is proposed to enhance the comparison of two individuals in a population. It assigns a better fitness value to pairs of individuals that do not dominate each other.
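The Nash Ascendancy relation that the fitness function builds on can be sketched for a two-player bimatrix game as follows. The payoff representation and helper names are illustrative assumptions; the paper's fitness function layers on top of this comparison.

```python
def k_operator(x, y, payoffs):
    """k(x, y): count the players who would do at least as well by unilaterally
    switching from their strategy in profile x to their strategy in profile y.
    `payoffs[i][profile]` gives player i's payoff (a representation assumed
    here for illustration)."""
    count = 0
    for i in range(len(x)):
        if x[i] == y[i]:
            continue
        switched = list(x)
        switched[i] = y[i]
        if payoffs[i][tuple(switched)] >= payoffs[i][tuple(x)]:
            count += 1
    return count

def nash_ascends(x, y, payoffs):
    """x Nash-ascends y when fewer players profit from deviating off x toward
    y than from deviating off y toward x."""
    return k_operator(x, y, payoffs) < k_operator(y, x, payoffs)

# Prisoner's dilemma payoffs: the equilibrium (D, D) ascends (C, C).
u1 = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
u2 = {("C", "C"): 3, ("C", "D"): 5, ("D", "C"): 0, ("D", "D"): 1}
payoffs = [u1, u2]
```

When neither profile ascends the other, the two individuals are incomparable under this relation — exactly the case the proposed fitness function is designed to break ties for.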
Nowadays almost everybody owns a portable communication device, be it a laptop, a tablet or a smartphone. Users would like to have all their services at their fingertips and access them through the portable device they own. A user may exchange data with another user or a service provider, or control the smart appliances at home. The interactions between the user's device and the service provider must be secure, regardless of the type of device used to access or utilize the services. In this paper we propose a "Three Way Authentication (TWA)" technique intended to preserve user privacy and to accomplish ownership authentication in order to securely deliver services to user devices. This technique will also help users or service providers to check whether a device has been compromised, with the help of the encrypted pass-phrases that are exchanged. Users use their devices to store most of their valuable information and will prove risky...
Partial derivatives are used to describe the trend of a dependent categorical variable. This is used to extract qualitative relations from categorical data through the definition of a Probabilistic Discrete Qualitative Partial Derivative (PDQ PD), as covered in the Qube algorithm. However, on analysis of the current method, it is found that a large amount of time is spent in the attribute selection phase. The objective of this paper is to improve the efficiency of this algorithm, in particular by decreasing the time taken for attribute selection. In this paper we modify the Qube algorithm; the modified algorithm reduces the time taken by searching the data set for rows with the same variable values. The ordering is then replicated in each case. This has been found to improve the efficiency of the algorithm, especially on data sets where there are multiple items with the same values.
Software Defined Networking (SDN) is an emerging architecture. SDN has had a major impact on networks by providing network programmability, which helps to handle the explosive growth of smart networks. The design and capabilities of the underlying SDN infrastructure influence the performance of conventional network tasks. Much research effort has been made to study the performance characteristics of SDN networks. In this paper, we propose a mathematical model to analyze the performance of an out-of-band SDN network architecture. We make use of the M/G/1 queueing model to examine the effect of the flow table size, switch load and rule availability on the mean sojourn time. The model is validated using simulation.
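The mean sojourn time in an M/G/1 system rests on the Pollaczek-Khinchine result; the standard form below shows the quantities involved (the paper's model additionally conditions on flow-table size and rule availability, which is not captured here).

```latex
% M/G/1 queue: Poisson arrivals at rate \lambda, general service time S,
% load \rho = \lambda\, E[S] < 1. Mean sojourn time (Pollaczek–Khinchine):
E[T] \;=\; E[S] \;+\; \frac{\lambda\, E[S^2]}{2\,(1-\rho)}
```

The second term is the mean waiting time in the queue; the variability of the service time enters through the second moment E[S^2], which is why the switch's per-rule lookup cost distribution matters and not just its mean.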
Social media sites such as Twitter have given people a platform to express their thoughts, even on the most sensitive of topics. In this paper, we analyze tweets posted by people who acknowledged their sexuality and shared their friends' and families' responses publicly. In our experiment, we have used a set of tweets with specific hashtags to analyze the responses. Context-based topic modeling was used to gather tweets surrounding this topic and retain the relevant ones. The unwanted tweets discovered by the topic model were discarded. The sentiment of these tweets was then extracted using novel methods. The motivation for this process was to find out how the declaration of sexuality was perceived. Unfortunately, homophobia is one of the battles humanity continues to fight.
The aim of this paper is to provide a framework to understand and analyse the intelligence of chat-bots. With the ever-increasing number of chat-bots available, we consider intelligence analysis to be a functional parameter for determining the usefulness of a bot. For our analysis, we consider Microsoft's Twitter bot Tay, released for online interaction in March 2016. We perform various natural language processing tasks on the tweets tweeted by and tweeted at Tay and discuss the implications of the results. We perform classification, text categorization, entity extraction, latent Dirichlet allocation analysis and frequency analysis, and model the vocabulary used by the bot using a word2vec system to achieve this goal. Using the results from our analysis we define a metric called the bot intelligence score to evaluate and compare the intelligence of bots in general.
In today's world, Deep Learning is an area of research with ever-increasing applications. It deals with the use of neural networks to bring improvements in areas like speech recognition, computer vision, natural language processing and several automated systems. Training deep neural networks involves careful selection of appropriate training examples, tuning of hyperparameters and scheduling of step sizes; finding a proper combination of all these is a tedious and time-consuming task. In recent times, a few learning-to-learn models have been proposed that can learn automatically. The time and accuracy of the model are exceedingly important. A technique named meProp was proposed to accelerate Deep Learning with reduced over-fitting. meProp is a sparsified back propagation method which reduces the computational cost. In this paper, we propose an application of meProp to learning-to-learn models to focus learning on the most significant parameters, which are consciously chosen. We demonstrate an improvement in accuracy of the learning-to-learn model with the proposed technique and compare its performance with that of the unmodified learning-to-learn model.
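The core of meProp's sparsified back propagation — propagating only the top-k gradient components by magnitude — can be sketched as a single operation; the framing as a standalone NumPy function is an illustrative simplification of what happens inside a layer's backward pass.

```python
import numpy as np

def sparsify_gradient(grad, k):
    """meProp-style step: keep only the k gradient components with the
    largest magnitude and zero out the rest, so backpropagation updates
    just the most significant parameters."""
    flat = grad.ravel()
    keep = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k |grad|
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(grad.shape)
```

Applying this inside the backward pass of a learning-to-learn optimizer, as the paper proposes, means the meta-learner's updates concentrate on the few parameters with the largest gradient signal, cutting computation and acting as a regularizer.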
A job seeker’s resume contains several sections, including educational qualifications. Educational qualifications capture the knowledge and skills relevant to the job. Machine processing of the education sections of resumes has been a difficult task. In this paper, we attempt to identify educational institutions’ names and degrees from a resume’s education section. Usually, a significant amount of annotated data is required for neural network-based named entity recognition techniques. A semi-supervised approach is used to overcome the lack of large annotated data. We trained a deep neural network model on an initial (seed) set of resume education sections. This model is used to predict entities of unlabeled education sections and is rectified using a correction module. The education sections containing the rectified entities are augmented to the seed set. The updated seed set is used for retraining, leading to better accuracy than the previously trained model. This way, it can provi...
With the exponential rise in the number of Internet users, Social Networking platforms have become one of the major means of communication all over the globe. Many major players exist in this field, including the likes of Facebook, Twitter, Google+, etc. Impressed by the number of users an individual can reach using the existing Social Networking platforms, most organizations and celebrities make use of them to keep in touch with their fans and followers continuously. Social Networking platforms also allow organizations and celebrities to publicize events and to update information regarding their business, just to keep themselves active in the market. Most Social Networking platforms provide some form of metric which can be used to define the popularity of a user, such as the number of followers on Twitter or the number of likes on Facebook. However, in recent years, it has been observed that many users attempt to manipulate their popularity metric with the help of fake accounts to look more popular. In this paper, we devise a method which can be used to detect all the fake followers within a social graph network, based on features related to the centrality of all the nodes in the graph and training a classifier on a subset of the data. Using only graph based centrality measures, the proposed method yielded very high accuracy in fake follower detection. The proposed method is generic in nature and can be used irrespective of the social network platform under consideration.
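A minimal sketch of the centrality-feature idea follows. The paper trains a classifier over several centrality measures; here only degree centrality and a fixed threshold rule are shown as a toy stand-in, and the threshold value is an illustrative assumption.

```python
def degree_centrality(adj):
    """Degree centrality of each node in an undirected social graph given as
    an adjacency dict {node: set(neighbours)}: degree / (n - 1)."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def flag_fake_followers(adj, threshold=0.25):
    """Toy stand-in for the trained classifier: flag nodes whose degree
    centrality is unusually low. A real system would feed several centrality
    features (betweenness, closeness, eigenvector, ...) to a learned model."""
    cent = degree_centrality(adj)
    return {v for v, c in cent.items() if c < threshold}
```

The platform-independence claim follows from the fact that every feature here is computed purely from the follower graph, with no platform-specific metadata.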
In today's information age, a comprehensive stock trading decision support system which aids a stock investor in decision making, without relying on random guesses and reading financial news from various sources, is the need of the hour. This paper investigates the predictive power of technical, sentiment and stock market analysis coupled with various machine learning and classification tools in predicting stock trends over the short term for a specific company. A large dataset spanning ten years has been used to train, test and validate our system. The efficacy of supervised non-shallow and prototyping learning architectures is illustrated by comparing results obtained through a myriad of optimization, classification and clustering algorithms. The results obtained from our system reveal a significant improvement over the efficient market hypothesis for specific companies and thus strongly challenge it. Technical parameters and algorithms used have shown a significant impact on the predictive power of the system. The predictive accuracy obtained is as high as 70-75% using learning vector quantization. It has been found that sentiment analysis has a strong correlation with future market trends. The proposed system provides a comprehensive decision support system which aids decision making for stock trading. We also present a novel application of the BDI framework to systematically apply the learning and prediction phases.
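The learning vector quantization step mentioned above can be sketched with the basic LVQ1 update rule: pull the nearest prototype toward a sample of the same class, push it away otherwise. The data, prototype initialisation, and learning rate below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Basic LVQ1: for each training sample, move the nearest prototype
    toward it if the labels match, away from it if they differ."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = int(np.argmin(np.linalg.norm(P - xi, axis=1)))  # nearest proto
            sign = 1.0 if proto_labels[j] == yi else -1.0
            P[j] += sign * lr * (xi - P[j])
    return P

def lvq1_predict(x, prototypes, proto_labels):
    """Classify by the label of the nearest prototype."""
    return proto_labels[int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))]
```

In a trend-prediction setting, each sample would be a feature vector of technical and sentiment indicators and each class an up/down trend label.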
Botnets are groups of compromised computers that act in a coordinated manner against a target determined by a single point of control. Meta-analysis of botnets is crucial as it yields knowledge about the botnet, often providing valuable information to researchers who are looking to eradicate it. However, meta-analysis has not been applied from a research standpoint to botnet detection and analysis. This paper proposes a framework that uses a modified implementation of the Apriori data mining algorithm on data-sets derived from end-user logs for meta-analysis. It also presents a case study following the proposed approach. The results of this case study present some interesting heuristics that can be used to eradicate the botnet. These heuristics include the indication of vulnerabilities and new trends in botnet malware, among others.
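The mining stage of such a framework rests on Apriori-style frequent-itemset discovery over log records. A minimal, unmodified Apriori sketch follows; the transaction format (each log record as a set of items) is an assumption, and the paper's modified variant would differ in its candidate handling.

```python
def apriori(transactions, min_support):
    """Minimal Apriori frequent-itemset miner: grow itemsets level by level,
    keeping only those whose support meets the threshold."""
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    items = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    current = {s for s in items if support(s) >= min_support}
    while current:
        frequent.update({s: support(s) for s in current})
        # join frequent k-itemsets into candidate (k+1)-itemsets and prune
        current = {a | b for a in current for b in current
                   if len(a | b) == len(a) + 1 and support(a | b) >= min_support}
    return frequent
```

Run over end-user logs, the surviving itemsets correspond to co-occurring indicators (processes, connections, file touches) — the raw material for the eradication heuristics the case study reports.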
Efforts to make online media accessible to a regional audience have picked up pace in recent years with multilingual captioning and keyboards. However, techniques to extend this access to people with hearing loss are limited. Further, owing to a lack of structure in the education of the hearing impaired and to regional differences, the issue of standardization of Indian Sign Language (ISL) has been left unaddressed, forcing educators to rely on the local language to support the ISL structure, thereby creating an array of correlations for each object and hindering the language building skills of a student. This paper aims to present a useful technology that can be used to leverage online resources and make them accessible to the hearing-impaired community in their primary mode of communication. Our tool presents an avenue for the early development of language learning and communication skills essential for the education of children with a profound hearing loss. With the proposed technology, we aim to provide a standardized teaching and learning medium for a classroom setting that can utilize and promote ISL. The goal of our proposed system is to reduce the burden on teachers by acting as a valuable teaching aid. The system allows for easy translation of any online video and correlation with ISL captioning using a 3D cartoonish avatar, aimed at reinforcing classroom concepts during the critical period. First, the video is converted to text via subtitles and speech processing methods. The generated text is understood through NLP algorithms and then mapped to avatar captions, which are rendered to form a cohesive video alongside the original content. We validated our results through a 6-month period and a consequent 2-month study, where we recorded a 37% and a 70% increase in the performance of students taught using sign captioned videos against students taught with English captioned videos. We also recorded a 73.08% increase in vocabulary acquisition through sign aided videos.
Multi-agent systems deal with interactive agents that possess some degree of autonomy and cooperate towards the achievement of their goals. Multi-agent systems often need to operate in unsafe environments, and hence there is a need for a situational awareness and risk assessment security mechanism that can evaluate agent data to detect threats and analyse the risk posed by them. The basic belief-desire-intention (BDI) architecture lacks a framework that would facilitate effective communication to bring about situational awareness among agents in an insecure environment. This paper addresses the problems faced by a group of agents lacking global knowledge and global communications through the introduction of a belief sharing and risk assessment mechanism. We have extended the basic BDI architecture with the concepts of situational awareness and adaptive risk management. With the proposed architecture, agents can gain awareness by exchanging beliefs, ascertaining the truth of such beliefs and measuring the credibility of a peer agent. Moreover, agents can modify their level of alertness by monitoring the risks faced by them and by their peers. This enables the agents to detect and assess the risks they face in an efficient manner, thereby increasing operational efficiency and resistance against attacks.
Social media platforms such as Twitter, Google+ and Facebook have an undeniable effect on the way information is stored and processed by us. The information available on the web abounds, and hence it is essential to mine the important information and avoid the irrelevant details. Along with this, it is beneficial to consider information that is contextually similar to information related to a particular topic, as it provides the big picture. Tweets contain keywords known as hashtags which provide useful information for the purposes of sentiment analysis, named entity recognition, event detection, etc. In this paper, we have analyzed Twitter data based on hashtags, which are widely used nowadays. We have extracted tweets pertaining to a single keyword and to contextually similar keywords. For the purpose of finding similar words we have used word embeddings, which capture contextual information successfully. We have used topic modeling to expose the latent structure of the documents based on probability distributions. The proposed framework helps users find relevant tweets pertaining to a specific hashtag and to contextually similar hashtags.
This study investigates the efficiency of various models used to forecast unemployment rates. The objective of the study is to find the model which most accurately predicts the unemployment rates. It starts with autoregressive models such as the autoregressive moving average model and the smooth transition autoregressive model, and then continues to explore four types of neural networks, namely the multi-layer perceptron, the recurrent neural network, the psi sigma neural network and the radial basis function neural network. In addition to these, it also uses learning vector quantization in combination with a radial basis function neural network. The results show that the combination of learning vector quantization and the radial basis function neural network outperforms all the other forecasting models. It further uses ensemble techniques such as support vector regression and simple averaging to give even more accurate results.
Images are the easiest medium through which people can express their emotions on social networking sites. Social media users are increasingly using images and videos to express their opinions and share their experiences. Sentiment analysis of such large-scale visual content can help better extract user sentiments toward events or topics, such as those in image tweets, so that prediction of sentiment from visual content is complementary to textual sentiment analysis. Significant progress has been made with this technology; however, there has been little research focused on image sentiment. In this work, an image sentiment prediction framework is built with Convolutional Neural Networks (CNN). Specifically, the framework is pretrained on large-scale data for object recognition and then used for transfer learning. Extensive experiments were conducted on a manually labeled Flickr image dataset. To make use of such labeled data, we employ a progressive strategy of domain-specific fine tuning of the deep network. The results show that the proposed CNN training can achieve better performance in image sentiment analysis than competing networks.
In this information age, providing security over the Internet is a major issue. Internet security is fundamentally about trust at a distance, because we deal with everyone remotely and cannot confirm identity or authenticity in the traditional sense. To strengthen password authentication, Chun-Ta Li proposed a smart-card-based password authentication and update scheme that provides user anonymity and eviction of unauthorized users. In our research work we have cryptanalyzed Li's scheme and shown that it is vulnerable to several types of attack: insider attack, offline password-verifier attack, stolen-verifier attack and impersonation attack. To overcome these security vulnerabilities, we propose an improved scheme for password authentication and user anonymity using Elliptic Curve Cryptography (ECC) and steganography. The proposed scheme also provides privacy to the client. Based on performance criteria such as immunity to known attacks and functional features, we conclude that the proposed scheme is more efficient and resists several hard security threats.
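To illustrate why a stolen-verifier attack matters, the hedged sketch below (not the paper's ECC scheme) shows the standard mitigation: the server stores only a salted, iterated hash, so a stolen verifier table does not directly reveal passwords. All names here are illustrative.

```python
# Illustrative password-verifier hardening, not the proposed scheme:
# store a salted PBKDF2 digest instead of password-equivalent data.
import hashlib, os, hmac

def register(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                      # what the server stores

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)   # constant-time compare

salt, digest = register("correct horse")
print(verify("correct horse", salt, digest))  # True
print(verify("wrong guess", salt, digest))    # False
```

The ECC-based scheme in the paper additionally protects the exchanged messages themselves, which a hashed verifier alone does not.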
The analytical computation of the average bit error probability over a fading channel is very difficult when the probability of bit error over an AWGN channel involves the square of the Gaussian Q-function or error function. In this paper a curve-fitting technique is applied to represent the bit error probability of any digital modulation technique in terms of a simple Gaussian function. Using this approximated function, a generalized closed-form relation for computing the average probability of error over a fading channel is obtained.
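The curve-fitting idea can be demonstrated in miniature: approximate the Gaussian Q-function by a single Gaussian a·exp(−b·x²) whose parameters are chosen to minimize the squared error. The coarse grid search below is an illustrative stand-in, not the paper's exact fitting procedure.

```python
# Sketch of fitting Q(x) with a single Gaussian a * exp(-b * x^2)
# by a coarse grid search over (a, b). Ranges are illustrative.
import math

def qfunc(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

xs = [0.5 + 0.1 * i for i in range(40)]          # fit over x in [0.5, 4.4]
target = [qfunc(x) for x in xs]

best = None
for a_try in [i / 100 for i in range(5, 60)]:
    for b_try in [i / 100 for i in range(20, 100)]:
        err_try = sum((a_try * math.exp(-b_try * x * x) - t) ** 2
                      for x, t in zip(xs, target))
        if best is None or err_try < best[0]:
            best = (err_try, a_try, b_try)

err, a, b = best
print(f"Q(x) ~ {a:.2f} * exp(-{b:.2f} x^2), squared error {err:.2e}")
```

Once Q(x) is in this exponential form, averaging over common fading distributions reduces to standard integrals with closed-form answers, which is the paper's point.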
A mobile device like a smartphone is becoming one of the main information processing devices for users these days. Using it, a user not only receives and makes calls, but also performs information processing tasks such as retrieving information about the nearest restaurants, ATMs, etc. With the rapid improvement in the technology of these mobile computing devices, Location Based Services (LBS) have been gaining a lot of attention over the years. The location service provider uses the geographical position of the user to provide services to the end ...
Ours is an age of computing power, with millions of computing devices present and more being developed every day. Multinational organizations, local bodies and even individual users depend on computers for their day-to-day computing needs, and this need is ever increasing. It involves storing and manipulating data ranging from the critical to the personal, which gives rise to various security and misuse issues related to users' data. Employees of these organizations are found chatting, playing games and wasting time on social networking and other unproductive websites during office hours. Many websites are blocked by an organization's proxy; however, there are very simple ways to bypass the proxy and browse these websites anyway. This gives rise to the need for a monitoring system. This paper presents a solution: software that provides a better security mechanism by incorporating USB attachment detection, desktop screenshot capture and network connection detection with visited-website information, all at one point. The developed software can be used as a desktop application by any enterprise or individual user. It can be installed by the system administrator to monitor user activity on the system, and it keeps a log that the administrator can view at any time. There is also a feature to email the log to the administrator or send an alert SMS to his cell phone, so that he has accurate information about the system even when he is at a different location. An enterprise can use it to monitor the activity of its employees, and individuals can use it to safeguard their systems from misuse, thereby encouraging employees to make optimal use of office time.
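The logging core of such a monitor can be sketched simply: each detected event (USB attach, screenshot taken, site visited) is appended as a timestamped line to a log file the administrator can review. The event names and format below are hypothetical, not those of the developed software.

```python
# Hedged sketch of an append-only activity log. Event kinds and the
# tab-separated format are illustrative assumptions.
import datetime, tempfile, os

def log_event(path, kind, detail):
    """Append one timestamped event line to the monitor log."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"{stamp}\t{kind}\t{detail}\n")

log_path = os.path.join(tempfile.gettempdir(), "monitor_demo.log")
open(log_path, "w").close()                      # start a fresh demo log
log_event(log_path, "USB_ATTACHED", "vendor=abcd product=1234")
log_event(log_path, "SITE_VISITED", "example.com")

with open(log_path, encoding="utf-8") as f:
    lines = f.read().splitlines()
print(len(lines))  # 2
```

Emailing or texting the log, as the paper describes, would then be a periodic task that ships this file to the administrator.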
The Extensible Authentication Protocol (EAP) is a framework for transporting authentication credentials. EAP offers simple interoperability and compatibility across authentication methods. In this paper, we model EAP as a finite state machine and check the model for conformance with its specifications to detect possible flaws. The entities in our model are the Authenticator, EAP Server, User and User Database; the messages exchanged between them are modeled as transitions. The model is represented in PROMELA and then verified using the SPIN model checker. This enables us to check the working of the protocol before implementation.
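The modeling style can be illustrated with a toy state machine in Python (the paper itself uses PROMELA and SPIN). The states and messages below are a simplified, hypothetical EAP-like exchange, not the paper's full model.

```python
# Hedged sketch: a protocol as a table of (state, message) -> state.
# An unexpected message in a state signals a possible protocol flaw,
# which is the kind of property a model checker explores exhaustively.
TRANSITIONS = {
    ("IDLE", "EAP_REQUEST_IDENTITY"): "IDENTITY_SENT",
    ("IDENTITY_SENT", "EAP_CHALLENGE"): "RESPONSE_SENT",
    ("RESPONSE_SENT", "EAP_SUCCESS"): "AUTHENTICATED",
    ("RESPONSE_SENT", "EAP_FAILURE"): "IDLE",
}

def run(messages, state="IDLE"):
    """Drive the machine over a message sequence."""
    for msg in messages:
        key = (state, msg)
        if key not in TRANSITIONS:
            raise ValueError(f"unexpected {msg} in state {state}")
        state = TRANSITIONS[key]
    return state

print(run(["EAP_REQUEST_IDENTITY", "EAP_CHALLENGE", "EAP_SUCCESS"]))
```

Where this sketch checks one trace at a time, SPIN exhaustively explores all interleavings of the communicating entities against temporal-logic properties.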
Web technologies provide a platform for Internet users around the world to communicate and express their opinions. Analysis of developing Web opinions is potentially valuable for discovering ongoing topics of interest such as religion, politics and crime detection, understanding how topics evolve together with the underlying social interaction between participants, and identifying important participants who have great influence in various topics of discussion. In this paper, we investigate the density-based clustering algorithm and use a scalable distance-based clustering technique for Web opinion clustering, which gives more reliable and accurate results. We have conducted experiments and benchmarked against the density-based algorithm to show that the new algorithm has better performance. This Web opinion clustering technique enables the identification of themes within discussions in Web social networks and their development, as well as the interactions of active participants. With the help of interactive visualization tools, we use the identified topic clusters to display social network development, the network topology, similarity between topics, and the similarity values between participants. Using this we can compare different threads on social networking sites, extract useful information from them and identify the underlying themes of discussion.
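The density-based baseline the paper benchmarks against can be sketched as a minimal DBSCAN. A real system would cluster opinion feature vectors; the 2-D points here are purely illustrative.

```python
# Minimal pure-Python DBSCAN sketch: core points (>= min_pts
# neighbors within eps, self included) grow clusters; isolated
# points are labeled noise (-1).
import math

def dbscan(points, eps=1.0, min_pts=3):
    UNVISITED, NOISE = None, -1
    labels = [UNVISITED] * len(points)

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not UNVISITED:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = NOISE
            continue
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == NOISE:
                labels[j] = cluster          # border point joins the cluster
            if labels[j] is not UNVISITED:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:
                queue.extend(jn)             # j is a core point: keep growing
        cluster += 1
    return labels

pts = [(0, 0), (0.5, 0), (0, 0.5), (5, 5), (5.5, 5), (5, 5.5), (9, 0)]
print(dbscan(pts))   # two clusters plus one noise point
```

The naive neighbor search here is O(n²) per query overall; the scalable distance-based technique the paper adopts avoids exactly this cost on large opinion corpora.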
In this paper, we address the problems faced by a group of agents lacking global knowledge and global communication by introducing a belief-sharing mechanism. The Belief-Desire-Intention (BDI) architecture lacks a framework that would facilitate effective communication to bring about situational awareness among agents in an insecure environment. We extend the BDI architecture with the concept of situational awareness: agents gain awareness by exchanging beliefs, ascertaining the truth of the retrieved beliefs and measuring the credibility of peer agents. Aware-BDI mainly increases the situational awareness of the agents, thereby increasing operational efficiency and the probability of attack detection.
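The belief-sharing loop can be sketched as follows: an agent accepts a peer's belief only if the peer is credible enough, and adjusts that credibility after checking the belief against observation. The update rule, thresholds and propositions below are illustrative assumptions, not the Aware-BDI algorithm itself.

```python
# Hedged sketch of credibility-weighted belief sharing between agents.
class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}            # proposition -> believed truth value
        self.credibility = {}        # peer name -> trust score in [0, 1]

    def receive(self, peer, proposition, value, threshold=0.5):
        """Accept a shared belief only from a sufficiently trusted peer."""
        score = self.credibility.setdefault(peer.name, 0.5)
        if score >= threshold:
            self.beliefs[proposition] = value

    def verify(self, peer, proposition, ground_truth, step=0.2):
        """After checking a belief against observation, adjust trust."""
        score = self.credibility.setdefault(peer.name, 0.5)
        if self.beliefs.get(proposition) == ground_truth:
            score = min(1.0, score + step)
        else:
            score = max(0.0, score - step)
        self.credibility[peer.name] = score

a, b = Agent("a"), Agent("b")
a.receive(b, "intruder_at_gate", True)    # accepted: default trust 0.5
a.verify(b, "intruder_at_gate", True)     # confirmed -> trust rises
print(a.beliefs["intruder_at_gate"], round(a.credibility["b"], 1))
```

Over repeated exchanges, unreliable peers fall below the acceptance threshold, which is how the mechanism limits the damage misinformed or malicious agents can do.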
Over the past couple of years, the extent of the services provided on mobile devices has increased rapidly. A special class of service among them is the Location Based Service (LBS), which depends on the geographical position of the user to provide services to end users. However, a mobile device is still resource constrained, and some applications demand more resources than a mobile device can afford. To alleviate this, a mobile device should get resources from an external source; one such source is cloud computing platforms, and we can expect the mobile area to boom with the advent of this new concept. The aim of this paper is to exchange messages between the user and the location service provider on a mobile device accessing the cloud while minimizing cost, data storage and processing power. Our main goal is to provide dynamic location-based services and increase information retrieval accuracy, especially on the limited mobile screen, by accessing a cloud application. In this paper we present a location-based restaurant information retrieval system, and we have developed our application in Android.
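The core cloud-side query such a system might run can be sketched as a nearest-restaurant lookup by great-circle (haversine) distance from the user's position. The restaurant names and coordinates below are illustrative, not data from the described application.

```python
# Hedged sketch: rank restaurants by haversine distance to the user.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0                                   # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest(user, restaurants, k=2):
    return sorted(restaurants,
                  key=lambda rst: haversine_km(*user, rst[1], rst[2]))[:k]

places = [("Udupi Grand", 13.34, 74.75),
          ("Manipal Cafe", 13.35, 74.79),
          ("Beach Shack", 13.21, 74.73)]
user = (13.35, 74.78)                            # hypothetical user position
print([name for name, _, _ in nearest(user, places)])
```

Running this ranking in the cloud rather than on the handset is what keeps the device's storage and processing demands low, as the paper intends.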
The back-haul networks of 5G are formed by heterogeneous links which need to handle massive traffic, and service providers struggle to provide good QoS to their users. Technologies like Software Defined Networking (SDN) and network slicing help a service provider deliver QoS over multiple links, but providers still face the challenge of utilizing resources efficiently to fulfill users' QoS requirements as traffic grows, and thereby to increase revenue. Leveraging these technologies for QoS allocation requires identifying the performance of the system, which in turn requires an accurate traffic model to determine its steady state. The proposed model uses an architecture that combines SDN and network slicing, giving an administrator a flexible, programmable network and the best management of network resources. Heterogeneous applications are managed by creating multiple logical networks, called slices, which can be modeled using multi-class queuing networks. In this paper, we focus on SDN and slicing in mobile networks and quantify performance measures considering an in-band OpenFlow architecture, first for a single node with a homogeneous traffic class and then extended to a multi-class heterogeneous queuing model. The results help a service provider monitor the utilization of resources in every node by every traffic class of the core network, which in turn helps allocate resources precisely to fulfill QoS requirements.
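The kind of steady-state quantity such a queuing model yields can be sketched for the simplest case: treating each slice as an M/M/1 queue with arrival rate λ and service rate μ, the utilization is ρ = λ/μ and the mean sojourn time is 1/(μ − λ). The slice names and rates below are illustrative, not measurements; the paper's multi-class model generalizes this per-slice view.

```python
# Hedged sketch: per-slice steady-state metrics under an M/M/1 model.
def mm1_metrics(arrival_rate, service_rate):
    """Return (utilization, mean sojourn time) for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be < service rate")
    rho = arrival_rate / service_rate
    mean_delay = 1.0 / (service_rate - arrival_rate)
    return rho, mean_delay

# Hypothetical slices with (lambda, mu) in packets per millisecond.
slices = {"eMBB": (80.0, 100.0), "URLLC": (20.0, 50.0)}
for name, (lam, mu) in slices.items():
    rho, delay = mm1_metrics(lam, mu)
    print(f"{name}: utilization {rho:.2f}, mean delay {delay:.3f} ms")
```

A provider monitoring ρ per slice and per node can see which slices approach saturation and reallocate capacity before QoS targets are violated, which is the operational use the paper describes.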