The world of software has demonstrated the remarkable appeal of communal software development: large numbers of software projects can leverage, reuse, and coordinate their work through internet and web-based technology. For example, SourceForge currently hosts about sixty thousand software systems, and similar strategies have been suggested for corporate software development. With thousands of projects, manually locating related projects can be difficult. Hence automatic software categorization is used to find clusters of related software projects using only their source code, and the experiments here apply automatic categorization to a set of programs. Automatic categorization of software systems is a novel and intriguing challenge for software archives: prior work has focused on determining intra-component relations within a given software system rather than on differentiating between categories. The function-oriented representation produces better results than the object-oriented one, and automatic categorization yields better Precision and Recall than LSA-based retrieval techniques. The multinomial Naïve Bayes scheme outperforms all other approaches, including the existing SVD-based approach used by some open-source code repositories such as SourceForge. The tool can therefore also be utilized for the automatic categorization of software components, and this kind of automation may improve the organization of such repositories.
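As a rough illustration of the kind of multinomial Naïve Bayes categorization described above, the sketch below classifies projects by the identifiers appearing in their source text. It assumes scikit-learn is available; the tokens, categories, and sample "projects" are made up for illustration and are not the paper's data.

```python
# Hypothetical sketch: categorize projects by source-code tokens with multinomial Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each "document" is the concatenated source text of one project (toy examples).
projects = [
    "socket bind listen accept packet route",       # networking
    "matrix eigenvalue solve integrate gradient",    # scientific computing
    "render texture shader vertex frame buffer",     # graphics
]
labels = ["networking", "scientific", "graphics"]

model = make_pipeline(CountVectorizer(token_pattern=r"[A-Za-z_]\w+"), MultinomialNB())
model.fit(projects, labels)

print(model.predict(["udp socket checksum packet header"]))  # -> ['networking']
```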
Zenodo (CERN European Organization for Nuclear Research), Apr 27, 2009
Biometric measures of one kind or another have been used to identify people since ancient times, with handwritten signatures, facial features, and fingerprints being the traditional methods. Of late, systems have been built that automate the task of recognition using these methods and newer ones, such as hand geometry, voiceprints and iris patterns. These systems have different strengths and weaknesses. This work is a two-section composition. In the first section, we present an analytical and comparative study of common biometric techniques; the performance of each has been reviewed and tabulated. The second section covers the actual implementation of the techniques under consideration, carried out using the state-of-the-art tool MATLAB, which helps portray the corresponding results and effects effectively.
World Academy of Science, Engineering and Technology, International Journal of Mathematical, Computational, Physical, Electrical and Computer Engineering, Jun 20, 2008
A phylogenetic tree is a graphical representation of the evolutionary relationship among three or more genes or organisms. These trees show the relatedness of data sets or species, the divergence time of genes, and the nature of their common ancestors. The quality of a phylogenetic tree is judged by the parsimony criterion, and various approaches have been proposed for constructing most parsimonious trees. This paper is concerned with calculating and optimizing the state changes that are needed, the so-called small parsimony problem. It proposes an enhanced small parsimony algorithm that gives a better score, based on the number of evolutionary changes needed to produce the observed sequence changes in the tree, and also reports the ancestors of the given input.
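For readers unfamiliar with small parsimony scoring, the sketch below shows Fitch's classical bottom-up counting of state changes for a single character on a binary tree; it is a baseline illustration of the problem, not the enhanced algorithm proposed in the paper. The nested-tuple tree encoding and the nucleotide states are assumptions made for the example.

```python
# Fitch small-parsimony scoring for one character on a binary tree (illustrative only).
def fitch(node):
    """Return (candidate_state_set, parsimony_score).
    Leaves are single-character states; internal nodes are (left, right) tuples."""
    if isinstance(node, str):            # leaf: observed state
        return {node}, 0
    left_set, left_cost = fitch(node[0])
    right_set, right_cost = fitch(node[1])
    common = left_set & right_set
    if common:                           # children agree: no extra change needed
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1  # one state change

# The tree ((A,C),(A,G)) needs 2 changes under Fitch scoring.
states, score = fitch((("A", "C"), ("A", "G")))
print(states, score)                     # {'A'} 2
```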
Zenodo (CERN European Organization for Nuclear Research), Feb 25, 2009
This paper gives a general discussion of memory consistency models such as strict consistency, sequential consistency, processor consistency and weak consistency. It then discusses techniques for implementing distributed shared memory systems and synchronization primitives in software distributed shared memory systems. The analysis involves performance measurement of the protocol concerned, the multiple-writer protocol. Each protocol has pros and cons, so the problems associated with each protocol are discussed and other related aspects are explored.
Zenodo (CERN European Organization for Nuclear Research), Aug 26, 2009
Software effort estimation is the process of predicting the most realistic effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA-based models, have already been used to estimate software effort for projects. In this study, statistical models, a Fuzzy-GA hybrid and Neuro-Fuzzy (NF) inference systems are evaluated for estimating software effort. The performance of the developed models was tested on NASA software project datasets, and the results are compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and Genetic-Algorithm-based models reported in the literature. The NF model yields the lowest MMRE and RMSE values, outperforming the Fuzzy-GA hybrid inference system and the other existing models used for effort prediction.
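The sketch below shows the size-to-effort forms commonly cited for the comparison models, together with the MMRE and RMSE error measures used above. The coefficients follow the versions usually quoted in the effort-estimation literature and may differ from the paper's calibration; the project sizes and actual efforts are toy values.

```python
# Classical size-to-effort models and the MMRE/RMSE measures (illustrative data).
import math

def halstead(kloc):       return 0.7 * kloc ** 1.50
def walston_felix(kloc):  return 5.2 * kloc ** 0.91
def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):           return 5.288 * kloc ** 1.047   # form quoted for KLOC > 9

def mmre(actual, predicted):
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

kloc   = [10.0, 46.5, 21.5]          # illustrative project sizes (KLOC)
actual = [24.0, 96.0, 79.0]          # illustrative actual effort (person-months)
pred   = [walston_felix(k) for k in kloc]
print(f"MMRE={mmre(actual, pred):.2f}  RMSE={rmse(actual, pred):.2f}")
```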
Zenodo (CERN European Organization for Nuclear Research), Feb 27, 2009
This paper is a survey of current component-based software technologies and a description of the promotion and inhibition factors in CBSE. The features that software components inherit are also discussed, and quality assurance issues in component-based software are catered to. Research on the quality model of component-based systems starts with the study of what components are, CBSE, its development life cycle, and the pros and cons of CBSE. Various attributes are studied and compared in view of existing quality models for general systems and for component-based systems. When describing the quality of a software component, an apt set of quality attributes for the description of the system (or its components) should be selected. Finally, the research issues that can be extended are tabulated.
Nowadays, computer science is getting more and more involved in agricultural and food science. Various AI and soft computing techniques are used for fruit classification and defect detection to provide a better-quality product at the consumer end. This article focuses on advances in automatic fruit classification using soft computing techniques for ten types of fruit, viz. apple, dates, blueberries, grapes, peach, pomegranate, watermelon, banana, orange, and mango.
World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering, Jul 25, 2010
In the literature there are metrics for identifying the quality of reusable components, but a framework that uses these metrics to precisely predict the reusability of software components still needs to be worked out. If these reusability metrics are evaluated in the design phase, or even in the coding phase, they can help reduce rework by improving the quality of reuse of the software component and hence improve productivity through a probabilistic increase in the reuse level. Since the CK metric suite is the most widely used set of metrics for extracting structural features of object-oriented (OO) software, this study uses a tuned CK metric suite, i.e. WMC, DIT, NOC, CBO and LCOM, to obtain a structural analysis of OO-based software components. An algorithm is proposed in which the tuned metric values of an OO software component are given as input to a K-Means clustering system, and a decision tree is formed and checked by 10-fold cross-validation to evaluate the component in terms of a linguistic reusability value. The developed reusability model produced high-precision results, as desired.
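A minimal sketch of this kind of pipeline is given below: cluster components by their CK metric values with K-Means, then fit a decision tree to the cluster labels and check it with 10-fold cross-validation. This is an illustration under assumed data, not the paper's exact algorithm or tuning; scikit-learn is assumed and the metric values are random placeholders.

```python
# Illustrative K-Means + decision-tree pipeline over CK metric values (toy data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Columns: WMC, DIT, NOC, CBO, LCOM for each component (made-up values).
X = np.random.RandomState(0).randint(0, 20, size=(120, 5)).astype(float)

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
tree = DecisionTreeClassifier(random_state=0)
scores = cross_val_score(tree, X, clusters, cv=10)   # 10-fold cross-validation
print("mean CV accuracy:", scores.mean())
```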
To overcome the problem of exponentially increasing protein data, drug discoverers need efficient machine learning techniques to predict the functions of proteins that are responsible for various diseases in the human body. The existing decision tree induction methodology, C4.5, uses an entropy calculation for best-attribute selection. The proposed method develops a new decision tree induction technique in which an uncertainty measure is used for best-attribute selection, based on a study of priority-based packages of SDFs (Sequence Derived Features). The present work results in the creation of a deeper decision tree than the existing C4.5 technique. A tree with greater depth ensures more tests before a functional class is assigned and thus yields more accurate predictions than the existing prediction technique. For the same test data, the percentage accuracy of the new HPF (Human Protein Function) predictor is 72%, while that of the existing technique is 44%.
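To make the attribute-selection step concrete, the sketch below scores one candidate split with two uncertainty criteria: the entropy used by C4.5 and, as a stand-in alternative, Gini impurity. The paper's specific uncertainty measure is not reproduced here, and the functional-class labels are invented for the example.

```python
# Comparing two impurity/uncertainty criteria for one candidate attribute split.
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

# Toy functional-class labels partitioned by one sequence-derived feature (SDF).
left, right = ["enzyme", "enzyme", "transport"], ["transport", "transport"]
parent = left + right

def remaining(measure):   # impurity remaining after the split
    return (len(left) * measure(left) + len(right) * measure(right)) / len(parent)

print("entropy gain:", entropy(parent) - remaining(entropy))
print("gini gain   :", gini(parent) - remaining(gini))
```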
International Journal of Network Security, Nov 1, 2014
This research paper proposes a new, improved approach to information security in RGB color images using a hybrid feature-detection technique: a two-component-based Least Significant Bit (LSB) substitution technique and an adaptive LSB substitution technique for data hiding. The Advanced Encryption Standard (AES) is used to provide two-tier security, random pixel embedding imparts resistance to attacks, and hybrid filtering makes the scheme immune to disturbances such as noise. An image is a combination of edge and smooth areas, which gives ample opportunity to hide information in it. The proposed work is a direct implementation of the principle that edge areas, being high in contrast, color, density and frequency, can tolerate more changes in their pixel values than smooth areas, and so can be embedded with a larger amount of secret data while retaining the original characteristics of the image. As shown experimentally, the proposed approach achieves improved imperceptibility and capacity compared with various existing techniques, along with better resistance to steganalysis attacks such as histogram analysis, Chi-square and RS analysis.
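The edge-adaptive principle can be sketched in a few lines: pixels judged to lie on edges receive more secret bits than smooth pixels. The sketch below is a simplified single-channel illustration, not the paper's full AES plus hybrid-filtering pipeline; the gradient-based edge test, threshold, and bit counts are assumptions.

```python
# Simplified edge-adaptive LSB embedding on one colour channel (illustrative only).
import itertools
import numpy as np

def embed(channel, bits, edge_threshold=30):
    out = channel.astype(np.int32).copy()
    grad = np.abs(np.diff(out, axis=1, prepend=out[:, :1]))  # crude horizontal gradient
    it = iter(bits)
    for idx in np.ndindex(out.shape):
        k = 2 if grad[idx] > edge_threshold else 1            # edge pixels take more bits
        chunk = list(itertools.islice(it, k))
        if len(chunk) < k:                                    # secret exhausted
            break
        out[idx] = (out[idx] & ~((1 << k) - 1)) | int("".join(map(str, chunk)), 2)
    return out.astype(np.uint8)

cover = np.random.RandomState(1).randint(0, 256, size=(8, 8))
stego = embed(cover, bits=[1, 0, 1, 1, 0, 0, 1, 0])
print(int(np.abs(stego.astype(int) - cover).max()))           # per-pixel change stays tiny
```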
World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering, Oct 20, 2008
Most fingerprint recognition techniques are based on minutiae matching and have been well studied. However, this technology still suffers from problems associated with the handling of poor-quality impressions. One problem besetting fingerprint matching is distortion, which changes both geometric position and orientation and leads to difficulties in establishing a match among multiple impressions acquired from the same fingertip. Marking all minutiae accurately while rejecting false minutiae is another issue still under research. Our work combines many methods, drawn from a wide investigation of the research literature, to build a minutia extractor and a minutia matcher. It also introduces some novel changes, such as segmentation using morphological operations, improved thinning, false-minutiae removal methods, minutia marking with special consideration of triple-branch counting, minutia unification by decomposing a branch into three terminations, and matching in a unified x-y coordinate system after a two-step transformation.
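As background for the minutia-marking step, the sketch below applies the crossing-number test commonly used on a thinned, one-pixel-wide ridge map: a crossing number of 1 marks a ridge ending and 3 marks a bifurcation. The tiny hand-built skeleton is an illustrative assumption, not data from the paper.

```python
# Crossing-number minutia detection on a thinned binary ridge map (toy skeleton).
import numpy as np

def crossing_number(skel, r, c):
    # 8-neighbours of (r, c) listed in circular order
    p = [skel[r-1, c-1], skel[r-1, c], skel[r-1, c+1], skel[r, c+1],
         skel[r+1, c+1], skel[r+1, c], skel[r+1, c-1], skel[r, c-1]]
    return sum(abs(int(p[i]) - int(p[(i + 1) % 8])) for i in range(8)) // 2

skel = np.zeros((7, 7), dtype=np.uint8)
skel[3, 1:6] = 1            # a horizontal ridge
skel[1:3, 3] = 1            # a branch joining it -> bifurcation at (3, 3)
for r in range(1, 6):
    for c in range(1, 6):
        if skel[r, c]:
            cn = crossing_number(skel, r, c)
            if cn == 1:
                print("termination at", (r, c))
            elif cn == 3:
                print("bifurcation at", (r, c))
```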
Pacific Asia Conference on Information Systems, 2005
In the era of computerization, the object-oriented paradigm is becoming more and more pronounced. This has created the need for high-quality object-oriented software, since traditional metrics cannot be applied to object-oriented systems. Although the CK suite is the most widely accepted set of metrics, when analyzed against the validation criteria on which they are based, these metrics fail to satisfy certain axioms. This paper evaluates the CK suite of metrics and suggests refinements and extensions so that the metrics reflect accurate and precise results for OO-based systems.
Zenodo (CERN European Organization for Nuclear Research), Oct 28, 2008
Various models have been derived by studying a large number of completed software projects from various organizations and application domains to explore how project size maps onto project effort. However, the prediction accuracy of these models still needs to be improved. Since a neuro-fuzzy system is able to approximate non-linear functions with greater precision, it is used here as a soft computing approach to generate a model by formulating the relationship through training. In this paper, the neuro-fuzzy technique is used for software effort estimation modeling on NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili and Doty models reported in the literature.
Zenodo (CERN European Organization for Nuclear Research), Jun 22, 2008
Protein structure determination and prediction has been a focal research subject in the field of bioinformatics due to the importance of protein structure in understanding the biological and chemical activities of organisms. The experimental methods used by biotechnologists to determine the structures of proteins demand sophisticated equipment and time. A host of computational methods have been developed to predict the location of secondary structure elements in proteins, complementing or offering insights into experimental results. However, the prediction accuracies of these methods rarely exceed 70%.
The proposed system is an approach for embedding text into gray-scale (BMP) images. It enables the user to provide the system with both a text and a cover image and to obtain a resulting image that contains the hidden text inside. The system uses the Least Significant Bit (LSB) method to embed the secret text in the image after encrypting it with the RC4 stream cipher, and stores the text in non-sequential pixels of the image using a variable hop value that is a power of 2 [32]. The proposed system aims to provide improved robustness and security due to a multi-level security architecture, along with a faster embedding and extraction process irrespective of the size of the embedded text.
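The two ideas the abstract combines, an RC4 keystream applied to the secret text and LSB embedding at non-sequential pixel offsets separated by a hop value, are sketched below. The key, the hop value of 4, and the flat toy pixel list are illustrative assumptions rather than the paper's parameters.

```python
# RC4-encrypted secret embedded with hop-separated LSB substitution (illustrative sketch).
def rc4_keystream(key, length):
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(length):                   # pseudo-random generation algorithm
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

secret = b"hi"
key = b"demo-key"
cipher = [b ^ k for b, k in zip(secret, rc4_keystream(list(key), len(secret)))]

pixels = list(range(100, 228))                # a toy flattened gray image
hop = 4                                       # embed one bit every `hop` pixels
bits = [(byte >> (7 - n)) & 1 for byte in cipher for n in range(8)]
for n, bit in enumerate(bits):
    idx = n * hop
    pixels[idx] = (pixels[idx] & ~1) | bit    # overwrite the least significant bit
print(pixels[:12])
```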
In over-deployed sensor networks, one approach to conserving energy is to keep only a small subset of sensors active at any instant. For coverage problems, the monitored area is modeled as a set of points that require sensing, called demand points, and the coverage area of a node is taken to be a circle of radius R, where R is the sensing range; if the distance between a demand point and a sensor node is less than R, the node is able to cover that point. We consider a wireless sensor network consisting of a set of randomly deployed sensors. A point in the monitored area is covered if it is within the sensing range of some sensor. In some applications, when the network is sufficiently dense, area coverage can be approximated by guaranteeing point coverage; in this case, the locations of the wireless devices can be used to represent the whole area, and the working sensors are required to cover all of these locations. We also introduce a hybrid algorithm and discuss challenges related to coverage in sensor networks.
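The point-coverage test stated above reduces to a distance comparison, as the short sketch below shows; the sensor positions, demand points, and sensing range R are placeholder values.

```python
# Point-coverage check: a demand point is covered if some sensor lies within range R of it.
import math

def covered(point, sensors, R):
    return any(math.dist(point, s) < R for s in sensors)

sensors = [(2.0, 3.0), (7.0, 8.0), (5.0, 1.0)]
demand_points = [(3.0, 3.5), (9.0, 9.0), (0.0, 9.0)]
R = 2.5

for p in demand_points:
    print(p, "covered" if covered(p, sensors, R) else "uncovered")
```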
To support mobility in ATM networks, a number of technical challenges need to be resolved. The impact of handoff schemes in terms of service disruption, handoff latency, cost implications and the excess resources required during handoffs needs to be addressed. In this paper, a one-phase handoff and route-optimization solution that uses reserved PVCs between adjacent ATM switches to reroute connections during inter-switch handoff is studied. In the second phase, a distributed optimization process is initiated to optimally reroute handoff connections. The main objective is to find the optimal operating point at which to perform optimization, subject to a cost constraint, with the purpose of reducing the blocking probability of inter-switch handoff calls for delay-tolerant traffic. We examine the relation between the required bandwidth resources and the optimization rate. We also calculate and study the handoff blocking probability due to lack of bandwidth for resources reserved to facilitate the rapid re...
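As a point of reference for the blocking-probability analysis mentioned above, the Erlang-B formula is one standard way to estimate the probability that an arriving call finds all channels busy. This is not necessarily the paper's model; the channel count and offered load below are invented for illustration.

```python
# Erlang-B blocking probability for C channels carrying an offered load of A Erlangs.
def erlang_b(channels, offered_load):
    b = 1.0
    for k in range(1, channels + 1):          # stable recursive form of Erlang B
        b = (offered_load * b) / (k + offered_load * b)
    return b

print(f"{erlang_b(channels=10, offered_load=6.0):.4f}")   # blocking probability
```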
The current research paper presents an implementation of eigenfaces and the Karhunen-Loève algorithm for face recognition. The designed program works by assigning a unique identification number to each face under trial. These faces are kept in a database from which any particular face can be matched and retrieved among the available test faces. The Karhunen-Loève algorithm has been implemented to find the appropriate face (with the same features) corresponding to a given input test image having a unique identification number. The procedure uses eigenfaces for the recognition of faces.
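A compact sketch of the eigenface (Karhunen-Loève / PCA) idea is shown below: project gallery and probe images onto the leading eigenvectors of the training covariance and match by nearest neighbour. The paper uses MATLAB; this Python/NumPy version, the image size, and the random placeholder "faces" are assumptions for illustration.

```python
# Eigenface-style recognition: PCA projection plus nearest-neighbour matching (toy data).
import numpy as np

rng = np.random.RandomState(0)
gallery = rng.rand(10, 64 * 64)                 # 10 enrolled faces, flattened 64x64 images
probe = gallery[3] + 0.01 * rng.rand(64 * 64)   # a noisy copy of face id 3

mean_face = gallery.mean(axis=0)
centered = gallery - mean_face
# Eigen-decomposition via SVD; rows of Vt are the eigenfaces.
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:8]                             # keep the 8 leading components

weights = centered @ eigenfaces.T               # gallery projections
probe_w = (probe - mean_face) @ eigenfaces.T
match_id = int(np.argmin(np.linalg.norm(weights - probe_w, axis=1)))
print("best match: face id", match_id)          # expected: 3
```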
The cost of developing software from scratch can be saved by identifying and extracting reusable components from already developed and existing software systems or legacy systems [6]. However, the issue of how to identify reusable components from existing systems has remained relatively unexplored. We have used a metric-based approach for characterizing a software module. In the present work, McCabe's Cyclomatic Complexity for complexity measurement, the Regularity metric, Halstead's Software Science Indicator for volume, the Reuse Frequency metric and a Coupling metric of the software component are used as input attributes to different types of neural network systems, and the reusability of the software component is calculated. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE).
Since the majority of faults are found in a few modules of a system, there is a need to investigate the modules that are affected more severely than others so that proper maintenance can be done on time, especially for critical applications. In this paper, we have applied different predictor models to NASA's public-domain defect dataset coded in the Perl programming language. Different machine learning algorithms belonging to different learner categories of the WEKA project, including a Mamdani-based fuzzy inference system and a neuro-fuzzy-based system, have been evaluated for modeling maintenance severity, i.e. the impact of fault severity. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE), and show that the neuro-fuzzy-based model provides relatively better prediction accuracy than the other models and hence can be used for predicting the maintenance severity of software.