2017 International Conference on Consumer Electronics and Devices (ICCED)
Dirty Copy on Write, also known as Dirty COW, is a Linux-based server vulnerability. It allows attackers to bypass the file system protection of the Linux kernel, gain root privilege and thus compromise the whole system. Linux kernel versions 2.6.22 and above are affected by this vulnerability. The patch for this vulnerability was released only recently, so most Linux servers are still vulnerable to this attack. This study focuses on the techniques behind the Dirty COW vulnerability and its impact on the servers of newly emerging IT countries such as Bangladesh.
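As a rough illustration of the affected range the abstract describes, the sketch below flags kernel release strings that fall between 2.6.22 and the mainline fix (assumed here to be 4.8.3). This helper is hypothetical, not from the paper, and a bare version number cannot detect distro kernels with backported fixes, so the result is a hint rather than a verdict.

```python
# Hypothetical helper: flags kernel versions in the affected range
# (2.6.22 up to, but not including, the assumed mainline fix 4.8.3).
# Distro kernels with backported patches are NOT detected by version
# number alone.
def is_potentially_vulnerable(release: str) -> bool:
    # Keep only the numeric "X.Y.Z" prefix of `uname -r` output,
    # e.g. "4.4.0-21-generic" -> (4, 4, 0).
    core = release.split("-")[0]
    parts = tuple(int(p) for p in core.split(".")[:3])
    while len(parts) < 3:
        parts += (0,)
    return (2, 6, 22) <= parts < (4, 8, 3)

print(is_potentially_vulnerable("3.13.0-24-generic"))  # True
print(is_potentially_vulnerable("4.9.0"))              # False
```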
This paper focuses on the measurement and prediction of indoor signal propagation for ISM band systems in the 2.4 GHz and 5.3 GHz frequency bands. In this research, two basic radio propagation models are studied and compared against theoretical and practical data. The comparison results are applied to a test indoor wireless network. Based on these considerations, this paper proposes an enhancement to the path loss model in the indoor environment for improved accuracy in the relationship between distance and received signal strength. The model can serve as a prediction model that can be further developed to fit other indoor scenarios as well.
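The distance-to-signal-strength relationship the paper refines is usually built on the log-distance path loss model, PL(d) = PL(d0) + 10·n·log10(d/d0). The parameter values below are illustrative defaults, not the paper's fitted values.

```python
import math

# Log-distance path loss model: the baseline form the paper enhances.
# d0 = reference distance (m), pl_d0 = path loss at d0 (dB),
# n = path loss exponent (environment-dependent; ~2 free space,
# larger indoors). Values here are illustrative, not fitted.
def path_loss_db(d, d0=1.0, pl_d0=40.0, n=3.0):
    return pl_d0 + 10.0 * n * math.log10(d / d0)

def rssi_dbm(tx_power_dbm, d, **kw):
    # Received signal strength = transmit power minus path loss.
    return tx_power_dbm - path_loss_db(d, **kw)

print(round(path_loss_db(10.0), 1))   # 70.0 dB at 10 m with n=3
print(rssi_dbm(20.0, 10.0))           # -50.0 dBm
```

Fitting n (and possibly a frequency- or wall-dependent correction term) to measured RSSI is the kind of enhancement the abstract refers to.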
Data intensive applications in Life Sciences extensively use the Hidden Web as a platform for information sharing. Access to Hidden Web resources is limited through the use of predefined web forms and interactive interfaces that users must navigate manually. ...
2017 8th IEEE International Conference on Software Engineering and Service Science (ICSESS), 2017
Test case prioritization involves ordering the test cases for regression testing in a way that improves the effectiveness of the testing process. By improving test case scheduling we can optimize time and cost as well as produce better-tested products. There are a number of methods for prioritizing test cases, but they are not especially effective or practical for real-life large commercial systems. Most of the techniques deal with finding defects or covering more test cases. In this paper, we extend previous work to incorporate real-life practical aspects into test case scheduling. This approach covers most of the business functionality based on those practical aspects, reaching more business areas and uncovering more defects. By prioritizing test cases with this technique we can cover the most important business functionality with fewer test cases.
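A common way to realize "most functionality with fewer test cases" is a greedy coverage heuristic: repeatedly pick the test that covers the most not-yet-covered business functions. This is a sketch in that spirit, not the paper's actual algorithm, and the test and function names are hypothetical.

```python
# Greedy coverage-based prioritization sketch (not the paper's exact
# scheme): each round, pick the test covering the most business
# functions that no earlier test in the order already covers.
def prioritize(tests):
    # tests: {test_name: set of business functions it covers}
    remaining = dict(tests)
    uncovered = set().union(*tests.values())
    order = []
    while remaining and uncovered:
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        order.append(best)
        uncovered -= remaining.pop(best)
    return order

tests = {
    "t1": {"login", "checkout"},
    "t2": {"login"},
    "t3": {"checkout", "refund", "invoice"},
}
print(prioritize(tests))  # ['t3', 't1'] -- t2 adds no new coverage
```

Note how t2 is dropped entirely: every function it covers is already reached by t1, which is exactly the reduction in test count the abstract aims for.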
2019 IEEE 1st International Conference on Energy, Systems and Information Processing (ICESIP), 2019
This paper is aimed at raising the efficiency of a solar panel by using a dual axis solar tracker. The tracker is designed with four LDRs, which are the main sensor inputs, an Arduino Uno microcontroller, which is the main processing unit, and a well-planned platform consisting of two servo motors and a metal structure. The structure is suitable for maneuvering a 20 W solar panel that weighs around 2 kg, while consuming very little energy to function. The main objective of this paper is to show that this dual axis solar tracker design harnesses solar energy more efficiently than existing similar trackers. A 37.76 percent increase in efficiency was observed when compared to a fixed solar panel tilted at 25 degrees throughout the day.
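Four-LDR dual-axis trackers typically work by comparing the average light on opposite sensor pairs and nudging each servo toward the brighter side. The sketch below shows that decision logic only; the dead-band value and sensor arrangement are illustrative assumptions, not the paper's firmware.

```python
# Simplified dual-axis tracking decision for a 4-LDR layout
# (top-left, top-right, bottom-left, bottom-right). A dead-band
# suppresses servo jitter on near-equal readings. Values are
# illustrative, not taken from the paper's implementation.
DEADBAND = 10

def track_step(top_left, top_right, bottom_left, bottom_right):
    """Return (horizontal_step, vertical_step), each in {-1, 0, +1}."""
    left = (top_left + bottom_left) / 2
    right = (top_right + bottom_right) / 2
    top = (top_left + top_right) / 2
    bottom = (bottom_left + bottom_right) / 2
    h = 0 if abs(left - right) < DEADBAND else (1 if right > left else -1)
    v = 0 if abs(top - bottom) < DEADBAND else (1 if top > bottom else -1)
    return h, v

print(track_step(600, 400, 595, 395))  # (-1, 0): sun is to the left
```

On an Arduino the same comparison would run in `loop()`, mapping each step to a small servo angle increment.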
In this paper, a data model for autonomous data integration is proposed that uses an algebraic language named Integra for semantic data integration, together with a powerful schema matching strategy named Coherent Automated Schema Matcher. Initially the data model is mapped based on the schema matching result; then two operators named Link and Combine are applied for record linkage and duplicate cleaning. The Link operator vertically adds more data values into an existing column and increases the degree of relation between the datasets, while the Combine operator horizontally merges both datasets into an integrated dataset. The precision-recall curve for the three major operations shows an average score of 0.7, which indicates efficient performance for the integration of real-time data.
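The intent of the two operators can be illustrated with plain Python, though this is not the Integra implementation: Link stacks records vertically with duplicate cleaning on a key, while Combine joins matching records horizontally so their attributes merge.

```python
# Illustrative stand-ins for the Link and Combine operators
# (NOT the paper's Integra language). Relations are modeled as
# lists of dicts; `key` is the matching attribute.
def link(rows_a, rows_b, key):
    # Vertical union with duplicate cleaning on the key attribute.
    seen, out = set(), []
    for row in rows_a + rows_b:
        if row[key] not in seen:
            seen.add(row[key])
            out.append(row)
    return out

def combine(rows_a, rows_b, key):
    # Horizontal merge: records sharing a key pool their attributes.
    index = {r[key]: r for r in rows_b}
    return [{**r, **index[r[key]]} for r in rows_a if r[key] in index]

a = [{"id": 1, "name": "ada"}, {"id": 2, "name": "bob"}]
b = [{"id": 2, "name": "bob"}, {"id": 3, "name": "eve"}]
print(len(link(a, b, "id")))                     # 3 distinct records
print(combine(a, [{"id": 1, "age": 36}], "id"))  # merged attributes
```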
The work described in this paper is an investigation of applying Markov chain techniques to measure software reliability. An example is taken from a database-based application to develop two stochastic models for the software, called the usage model and the testing model. The log likelihood ratio D(U, T) of the two stochastic processes tells us how similar these processes are, and this information is used to determine the reliability of the example software. The results demonstrate the viability and effectiveness of the approach for measuring the software reliability of large and complicated systems.
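The comparison idea can be sketched concretely: score an observed event sequence under two first-order Markov chains, a usage model U and a testing model T, and take the difference of log likelihoods. The transition probabilities and states below are invented for illustration, not from the paper.

```python
import math

# Score a state sequence under two first-order Markov chains and
# compare. A ratio near 0 means the testing profile closely matches
# real usage. Transition tables here are made up for illustration.
def log_likelihood(seq, trans):
    return sum(math.log(trans[a][b]) for a, b in zip(seq, seq[1:]))

U = {"s0": {"s0": 0.7, "s1": 0.3}, "s1": {"s0": 0.4, "s1": 0.6}}
T = {"s0": {"s0": 0.5, "s1": 0.5}, "s1": {"s0": 0.5, "s1": 0.5}}

seq = ["s0", "s0", "s1", "s1", "s0"]
d = log_likelihood(seq, U) - log_likelihood(seq, T)
print(round(d, 3))  # negative here: U fits this sequence worse than T
```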
2016 International Conference on Informatics and Computing (ICIC), 2016
The most central people in a social network have more influence than others on business communication, knowledge diffusion, viral marketing and other fields. For networks where digital information is available, such as email communication or phone call records, one can easily find central actors using social network analysis tools. But for networks where no such information is available, for example farmers in a remote village or the secret networks of a criminal organization, finding central actors is challenging. We call such a network an unknown network. In this research, we propose a method based on the idea of the friendship paradox (FP) to find the most prominent actors in an unknown network. Extensive simulation results demonstrate that our method finds the most central actors with high accuracy by exploring only a small population of an unknown social network.
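The friendship-paradox trick can be sketched as follows: interview a few random people, ask each to name one random friend, and tally the nominations; nominated people tend to have above-average degree, so the most-nominated ones are good central-actor candidates. This toy graph and procedure are illustrative, not the paper's exact method.

```python
import random

# Friendship-paradox sampling sketch: random friends of random
# people tend to be higher-degree than random people, so counting
# nominations surfaces central actors without mapping the whole
# network. Graph and interview count are hypothetical.
def nominate_centrals(adj, n_interviews, rng):
    counts = {}
    nodes = list(adj)
    for _ in range(n_interviews):
        person = rng.choice(nodes)
        friend = rng.choice(adj[person])  # "name one of your friends"
        counts[friend] = counts.get(friend, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

# Star graph: "hub" knows everyone, so most nominations point at it.
adj = {"hub": ["a", "b", "c", "d"]}
for leaf in ["a", "b", "c", "d"]:
    adj[leaf] = ["hub"]

ranking = nominate_centrals(adj, 200, random.Random(0))
print(ranking[0])  # 'hub'
```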
Ninth International Conference on Digital Information Management (ICDIM 2014), 2014
In software industries, various open source projects utilize the services of Bug Tracking Systems that let users submit software issues or bugs and allow developers to respond to and fix them. The users label the reports as bugs or any other relevant class. This classification helps to decide which team or person is responsible for dealing with an issue. A major problem here is that users tend to misclassify the issues, so a middleman called a bug triager is required to resolve any misclassifications and ensure no time is wasted at the developer end. This approach is very time consuming, so there has been great interest in automating the classification process, not only to speed things up but to lower the number of errors as well. In the literature, several approaches including machine learning techniques have been proposed to automate text classification. However, there has not been an extensive comparison of the performance of different natural language classifiers in this field. In this paper we compare general natural language classification techniques using five different machine learning algorithms: Naive Bayes, kNN, Pegasos, Rocchio and Perceptron. The performance of these algorithms was compared on the basis of their apparent error rates. The dataset involved four different projects, Httpclient, Jackrabbit, Lucene and Tomcat5, that used two different Bug Tracking Systems, Bugzilla and Jira. An experimental comparison of pre-processing techniques was also performed.
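One of the compared classifiers, Naive Bayes, is simple enough to sketch end to end: a multinomial model over bag-of-words counts with Laplace smoothing. The training snippets below are invented stand-ins for issue reports, not data from the study.

```python
import math
from collections import Counter, defaultdict

# Multinomial Naive Bayes over bag-of-words features, the simplest
# of the five compared classifiers. Training docs are hypothetical
# issue-report snippets, not the study's dataset.
def train(docs):
    # docs: list of (label, text) pairs
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for label, text in docs:
        label_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(text, model):
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        n = sum(word_counts[label].values())
        score = math.log(label_counts[label] / total)
        for w in text.lower().split():
            # Laplace smoothing keeps unseen words from zeroing out.
            score += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    ("bug", "crash when parsing header"),
    ("bug", "null pointer exception on close"),
    ("feature", "please add proxy support"),
    ("feature", "support for async requests"),
]
model = train(docs)
print(classify("exception during header parsing", model))  # 'bug'
```

The apparent error rate used in the study would then be the fraction of training reports the fitted model itself misclassifies.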
Papers by Shazzad Hosain