Fault localization is an intensively investigated field with a plethora of useful methods and tools. Most tools rely on the analysis of code and bug repositories by means of automated solutions and metrics. Suspiciousness is one such metric, demonstrated to localize possibly faulty code with acceptable precision. Our aim is to investigate a method in which a crowd could be enlisted to improve fault localization. Such a general strategy is not new and has been in place for a number of years in the form of beta tests. While both approaches have proven applicability and efficacy, combining them poses interesting questions of feasibility and efficacy. Our approach consists of preparing a system for acceptance testing in order to collect information on user satisfaction (ok, not ok) together with code execution log traces. Moreover, we define two metrics for suspiciousness at the level of function calls. In this paper, we demonstrate our approach and propose a quantitative evaluation.
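The abstract does not reproduce its two function-call-level metrics, so as an illustration only, here is a minimal sketch of how a classic spectrum-based suspiciousness score (Tarantula) could be computed from the kind of data the paper describes: execution log traces of function calls paired with the crowd's ok/not-ok verdicts. The trace representation and function names are assumptions, not the paper's actual formulation.

```python
from collections import defaultdict

def tarantula_suspiciousness(traces):
    """Rank functions by a Tarantula-style suspiciousness score.

    traces: list of (called_functions, ok) pairs, where called_functions
    is the set of function names seen in one execution log and ok is the
    user's acceptance-test verdict (True = "ok", False = "not ok").
    """
    total_pass = sum(1 for _, ok in traces if ok)
    total_fail = len(traces) - total_pass
    fail_hits = defaultdict(int)
    pass_hits = defaultdict(int)
    for funcs, ok in traces:
        for fn in funcs:
            (pass_hits if ok else fail_hits)[fn] += 1
    scores = {}
    for fn in set(fail_hits) | set(pass_hits):
        fail_ratio = fail_hits[fn] / total_fail if total_fail else 0.0
        pass_ratio = pass_hits[fn] / total_pass if total_pass else 0.0
        denom = fail_ratio + pass_ratio
        # Functions seen mostly in failing runs score close to 1.0.
        scores[fn] = fail_ratio / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical traces: "parse" appears only in the failing run,
# so it ranks as the most suspicious function.
traces = [
    ({"main", "parse", "render"}, False),  # user reported "not ok"
    ({"main", "render"}, True),            # user reported "ok"
    ({"main", "render"}, True),
]
ranking = tarantula_suspiciousness(traces)
```

The same tally structure would accommodate other spectrum-based formulas (e.g. Ochiai) by swapping the score expression.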
In the field of evaluation research, computer scientists constantly face dilemmas and conflicting theories. As evaluation is perceived and modeled differently across educational areas, it is not difficult to become trapped in dilemmas, which reflects an epistemological weakness. Additionally, designing and developing a computer-based learning scenario is not an easy task; advancing further, with end users probing the system in realistic settings, is even harder. Computer science research in evaluation faces an immense challenge, having to cope with contributions from several conflicting and controversial research fields. We believe that deep changes must be made in our field if we are to advance beyond the CBT (computer-based training) learning model and build an adequate epistemology for this challenge. The first task is to relocate our field by building upon recent results from philosophy, psychology, the social sciences, and engineering. In this article we locate evaluation with respect to communication studies. Evaluation presupposes a definition of goals to be reached, and we suggest that it is, in many ways, a silent communication between teacher and student, peers, and institutional entities. If we accept that evaluation can be viewed as a set of invisible rules known by nobody but somehow understood by everybody, we should add anthropological inquiry to our research toolkit. The paper is organized around some elements of social communication and how they convey new insights to evaluation research for computer and related scientists. We discuss some technical limitations and how we relate to technology at the same time as we establish expectations and perceive others' work.
Software debugging consumes most of software maintenance time and is notorious for requiring high-level skills and application-specific knowledge. Crowdsourcing software debugging could lower those barriers by having each programmer perform small, self-contained, and parallelizable tasks, hence accommodating different levels of availability and expertise. Such a new approach might therefore enable society to tackle massive software development efforts, for instance, assembling a task force of hundreds of programmers to debug and adapt existing software for use in an emergency response to a natural catastrophe. This type of effort is unimaginable nowadays due to the high latency in mobilizing the right programmers and organizing their work. Crowdsourcing helps overcome these challenges thanks to the availability of a large base of contributors working towards a common goal. Debugging is not a sequential task, and this leads to the primary issue of dividing it into microtasks and asking appropriate questions, based on those microtasks, for analysis of the software by the crowd. This paper proposes a solution that divides the main task into several microtasks by leveraging the structure of the task, and then associates template questions with each microtask. This can reduce the overhead on individual developers during debugging and make crowd debugging a reality.
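One plausible reading of "leveraging the structure of the task" is to walk the call graph reachable from a failing entry point and emit one templated question per function. The call-graph encoding, entry point, and question template below are all hypothetical illustrations, not the paper's actual decomposition scheme.

```python
def make_microtasks(call_graph, failing_entry):
    """Split a debugging job into per-function microtasks.

    call_graph: dict mapping each function name to the list of
    function names it calls (an assumed representation).
    failing_entry: the entry point of the failing behaviour.
    Returns one microtask (function + template question) per
    function reachable from the failure.
    """
    template = ("Given its inputs, does `{fn}` return the value "
                "you would expect? (yes / no / cannot tell)")
    seen, stack, tasks = set(), [failing_entry], []
    while stack:  # depth-first walk of the reachable call graph
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        tasks.append({"function": fn, "question": template.format(fn=fn)})
        stack.extend(call_graph.get(fn, []))
    return tasks

# Hypothetical call graph: four functions reachable from "handler",
# hence four independent microtasks for the crowd.
graph = {"handler": ["validate", "save"], "save": ["serialize"]}
tasks = make_microtasks(graph, "handler")
```

Each microtask is self-contained, so different workers can answer different questions in parallel, matching the availability argument in the abstract.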
Traceability is commonly adopted as an aid to managing test cases in the face of changing requirements. Our approach to traceability helps decide which tests and bugs should be prioritized in order to minimize the time necessary to execute the acceptance test activities of testing, debugging, and fixing. We apply a set of models based on traceability among requirements, tests, and software components. In this paper we present the facts motivating the model and how the model is to be operated. We also offer, as future work, some research questions and possible extensions.
With larger memory capacities and the ability to link into wireless networks, more and more students use palmtop and handheld computers for learning activities. However, existing software for Web-based learning is not well suited to such mobile devices, due both to their constrained user interfaces and to the communication effort required. A new generation of applications for the learning domain, explicitly designed to work on these kinds of small mobile devices, has to be developed. For this purpose, we introduce CARLA, a cooperative learning system designed to operate in hybrid wireless networks. As a cooperative environment, CARLA aims at disseminating teaching material, notes, and even components of itself through both fixed and mobile networks to interested nodes. Due to the mobility of nodes, CARLA deals with problems such as network partitions and the synchronization of teaching material, resource dependencies, and time constraints.