
CHAPTER-1

INTRODUCTION

1.1. INTRODUCTION TO PROJECT

When a tester encounters Defects, a unique id number is generated for each individual Defect. The
Defect information, along with its id, is mailed to the project manager and the developer; this
constitutes the "Defect Report". Defect reports are stored in the database for further reference.
The Defect information includes the Defect id, Defect name, Defect priority, project name, Defect
location, and Defect type. This process continues until all Defects in the application have been
fixed.

The Defect report is mailed to the project manager and the developer as soon as the Defect is
identified, which ensures that no error goes unfixed because of poor communication. The Defect
Tracking System plays a vital role in the testing phase. It also supports the assignment of
projects to developers and testers by the project manager. The Defect Tracking System maintains
the different users separately, i.e., it provides separate environments for the project manager,
the developer, and the tester.

The main objectives of the Defect Tracking System are:

➢ Identifying the Defects in the developed application.

➢ Ensuring that no Defect in the developed application remains unfixed.

➢ Not merely identifying Defects but also recording the Defect information.

➢ Reporting Defects to the project manager as soon as they are identified.

➢ Ensuring that everyone who needs to know about a Defect learns of it soon after it is reported.

1.2. PURPOSE OF THE PROJECT
Its purpose is to keep track of bug data and to generate reports that summarize that bug data.

1.3. EXISTING SYSTEM & DISADVANTAGES


❖ To investigate the relationships in bug data, Sandusky et al. form a bug report network to
examine the dependency among bug reports.
❖ Besides studying relationships among bug reports, Hong et al. build a developer social
network to examine the collaboration among developers based on the bug data in Mozilla
project. This developer social network is helpful to understand the developer community
and the project evolution.
❖ By mapping bug priorities to developers, Xuan et al. identify the developer prioritization in
open source bug repositories. The developer prioritization can distinguish developers and
assist tasks in software maintenance.
❖ To investigate the quality of bug data, Zimmermann et al. design questionnaires to
developers and users in three open source projects. Based on the analysis of questionnaires,
they characterize what makes a good bug report and train a classifier to identify whether
the quality of a bug report should be improved.
❖ Duplicate bug reports weaken the quality of bug data by adding to the cost of handling bugs. To
detect duplicate bug reports, Wang et al. design a natural language processing approach that
matches execution information.
❖ Traditional software analysis is not completely suitable for the large-scale and complex
data in software repositories.
❖ In traditional software development, new bugs are manually triaged by an expert developer, i.e.,
a human triager. Because of the large number of daily bugs, and because no single developer has
expertise in all of them, manual bug triage is expensive in time cost and low in accuracy.

1.4. PROPOSED SYSTEM & ITS ADVANTAGES

❖ In this paper, we address the problem of data reduction for bug triage, i.e., how to reduce
the bug data to save the labour cost of developers and improve the quality to facilitate the
process of bug triage.
❖ Data reduction for bug triage aims to build a small-scale and high-quality set of bug data
by removing bug reports and words, which are redundant or non-informative.
❖ In our work, we combine existing techniques of instance selection and feature selection to
simultaneously reduce the bug dimension and the word dimension. The reduced bug data
contain fewer bug reports and fewer words than the original bug data and provide similar
information over the original bug data. We evaluate the reduced bug data according to two
criteria: the scale of a data set and the accuracy of bug triage.
❖ In this paper, we propose a predictive model to determine the order of applying instance
selection and feature selection. We refer to such determination as prediction for reduction
orders.
❖ Drawing on experience with software metrics, we extract attributes from historical bug data
sets. We then train a binary classifier on bug data sets with the extracted attributes and predict
the order of applying instance selection and feature selection for a new bug data set.
❖ Experimental results show that applying the instance selection technique to the data set can
reduce bug reports but the accuracy of bug triage may be decreased.
❖ Applying the feature selection technique can reduce words in the bug data and the accuracy
can be increased.
❖ Meanwhile, combining both techniques can increase the accuracy, as well as reduce bug
reports and words.
❖ Based on the attributes from historical bug data sets, our predictive model can provide the
accuracy of 71.8 percent for predicting the reduction order.
❖ We present the problem of data reduction for bug triage. This problem aims to augment the
data set of bug triage in two aspects, namely a) to simultaneously reduce the scales of the
bug dimension and the word dimension and b) to improve the accuracy of bug triage.

❖ We propose a combination approach to addressing the problem of data reduction. This can be
viewed as an application of instance selection and feature selection in bug repositories.
❖ We build a binary classifier to predict the order of applying instance selection and feature
selection (a sketch of this combination follows this list). To our knowledge, the order of
applying instance selection and feature selection has not been investigated in related domains.
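
To make the two reduction orders concrete, the sketch below shows how a predicted order could
drive the combination of instance selection (IS) and feature selection (FS). The class and
interface names are placeholders introduced here for illustration; they are not the implementation
evaluated in the experiments.

// Sketch only: the types below are minimal stand-ins, not the project's real classes.
interface BugDataSet { double[] attributes(); }                    // attributes extracted from a bug data set
interface InstanceSelector { BugDataSet select(BugDataSet d); }    // removes bug reports
interface FeatureSelector { BugDataSet select(BugDataSet d); }     // removes words
interface OrderPredictor { boolean fsFirst(double[] attrs); }      // trained binary classifier

public class DataReductionSketch {
    // Apply feature selection and instance selection in the predicted order.
    public static BugDataSet reduce(BugDataSet data, OrderPredictor p,
                                    InstanceSelector is, FeatureSelector fs) {
        return p.fsFirst(data.attributes())
                ? is.select(fs.select(data))    // FS -> IS
                : fs.select(is.select(data));   // IS -> FS
    }
}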

CHAPTER-2
SYSTEM ANALYSIS

2.1. STUDY OF THE SYSTEM

FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with
a very general plan for the project and some cost estimates. During system analysis the
feasibility study of the proposed system is carried out, to ensure that the proposed system is not
a burden to the company. For feasibility analysis, some understanding of the major requirements
for the system is essential. The three key considerations involved in the feasibility analysis
are:

➢ ECONOMIC FEASIBILITY
➢ TECHNICAL FEASIBILITY
➢ SOCIAL FEASIBILITY

ECONOMIC FEASIBILITY
This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and development of
the system is limited, so the expenditures must be justified. The developed system is well within
the budget, and this was achieved because most of the technologies used are freely available; only
the customized products had to be purchased.

TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements
of the system. Any system developed must not place a high demand on the available technical
resources, as this would lead to high demands being placed on the client. The developed system
must have modest requirements, since only minimal or no changes are required to implement it.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the users. It includes
the process of training users to use the system efficiently. Users must not feel threatened by the
system but must accept it as a necessity. The level of acceptance by the users depends on the
methods employed to educate them about the system and to make them familiar with it. Their
confidence must be raised so that they can also offer constructive criticism, which is welcomed,
because they are the final users of the system.

2.2. INPUT & OUTPUT REPRESENTATION

The input design is the link between the information system and the user. It comprises developing
the specifications and procedures for data preparation and the steps necessary to put transaction
data into a usable form for processing, which can be achieved either by having the computer read
data from a written or printed document or by having people key the data directly into the system.
The design of input focuses on controlling the amount of input required, controlling errors,
avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed so
that it provides security and ease of use while retaining privacy. Input design considered the
following things:
➢ What data should be given as input?
➢ How should the data be arranged or coded?
➢ The dialogue to guide the operating personnel in providing input.
➢ Methods for preparing input validations and steps to follow when errors occur.

OBJECTIVES
1. Input design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process and to
show the correct direction to the management for getting correct information from the computerized
system.
2. It is achieved by creating user-friendly screens for data entry that can handle large volumes
of data. The goal of designing input is to make data entry easier and error-free. The data entry
screen is designed in such a way that all data manipulations can be performed. It also provides
record viewing facilities.
3. When the data is entered, it is checked for validity. Data can be entered with the help of
screens. Appropriate messages are provided as and when needed, so that the user is not left
confused at any point. Thus the objective of input design is to create an input layout that is
easy to follow.

OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the information
clearly. In any system, the results of processing are communicated to the users and to other
systems through outputs. In output design it is determined how the information is to be displayed
for immediate need and also as hard-copy output. It is the most important and direct source of
information for the user. Efficient and intelligent output design improves the system's
relationship with the user and supports decision-making.
1. Designing computer output should proceed in an organized, well-thought-out manner; the right
output must be developed while ensuring that each output element is designed so that people will
find the system easy to use and effective. When analysts design computer output, they should
identify the specific output that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create the document, report, or other format that contains information produced by the system.
The output form of an information system should accomplish one or more of the following
objectives.

2.3. SYSTEM ARCHITECTURE

CHAPTER-3
REQUIREMENTS SPECIFICATION

3.1. FUNCTIONAL REQUIREMENTS


MODULES:
➢ Dataset Collection
➢ Pre-processing Method
➢ Feature Selection/ Instance Selection
➢ Bug Data Reduction
➢ Performance Evaluation

MODULES DESCRIPTION:
Dataset Collection:
Data collection means collecting and/or retrieving data about activities, results, context and
other factors. It is important to consider the type of information you want to gather from your
participants and the ways you will analyze that information. The data set corresponds to the
contents of a single database table, or a single statistical data matrix, where every column of
the table represents a particular variable. After collection, the data are stored in the database.
Pre-processing Method:
In data pre-processing, or data cleaning, data is cleansed through processes such as filling in
missing values, smoothing noisy data, or resolving inconsistencies in the data; it is also used to
remove unwanted data. Commonly used as a preliminary data mining practice, data pre-processing
transforms the data into a format that will be more easily and effectively processed for the
purpose of the user.
Feature Selection/ Instance Selection:
We combine instance selection and feature selection to generate a reduced bug data set, and we
replace the original data set with the reduced data set for bug triage. Instance selection is a
technique to reduce the number of instances by removing noisy and redundant instances. By removing
uninformative words, feature selection improves the accuracy of bug triage and recovers the
accuracy loss caused by instance selection. A simplified sketch of such word filtering follows
this description.
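
As a simplified illustration only (not the selection algorithm used in the project), the sketch
below drops words that occur in very few bug reports, which is one common way to remove
uninformative features:

import java.util.*;

// Simplified feature selection: keep only words that appear in at least
// minReports bug reports. A real feature selection algorithm ranks words
// by a statistical criterion rather than a raw frequency cut-off.
public class WordFilterSketch {
    public static List<List<String>> filterWords(List<List<String>> bugReports, int minReports) {
        Map<String, Integer> docFreq = new HashMap<>();
        for (List<String> report : bugReports) {
            for (String word : new HashSet<>(report)) {
                docFreq.merge(word, 1, Integer::sum);   // count reports containing the word
            }
        }
        List<List<String>> reduced = new ArrayList<>();
        for (List<String> report : bugReports) {
            List<String> kept = new ArrayList<>();
            for (String word : report) {
                if (docFreq.get(word) >= minReports) kept.add(word);
            }
            reduced.add(kept);
        }
        return reduced;
    }
}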

Bug Data Reduction:
Bug data reduction reduces the scale and improves the quality of the data in bug repositories.
Removing instances can reduce the number of bug reports, although the accuracy of bug triage may
decrease; removing uninformative words reduces the computation for bug triage and improves its
accuracy. Reducing duplicate and noisy bug reports also decreases the number of historical bugs.
Performance Evaluation:
In the performance evaluation, the algorithm provides a reduced data set by removing
non-representative instances, which reduces noise and redundancy in the bug data sets. The quality
of bug triage is measured by the accuracy of bug triage.
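
Accuracy here can be read as the fraction of bug reports whose recommended developer matches the
developer who actually fixed the bug. A minimal sketch of that measure (the parameter names are
illustrative) is:

import java.util.List;

// Accuracy of bug triage: correctly assigned reports divided by all reports.
public class TriageAccuracySketch {
    public static double accuracy(List<String> predictedDevelopers, List<String> actualDevelopers) {
        int correct = 0;
        for (int i = 0; i < predictedDevelopers.size(); i++) {
            if (predictedDevelopers.get(i).equals(actualDevelopers.get(i))) correct++;
        }
        return predictedDevelopers.isEmpty() ? 0.0 : (double) correct / predictedDevelopers.size();
    }
}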

3.2. PERFORMANCE REQUIREMENTS


Performance is measured in terms of the output provided by the application. Requirement
specification plays an important part in the analysis of a system. Only when the requirement
specifications are properly given, it is possible to design a system, which will fit into required
environment. It rests largely with the users of the existing system to give the requirement
specifications because they are the people who finally use the system. This is because the
requirements have to be known during the initial stages so that the system can be designed
according to those requirements. It is very difficult to change the system once it has been
designed and on the other hand designing a system, which does not cater to the requirements of
the user, is of no use.

The requirement specification for any system can be broadly stated as given below:

➢ The system should be able to interface with the existing system


➢ The system should be accurate
➢ The system should be better than the existing system
➢ The existing system is completely dependent on the user to perform all the duties.

3.3. SOFTWARE REQUIREMENTS:

➢ Operating system : Windows XP.


➢ Coding Language : J2EE
➢ Data Base : MYSQL
➢ Web Server : Tomcat

3.4. HARDWARE REQUIREMENTS:

➢ System : Pentium IV, 2.4 GHz
➢ Hard Disk : 40 GB
➢ Floppy Drive : 1.44 MB
➢ Monitor : 15" VGA colour
➢ Mouse : Logitech
➢ RAM : 512 MB

CHAPTER-4

4.1. Java Technology


Java technology is both a programming language and a platform.
The Java Programming Language
The Java programming language is a high-level language that can be characterized by all of
the following buzzwords:

• Simple
• Architecture neutral
• Object oriented
• Portable
• Distributed

• High performance
• Interpreted
• Multithreaded
• Robust
• Dynamic
• Secure

With most programming languages, you either compile or interpret a program so that you can run it
on your computer. The Java programming language is unusual in that a program is both compiled and
interpreted. With the compiler, you first translate a program into an intermediate language called
Java byte codes: the platform-independent codes interpreted by the interpreter on the Java
platform. The interpreter parses and runs each Java byte code instruction on the computer.
Compilation happens just once; interpretation occurs each time the program is executed. The
following figure illustrates how this works.
You can think of Java byte codes as the machine code instructions for the Java Virtual Machine
(Java VM). Every Java interpreter, whether it’s a development tool or a Web browser that can
run applets, is an implementation of the Java VM. Java byte codes help make “write once, run
anywhere” possible. You can compile your program into byte codes on any platform that has a
Java compiler. The byte codes can then be run on any implementation of the Java VM. That
means that as long as a computer has a Java VM, the same program written in the Java
programming language can run on Windows 2000, a Solaris workstation, or on an iMac.
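
A minimal example of this compile-once, run-anywhere cycle (the class name is illustrative):

// Compile once:  javac HelloJava.java   -> produces platform-independent bytecode (HelloJava.class)
// Run anywhere:  java HelloJava         -> any Java VM interprets the same bytecode
public class HelloJava {
    public static void main(String[] args) {
        System.out.println("Running on: " + System.getProperty("os.name"));
    }
}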

4.2. The Java Platform


A platform is the hardware or software environment in which a program runs. We’ve already
mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS.
Most platforms can be described as a combination of the operating system and hardware. The
Java platform differs from most other platforms in that it’s a software-only platform that runs on
top of other hardware-based platforms.
The Java platform has two components:
➢ The Java Virtual Machine (Java VM)
➢ The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform and is ported
onto various hardware-based platforms.
The Java API is a large collection of ready-made software components that provide many useful
capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into
libraries of related classes and interfaces; these libraries are known as packages. The next
section, What Can Java Technology Do?, highlights what functionality some of the packages in the
Java API provide.
The following figure depicts a program that’s running on the Java platform. As the figure shows,
the Java API and the virtual machine insulate the program from the hardware.

Native code is code that, after compilation, runs on a specific hardware platform. As a
platform-independent environment, the Java platform can be a bit slower than
native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code
compilers can bring performance close to that of native code without threatening portability.

What Can Java Technology Do?


The most common types of programs written in the Java programming language are applets and
applications. If you’ve surfed the Web, you’re probably already familiar with applets. An applet
is a program that adheres to certain conventions that allow it to run within a Java-enabled
browser.
However, the Java programming language is not just for writing cute, entertaining applets for the
Web. The general-purpose, high-level Java programming language is also a powerful software
platform. Using the generous API, you can write many types of programs.
An application is a standalone program that runs directly on the Java platform. A special kind of
application known as a server serves and supports clients on a network. Examples of servers are
Web servers, proxy servers, mail servers, and print servers. Another specialized program is a
servlet. A servlet can almost be thought of as an applet that runs on the server side. Java Servlets
are a popular choice for building interactive web applications, replacing the use of CGI scripts.

Servlets are similar to applets in that they are runtime extensions of applications. Instead of
working in browsers, though, servlets run within Java Web servers, configuring or tailoring the
server.

How does the API support all these kinds of programs? It does so with packages of software
components that provides a wide range of functionality. Every full implementation of the Java
platform gives you the following features:
➢ The essentials: Objects, strings, threads, numbers, input and output, data structures,
system properties, date and time, and so on.
➢ Applets: The set of conventions used by applets.
➢ Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram Protocol) sockets,
and IP (Internet Protocol) addresses.
➢ Internationalization: Help for writing programs that can be localized for users
worldwide. Programs can automatically adapt to specific locales and be displayed in the
appropriate language.
➢ Security: Both low level and high level, including electronic signatures, public and
private key management, access control, and certificates.
➢ Software components: Known as JavaBeans™, these can plug into existing component architectures.
➢ Object serialization: Allows lightweight persistence and communication via Remote
Method Invocation (RMI).
➢ Java Database Connectivity (JDBC™): Provides uniform access to a wide range of relational
databases.
➢ The Java platform also has APIs for 2D and 3D graphics, accessibility, servers,
collaboration, telephony, speech, animation, and more. The following figure depicts what
is included in the Java 2 SDK.

Java uses
We can’t promise you fame, fortune, or even a job if you learn the Java programming language.
Still, it is likely to make your programs better and requires less effort than other languages. We
believe that Java technology will help you do the following:
➢ Get started quickly: Although the Java programming language is a powerful object-
oriented language, it’s easy to learn, especially for programmers already familiar with C
or C++.
➢ Write less code: Comparisons of program metrics (class counts, method counts, and so
on) suggest that a program written in the Java programming language can be four times
smaller than the same program in C++.
➢ Write better code: The Java programming language encourages good coding practices,
and its garbage collection helps you avoid memory leaks. Its object orientation, its
JavaBeans component architecture, and its wide-ranging, easily extendible API let you
reuse other people’s tested code and introduce fewer bugs.
➢ Develop programs more quickly: Your development time may be as much as twice as fast as when
writing the same program in C++, because you write fewer lines of code and the language itself is
simpler than C++.

➢ Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding
the use of libraries written in other languages. The 100% Pure Java™ Product Certification Program
has a repository of historical process manuals, white papers, brochures, and similar materials
online.
➢ Write once, run anywhere: Because 100% Pure Java programs are compiled into
machine-independent byte codes, they run consistently on any Java platform.
➢ Distribute software more easily: You can upgrade applets easily from a central server.
Applets take advantage of the feature of allowing new classes to be loaded “on the fly,”
without recompiling the entire program.

4.3. ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming interface for
application developers and database systems providers. Before ODBC became a de facto
standard for Windows programs to interface with database systems, programmers had to use
proprietary languages for each database they wanted to connect to. Now, ODBC has made the
choice of the database system almost irrelevant from a coding perspective, which is as it should
be. Application developers have much more important things to worry about than the syntax that
is needed to port their program from one database to another when business needs suddenly
change.
Through the ODBC Administrator in Control Panel, you can specify the particular database that
is associated with a data source that an ODBC application program is written to use. Think of an
ODBC data source as a door with a name on it. Each door will lead you to a particular database.
For example, the data source named Sales Figures might be a SQL Server database, whereas the
Accounts Payable data source could refer to an Access database. The physical database referred
to by a data source can reside anywhere on the LAN.
From a programming perspective, the beauty of ODBC is that the application can be written to
use the same set of function calls to interface with any data source, regardless of the database
vendor. The source code of the application doesn’t change whether it talks to Oracle or SQL
Server. We only mention these two as an example. There are ODBC drivers available for several
dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into
data sources.

4.4. JDBC
In an effort to set an independent database standard API for Java, Sun Microsystems developed
Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism
that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved
through the use of “plug-in” database connectivity modules, or drivers. If a database vendor
wishes to have JDBC support, he or she must provide the driver for each platform that the
database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you
discovered earlier in this chapter, ODBC has widespread support on a variety of platforms.
Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than
developing a completely new connectivity solution.
JDBC was announced in March of 1996. It was released for a 90 day public review that ended
June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after.
The remainder of this section will cover enough information about JDBC for you to know what it
is about and how to use it effectively. This is by no means a complete overview of JDBC. That
would fill an entire book.

JDBC Goals
Few software packages are designed without goals in mind, and JDBC is no exception: its many goals
drove the development of the API. These goals, in conjunction with early reviewer feedback, have
finalized the JDBC class library into a solid framework for building database applications in
Java.
The goals that were set for JDBC are important. They will give you some insight as to why
certain classes and functionalities behave the way they do. The eight design goals for JDBC are
as follows:

SQL Level API


The designers felt that their main goal was to define a SQL interface for Java. Although not
the lowest database interface level possible, it is at a low enough level for higher-level tools and
APIs to be created. Conversely, it is at a high enough level for application programmers to use it
confidently. Attaining this goal allows future tool vendors to "generate" JDBC code and to hide
many of JDBC's complexities from the end user.
SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an effort to support
a wide variety of vendors, JDBC will allow any query statement to be passed through it to the
underlying database driver. This allows the connectivity module to handle non-standard
functionality in a manner that is suitable for its users.
JDBC must be implementable on top of common database interfaces
The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows
JDBC to use existing ODBC level drivers by the use of a software interface. This interface
would translate JDBC calls to ODBC and vice versa.
Provide a Java interface that is consistent with the rest of the Java system
Because of Java’s acceptance in the user community thus far, the designers feel that they should
not stray from the current design of the core Java system.
Keep it simple
This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt
that the design of JDBC should be very simple, allowing for only one method of completing a
task per mechanism. Allowing duplicate functionality only serves to confuse the users of the
API.
Use strong, static typing wherever possible
Strong typing allows more error checking to be done at compile time; consequently, fewer errors
appear at runtime.
Keep the common cases simple
Because more often than not, the usual SQL calls used by the programmer are simple
SELECT’s, INSERT’s, DELETE’s and UPDATE’s, these queries should be simple to perform
with JDBC. However, more complex SQL statements should also be possible.
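
A short JDBC sketch of such a simple parameterised SELECT is shown below. The JDBC URL, table, and
column names are illustrative only and assume a MySQL driver on the classpath; they are not taken
from the project's actual schema.

import java.sql.*;

// Minimal JDBC usage: open a connection, run a parameterised SELECT, read the rows.
public class JdbcSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/bugtriage";   // hypothetical database
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT bug_id, bug_name, priority FROM bugs WHERE status = ?")) {
            ps.setString(1, "OPEN");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("bug_id") + " " + rs.getString("bug_name")
                            + " " + rs.getString("priority"));
                }
            }
        }
    }
}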

4.5. Networking
TCP/IP stack

The TCP/IP stack is shorter than the OSI one.

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagrams

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram
independently of the others. Any association between datagrams must be supplied by the higher
layers. The IP layer supplies a checksum that includes its own header. The header includes the
source and destination addresses. The IP layer handles routing through an Internet. It is also
responsible for breaking up large datagrams into smaller ones for transmission and reassembling
them at the other end.

UDP

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of
the datagram and port numbers. These are used to provide a client/server model, described below.

TCP

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual
circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for
machines so that they can be located. The address is a 32-bit integer which gives the IP address;
it encodes a network ID and further addressing. The network ID falls into various classes
according to the size of the network address.

Network address

Class A uses 8 bits for the network address with 24 bits left over for other addressing. Class B
uses 16 bit network addressing. Class C uses 24 bit network addressing and class D uses all 32.
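
As an illustration only (modern networks use classless CIDR addressing rather than these classes),
the class of an IPv4 address can be read from its first octet:

// Classful addressing sketch: determine the class from the first octet of an IPv4 address.
public class AddressClassSketch {
    public static char addressClass(int firstOctet) {
        if (firstOctet < 128) return 'A';   // 0xxxxxxx : 8-bit network id
        if (firstOctet < 192) return 'B';   // 10xxxxxx : 16-bit network id
        if (firstOctet < 224) return 'C';   // 110xxxxx : 24-bit network id
        return 'D';                         // 1110xxxx : multicast (class E, 240+, ignored here)
    }

    public static void main(String[] args) {
        System.out.println(addressClass(130));   // prints B
    }
}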

Subnet address

Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub
network and uses 10-bit addressing, allowing 1024 different hosts.

Host address

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines
that can be on the subnet.

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit number. To send a
message to a server, you send it to the port for that service of the host that it is running on. This
is not location transparency! Certain of these ports are "well known".

Sockets

A socket is a data structure maintained by the system to handle network connections. A socket is
created using the socket call, which returns an integer that is like a file descriptor. In fact,
under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);

Here "family" will be AF_INET for IP communications, protocol will be zero, and type will
depend on whether TCP or UDP is used. Two processes wishing to communicate over a network
create a socket each. These are similar to two ends of a pipe
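
The same idea looks like this in Java, where java.net.Socket wraps the underlying TCP socket; the
host name and port below are illustrative only:

import java.io.*;
import java.net.Socket;

// Minimal TCP client: connect to a server, send one line, read one line back.
public class SocketSketch {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9090);          // hypothetical server
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("hello");
            System.out.println("Server replied: " + in.readLine());
        }
    }
}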

CHAPTER-5

SYSTEM DESIGN

5.1. INTRODUCTION
Systems design is the process or art of defining the architecture, components, modules,
interfaces, and data for a system to satisfy specified requirements. One could see it as the
application of systems theory to product development. There is some overlap and synergy with the
disciplines of systems analysis, systems architecture and systems engineering.

UML DIAGRAMS

Unified Modelling Language:


The Unified Modelling Language allows the software engineer to express an analysis model using a
modelling notation that is governed by a set of syntactic, semantic and pragmatic rules.

A UML system is represented using five different views that describe the system from distinctly
different perspectives. Each view is defined by a set of diagrams, as follows.

User Model View

This view represents the system from the user's perspective. The analysis representation describes
a usage scenario from the end-user's perspective.

Structural Model View

In this model the data and functionality are derived from inside the system. This model view
models the static structures.

Behavioural Model View

It represents the dynamic (behavioural) parts of the system, depicting the interactions among the
structural elements described in the user model and structural model views.

Implementation Model View

In this view, the structural and behavioural parts of the system are represented as they are to be
built.

Environmental Model View

In this view, the structural and behavioural aspects of the environment in which the system is to
be implemented are represented.

UML is specifically constructed through two different domains they are:

➢ UML Analysis modelling, this focuses on the user model and structural model views of
the system.
➢ UML design modelling, which focuses on the behavioural modelling, implementation
modelling and environmental model views.

Use case diagrams represent the functionality of the system from a user's point of view. Use cases
are used during requirements elicitation and analysis to represent the functionality of the
system. Use cases focus on the behaviour of the system from an external point of view.

5.2. UML DIAGRAMS

USE CASE DIAGRAM

CLASS DIAGRAM

SEQUENCE DIAGRAM

ACTIVITY DIAGRAM

5.3. E-R DIAGRAM

CHAPTER-6
MODULE CODING
package bugtriage;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Font;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.swing.*;
// Login window: lets the user choose the Manager or Developer role.
public class login extends JFrame implements ActionListener {
private JButton jButton11;
private JLabel jLabel2;
private JButton jButton13;
private JLabel jLabel4;
private JLabel jLabel5;
private JPanel jPanel2;

public login() {
this.initComponents();
}

private void initComponents() {

this.jPanel2 = new JPanel();


this.jLabel2 = new JLabel();

this.jLabel5 = new JLabel();
this.jLabel4 = new JLabel();
this.jButton11=new JButton();
this.jButton13=new JButton();
this.setTitle("Login")
this.setDefaultCloseOperation(3);
this.setMinimumSize(new Dimension(800, 630));
this.setResizable(false);
this.getContentPane().setLayout(null);
this.jPanel2.setBackground(new Color(255, 255, 255));
this.jPanel2.setBorder(BorderFactory.createEtchedBorder());
this.jPanel2.setLayout(null);
this.jButton11.setIcon(new ImageIcon(this.getClass().getResource("masn.png")));
this.jButton11.addActionListener(this);
this.jPanel2.add(this.jButton11);
this.jButton11.setBounds(40, 190, 280, 380);
this.jButton11.setText("manager");
this.jLabel2.setFont(new Font("Times New Roman", 1, 18));
this.jLabel2.setText("Manager");
this.jPanel2.add(this.jLabel2);
this.jLabel2.setBounds(90, 140, 130, 40);
this.jButton13.setIcon(new ImageIcon(this.getClass().getResource("deve.png")));
this.jButton13.addActionListener(this);
this.jButton13.setText("developer");
this.jPanel2.add(this.jButton13);
this.jButton13.setBounds(470, 190, 290, 400);
this.jLabel5.setFont(new Font("Times New Roman", 1, 18));
this.jLabel5.setText("Developer");
this.jPanel2.add(this.jLabel5);
this.jLabel5.setBounds(560, 140, 80, 40);

this.jLabel4.setFont(new Font("Times New Roman", 1, 18));
this.jLabel4.setText("Novel Approach on Towards Effective Bug Triage with
Software Data Reduction Techniques");
this.jPanel2.add(this.jLabel4);
this.jLabel4.setBounds(40, 40,800, 60);
this.getContentPane().add(this.jPanel2);
this.jPanel2.setBounds(0, 0, 800, 630);
this.pack();
}

private void jb1Clicked(ActionEvent e) {
// Open the manager window (class mnger is defined elsewhere in this package) and hide the login screen.
new mnger().setVisible(true);
this.setVisible(false);
}

private void jb3Clicked(ActionEvent e) {
// Open the developer window (class user is defined elsewhere in this package) and hide the login screen.
this.setVisible(false);
user u = new user();
u.setVisible(true);
}

public static void main(String[] args) {


try {
for (UIManager.LookAndFeelInfo info : UIManager.getInstalledLookAndFeels()) {

if (!"Nimbus".equals(info.getName())) continue;
UIManager.setLookAndFeel(info.getClassName());
break;
}
}
catch (ClassNotFoundException ex) {

Logger.getLogger(login.class.getName()).log(Level.SEVERE, null, ex);
}
catch (InstantiationException ex) {
Logger.getLogger(login.class.getName()).log(Level.SEVERE, null, ex);
}
catch (IllegalAccessException ex) {
Logger.getLogger(login.class.getName()).log(Level.SEVERE, null, ex);
}
catch (UnsupportedLookAndFeelException ex) {
Logger.getLogger(login.class.getName()).log(Level.SEVERE, null, ex);
}

new login().setVisible(true);
}
@Override
public void actionPerformed(ActionEvent e) {

String text=e.getActionCommand();

if(text.equals("manager"))
{
jb1Clicked(e);
}
else
{
if(text.equals("developer"))
{
jb3Clicked(e); }}
}}

CHAPTER-7
OUTPUT SCREENS

LOGIN PAGE

MANAGER LOGIN PAGE

DEVELOPER LOGIN PAGE

REGISTRATION PAGE

DATA PREPROCESSING PAGE

DATA REDUCTION PAGE

REPORT ANALYSIS PAGE

BUGS REPORT ANALYSIS

BUG RECTIFICATION STATUS PAGE

CHAPTER-8
CONCLUSION

Bug triage is an expensive step of software maintenance in both labour cost and time cost. In this
paper, we combine feature selection with instance selection to reduce the scale of bug data sets
as well as improve the data quality. To determine the order of applying instance selection and
feature selection for a new bug data set, we extract attributes of each bug data set and train a
predictive model based on historical data sets. We empirically investigate the data reduction for
bug triage in bug repositories of two large open source projects, namely Eclipse and Mozilla.
Our work provides an approach to leveraging data-processing techniques to form reduced and
high-quality bug data in software development and maintenance. In future work, we plan to improve
the results of data reduction in bug triage, to explore how to prepare a high-quality bug data
set, and to tackle domain-specific software tasks. For predicting reduction orders, we plan to
investigate the potential relationships among the attributes of bug data sets.

FUTURE ENHANCEMENT
The project has covered almost all the requirements. Further requirements and improvements can
easily be accommodated, since the coding is mainly structured and modular in nature.

CHAPTER-9
BIBLIOGRAPHY

[1] J. Anvik, L. Hiew, and G. C. Murphy, “Who should fix this bug?” in Proc. 28th Int. Conf.
Softw. Eng., May 2006, pp. 361–370.

[2] S. Artzi, A. Kie_zun, J. Dolby, F. Tip, D. Dig, A. Paradkar, and M. D. Ernst, “Finding bugs
in web applications using dynamic test generation and explicit-state model checking,” IEEE
Softw., vol. 36, no. 4, pp. 474–494, Jul./Aug. 2010.

[3] J. Anvik and G. C. Murphy, “Reducing the effort of bug report triage: Recommenders for
development-oriented decisions,” ACM Trans. Soft. Eng. Methodol., vol. 20, no. 3, article 10,
Aug. 2011.

[4] C. C. Aggarwal and P. Zhao, “Towards graphical models for text processing,” Knowl.
Inform. Syst., vol. 36, no. 1, pp. 1–21, 2013.

[5] Bugzilla, (2014). [Online]. Available: http://bugzilla.org/

[6] K. Balog, L. Azzopardi, and M. de Rijke, “Formal models for expert finding in enterprise
corpora,” in Proc. 29th Annu. Int. ACM SIGIR Conf. Res. Develop. Inform. Retrieval, Aug.
2006, pp. 43–50.

[7] P. S. Bishnu and V. Bhattacherjee, “Software fault prediction using quad tree-based k-means
clustering algorithm,” IEEE Trans. Knowl. Data Eng., vol. 24, no. 6, pp. 1146–1150, Jun. 2012.
