


[IJESAT] INTERNATIONAL JOURNAL OF ENGINEERING SCIENCE & ADVANCED TECHNOLOGY
ISSN: 2250-3676, Volume-2, Special Issue-1, pp. 88-92

IEB: INTEGRATED TESTING ENVIRONMENT ON LOOP BOUNDARIES


R. Nagendra Babu 1, B. Chaitanya Krishna 2, Ch. V. Phani Krishna 3

1 M.Tech (CSE), K L University, Andhra Pradesh, India, nrajaboina@yahoo.com
2 Assistant Professor, CSE, K L University, Andhra Pradesh, India, chaitu2502@gmail.com
3 Associate Professor, CSE, K L University, Andhra Pradesh, India, phanik16@yahoo.co.in

Abstract
Specifications of unit behavior are usually informal and are often incomplete or ambiguous, leading to the development of overly general or incorrect unit tests. System testing executes every unit of code of the entire system, but it depends heavily on unit test cases: if any test case contains errors, the error propagates all the way to system testing. To overcome this, test cases must be efficient in each phase. Some test cases execute perfectly on individual components, but once modules are integrated that set of test cases is no longer sufficient, and either redundant test case execution or insufficient testing results. In our proposed framework, we eliminate redundant test cases from the execution sequence and identify more test points. The framework further reduces time by concentrating on loops: there is no need to check every value within the loop boundaries. The proposed system is shown to be more efficient than existing approaches, reducing test cost and testing time to enhance integration testing.

Index Terms: Module Testing, Reducing Test Cost.

1. INTRODUCTION
Test engineers design individual unit test cases to test components. A single component may execute perfectly in isolation, but when components are combined into modules, testing can become time consuming. Unit test cases target a particular variable or GUI component such as a button, text box, or checkbox, and the component works correctly with respect to its unit test case. However, each component has pages of interaction logic and behaves differently in different modules. Throughout their lifetimes, software systems undergo numerous changes. Such changes are essential to accommodate new technologies and user needs, but they can also adversely impact the quality and reliability of the software. To address this problem, software test engineers perform module testing to revalidate software following modifications. Integration testing is important, but it is also expensive. For example, one of the authors of this article works with an industrial collaborator who has, for a software system of only about 20,000 lines of code, a test suite that requires seven CPU weeks to run. A second industrial collaborator, for a much larger system, runs tests continuously in a rolling test cycle requiring over 30 days. Even much shorter module test cycles can be expensive, however, when they require human effort to set up, execute, or validate the outputs of tests. To reduce the costs associated with integration testing, researchers have proposed several techniques, but these mainly concentrate on unit test cases, which are time consuming.

Generally, unit test cases are executed completely, without fail, but executing loops takes more time than executing individual components. We perform testing on software with four approaches: unit testing, component testing, integration testing, and finally system testing. Every system engineer starts with unit testing. Unit testing involves testing one component or option in all cases without fail; if a loop is to be checked, it is checked for all values, and satisfying the precondition and postcondition is also tested. While system tests are an essential component of all practical software validation methods, they do have several disadvantages. They can be expensive to execute; for large systems, days or weeks and considerable human effort may be needed to run a thorough suite of module tests [8]. In addition, even very thorough system testing may fail to exercise the full range of behavior implemented by a system's particular units; thus, system testing cannot be viewed as an effective replacement for unit testing. Finally, fault isolation and repair during system testing can be significantly more expensive than during unit testing. System testing is therefore not a direct replacement for unit testing. Developing effective suites of unit test cases presents a number of challenges. Specifications of unit behavior are usually informal and are often incomplete or ambiguous,


leading to the development of overly general or incorrect unit tests. Furthermore, such specifications may evolve independently of implementations, requiring additional maintenance of unit tests even if implementations remain unchanged. Testers may find it difficult to imagine sets of unit input values that exercise the full range of unit behavior, and thereby fail to exercise the different ways in which the unit will be used as part of a system. An alternative approach to unit test development, which does not rely on specifications, is based on the analysis of a unit's implementation. Testers developing unit tests in this way may focus, for example, on achieving coverage-adequacy criteria in testing the target unit's code. Such tests, however, are inherently susceptible to errors of omission with respect to specified unit behavior and may thereby miss certain faults. Finally, unit testing requires the development of test harnesses or the setup of a testing framework to make the units executable in isolation. Software engineers also develop system tests, usually based on documents that are available for most software systems and that describe the system's functionality from the user's perspective, for example, requirement documents and user manuals. This makes system tests appropriate for determining the readiness of a system for release or its acceptability to customers. Additional benefits accrue from testing system-level behaviors directly. First, system tests can be developed without an intimate knowledge of the system internals, which reduces the level of expertise required by test developers and makes tests less sensitive to implementation-level changes that are behavior preserving. Second, system tests may expose faults that unit tests do not, for example, faults that emerge only when multiple units are integrated and jointly utilized. Finally, since they involve executing the entire system, no individual harnesses need to be constructed.


2. RELATED WORK

Initially, most studies of integrated test selection and test case prioritization focused on the cost-effectiveness of individual techniques, the estimation of a technique's performance, or comparisons of techniques [3, 7, 10, 12]. These studies showed that various techniques could be cost-effective and suggested tradeoffs among them. However, the studies also revealed wide variances in performance, and attributed these to factors involving the programs under test, the test suites used to test them, and the types of modifications made to the programs. More recent studies have begun to examine these factors. We [10] studied the effects of test case granularity on integrated testing techniques, varying the composition of test suites and examining the effects on the cost-effectiveness of test selection and prioritization. This experiment was performed on two large programs with sequences of versions drawn from the field, containing various modifications. The focus of this experiment, however, was strictly test suite composition: the independent variable manipulated was the number of individual test inputs constituting each test case in the suite. The work did not consider or measure change attributes, and no attempt was made to correlate attributes of change with technique performance.

Two previous studies are more closely related to this one. To investigate the impact of integrated testing frequency on test selection, Kim et al. [13] varied the number of changes made in versions of several programs, and measured the impact of this number on test selection and fault detection effectiveness. Elbaum et al. [8] performed experiments exploring the effects of program structure, test suite composition, and changes on prioritization, and identified several metrics characterizing these attributes that correlate with prioritization effectiveness. These studies differ from the work reported in this article in several ways. First, neither of them specifically examined how the type and magnitude of changes, their distribution throughout the program, and their relation to test coverage patterns affect the cost-effectiveness of integrated test selection techniques. Second, although [8] did consider various change characteristics with respect to test case prioritization, it focused on identifying such characteristics, not on measuring their effects. Moreover, both studies were performed on a set of small programs (an existing suite of seven programs of less than 1K LOC, and one of 6.2K LOC), whose modifications consisted solely of one- or two-line seeded faults, singly or in combination. This constrains the generality of the results and leaves unresolved questions about the scalability and applicability of the relationships determined.

ATPG (an acronym for both Automatic Test Pattern Generation and Automatic Test Pattern Generator) is an electronic design automation method used to find an input (or test) sequence that, when applied to a digital circuit, enables testers to distinguish between the correct circuit behavior and the faulty circuit behavior caused by defects. The generated patterns are used to test semiconductor devices after manufacture, and in some cases to assist with determining the cause of failure (failure analysis) [1]. The effectiveness of ATPG is measured by the number of modeled defects, or fault models, that are detected and the number of generated patterns. These metrics generally indicate test quality (higher with more fault detections) and test application time (higher with more patterns). ATPG efficiency is another important


consideration. It is influenced by the fault model under consideration, the type of circuit under test (full scan, synchronous sequential, or asynchronous sequential), and the level of abstraction used to represent the circuit under test (gate or register-transfer level). A defect is an error introduced into a device during the manufacturing process. A fault model is a mathematical description of how a defect alters design behavior. A fault is said to be detected by a test pattern if, when applying the pattern to the design, any logic value observed at one or more of the circuit's primary outputs differs between the original design and the design with the fault. The ATPG process for a targeted fault consists of two phases: fault activation and fault propagation. Fault activation establishes a signal value at the fault model site that is opposite of the value produced by the fault model. Fault propagation moves the resulting signal value, or fault effect, forward by sensitizing a path from the fault site to a primary output. ATPG can fail to find a test for a particular fault in at least two cases. First, the fault may be intrinsically undetectable, such that no patterns exist that can detect that particular fault. The classic example of this is a redundant circuit, designed so that no single fault causes the output to change; in such a circuit, any single fault is inherently undetectable. Second, it is possible that a pattern exists but the algorithm cannot find it. Since the ATPG problem is NP-complete (by reduction from the Boolean satisfiability problem), there will be cases where patterns exist but ATPG gives up, since it would take an incredibly long time to find them. Over the past several decades, the most popular fault model used in practice has been the single stuck-at fault model. In this model, one of the signal lines in a circuit is assumed to be stuck at a fixed logic value, regardless of what inputs are supplied to the circuit. Hence, if a circuit has n signal lines, there are potentially 2n stuck-at faults defined on the circuit, of which some can be viewed as being equivalent to others. The stuck-at fault model is a logical fault model because no delay information is associated with the fault definition. It is also called a permanent fault model because the faulty effect is assumed to be permanent, in contrast to intermittent faults, which occur (seemingly) at random, and transient faults, which occur sporadically, perhaps depending on operating conditions (e.g., temperature, power supply voltage) or on the data values (high or low voltage states) on surrounding signal lines. The single stuck-at fault model is structural because it is defined based on a structural gate-level circuit model.


A pattern set with 100% stuck-at fault coverage consists of tests that detect every possible stuck-at fault in a circuit. 100% stuck-at fault coverage does not necessarily guarantee high quality, since faults of many other kinds, such as bridging faults, open faults, and transition (delay) faults, often occur.
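To make the stuck-at discussion concrete, the following minimal Java sketch (our own illustration; the two-gate circuit, signal names, and test pattern are hypothetical and not taken from any referenced tool) models a circuit out = (a AND b) OR c, injects a stuck-at-0 fault on the internal AND line, and detects it by comparing the primary output of the good and faulty circuits.

    // Minimal sketch of stuck-at fault detection for a hypothetical circuit: out = (a AND b) OR c.
    public class StuckAtFaultDemo {

        // Evaluate the circuit; if faultOnAndLine is true, the internal line
        // carrying (a AND b) is treated as stuck at logic 0.
        static boolean evaluate(boolean a, boolean b, boolean c, boolean faultOnAndLine) {
            boolean andLine = a && b;
            if (faultOnAndLine) {
                andLine = false;            // stuck-at-0 on the internal signal line
            }
            return andLine || c;            // primary output
        }

        public static void main(String[] args) {
            // Fault activation: drive the AND line to 1 (a = 1, b = 1), the opposite of the stuck value.
            // Fault propagation: keep c = 0 so the faulty value reaches the primary output.
            boolean a = true, b = true, c = false;

            boolean goodOutput = evaluate(a, b, c, false);
            boolean faultyOutput = evaluate(a, b, c, true);

            System.out.println("good = " + goodOutput + ", faulty = " + faultyOutput
                    + " -> fault detected: " + (goodOutput != faultyOutput));
        }
    }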

3. FRAMEWORK OF IEB
IEB mainly concentrates on loops and their test cases. The system engineer designs unit test cases for loops, and these test cases are executed completely as part of unit testing. The level above unit testing is module testing, where the already-tested component is tested again and the results are saved. Testing this component yet again during integration testing, however, is costly. Our approach has two steps:

1. Select the loop of a component
2. Design the test case

In the first step, select the loop of a component together with its boundaries, as in the following example:

    for (i = 9; i <= 18; i++) { ... }

or, equivalently:

    i = 9;
    for (;;) {
        if (i > 18) break;
        ...
        i++;
    }

The loop may also be embedded in a component, as in the following class:

    public class BookPrinter {
        public int index = 0;
        public String current;
        public int length;

        public void printBook() {
            while (index < length) {
                current = read();
                incIndex();
            }
        }
    }
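For contrast with the reduced checking introduced in the second step below, a conventional unit-level check of the loop above exercises every value within its boundaries. The following self-contained Java sketch is only an assumption-laden illustration: read() and incIndex() are not defined in the paper, so trivial stand-ins are used, and the upper bound 18 is borrowed from the earlier for-loop example.

    // Self-contained sketch (hypothetical): a BookPrinter-like class with stand-in
    // read()/incIndex() methods, exercised exhaustively as plain unit testing would.
    public class BookPrinterUnitSketch {

        int index = 0;
        String current;
        int length;

        String read() { return "page-" + index; }   // stand-in for the paper's read()
        void incIndex() { index++; }                 // stand-in for the paper's incIndex()

        void printBook() {
            while (index < length) {
                current = read();
                incIndex();
            }
        }

        public static void main(String[] args) {
            // Unit testing checks the loop for all lengths within its boundaries.
            for (int len = 0; len <= 18; len++) {
                BookPrinterUnitSketch p = new BookPrinterUnitSketch();
                p.length = len;
                p.printBook();
                System.out.println("length=" + len + " final index=" + p.index
                        + " ok=" + (p.index == len));
            }
        }
    }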


In the second step, the following algorithm is applied to reduce time and cost compared to unit testing. The entire range of loop values has already been checked during unit testing, so checking every value again during integration testing is time consuming.

Steps:
1) Design a test case for the loop with the following fields: Test Case ID, Precondition, Post Condition, Expected Result, Actual Result, Comparison, Modifications
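As one possible concrete reading of the template in step 1 (the class and field names below are our own illustration, not defined by the paper), a loop test case could be recorded as:

    // Hypothetical record of one loop test case, mirroring the fields in step 1.
    public class LoopTestCase {
        public String testCaseId;       // Test Case ID
        public String precondition;     // e.g. "i = 9 on entry"
        public String postCondition;    // e.g. "i = 19 on exit"
        public String expectedResult;
        public String actualResult;
        public boolean comparison;      // do expected and actual results match?
        public String modifications;    // changes to apply before integration testing

        public LoopTestCase(String id, String pre, String post, String expected) {
            this.testCaseId = id;
            this.precondition = pre;
            this.postCondition = post;
            this.expectedResult = expected;
        }

        public void record(String actual) {
            this.actualResult = actual;
            this.comparison = expectedResult.equals(actual);
        }
    }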


2) Consider the pre-boundary values and their range
3) Range for α = (α++ + α--)/2
4) Range for β = (β++ + β--)/2, giving ((α--, α, α++), (β--, β, β++))
5) Consider the post-boundary values and their range
6) Consider random values to execute: return Rand()
7) Apply the changes before integration testing

If (α, β) are the range values for the loop, integration testing also involves (α--, β++). For α the testing range is (α--, α, α++) and for β the testing range is (β--, β, β++). Testing these values, together with a few random values, is sufficient in integration testing because every value has already been tested in unit testing.

Example:

    for (i = 9; i <= 18; i++) {
        ...
        if (i == 11) ...
        ...
        if (i == 15) ...
    }

1) Check this loop for 7, 8, 9 (precondition values)
2) Check this loop for 17, 18, 19 (postcondition values)
3) Check this loop for 11, 15, and a random value (main condition values)
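The following Java sketch (our own illustration; the method and variable names are assumptions, not the paper's) derives the reduced set of integration-level test points for the example loop above: the neighborhood of the lower boundary (7, 8, 9), the neighborhood of the upper boundary (17, 18, 19), the interior condition points 11 and 15, and one random interior value, instead of all ten values from 9 to 18.

    import java.util.ArrayList;
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Random;
    import java.util.Set;

    // Hypothetical sketch of IEB-style test point selection for a loop with
    // boundaries [alpha, beta] and interior condition points.
    public class LoopBoundaryTestPoints {

        static Set<Integer> select(int alpha, int beta, List<Integer> conditionPoints) {
            Set<Integer> points = new LinkedHashSet<>();
            // Precondition neighborhood of the lower boundary.
            points.add(alpha - 2);
            points.add(alpha - 1);
            points.add(alpha);
            // Postcondition neighborhood of the upper boundary.
            points.add(beta - 1);
            points.add(beta);
            points.add(beta + 1);
            // Main condition values inside the loop body.
            points.addAll(conditionPoints);
            // One random interior value.
            points.add(alpha + new Random().nextInt(beta - alpha + 1));
            return points;
        }

        public static void main(String[] args) {
            List<Integer> conditions = new ArrayList<>();
            conditions.add(11);
            conditions.add(15);
            // For the example loop for (i = 9; i <= 18; i++):
            System.out.println(select(9, 18, conditions));
            // Prints something like [7, 8, 9, 17, 18, 19, 11, 15, 13]
            // instead of all ten values 9..18.
        }
    }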

4. CONCLUSION
This framework reduces the time of integration testing compared to existing approaches. Minimization produced the smallest and the least effective test suites; although fault detection is obviously important, there are cases where testing is very expensive, and in these cases minimization may be cost-effective. Change size, for example, does not seem to be the predominant factor in determining the cost-effectiveness of techniques. Instead, the distribution of changes across functions and files, and whether the test cases reached those changes, seem to be the main contributors to the variation observed across all programs. A simple lines-of-code mentality used to evaluate prospective modifications will not produce effective results. Integration testing concentrates mainly on boundaries rather than on all values, so the test cost is reduced to some extent relative to unit testing.

REFERENCES
[1] S. Elbaum, H. N. Chin, M. B. Dwyer, and M. Jorde, Carving and Replaying Differential Unit Test Cases from System Test Cases, Jan./Feb. 2009.
[2] JTest, Jtest Product Overview, http://www.parasoft.com/jsp/products/home.jsp?product=Jtest, Oct. 2005.
[3] C. Pacheco and M. D. Ernst, Eclat: Automatic Generation and Classification of Test Inputs, Proc. 19th European Conf. Object-Oriented Programming, pp. 504-527, July 2005.
[4] N. Juristo, A. M. Moreno, and S. Vegas, Reviewing 25 Years of Testing Technique Experiments, Empirical Software Engineering: An International Journal, 9(1), Mar. 2004.
[5] D. Saff and M. D. Ernst, An Experimental Evaluation of Continuous Testing During Development, Proc. Intl. Symp. Software Testing and Analysis, pp. 76-85, July 2004.
[6] T. Xie and D. Notkin, Macro and Micro Perspectives on Strategic Software Quality Assurance in Resource Constrained Environments, Proc. EDSER-4, May 2002.
[7] J. Kim and A. Porter, A History-Based Test Prioritization Technique for Regression Testing in Resource Constrained Environments, Proc. Intl. Conf. Software Engineering, May 2002.



[8] B. Kitchenham, S. Pfleeger, L. Pickard, P. Jones, D. Hoaglin, K. Emam, and J. Rosenberg, Preliminary Guidelines for Empirical Research in Software Engineering, IEEE Transactions on Software Engineering, 28(8):721-734, Aug. 2002.
[9] S. Elbaum, D. Gable, and G. Rothermel, Understanding and Measuring the Sources of Variation in the Prioritization of Regression Test Suites, Proc. International Software Metrics Symposium, pp. 169-179, Apr. 2001.
[10] B. Weide, Modular Regression Testing: Connections to Component-Based Software, Proc. Fourth ICSE Workshop on Component-Based Software Engineering, pp. 82-91, May 2001.
[11] J. Bible, G. Rothermel, and D. Rosenblum, Coarse- and Fine-Grained Safe Regression Test Selection, ACM Transactions on Software Engineering and Methodology, 10(2):149-183, Apr. 2001.
[12] T. Ball, On the Limit of Control Flow Analysis for Regression Test Selection, Proc. ACM International Symposium on Software Testing and Analysis, pp. 134-142, Mar. 1998.
[13] Y. Chen, D. Rosenblum, and K. Vo, TestTube: A System for Selective Regression Testing, Proc. 16th International Conference on Software Engineering, pp. 211-220, May 1994.
[14] M. Balcer, W. Hasling, and T. Ostrand, Automatic Generation of Test Scripts from Formal Test Specifications, Proc. 3rd ACM Symposium on Software Testing, Analysis, and Verification, pp. 210-218, Dec. 1989.
[15] J. Bach, Useful Features of a Test Automation System (Part III), Testing Techniques Newsletter, Oct. 1996.
[16] K. Beck, Extreme Programming Explained: Embrace Change, first ed., Addison-Wesley Professional, Oct. 1999.
[17] K. Beck, Test Driven Development: By Example, Addison Wesley Longman, Nov. 2002.
[18] R. Binder, Testing Object-Oriented Systems: Models, Patterns, and Tools, chapter 18, Object Technologies, pp. 943-951, first ed., Addison Wesley, Oct. 1999.
[19] D. Binkley, Semantics Guided Regression Test Cost Reduction, IEEE Trans. Software Eng., vol. 23, no. 8, pp. 498-516, Aug. 1997.
[20] G. Xu, A. Rountev, Y. Tang, and F. Qin, Efficient Checkpointing of Java Software Using Context-Sensitive


Capture and Replay, Proc. ACM SIGSOFT Symp. Foundations of Software Eng., pp. 85-94, Oct. 2007.

BIOGRAPHIES
Nagendra Babu Rajaboina received his B.Tech degree in Information Technology from Andhra University and is pursuing an M.Tech (CSE) at K L University.

B. Chaitanya Krishna is an Assistant Professor in the Computer Science and Engineering Department at K L University.

Ch. V. Phani Krishna is an Associate Professor in the Computer Science and Engineering Department at K L University. He is a life member of CSI and ISTE and has 10 years of experience in the engineering discipline. His research interests lie in Software Engineering and cover a variety of overlapping areas.

