Recent progress in acquisition technology has increased the availability and quality of measured appearance data. Although representations based on dimensionality reduction provide the greatest fidelity to measured data, they require assembling a high-resolution and regularly sampled matrix from sparse and non-uniformly scattered input. Constructing and processing this immense matrix becomes a significant computational bottleneck. We describe a technique for performing basis decomposition directly from scattered measurements. Our approach is flexible in how the basis is represented and can accommodate any number of linear constraints on the factorization. Because its time- and space-complexity is proportional to the number of input measurements and the size of the output, we are able to decompose multi-gigabyte datasets faster and at lower error rates than currently available techniques. We evaluate our approach by representing measured spatially-varying reflectance within a reduced...
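As background, the decomposition described above can be viewed as fitting a low-rank factorization only to the observed entries, subject to additional linear constraints; the following objective is a hedged sketch of that formulation (the factor matrices U and V, the rank r, and the observed-sample set \Omega are illustrative symbols, not notation taken from the paper):

    \min_{U, V} \; \sum_{(i,j) \in \Omega} \Big( M_{ij} - \sum_{k=1}^{r} U_{ik} V_{jk} \Big)^2 \quad \text{subject to} \quad C\,\mathrm{vec}(U, V) = d

Solving this directly over the scattered samples in \Omega avoids ever assembling the dense matrix M, which is what keeps the time and space cost proportional to the number of input measurements and the size of the output factors.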
Regression testing is frequently performed in a time-constrained environment. This paper explains how 0/1 knapsack solvers (e.g., greedy, dynamic programming, and the core algorithm) can identify a test suite reordering that rapidly covers the test requirements and always terminates within a specified testing time limit. We conducted experiments that reveal fundamental trade-offs in (i) the time and space costs associated with creating a reordered test suite and (ii) the quality of the resulting prioritization. We find that knapsack-based prioritizers that ignore the overlap in test case coverage incur a low time overhead and a moderate to high space overhead while creating prioritizations that exhibit a minor to modest decrease in effectiveness. We also find that the most sophisticated 0/1 knapsack solvers do not always identify the most effective prioritization, suggesting that overlap-aware prioritizers with a higher time overhead are useful in certain testing contexts.
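As an illustration of the simplest solver named above, the following Python sketch shows a greedy, overlap-ignoring 0/1 knapsack prioritizer: each test case is ranked by requirements covered per second of runtime, and tests are added in that order until the testing time limit is reached. The tuple layout (name, runtime, covered requirements) and the example budget are assumptions made for the sketch, not details from the paper.

    def greedy_prioritize(tests, time_budget):
        """Reorder tests by coverage per unit time, keeping only those that fit the budget.

        tests: list of (name, runtime_seconds, covered_requirements) tuples.
        Overlap between the requirements covered by different tests is deliberately ignored.
        """
        ranked = sorted(tests, key=lambda t: len(t[2]) / t[1], reverse=True)
        schedule, elapsed = [], 0.0
        for name, runtime, covered in ranked:
            if elapsed + runtime <= time_budget:   # knapsack capacity check
                schedule.append(name)
                elapsed += runtime
        return schedule

    # Hypothetical example: three tests competing for a 10-second budget.
    suite = [("t1", 6.0, {"r1", "r2", "r3"}),
             ("t2", 3.0, {"r2"}),
             ("t3", 4.0, {"r4", "r5"})]
    print(greedy_prioritize(suite, 10.0))   # ['t1', 't3'] -- t2 no longer fits

A dynamic programming or core-algorithm solver would replace the greedy loop with an exact search over the same capacity constraint, trading the extra time and space overhead noted in the experiments for a potentially better reordering.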
Transient faults due to particle strikes are a key challenge in microprocessor design. Driven by exponentially increasing transistor counts, per-chip faults are a growing burden. To protect against soft errors, redundancy techniques such as redundant multithreading (RMT) are often used. However, these techniques assume that the probability that a structural fault will result in a soft error (i.e., the Architectural Vulnerability Factor (AVF)) is 100 percent, unnecessarily draining processor resources. Due to the high cost of redundancy, there have been efforts to throttle RMT at runtime. To date, these methods have not incorporated an AVF model and therefore tend to be ad hoc. Unfortunately, computing the AVF of complex microprocessor structures (e.g., the ISQ) can be quite involved. To provide probabilistic guarantees about fault tolerance, we have created a rigorous characterization of AVF behavior that can be easily implemented in hardware. We experimentally demonstrate AVF varia...
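For context, AVF has a standard definition in the fault-tolerance literature: it is the fraction of a structure's bits that hold ACE (Architecturally Correct Execution) state, averaged over time. This background formula is not quoted from the abstract:

    \mathrm{AVF}_{\text{structure}} = \frac{\frac{1}{N} \sum_{n=1}^{N} (\text{ACE bits resident in the structure during cycle } n)}{\text{total bits in the structure}}

A runtime AVF model therefore amounts to estimating, cheaply in hardware, what fraction of the occupied entries would actually matter to correct execution, so that redundancy such as RMT can be throttled when that fraction is low.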
Regression test prioritization is often performed in a time-constrained execution environment in which testing only occurs for a fixed time period. For example, many organizations rely upon nightly building and regression testing of their applications every time source code changes are committed to a version control repository. This paper presents a regression test prioritization technique that uses a genetic algorithm to reorder test suites in light of testing time constraints. Experimental results indicate that our prioritization approach frequently yields higher average percentage of faults detected (APFD) values, for two case study applications, when basic block level coverage is used instead of method level coverage. The experiments also reveal fundamental trade-offs in the performance of time-aware prioritization. This paper shows that our prioritization technique is appropriate for many regression testing environments and explains how the baseline approach can be extended to op...
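For readers unfamiliar with the metric, APFD has a conventional closed form (standard in the prioritization literature, not restated in the abstract): for a suite of n test cases detecting m faults, where TF_i is the position in the reordered suite of the first test that exposes fault i,

    \mathrm{APFD} = 1 - \frac{TF_1 + TF_2 + \cdots + TF_m}{n\,m} + \frac{1}{2n}

Higher values indicate that faults are exposed earlier in the reordered suite, which is the sense in which block-level coverage yields the higher APFD values reported above.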
Despite the emerging ubiquity of hardware monitoring mechanisms and prior research work in other fields, the applicability and usefulness of hardware monitoring mechanisms have not been fully scrutinized for software engineering. In this work, we identify several recently developed hardware mechanisms that lend themselves well to structural test coverage analysis and automated fault localization and explore their potential. We discuss key factors impacting the applicability of hardware monitoring mechanisms for these software engineering tasks, present novel online analyses leveraging these mechanisms, and provide preliminary results demonstrating the promise of this emerging hardware.
The prevalence of push notifications for communication between devices is increasing and is vital to Internet of Things (IoT) components. It has been observed that delays of notification receipt vary even for devices that are on the same network and using the same hardware. A closer analysis is needed to understand what is occurring in the hardware when a notification arrives from a cloud service or other application. In this paper, we describe and develop a framework, AHPCap, to better understand application behavior at the hardware level at the time of a notification. We explain the framework and its deployment and capabilities. We then show an example of a hardware profile that can be generated on mobile devices and analyze the time required to capture and record the profile data. Lastly, we discuss some of AHPCap’s potential applications.
In the Linux kernel, SCSI storage drivers are maintained as three different levels: the high level drivers handle device specific code, the middle level is a core layer which provides the primary functionality of the SCSI subsystem, and the low level drivers (LLDs) contain hardware specific code. Since the LLDs are hardware specific, they are predominantly developed by hardware vendors, whereas the upper level and middle level drivers are mostly implemented by the open source developers who are responsible for the SCSI subsystem maintenance.
The Internet of Things (IoT) is a developing technology which allows any type of network device inside a home to be linked together. IoT is based on older Wireless Sensor Network (WSN) technology and has been reduced to a smaller size and scale for home use. However, both the original WSN and the developing IoT technology have inherent security flaws. This paper identifies and evaluates security issues and their underlying causes in IoT technology. We focus on IoT's reliance on known exploitable network ports and the difficulty of recovering from such attacks. Most IoT implementations utilize Telnet to communicate between devices. We explore the vulnerability of Telnet connections through a simulated IoT environment. Our results show that Telnet vulnerabilities can be exploited by attackers to gain access to IoT devices, allowing the modification of devices and subtle spying on any data being transmitted.
Test case prioritization techniques organize the test cases in a test suite, allowing for an increase in the effectiveness of testing. One performance goal, the fault-detection rate, is a measure of how quickly faults are detected during the testing process. An improved rate of fault detection can provide faster feedback regarding the quality of the system under test, but frequently, complete testing is too expensive. This is often the case with regression testing, the process of validating modified software to detect whether new errors have ...
Some counting problems are simple enough to solve by observation, but many require a more sophisticated approach. Burnside's Theorem is a result of group theory that is often used to calculate the number of nonequivalent arrangements of colorings of objects in a set under a group of permutations. In this project, we will discuss the notion of an arbitrary group acting on a set, the analysis and several applications of Burnside's Theorem, and a generalization of the theorem.
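Concretely, Burnside's Theorem (the orbit-counting lemma) states that for a finite group G acting on a finite set X, the number of orbits equals the average number of fixed points:

    |X/G| = \frac{1}{|G|} \sum_{g \in G} |X^g|, \qquad X^g = \{\, x \in X : g \cdot x = x \,\}

A small worked example: 2-colorings of the four vertices of a square, counted up to rotation. The identity fixes all 2^4 = 16 colorings, the 90° and 270° rotations fix 2 each (all vertices the same color), and the 180° rotation fixes 2^2 = 4, so the number of nonequivalent colorings is (16 + 2 + 4 + 2)/4 = 6.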
Modern IP networks are complex entities that require constant maintenance and care. Similarly, constructing a new network comes with a high amount of upfront cost, planning, and risk. Unlike the disciplines of software and hardware engineering, networking and IT professionals lack an expressive and useful certification language that they can use to verify that their work is correct. When installing and maintaining networks without a standard for describing their behavior, teams find themselves prone to making configuration mistakes. These mistakes can have real monetary and operational efficiency costs for organizations that maintain large networks. In this research, the Network Certification Description Language (NETCDL) is proposed as an easily human-readable and writable language that is used to describe network components and their desired behavior. The complexity of the grammar is shown to rank in the top 5 out of 31 traditional computer language grammars, as measured by a metrics suite. The language is also shown to be able to express the majority of common use cases in network troubleshooting. A workflow involving a certifier tool is proposed that uses NETCDL to verify network correctness, and a reference certifier design is presented to guide and standardize future implementations.
Test suite evaluation is important when developing quality software. Mutation testing, in particular, can be helpful in determining the ability of a test suite to find defects in code. Because of challenges incurred developing on complex embedded systems, test suite evaluation on these systems is very difficult and costly. We developed and implemented a tool called DynaMut to insert conditional mutations into the software under test for embedded applications. We then demonstrate how the tool can be used to automate the collection of data using an existing proprietary embedded test suite in a runtime testing environment. Conditional mutation reduces the time and effort needed to perform test quality evaluation, requiring 48% to 67% less time than a more traditional mutate-compile-test methodology. We also analyze whether testing time can be further reduced while maintaining quality by sampling the mutations tested.
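DynaMut itself targets proprietary embedded software, but the general shape of conditional mutation can be sketched in a few lines of Python: every mutant is compiled into the program once, guarded by a runtime check, so the system is rebuilt zero times instead of once per mutant. The mutant identifiers and the environment-variable switch below are illustrative assumptions, not details of DynaMut.

    import os

    # The active mutant is chosen at runtime, so the program is built only once.
    ACTIVE_MUTANT = os.environ.get("ACTIVE_MUTANT", "")

    def mutant_enabled(mutant_id):
        """Return True when the named conditional mutation should be applied."""
        return ACTIVE_MUTANT == mutant_id

    def clamp(value, limit):
        # The original statement and its conditionally enabled mutation live side by side.
        if mutant_enabled("M1"):
            return value if value <= limit else limit + 1   # mutant M1: off-by-one
        return value if value <= limit else limit           # original behavior

    # A harness runs the suite once per mutant id (plus once with no mutant active),
    # avoiding the rebuild step of a mutate-compile-test loop.

Sampling the mutations tested, as mentioned above, would then simply mean running the suite for a random subset of the mutant identifiers.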
In the Linux kernel, SCSI storage drivers are maintained as three different levels. Since Low Level Drivers (LLDs) are hardware specific, they are predominantly developed by hardware vendors only. Thus, LLDs are highly error prone compared to other parts of the SCSI stack. While a few tools exist to test upper and middle levels, there is no tool available to assist developers in verifying the functionality of LLDs at the unit level. We develop a framework for LLD developers for testing code at the function and unit level. The framework, LDTT, is a kernel module with a helper application. LDTT allows LLD writers and designers to develop test cases that can interface between the kernel and device levels, which cannot be accessed by traditional testing frameworks. We demonstrate that LDTT can be used to write test cases for LLDs and that LLD-specific bugs can be detected by these test cases.
Software systems that meet the stakeholders' needs and expectations are the ultimate objective of the software provider. Software testing is a critical phase in the software development lifecycle that is used to evaluate the software. Tests can be written by testers or by automatic test generators in many different ways and with different goals. Yet, there is a lack of well-defined guidelines or a methodology to direct testers in writing tests. We want to understand how tests are written and why they may have been written that way. This work is a characterization study aimed at recognizing the factors that may have influenced the development of the test suite. We found that increasing the coverage of the test suites for applications with at least 500 test cases can make the test suites more costly; the correlation coefficient obtained was 0.543. The study also found that there is a positive correlation between the mutation score and the coverage score.
Many applications rely upon a tuple space within distributed system middleware to provide loosely coupled communication and service coordination. This paper describes an approach for measuring the throughput and response time of a tuple space when it handles concurrent local space interactions. Furthermore, it discusses a technique that populates a tuple space with tuples before the execution of a benchmark in order to age the tuple space and provide a worst-case measurement of space performance. We apply the tuple space ...
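A minimal sketch of such a measurement, assuming a tuple space object that exposes an out (write) operation and a take operation standing in for the Linda-style in primitive, is shown below; the interface names and workload shape are assumptions for illustration, not the paper's benchmark.

    import threading, time

    def benchmark(space, operations, n_workers):
        """Measure throughput and mean response time of concurrent local space interactions."""
        latencies, lock = [], threading.Lock()

        def worker(wid):
            for i in range(operations):
                start = time.perf_counter()
                space.out(("task", wid, i))      # write a tuple into the space
                space.take(("task", wid, i))     # remove the matching tuple again
                with lock:
                    latencies.append(time.perf_counter() - start)

        threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
        wall_start = time.perf_counter()
        for t in threads: t.start()
        for t in threads: t.join()
        wall = time.perf_counter() - wall_start

        throughput = (n_workers * operations) / wall     # interactions per second
        mean_response = sum(latencies) / len(latencies)  # seconds per interaction
        return throughput, mean_response

Pre-populating the space with tuples before calling such a benchmark is the aging step referred to above: a larger resident tuple set stresses matching and yields the worst-case measurement.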
New techniques for writing and developing software have evolved in recent years. One is Test-Driven Development (TDD), in which tests are written before code. No code should be written without first having a test to execute it. Thus, in terms of code coverage, the quality of test suites written using TDD should be high. In this work, we analyze applications written using TDD and traditional techniques. Specifically, we demonstrate the quality of the associated test suites based on two quality metrics: 1) a structure-based criterion and 2) a fault-based criterion. We learn that test suites with high branch test coverage will also have high mutation scores, and we reveal this especially in the case of TDD applications. We found that Test-Driven Development is an effective approach that improves the quality of the test suite to cover more of the source code and also to reveal more faults.
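The two criteria referenced above are commonly quantified as branch coverage and mutation score; the usual definitions (standard in the testing literature, not restated in the abstract) are:

    \text{branch coverage} = \frac{\text{branches executed by the suite}}{\text{total branches}}, \qquad \text{mutation score} = \frac{\text{mutants killed by the suite}}{\text{total mutants} - \text{equivalent mutants}}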