    Automatic understanding of specifications containing flexible word order and expressiveness close to natural language is a challenging task. We address this challenge by modeling semantic parsing as a game of BINGO with dependency grammar. In this model, the rows in a BINGO chart of a word represent distinct interpretations, and the columns describe the constraints required to complete each of these interpretations. BINGO parsing considers the context of each word in the input specification to ensure high precision in the creation of semantic frames. We encode contextual information of the hardware verification domain in our grammar by adding semantic links to the existing syntactic links of the link grammar. We also define semantic propagation operations as declarative rules that are executed for each dependency edge of the parse tree to create a semantic frame. We used hardware design specifications written in English to evaluate the framework. Our results showed that the system c...
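The BINGO-chart idea above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: each row of a word's chart is one candidate interpretation, its columns are the constraints that must all be satisfied by incoming dependency edges, and an interpretation "wins" when its row is complete. The word, interpretation names, and constraint labels are invented for the example.

```python
# Hypothetical sketch of a BINGO chart: rows = interpretations,
# columns = constraints; a row completes when all its constraints
# have been satisfied by observed dependency edges.

class BingoChart:
    def __init__(self, word, rows):
        # rows: {interpretation_name: set of required constraint labels}
        self.word = word
        self.rows = {name: set(req) for name, req in rows.items()}
        self.satisfied = {name: set() for name in rows}

    def observe_edge(self, constraint_label):
        """Mark a constraint satisfied in every row that requires it."""
        for name, required in self.rows.items():
            if constraint_label in required:
                self.satisfied[name].add(constraint_label)

    def winners(self):
        """Interpretations whose entire row of constraints is complete."""
        return [name for name, required in self.rows.items()
                if self.satisfied[name] == required]

# Toy chart for the word "asserts" in a verification spec (labels invented).
chart = BingoChart("asserts", {
    "signal_assertion": {"subject:signal", "object:value"},
    "claim_statement":  {"subject:agent", "clause:that"},
})
chart.observe_edge("subject:signal")
chart.observe_edge("object:value")
print(chart.winners())  # ['signal_assertion']
```

Only the row whose every constraint has been met completes; the context of each word thus gates which semantic frame gets created.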
    Abstract We present a study of the practical use of simulation-based automatic test pattern generation (ATPG) for model checking in large sequential circuits. Preliminary findings show that ATPGs that gradually build and learn from the state space have the potential to achieve the verification objective without needing the complete state-space information. The success of verifying a useful set of properties relies on the performance and capacity of the ATPG. We compared an excitation-only ATPG with one that performs both excitation and propagation ...
    Abstract The work presents a new transition fault model called the as-late-as-possible transition fault (ALAPTF) model. The model aims at detecting smaller delays, which may be missed by both the traditional transition fault model and the path delay model. The model makes sure that each transition is launched as late as possible at the fault site, accumulating the small delay defects along its way. Because some transition faults may require multiple paths to be launched, the simple path-delay model misses such faults. Results on ISCAS'85 and ISCAS' ...
    We propose a novel technique to improve SAT-based Combinational Equivalence Checking (CEC). The idea is to perform a low-cost preprocessing that will statically induce global signal relationships into the original CNF formula of the miter circuit under verification, and hence reduce the complexity of the SAT instance. This efficient and effective preprocessing quickly builds up the implication graph for the miter circuit under verification, yielding a large set of direct, indirect and extended backward implications. These two-node implications spanning the entire circuit are converted into binary clauses, and they are added to the miter CNF formula. The added clauses constrain the search space of the SAT solver and provide correlation among the different variables, which enhances the Boolean Constraint Propagation (BCP). Experimental results on large and difficult ISCAS'85, ISCAS'89 (full scan) and ITC'99 (full scan) CEC instances show that our approach is independent of...
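The core encoding step described above can be illustrated compactly. This is a hedged sketch under assumed conventions, not the paper's code: a learned two-node implication (a = va) ⇒ (b = vb) is logically equivalent to the binary clause (¬lit(a, va) ∨ lit(b, vb)), which is simply appended to the miter's CNF in DIMACS-style literals.

```python
# Illustrative sketch: converting learned two-node implications into
# binary clauses and adding them to a DIMACS-style miter CNF. Variable
# numbers and the learned implications below are invented for the example.

def lit(var, value):
    """DIMACS literal: positive var means var=1, negative means var=0."""
    return var if value else -var

def implication_to_clause(a, va, b, vb):
    """(a=va) => (b=vb)  is equivalent to  NOT(a=va) OR (b=vb)."""
    return (-lit(a, va), lit(b, vb))

# Miter CNF as a list of clauses (tuples of DIMACS literals).
cnf = [(1, 2), (-1, 3)]

# Suppose preprocessing learned: x1=1 implies x3=1, and x2=0 implies x1=0.
learned = [(1, 1, 3, 1), (2, 0, 1, 0)]
for a, va, b, vb in learned:
    cnf.append(implication_to_clause(a, va, b, vb))

print(cnf[-2:])  # [(-1, 3), (2, -1)]
```

Each added binary clause lets unit propagation (BCP) fire as soon as one of its literals is falsified, which is why a large set of direct, indirect, and extended backward implications can constrain the SAT search so effectively.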
    Abstract In this paper, we present RAG, an efficient Reliability Analysis tool based on Graphics processing units (GPUs). RAG is a fault-injection-based parallel stochastic simulator implemented on a state-of-the-art GPU. A two-stage simulation framework is proposed to exploit the high computation efficiency of GPUs. Experimental results demonstrate the accuracy and performance of RAG. Average speedups of 412× and 198× are achieved compared to two state-of-the-art CPU-based approaches for reliability analysis.
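The fault-injection stochastic simulation idea can be sketched on a CPU in a few lines (the tool above runs this style of simulation massively in parallel on a GPU). The toy two-gate circuit and the per-gate error probability are illustrative assumptions, not the paper's benchmarks.

```python
# CPU sketch of Monte-Carlo fault injection for reliability estimation:
# run many noisy simulations and count how often the output matches the
# fault-free (golden) result. Circuit and error rate are toy assumptions.
import random

def noisy_and(a, b, p_err, rng):
    out = a & b
    return out ^ 1 if rng.random() < p_err else out  # flip with prob p_err

def circuit(x, y, z, p_err, rng):
    # Toy 2-gate circuit: (x AND y) AND z, each gate may flip its output.
    t = noisy_and(x, y, p_err, rng)
    return noisy_and(t, z, p_err, rng)

def reliability(inputs, p_err, trials=100_000, seed=0):
    """Estimated probability the noisy circuit matches the fault-free one."""
    rng = random.Random(seed)
    golden = circuit(*inputs, p_err=0.0, rng=rng)
    ok = sum(circuit(*inputs, p_err=p_err, rng=rng) == golden
             for _ in range(trials))
    return ok / trials

print(reliability((1, 1, 1), p_err=0.05))
```

For these inputs the exact reliability is 0.95·0.95 + 0.05·0.05 = 0.905 (both gates correct, or both flip), so the Monte-Carlo estimate should land very close to that; the two-stage GPU framework in the paper is about making such trial counts cheap at scale.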
    Diagnosis of each failed part requires the failed data captured on the test equipment. However, due to memory limitations on the tester, one often cannot store all the failed data for every chip tested. Consequently, truncated failure logs are used instead of complete logs for each part. Such truncation of the failure logs can result in very long turn-around times for diagnosis because important failure points may be removed from the log. Subsequently, the accuracy and resolution of final diagnosis may suffer even after multiple iterations of diagnosis. In addition, the existing test response compaction techniques, though good for testing, either adversely affect diagnosis or are highly sensitive to deviation from the chosen fault model. In this context, the industry needs dynamic selection of better failure logs that enhances diagnosis. In this paper, we propose a number of metrics based on information theory that may help in selecting failure logs dynamically for improving the accuracy and resolution of final diagnosis. We also report on the efficacy of these metrics through the results of our experiments.
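One information-theoretic metric in the spirit of the abstract can be sketched as follows. This is a hedged illustration, not the paper's metric set: score a candidate truncated log by the Shannon entropy of its failing-output distribution, on the intuition that a more diverse set of retained failure points carries more diagnostic information. The log format and scoring rule are assumptions.

```python
# Illustrative entropy-based scoring of truncated failure logs: prefer the
# log whose retained failing outputs have the highest Shannon entropy.
import math
from collections import Counter

def entropy_bits(symbols):
    """Shannon entropy (bits) of the empirical distribution of symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_log(candidate_logs):
    """Pick the truncated log whose failing outputs carry the most entropy."""
    return max(candidate_logs, key=entropy_bits)

log_a = ["out3", "out3", "out3", "out3"]   # one failing output repeated
log_b = ["out1", "out5", "out1", "out7"]   # more diverse failure points
print(select_log([log_a, log_b]) is log_b)  # True
```

A log that repeats the same failure point contributes zero entropy and little diagnostic resolution; a dynamic selector built on such a score would keep the higher-entropy candidate when tester memory forces truncation.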
    The problem of test generation belongs to the class of NP-complete problems, and it is becoming more and more difficult as the complexity of VLSI circuits increases, while long execution times pose an additional problem. Parallel implementations can potentially provide significant speedups while retaining good quality results. In this paper, we present three parallel genetic algorithms for simulation-based sequential circuit test generation. Simulation-based test generators are more capable of handling the constraints of complex design features than deterministic test generators. The three parallel genetic algorithm implementations are portable and scalable over a wide range of distributed and shared-memory MIMD machines. Significant speedups were obtained, and fault coverages were similar to and occasionally better than those obtained using a sequential genetic algorithm, due to the parallel search strategies adopted.
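The genetic-algorithm loop underlying such simulation-based test generators follows a standard shape, sketched below. This is a minimal sequential toy, not the paper's parallel implementation: the fitness function stands in for a fault simulator, and the target vector, population size, and mutation rate are illustrative assumptions.

```python
# Minimal GA sketch: evolve binary test vectors toward higher (toy) fault
# coverage via elitist selection, one-point crossover, and bit mutation.
import random

rng = random.Random(1)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]  # pretend this vector detects all faults

def fitness(vec):
    """Toy stand-in for fault coverage: count of bits matching TARGET."""
    return sum(a == b for a, b in zip(vec, TARGET))

def evolve(pop_size=20, generations=50):
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))
            child = p1[:cut] + p2[cut:]           # one-point crossover
            i = rng.randrange(len(child))
            child[i] ^= rng.random() < 0.1        # occasional bit mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The parallel variants in the paper distribute this loop (e.g. across subpopulations or fitness evaluations) over MIMD machines; elitism ensures the best-so-far vector is never lost between generations.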
    Even with the high accuracy of automated fingerprint identification in matching plain to rolled prints, latent to rolled print matching continues to require human input. Latent prints are those that are lifted from a surface, typically at a crime scene, whereas plain prints are obtained under supervision with quality control. In comparison to plain or rolled prints, latent prints are usually of poor quality and have a small fingerprint surface area, making it difficult to extract a large number of features reliably. Manually processing latent prints is time consuming, so efforts are being made to speed up the process through automation. One of the first steps is image segmentation, which is the separation of the foreground (fingerprint region) from the background. Traditional automated methods for segmentation are designed for backgrounds with random noise and perform poorly on structured/textured backgrounds, resulting in many spurious minutiae, thus inhibiting the matching process...
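The segmentation baseline whose weakness the abstract describes can be sketched directly: split the image into blocks and label a block foreground (ridge region) when its gray-level variance exceeds a threshold. Block size, threshold, and the toy image are illustrative assumptions; the abstract's point is precisely that this classical approach breaks down on structured or textured backgrounds, whose variance also rises.

```python
# Classical variance-based block segmentation: high-variance blocks are
# labeled foreground (fingerprint), low-variance blocks background.

def block_variance(img, r0, c0, size):
    vals = [img[r][c] for r in range(r0, r0 + size)
                      for c in range(c0, c0 + size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def segment(img, block=4, threshold=100.0):
    """Return a per-block mask: True = foreground (high variance)."""
    rows, cols = len(img), len(img[0])
    return [[block_variance(img, r, c, block) > threshold
             for c in range(0, cols, block)]
            for r in range(0, rows, block)]

# Toy 8x8 image: left half flat background, right half alternating ridges.
flat, ridge = [128] * 4, [0, 255, 0, 255]
img = [flat + ridge for _ in range(8)]
print(segment(img))  # [[False, True], [False, True]]
```

A textured background (e.g. a printed form under a latent print) would also produce high-variance blocks, yielding the spurious foreground regions, and hence spurious minutiae, that motivate the more robust methods the work pursues.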
    This research was supported in part by the Semiconductor Research Corporation under contract SRC 95-DP-109, in part by ARPA under contract DABT63-95-C-0069, and by Hewlett-Packard under an equipment grant. A new sequential circuit test generator, ALT-TEST, is described which alternates repeatedly between two phases of test generation. The first phase uses a simulation-based genetic algorithm, while the second phase uses
    Previous work has shown that maximum switching density at a given node is extremely sensitive to a slight change in the delay at that node. However, when estimating the peak power for the entire circuit, the powers estimated must not be as sensitive to a slight variation or inaccuracy in the assumed gate delays because computing the exact gate delays
