
    Clark Barrett

    The annual Satisfiability Modulo Theories Competition (SMT-COMP) is held to spur advances in SMT solver implementations on benchmark formulas of practical interest. Public competitions are a well-known means of stimulating advancement in software tools. For example, in automated reasoning, the CASC and SAT competitions for first-order and propositional reasoning tools, respectively, have spurred significant innovation in their fields [7, 5]. More information on the history and motivation for SMT-COMP can be found at the ...
    Satisfiability modulo theories (SMT) is a branch of automated reasoning that builds on advances in propositional satisfiability and on decision procedures for first-order reasoning. Its defining feature is the use of reasoning methods specific to logical theories of interest in target applications. Advances in SMT research and technology have led in the last few years to the development of very powerful satisfiability solvers and to an explosion of applications. SMT solvers are now used for processor verification, equivalence checking, bounded and ...
    This volume contains the proceedings of SMT 2008, the 6th International Workshop on Satisfiability Modulo Theories, held in Princeton, New Jersey on July 7-8, 2008. The workshop was affiliated with the 20th International Conference on Computer-Aided Verification (CAV 2008). The primary goal of the workshop was to bring together both researchers and users of SMT technology and provide them with a forum for presenting and discussing theoretical ideas, implementation and evaluation techniques, and applications. ...
    Abstract Satisfiability Modulo Theories (SMT) solvers are large and complicated pieces of code. As a result, ensuring their correctness is challenging. In this paper, we discuss a technique for ensuring soundness by producing and checking proofs. We give details of our implementation using CVC3 and HOL Light and provide initial results from our effort to certify the SMT-LIB benchmarks.
    Software that can produce independently checkable evidence for the correctness of its output has received recent attention for use in certifying compilers and proof-carrying code. CVC (Cooperating Validity Checker) is a proof-producing validity checker for a decidable fragment of first-order logic enriched with background theories. This paper describes how proofs of valid formulas are produced from the decision procedure for linear real arithmetic implemented in CVC.
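Proof production for linear real arithmetic typically rests on Farkas-style certificates: an unsatisfiable system of linear inequalities can be refuted by exhibiting nonnegative multipliers whose combination cancels every variable and leaves an absurd constant constraint. The following sketch (illustrative only; `check_farkas_certificate` and its data encoding are ours, not the paper's) checks such a certificate independently of the solver that produced it:

```python
from fractions import Fraction

def check_farkas_certificate(rows, coeffs):
    """Check a Farkas-style refutation certificate.

    Each row is (a, b, strict), encoding the inequality a . x <= b
    (or a . x < b when strict).  `coeffs` are nonnegative multipliers.
    The combination sum(c_j * row_j) must cancel every variable and
    leave an impossible constant constraint: 0 <= b with b < 0, or
    0 < b with b <= 0.
    """
    n = len(rows[0][0])
    if any(c < 0 for c in coeffs):
        return False                      # multipliers must be nonnegative
    combo = [Fraction(0)] * n
    rhs = Fraction(0)
    strict = False
    for (a, b, s), c in zip(rows, coeffs):
        if c == 0:
            continue
        for i in range(n):
            combo[i] += c * a[i]
        rhs += c * b
        strict = strict or s
    if any(v != 0 for v in combo):
        return False                      # variables did not cancel
    return rhs < 0 or (strict and rhs <= 0)

# x <= 1 together with -x <= -2 (i.e. x >= 2) is unsatisfiable; the
# certificate is the multiplier pair (1, 1), whose combination is 0 <= -1.
rows = [([Fraction(1)], Fraction(1), False),
        ([Fraction(-1)], Fraction(-2), False)]
ok = check_farkas_certificate(rows, [Fraction(1), Fraction(1)])
```

A checker of this kind only needs to trust the certificate format, not the decision procedure that emitted it.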
    Abstract Inductive data types are a valuable modeling tool for software verification. In the past, decision procedures have been proposed for various theories of inductive data types, some focused on the universal fragment, and some focused on handling arbitrary quantifiers. Because of the complexity of the full theory, previous work on the full theory has not focused on strategies for practical implementation. However, even for the universal fragment, previous work has been limited in several significant ways.
    Proof Sketch: Let C0 be the free constants shared by Γ1 and Γ2. Let A be a Σ1(C) ∪ Σ2(C)-model of T1 ∪ T2 ∪ Γ1 ∪ Γ2. Let Δ = {c ≈ d | c, d ∈ C0, c^A = d^A} ∪ {c ≉ d | c, d ∈ C0, c^A ≠ d^A}. The set Δ is a possible arrangement of C0. Moreover, the reduct of A to Σi(C) satisfies Ti ∪ Γi ∪ Δ for i = 1, 2, so the procedure will return sat for this choice of Δ.
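As an illustration of the arrangement Δ used in the proof sketch, the following sketch (hypothetical helper name; a plain dictionary stands in for the model A) computes the equalities and disequalities that a model's valuation induces on the shared constants:

```python
def arrangement(shared, valuation):
    """Split unordered pairs of shared constants into the equalities and
    disequalities their values induce, yielding the arrangement Delta."""
    eqs, diseqs = [], []
    for i, c in enumerate(shared):
        for d in shared[i + 1:]:
            (eqs if valuation[c] == valuation[d] else diseqs).append((c, d))
    return eqs, diseqs

# A model interpreting c1 and c2 as 0 and c3 as 1 induces the arrangement
# {c1 = c2, c1 != c3, c2 != c3}:
eqs, diseqs = arrangement(["c1", "c2", "c3"], {"c1": 0, "c2": 0, "c3": 1})
```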
    Abstract. One of the main shortcomings of the traditional methods for combining theories is the complexity of guessing the arrangement of the variables shared by the individual theories. This paper presents a reformulation of the Nelson-Oppen method that takes into account explicit equality propagation and can ignore pairs of shared variables that the theories do not care about. We show the correctness of the new approach and present care functions for the theory of uninterpreted functions and the theory of arrays.
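The combinatorial cost targeted here can be seen by counting arrangements directly: the number of ways to identify the shared variables is the Bell number of the set, while a care function can shrink the case split to just the pairs the theories care about. A small sketch (illustrative; `partitions` is a hypothetical helper, not the paper's algorithm):

```python
def partitions(items):
    """Enumerate all set partitions of `items`; each partition corresponds
    to one way of arranging shared variables into equality classes."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

# Four shared variables already admit 15 arrangements (the Bell number B4);
# if a care function reports that only the pair {x, y} matters, the case
# split collapses to two cases: x = y and x != y.
count = len(list(partitions(["x", "y", "z", "w"])))
```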
    The classic method of Nelson and Oppen for combining decision procedures requires the theories to be stably infinite. Unfortunately, some important theories do not fall into this category (e.g., the theory of bit-vectors). To remedy this problem, previous work introduced the notion of polite theories. Polite theories can be combined with any other theory using an extension of the Nelson-Oppen approach. In this paper we revisit the notion of polite theories, fixing a subtle flaw in the original definition.
    It is well known that the use of points-to information can substantially improve the accuracy of a static program analysis. Commonly used algorithms for computing points-to information are known to be sound only for memory-safe programs. Thus, it appears problematic to utilize points-to information to verify the memory safety property without giving up soundness. We show that a sound combination is possible, even if the points-to information is computed separately and only conditionally sound.
    A decision procedure for arbitrary first-order formulas can be viewed as combining a propositional search with a decision procedure for conjunctions of first-order literals, so Boolean SAT methods can be used for the propositional search in order to improve the performance of the overall decision procedure. We show how to combine some Boolean SAT methods with non-clausal heuristics developed for first-order decision procedures. The combination of methods leads to a smaller number of decisions than either method alone.
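The combination of a propositional search with a theory-level check on conjunctions of literals can be sketched as a naive "lazy" loop (illustrative only: a real solver uses DPLL-style search and clause learning rather than brute-force enumeration, and `theory_consistent` stands in for a decision procedure):

```python
from itertools import product

def lazy_smt(atoms, clauses, theory_consistent):
    """Naive lazy loop: enumerate Boolean assignments to the theory atoms,
    keep those satisfying the propositional clauses, and hand each
    candidate to the theory decision procedure; the first theory-consistent
    candidate is returned as a model, otherwise None (unsat)."""
    for bits in product([False, True], repeat=len(atoms)):
        assign = dict(zip(atoms, bits))
        boolean_ok = all(any(assign[a] == pol for a, pol in clause)
                         for clause in clauses)
        if boolean_ok and theory_consistent(assign):
            return assign
    return None

# Atoms p: "x < 0" and q: "x > 1"; Boolean structure requires p or q,
# while the theory rules out making both true at once.
atoms = ["x<0", "x>1"]
clauses = [[("x<0", True), ("x>1", True)]]          # p or q
model = lazy_smt(atoms, clauses,
                 lambda a: not (a["x<0"] and a["x>1"]))
```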
    The original version promises three main things: 1. For theories T which meet the criteria (we will call these Shostak theories), the method gives a decision procedure for quantifier-free T-satisfiability. 2. The method has the theory TE “built-in”, so for any Shostak theory T, the method gives a decision procedure for quantifier-free T∪ TE-satisfiability. 3. Any two Shostak theories T1 and T2 can be combined to form a new Shostak theory T1∪ T2.
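The first promise above hinges on a theory supplying two functions: a solver that turns an equation into a substitution, and a canonizer that computes normal forms. A toy instance for linear rational arithmetic (a hedged sketch; the term encoding and helper names are ours, not Shostak's):

```python
from fractions import Fraction

# A linear term is a pair (coeffs, const), with coeffs: variable -> Fraction.
def canon(coeffs, const):
    """Canonizer: drop zero coefficients so equal terms have equal forms."""
    return ({v: c for v, c in sorted(coeffs.items()) if c != 0}, const)

def solve(lhs, rhs):
    """Solver: turn the equation lhs = rhs into a substitution for one
    variable; return None if the equation is unsatisfiable (0 = c with
    c != 0) and {} if it is trivially true."""
    coeffs = dict(lhs[0])
    for v, c in rhs[0].items():
        coeffs[v] = coeffs.get(v, Fraction(0)) - c
    const = lhs[1] - rhs[1]                 # now: sum coeffs[v]*v + const = 0
    coeffs, const = canon(coeffs, const)
    if not coeffs:
        return {} if const == 0 else None
    x, cx = next(iter(coeffs.items()))      # pivot on the first variable
    rest = {v: -c / cx for v, c in coeffs.items() if v != x}
    return {x: canon(rest, -const / cx)}

# Solving x + 1 = 3 yields the substitution {x: 2}.
subst = solve(({"x": Fraction(1)}, Fraction(1)), ({}, Fraction(3)))
```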
    Abstract LFSC is a high-level declarative language for defining proof systems and proof objects for virtually any logic. One of its distinguishing features is its support for computational side conditions on proof rules. Side conditions facilitate the design of proof systems that reflect closely the sort of high-performance inferences made by SMT solvers. This paper investigates the issue of balancing declarative and computational inference in LFSC focusing on (quantifier-free) Linear Real Arithmetic.
    Abstract We describe an abstract methodology for exploring and categorizing the space of error traces for a system using a procedure based on Satisfiability Modulo Theories and Bounded Model Checking. A key component required by the technique is a way to generalize an error trace into a category of error traces. We describe tools and techniques to support a human expert in this generalization task.
    Abstract One of the main shortcomings of the traditional methods for combining theories is the complexity of guessing the arrangement of the variables shared by the individual theories. This paper presents a reformulation of the Nelson-Oppen method that takes into account explicit equality propagation and can ignore pairs of shared variables that the theories do not care about. We show the correctness of the new approach and present care functions for the theory of uninterpreted functions and the theory of arrays.
    We present a tool, called cascade, to check assertions in C programs as part of a multi-stage verification strategy. cascade takes as input a C program and a control file (the output of an earlier stage) that specifies one or more assertions to be checked together with (optionally) some restrictions on program behaviors. For each assertion, cascade produces either a concrete trace violating the assertion or a deduction (proof) that the assertion cannot be violated.
    Abstract Solvers for the Satisfiability Modulo Theories (SMT) problem are making rapid progress. However, many verification tools are not making use of the full power of modern SMT solvers. We believe that the verification community could benefit more from the work of the SMT community; at the same time, the SMT community could benefit from a more active and engaged verification user community.
    For efficiency and portability, network packet processing code is typically written in low-level languages and makes use of bit-level operations to compactly represent data. Although packet data is highly structured, low-level implementation details make it difficult to verify that the behavior of the code is consistent with high-level data invariants. We introduce a new approach to the verification problem, using a high-level definition of packet types as part of a specification rather than an implementation.
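The bit-level style of code in question can be seen in a routine that pulls fields out of an IPv4 header, where a single byte packs the version and header length (a simplified illustration; real parsing code must also handle options, fragmentation, and checksums):

```python
import struct

def parse_ipv4_header(data: bytes):
    """Extract a few IPv4 header fields from raw bytes using the kind of
    bit-level operations packet-processing code relies on."""
    version_ihl, tos, total_len = struct.unpack_from("!BBH", data, 0)
    version = version_ihl >> 4          # high nibble of the first byte
    ihl = (version_ihl & 0x0F) * 4      # header length, in bytes
    ttl, proto = struct.unpack_from("!BB", data, 8)
    src = ".".join(str(b) for b in data[12:16])
    dst = ".".join(str(b) for b in data[16:20])
    return {"version": version, "ihl": ihl, "total_len": total_len,
            "ttl": ttl, "proto": proto, "src": src, "dst": dst}

# A 20-byte header: version 4, IHL 5 words, total length 40, TTL 64,
# protocol 6 (TCP), source 10.0.0.1, destination 10.0.0.2.
hdr = bytes([0x45, 0x00, 0x00, 0x28, 0x00, 0x01, 0x00, 0x00,
             0x40, 0x06, 0x00, 0x00, 10, 0, 0, 1, 10, 0, 0, 2])
fields = parse_ipv4_header(hdr)
```

Verifying that such code respects the high-level packet-type invariants is exactly what makes the low-level representation hard to reason about.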
    Abstract. In this work, we investigate various proof systems for quantifier-free Linear Real Arithmetic, focusing on the continuum between declarative and computational styles of proof checking. We use LFSC, a high-level declarative language for defining proof systems and proof objects for virtually any logic. One of the distinguishing features of LFSC is its support for computational side conditions on proof rules.
    The SMT-LIB initiative is an international effort, supported by several research groups worldwide, with the two-fold goal of producing an extensive on-line library of benchmarks and promoting the adoption of common languages and interfaces for SMT solvers. This document specifies Version 2.0 of the SMT-LIB Standard. This is a major upgrade of the previous version, Version 1.2; in addition to simplifying and extending the languages of that version, it introduces a new command language for interfacing with SMT solvers.
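A flavor of the Version 2.0 command language, as a short illustrative script (a conforming solver is assumed; see the standard for the full set of commands):

```smt2
; Declare two integer unknowns, assert constraints, and query the solver.
(set-option :produce-models true)
(set-logic QF_LIA)
(declare-fun x () Int)
(declare-fun y () Int)
(assert (and (> x 0) (< (+ x y) 3)))
(check-sat)
(get-value (x y))
(exit)
```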
    We describe a tool called CVC Lite (CVCL), an automated theorem prover for formulas in a union of first-order theories. CVCL supports a set of theories which are useful in verification, including uninterpreted functions, arrays, records and tuples, and linear arithmetic. New features in CVCL (beyond those provided in similar previous systems) include a library API, more support for producing proofs, some heuristics for reasoning about quantifiers, and support for symbolic simulation primitives.
    CVC3, a joint project of NYU and the University of Iowa, is the latest version of the Cooperating Validity Checker. CVC3 extends and builds on the functionality of its predecessors and includes many new features, such as support for additional theories, an abstract architecture for Boolean reasoning, and SMT-LIB compliance. We describe the system and discuss some applications and continuing work.
    Abstract. A central task in formal verification is the definition of invariants, which characterize the reachable states of the system. When a system is finite-state, invariants can be discovered automatically. Our experience in verifying microprocessors using symbolic logic is that finding adequate invariants is extremely time-consuming. We present three techniques for automating the discovery of some of these invariants. All of them are essentially syntactic transformations on a logical formula derived from the state transition function. The goal is to eliminate ...
