
UNIT 5

TEST AND TESTABILITY


TESTING

• Testing is the process of verifying a unit.


• A unit can be a circuit, chip, board, or system: anything from the
transistor level and gate level up to microcells, chips, and printed
circuit boards.
• Testing is performed by applying known inputs to a unit and comparing
the obtained response with a known or predictable response.
• If the obtained response does not match the predictable response, a
failure has occurred.
• Figure (1) shows the block diagram of a testing unit.
• The Device Under Test (DUT) in Figure (1) is the unit to be tested.
• Testing a DUT needs a known input to verify the known output.
• In digital circuits, groups of bits in the form of 0s and 1s, called
vectors, are applied as inputs to the DUT. The vectors used to test the DUT
are known as test vectors or input stimulus.
• For instance, a two-input XOR gate (digital circuit) is tested by
applying the inputs 00, 01, 10, 11.
• The inputs 00, 01, 10, 11 are known as test vectors/ input stimulus.
• The known outputs for the two-input vectors 00, 01, 10, 11 are
0,1,1,0 respectively.
• For any two-input circuit, the exhaustive testing for each input set
can be carried-out by using four test vectors. Figure (2) displays a
two-input XOR gate.
• Likewise, for a three-input digital
circuit, the test vectors are
000, 001, 010, 011, 100, 101, 110,
and 111.
• The predictable output response (anticipated from specifications and
required behavior) is compared with the output of DUT obtained by
applying test vectors.
• If the outcome for each possible input is verified and correct, the DUT
is sent to the next processing step, that is, fabrication onto a chip.
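The apply-and-compare procedure above can be sketched for a small combinational DUT: generate all 2^n input vectors exhaustively and check each response against a golden table of expected outputs (the function and variable names here are illustrative, not from any particular tool).

```python
from itertools import product

def exhaustive_test(dut, n_inputs, expected):
    """Apply all 2**n input vectors to the DUT and return the vectors
    whose response disagrees with the expected (golden) response."""
    failures = []
    for vector in product([0, 1], repeat=n_inputs):
        if dut(*vector) != expected[vector]:
            failures.append(vector)
    return failures

# Two-input XOR as the DUT; expected responses for 00, 01, 10, 11 are 0, 1, 1, 0.
xor_gate = lambda a, b: a ^ b
golden = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(exhaustive_test(xor_gate, 2, golden))  # [] means no failure detected
```

An empty failure list means every test vector produced the predictable response; a non-empty list pinpoints the vectors that expose a fault.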
• If the response obtained by applying test vectors does not match the
expected output, a failure has occurred. The terms defect and fault are
used to describe failures in the DUT.
• Defects are issues that arise in silicon and are considered physical
failures of the chip.
• Various defects that arise in a Silicon chip are
1. Processing defects:
Defects arising due to shorting of the power supply (VDD) or ground (VSS)
rails, gate-oxide shorts, open and plugged vias, and process or mask
errors.
2. Material defects:
Defects like surface impurities and crystal breakdowns.
Faults are defects in circuit behavior. In other words, defects produced
by causes like imperfect connections, poor insulation, grounding, or
shorting are known as faults.
Fault Modelling
• A testing process is needed to identify or detect the fault in the circuit.
• The testing process to detect the fault in a circuit is called a fault model. In
other words, a fault model detects the fault in the circuit.
• Different types of fault models are required to detect different kinds of fault.
• The fault can be categorized as undetected, detected, blocked, tied,
redundant, and untestable.
• Among these, the undetectable faults cannot be detected by any test
vector.
• A fault model is produced to verify the Device Under Test (DUT). A fault
model defines the targets for testing the device or chip behavior.
• The various fault models available to detect different kinds of
fault are as follows:
• Single stuck-at faults
• Transistor open and short faults
• Memory faults
• Functional faults
• Delay faults
• Analog faults
• Among these models, Stuck-at fault and transistor open and
short faults are used widely in the present day verification
process.
Single Stuck-at Fault Model
• In this model, the circuit is assumed to be an interconnection (called a
netlist) of Boolean gates.
• A stuck-at fault is assumed to affect only the interconnecting nets
between gates.
• A net is a conductor that interconnects two or more gate terminals.
Each net can have three states: normal, stuck-at-1, and stuck-at-0.
• If a net has a stuck-at-1 fault, the net is permanently set to logic 1
regardless of the correct logic output of the gate driving it.
• Similarly, if a net has a stuck-at-0 fault, the net is permanently set
to logic 0 regardless of the correct logic output of the gate driving it.
• To perform testing to find stuck-at fault on a single net, initially
the net must be driven forcibly by a value opposite to which it is
stuck.
• For instance, to find whether a net is stuck-at logic 0 or not, that
net should be driven by force with logic 1.
• A circuit with 'n' nets can have 2n possible stuck-at faults (two per
net), because the single stuck-at fault model assumes that only one
location has a stuck-at-0 or stuck-at-1 fault at a time.
• In the figure below the positions of stuck-at faults in the AND
gate G1 are illustrated with black circles.
G1 is a 5-input AND gate and thereby has 5 input nets and one
output net. So, there are 12 stuck-at faults possible in the gate.
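The single stuck-at model can be illustrated with a small sketch that enumerates the 12 possible faults of a 5-input AND gate like G1 and checks which of them a given test vector exposes. The fault encoding (net index, stuck value) is an assumption made for illustration.

```python
def and_gate(inputs, fault=None):
    """Evaluate a 5-input AND gate with an optional single stuck-at fault.
    fault = (net_index, stuck_value); nets 0..4 are inputs, net 5 is the output."""
    nets = list(inputs)
    if fault and fault[0] < 5:
        nets[fault[0]] = fault[1]   # input net forced to its stuck value
    out = int(all(nets))
    if fault and fault[0] == 5:
        out = fault[1]              # output net forced to its stuck value
    return out

# 6 nets -> 12 possible single stuck-at faults, as for gate G1 above.
faults = [(net, v) for net in range(6) for v in (0, 1)]
# The all-ones vector drives every net to 1, the opposite of a stuck-at-0,
# so it detects exactly the six stuck-at-0 faults.
good = and_gate([1] * 5)
detected = [f for f in faults if and_gate([1] * 5, f) != good]
print(detected)  # the six stuck-at-0 faults
```

Detecting the six stuck-at-1 faults would require five more vectors, each holding a single input at 0, which matches the rule above that a net must be forcibly driven to the value opposite to the one it is stuck at.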
Short Circuit and Open Circuit Faults
• Figure (a) shows two shorted faults.
• The short S1 in figure (a) is modeled by
an S-A-0 fault at input A, while short S2
modifies the function of the gate.
• To ensure the most accurate modeling,
faults should be modeled at the transistor
level, because it is only at this level that the
complete circuit structure is known.
• A particular problem that arises with CMOS is that it is possible for a fault
to convert a combinational circuit into a sequential circuit.
• This is illustrated in the case of a 2-input NOR gate in which one of the
transistors is rendered ineffective (stuck open or stuck closed) in figure (b).
• This might be due to missing source, drain or gate connection.
• If the n-transistor whose gate is connected to A is stuck open, then the
function displayed by the gate will be

F = A'·B' + A·B'·Fn

where Fn is the previous state of the gate.

• Similarly, if the drain of the B n-transistor is missing, then the
function is

F = A'·B' + A'·B·Fn
• If either p-transistor is open, the node would be
arbitrarily charged (i.e., it might be high due to some
weird charging sequence) until one of the n-
transistors discharges the node.
• Thereafter it would remain at zero, bar charge leakage
effects. This problem has caused researchers to search
for new methods of test generation to detect such
behavior.
• It is also possible to have switches (transistors) exhibit
a "stuck open" or "stuck closed" state.
• Stuck closed states can be detected by observing the
static VDD current (IDD) while applying test vectors.
• Consider the gate fault shown in figure (c),
where a p-transistor in a 2-input NAND gate
is shorted.
• This could physically occur if stray metal
overlapped the source and drain connections
or if the source and drain diffusions shorted.
• If we apply test vector “11” to A and B input
and measure the static IDD current, we will
observe that it rises to some value determined
by the β ratios of the n and p transistors.
(a) Observability
The observability of a particular internal circuit node is the degree to
which one can observe that node at the outputs of an integrated circuit (i.e., the
pins).
• This measure is of importance when a designer/tester desires to measure the
output of a gate within a larger circuit to check that it operates correctly.
• Given the limited number of nodes that can be directly observed, it is the aim
of good chip designers to have easily observed gate outputs.
• Adoption of some basic design for test techniques can aid tremendously in this
respect.
• Ideally, one should be able to observe directly or with moderate indirection
(i.e., you may have to wait a few cycles) every gate output within an integrated
circuit.
• While at one time this aim was hindered by the expense of extra test circuitry
and a lack of design methodology, current processes and design practices
allow one to approach this ideal.
(b) Controllability
The controllability of an internal circuit node within a chip is a measure
of the ease of setting the node to a 1 or 0 state.
• This measure is of importance when assessing the degree of difficulty of
testing a particular signal within a circuit.
• An easily controllable node would be directly settable via an input pad. A
node with little controllability might require many hundreds or thousands of
cycles to get it to the right state.
• Often one finds it impossible to generate a test sequence to set a number of
poorly controllable nodes into the right state.
• It should be the aim of good chip designers to make all nodes easily
controllable.
• In common with observability, the adoption of some simple design for test
techniques can aid in this respect tremendously.
(c) Fault Coverage
A measure of the goodness of a test program is the amount of fault coverage it
achieves; that is, for the vectors applied, what percentage of the chip's internal
nodes were checked.
• Conceptually, the fault coverage is calculated as follows.
• Each circuit node is taken in sequence and held to 0 (S-A-0), and the circuit is
simulated, comparing the chip outputs with a known "good machine" - a circuit
with no nodes artificially set to 0 (or 1).
• When a discrepancy is detected between the "faulty machine" and the good
machine, the fault is marked as detected and the simulation is stopped.
• This is repeated with the node set to 1 (S-A-1); in turn, every node is stuck
at 1 and at 0, sequentially.
• The total number of nodes that, when set to 0 or 1, result in the detection of
the fault, divided by the total number of nodes in the circuit, is called the
percentage fault coverage.
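The coverage calculation described above can be sketched for a toy two-gate netlist (the netlist and node names are invented for illustration): each node is held at 0 and then at 1, the faulty machine's output is compared with the good machine's, and the detection ratio is reported.

```python
# Toy netlist: c = AND(a, b); d = OR(c, b); primary output is d.
GATES = [("c", "AND", ("a", "b")), ("d", "OR", ("c", "b"))]
OPS = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y}

def simulate(vector, fault=None):
    """Evaluate the netlist; fault = (node, stuck_value) forces that node."""
    nodes = dict(vector)
    if fault and fault[0] in nodes:
        nodes[fault[0]] = fault[1]          # fault on a primary input
    for out, op, ins in GATES:
        nodes[out] = OPS[op](nodes[ins[0]], nodes[ins[1]])
        if fault and fault[0] == out:
            nodes[out] = fault[1]           # fault on a gate output
    return nodes["d"]

def fault_coverage(test_vectors):
    """Percentage of stuck-at faults detected by the given vectors."""
    all_nodes = ["a", "b", "c", "d"]
    faults = [(n, v) for n in all_nodes for v in (0, 1)]
    detected = set()
    for vec in test_vectors:
        good = simulate(vec)                # good machine's response
        for f in faults:
            if simulate(vec, f) != good:    # faulty machine disagrees
                detected.add(f)
    return 100.0 * len(detected) / len(faults)

vectors = [{"a": 1, "b": 0}, {"a": 0, "b": 1}, {"a": 1, "b": 1}, {"a": 0, "b": 0}]
print(fault_coverage(vectors))
```

Notably, even the exhaustive vector set tops out below 100% here: faults such as c stuck-at-0 can never be observed at the output d, illustrating the redundant/untestable fault categories mentioned earlier.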
Fault Simulation
Fault simulation consists of simulating a circuit in the presence of
faults. Comparing the fault simulation results with those of the fault
free simulation of the same circuit simulated with the same applied
test, we can determine the faults detected by the test.
(a) Serial and Parallel Fault Simulation
Serial Fault Simulation
• Serial fault simulation is the simplest fault-simulation algorithm.
• We simulate two copies of the circuit; the first copy is the good circuit.
• We then pick a fault and insert it into the faulty circuit.
• In test terminology, the circuits are called machines, so the two
copies are a good machine and a faulty machine.
• We then repeat the process, simulating one faulty circuit at a time.
Serial simulation is slow and is impractical for large ICs.
Parallel Fault Simulation

Parallel fault simulation takes advantage of the multiple bits of the words in
computer memory. In the simplest case we need only one bit to represent
either a '1' or '0' for each node in the circuit.
• In a computer that uses a 32-bit memory word we can simulate a set of 32
copies of the circuit at the same time.
• One copy is the good circuit, and we insert different faults into the other
copies; we then perform each logic operation across all bits in the word
simultaneously.
• In this case, using one bit per node on a 32-bit machine, we would expect
parallel fault simulation to be about 32 times faster than serial simulation.
• The number of bits per node that we need in order to simulate
each circuit depends on the number of states in the logic system
we are using.
• Thus, if we use a four-state system with '1', '0', 'x' (unknown),
and 'z' (high impedance) states, we need two bits per node.
• Parallel fault simulation is faster than serial fault simulation but
not as fast as concurrent fault simulation.
• It is also difficult to include behavioral models using parallel
fault simulation.
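The word-parallel trick can be sketched as follows: each circuit node is stored as one integer whose bits hold that node's value in 32 copies of the circuit, with the good machine in bit 0 and one injected fault per remaining copy. The single-AND circuit is an assumption made for illustration.

```python
WIDTH = 32  # machine word: good circuit in bit 0, faulty copies in bits 1..31

def inject(value_word, fault_bit, stuck_value):
    """Force one copy's bit of a node word to its stuck value."""
    mask = 1 << fault_bit
    return (value_word | mask) if stuck_value else (value_word & ~mask)

# Circuit: out = AND(a, b). Replicate the test vector a=1, b=1 across all copies.
a = b = (1 << WIDTH) - 1      # every copy sees a=1 and b=1
a = inject(a, 1, 0)           # copy 1: a stuck-at-0
b = inject(b, 2, 0)           # copy 2: b stuck-at-0

out = a & b                   # one AND evaluates all 32 copies at once
good = out & 1                # bit 0 is the good machine's response
detected = [k for k in range(1, WIDTH) if ((out >> k) & 1) != good]
print(detected)               # copies 1 and 2 diverge from the good machine
```

Every logic operation in the netlist is performed once per word rather than once per copy, which is where the roughly 32x speed-up over serial simulation comes from.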
(b) Concurrent Fault Simulation
Concurrent fault simulation is the most widely used fault simulation
algorithm and takes advantage of the fact that a fault does not affect the
whole circuit.
• Thus we do not need to simulate the whole circuit for each new fault.
• In concurrent simulation we first completely simulate the good circuit.
• We then inject a fault and re-simulate a copy of only that part of the
circuit that behaves differently (this is the diverged circuit).
• For example, if the fault is in an inverter that is at a primary output, only
the inverter needs to be simulated; we can remove everything preceding
the inverter.
• Keeping track of exactly which parts of the circuit need to be diverged
for each new fault is complicated, but the savings in memory and
processing that result allows hundreds of faults to be simulated
concurrently.
• Concurrent simulation is split into several chunks; you can usually
control how many faults (usually around 100) are simulated in each
chunk or pass.
• Each pass thus consists of a series of test cycles. Every circuit has a
unique 'fault-activity signature' that governs the divergence that
occurs with different test vectors. Thus every circuit has a different
optimum setting for faults per pass.
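A much simplified sketch of the divergence idea: after one simulation of the good circuit, each fault re-evaluates only the gates in the faulted net's fan-out cone, leaving every other node at its good-machine value. The three-gate netlist here is invented for illustration.

```python
# Topologically ordered netlist: c = NOT(a); d = AND(c, b); e = NOT(b)
GATES = [("c", "NOT", ("a",)), ("d", "AND", ("c", "b")), ("e", "NOT", ("b",))]
OPS = {"NOT": lambda x: 1 - x, "AND": lambda x, y: x & y}

def fanout_cone(net):
    """Gates whose output could diverge when `net` is faulted."""
    affected, frontier = set(), {net}
    for out, _, ins in GATES:               # relies on topological order
        if any(i in frontier for i in ins):
            affected.add(out)
            frontier.add(out)
    return affected

def simulate_good(inputs):
    nodes = dict(inputs)
    for out, op, ins in GATES:
        nodes[out] = OPS[op](*(nodes[i] for i in ins))
    return nodes

def resimulate_diverged(good_nodes, fault):
    """Re-evaluate only the diverged part of the circuit for one fault."""
    net, stuck = fault
    nodes = dict(good_nodes)
    nodes[net] = stuck
    cone = fanout_cone(net)
    for out, op, ins in GATES:
        if out in cone:                     # all other nodes keep good values
            nodes[out] = OPS[op](*(nodes[i] for i in ins))
    return nodes

good = simulate_good({"a": 0, "b": 1})
faulty = resimulate_diverged(good, ("c", 0))   # inject c stuck-at-0
print(good["d"], faulty["d"])
```

Here the fault on c diverges only gate d; gate e is never re-simulated, which is the memory and processing saving that lets hundreds of faults run concurrently in a real tool.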
Delay Fault Testing
• Failures that occur in CMOS can leave the functionality of the circuit
untouched but affect its timing.
• For instance, consider the layout shown in
figure for a high-power NAND gate composed
of paralleled n and p-transistors.
• If the link illustrated was opened, the gate
would still function, but with increased pull-
down time.
• In addition, the fault now becomes
sequential because the detection of the fault
depends on the previous state of the gate and
the simulation clock speed.
Fault Sampling
• Another approach to fault analysis is known as fault sampling.
• This is used in circuits where it is impossible to fault every node in the circuit.
• Nodes are randomly selected and faulted. The resulting fault-detection rate
may be statistically inferred from the number of faults that are detected in
the fault set and the size of the set.
• As with all probabilistic methods it is important that the randomly selected
faults be unbiased.
• Although this approach does not yield a specific level of fault coverage, it will
determine whether the fault coverage exceeds a desired level.
• The level of confidence may be increased by increasing the number of
samples.
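The inference step above can be sketched as a simple sampling estimate: simulate a random, unbiased subset of faults and report the sample proportion, whose standard error shrinks as the sample grows. The fault universe size and the detection oracle here are toy assumptions.

```python
import random

def estimate_coverage(total_faults, detects_fault, sample_size, seed=0):
    """Estimate fault coverage from a random sample of faults.
    Returns (sample proportion, standard error of that proportion)."""
    rng = random.Random(seed)
    sample = rng.sample(range(total_faults), sample_size)
    detected = sum(1 for f in sample if detects_fault(f))
    p = detected / sample_size
    # Standard error of a sample proportion; more samples -> tighter bound.
    se = (p * (1 - p) / sample_size) ** 0.5
    return p, se

# Toy ground truth: suppose the test set actually detects 90% of all faults.
truly_detected = set(range(9000))             # faults 0..8999 of 10000
p, se = estimate_coverage(10000, lambda f: f in truly_detected, 500)
print(round(p, 3), round(se, 4))
```

As the text notes, this never proves a specific coverage figure, but p minus a few standard errors gives a statistical lower bound, and increasing the sample size raises the confidence in that bound.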
Test Generation
ATPG:
• Automatic Test Pattern Generation (ATPG) is a process used in
semiconductor electrical testing wherein the vectors or input patterns
required to check a device for faults are automatically generated by a
program.
• The vectors are sequentially applied to the device under test and the device
response to each set of inputs is compared with the expected response
from a good circuit.
• An 'error' in the response of the device means that it is faulty.
• The effectiveness of ATPG is measured primarily by the fault coverage
achieved and the cost of performing the test.
• A cycle of ATPG can generally be divided into two distinct phases:
1. Creation of the test
2. Application of the test
• During the creation of the test, appropriate models for the device circuit are
developed at gate or transistor level in such a way that the output responses of a
faulty device for a given set of inputs will differ from those of a good device.
• This generation of test is basically a mathematical process that can be done in
three ways.
1. By 'manual' methods
2. By 'algorithmic' methods
3. By 'pseudo-random' methods.
• When creating a test, the goal should be to make it as efficient in memory space
and time requirements as possible. As such, the ATPG process must
generate the minimum or near-minimum set of vectors needed to detect all the
important faults of the device.
The main considerations for test creation are:
1. The time needed to construct the minimal test set
2. The size of the pattern generator or hardware/software system needed to
properly simulate the DUT
3. The size of the testing process itself
4. The time needed to load the test patterns
5. The external equipment required (if any)

Examples of ATPG algorithm methods that are in wide use include the
D-Algorithm, the PODEM, and the FAN.
• Pattern generation through any of these algorithmic methods require
what is known as 'path sensitization'.
• "Path sensitization" refers to finding a path in the circuit that will allow
an error to show up at an observable output of a device if it is faulty.
• For example, in a two-input AND gate, sensitizing the path of one input
requires the other input to be set to '1'.
• Most algorithmic generation methods also refer to the notation D and D’.
• These notations were introduced by the D algorithm and have been
adopted by other algorithms since then.
• D simply stands for a ‘1' in a good circuit and a '0' in a faulty one.
• On the other hand D', which is the opposite of D, stands for a '0' in a
good circuit and a '1' in a faulty circuit.
• Thus, propagating a D or D’ from the inputs to outputs simply means applying a
set of inputs to a device to make its output exhibit an 'error' if there is a fault
within the circuit.
• Algorithmic pattern generation basically consists of the following steps:
1. Fault selection: choosing a fault that needs to be detected
2. Initial assignment: finding an input pattern that sets up a D or D' at
the output of the faulty gate
3. Forward drive: propagating the D or D' to an observable output using
the shortest path possible
4. Justification: assigning values to other unassigned inputs in order
to justify the assignments made during the forward drive. If an inconsistency
arises during justification, backtracking (back propagation) is performed,
i.e., the forward drive is done again using an alternate path. This recursive
cycle is performed until the right set of input patterns needed to 'sensitize'
a path and propagate the fault to an observable output is determined.
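The D notation lends itself to a compact sketch: if each five-valued signal is represented as a (good-machine, faulty-machine) pair, then forward drive is just ordinary gate evaluation applied to both components at once. The two-gate circuit below is an assumption made for illustration.

```python
# Five-valued D calculus via (good, faulty) value pairs:
# 0 = (0,0), 1 = (1,1), D = (1,0), D' = (0,1)
D, Dp = (1, 0), (0, 1)
ONE, ZERO = (1, 1), (0, 0)

def AND(x, y):
    return (x[0] & y[0], x[1] & y[1])   # evaluate good and faulty machines together

def OR(x, y):
    return (x[0] | y[0], x[1] | y[1])

# Test for input `a` of g = AND(a, b) stuck-at-0, with output f = OR(g, c):
# set a = 1 so the fault site carries D, b = 1 to sensitize the AND,
# and c = 0 to propagate D through the OR to the observable output.
g = AND(D, ONE)     # initial assignment: D at the faulty gate's output
f = OR(g, ZERO)     # forward drive through the OR
print(f == D)       # True: good machine outputs 1, faulty machine outputs 0
```

Justification in this example is trivial (b = 1 and c = 0 are directly settable); in a real circuit those side values must themselves be driven from primary inputs, which is where backtracking comes in.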
• A test algorithm is complete if it is able to propagate a failure to an observable
output if a fault indeed exists.
• The D algorithm entails finding all sets of inputs to the circuit that will bring out
a fault within the circuit.
• A 'Primitive D Cube of Failure', or PDCF, is a set of inputs that sensitizes a path
for a particular fault within a circuit.
• The 'Propagation D Cube', or PDC, is a set of inputs that propagates a D from
the inputs to the output.
• The D-algorithm picks all possible PDCF's for the circuit under test and applies
them to the circuit with their corresponding PDC's to propagate various faults
to the output.
• While the PDCF's and PDC's are being applied, the ‘implied’ values for other
circuit nodes are tested for consistency, rejecting sets of inputs that cause a
circuit violation.
• The application and testing of various PDCF's and PDC's for a circuit is done
repeatedly and recursively, until the minimal set of input patterns necessary to
test the circuit for the specified faults is determined.
Design for Testability
• Design for testability specifies design techniques that make the task of
subsequent testing simple.
• There is no single methodology that solves all VLSI system testing
problems.
• Also, there is no single DFT technique that is useful for all sorts of circuits.
• DFT techniques for digital circuits are of two categories:
Ad hoc techniques
Structured techniques.
• Structured testing is further divided into four categories:
❖ Scan ❖ Partial Scan ❖ Built-in self-test ❖ Boundary scan
Ad-hoc testing
• Ad-hoc testing is a commonly used term for testing performed without
planning and documentation, though the term can also be applied to early
scientific experimental studies.
• The tests are intended to be run only once, unless a defect is discovered.
• Ad hoc testing is the least formal test method. As such, it has been
criticized because it is not structured, and hence defects found using this
method may be harder to reproduce, since there are no written test cases.
• However, the strength of ad hoc testing is that important defects can be
found quickly.
• It is performed by improvisation: the tester seeks to find bugs by any
means that seem appropriate.
• Ad hoc testing can be seen as a light version of error guessing, which itself
is a light version of exploratory testing.
STRUCTURAL TESTING
• Structural testing is the testing of the structure of the system or component.
• Structural testing is often referred to as 'white box', 'glass box', or 'clear-box'
testing because in structural testing we are interested in what is happening
inside the system/application.
• In Structural testing the testers are required to have the knowledge of the internal
implementations of the code. Here the testers require knowledge of how the
software is implemented & how it works.
• During structural testing the tester concentrates on how the software does
what it does. For example, a structural technique wants to know how loops in
the software are working.
• Different test cases may be derived to exercise the loop once, twice, and many
times. This may be done regardless of the functionality of the software.
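A minimal sketch of such loop-oriented white-box cases: the function under test is invented for illustration, and the cases exercise its loop zero, one, two, and many times regardless of what the function is for.

```python
def sum_list(xs):
    """Sum a list with an explicit loop, as a target for loop-based tests."""
    total = 0
    for x in xs:
        total += x
    return total

# White-box cases exercising the loop zero, one, two, and many times.
assert sum_list([]) == 0                    # loop body never runs
assert sum_list([5]) == 5                   # exactly one iteration
assert sum_list([1, 2]) == 3                # two iterations
assert sum_list(list(range(100))) == 4950   # many iterations
```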
Built In Self Test (BIST)
