
semiconductor test

Cell-aware ATPG test methods improve test quality

Using cell-aware automatic test-pattern generation and simulation, you can find defects that other methods might miss.

By Ron Press, Mentor Graphics

Traditional IC pattern-generation methods focus on detecting defects at gate terminals or at interconnects. Unfortunately, a significant population of defects may occur within an IC's gates, or cells. Many internal defects in cells can be detected with traditional test methods, but some require a unique set of stimulus to excite the defect. A cell-aware ATPG (automatic test-pattern generation) method characterizes the library cell's physical design to produce a set of UDFMs (user-defined fault models). Thus, the method uses the actual cell-internal physical characteristics to define and target faults.

In addition to explaining how cell-aware ATPG works, I'll also use published simulation results from two major IC companies to highlight the test method. Production silicon test results using cell-aware UDFMs have shown notable improvement in DPM (defects per million) beyond what stuck-at and transition patterns detect. As a result, cell-aware UDFM is garnering attention from manufacturers in the semiconductor industry.

A brief history of IC test
"Defects" are the actual problems or production issues that cause an IC not to function properly. "Faults" are models that try to represent defects with simple properties that correlate to defects and are easy for ATPG tools to use.

When ICs were first developed, their functions were fairly simple, and tests simply checked the IC's functional operation. An engineer would design a "functional test" that checked whether the IC functioned as intended.

As IC technology advanced, it became impractical for an engineer to manually create a thorough functional test for the device. Increasing sequential logic, such as flops and latches, within ICs further complicated functional test. It could take many tens of thousands of clock cycles to propagate data at the IC's input through the sequential logic, so it became almost impossible to create a functional test that could execute in a reasonable time and provide a high level of detection for all possible defects.

The solution was to implement scan DFT (design-for-test) structures within the device. Scan logic essentially turns sequential logic into shift registers, which are control-and-observe points that a tester can load and observe. The remaining test problem is the combinational logic between the sequential logic. Thus, the entire design is turned into many sets of small combinational logic surrounded by virtual control-and-observe points. This situation lends itself to automation using scan ATPG tools. Scan testing is considered a "structural test," because the logic-gate segments are tested without specific tests of the intended function of the IC.

ATPG circumvents the need for detailed knowledge of the IC design. The scan structure also produces very high defect detection. Standard scan testing is based on a stuck-at fault model that considers a potential stuck-at-0 and stuck-at-1 fault at every gate terminal. The stuck-at fault model verifies that gate terminals are not "stuck" at logic-0 or logic-1 states.
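The stuck-at model lends itself to a very simple illustration. The sketch below is a toy Python fault simulation (not how a production ATPG tool is implemented): it enumerates the stuck-at-0 and stuck-at-1 faults at the terminals of a 2-input AND gate and shows that three patterns detect all six of them.

```python
# Toy stuck-at fault simulation for a 2-input AND gate (illustrative only).
def and_gate(a, b, stuck=None):
    # stuck: optional (terminal, value) pair forcing a terminal to 0 or 1
    if stuck:
        term, v = stuck
        if term == "a":
            a = v
        if term == "b":
            b = v
    out = a & b
    if stuck and stuck[0] == "z":
        out = stuck[1]          # output terminal stuck at v
    return out

# All stuck-at-0 and stuck-at-1 faults at the three gate terminals
faults = [(t, v) for t in ("a", "b", "z") for v in (0, 1)]
# A classic 100%-stuck-at pattern set for a 2-input AND gate
patterns = [(0, 1), (1, 0), (1, 1)]

# A fault is detected when the faulty response differs from the good one
detected = {f for f in faults for p in patterns
            if and_gate(*p, stuck=f) != and_gate(*p)}
print(len(detected))  # → 6
```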

Test & Measurement World | JUNE 2012 | www.tmworld.com


Somewhere between the times when 130-nm and 90-nm process technologies were developed, new timing-related defects occurred that demanded special at-speed tests. One type of at-speed scan test, called transition patterns, was used to target and detect the timing-related defects. Like stuck-at scan tests, transition tests use scan cells as control-and-observe points. After a transition test loads the scan cells, however, it puts the IC in functional mode and applies two or more at-speed clock pulses.

Stuck-at and transition scan tests, therefore, are the foundation of most production test methods; they can be automated within ATPG tools, and they can achieve high test coverage because of their structural nature. In recent years, newer scan tests have been introduced to target defects that escape stuck-at and transition tests. Examples include timing-aware ATPG, deterministic bridge, multiple-detect, and hold-time methods (Ref. 1). Each of these methods provides some amount of improved defect detection.

All of these scan test methods use fault models that define fault sites at the IC gate boundary. Stuck-at fault models, however, also detect the majority of production defects, such as bridges, opens, and even many defects within the gates. With more recent fabrication technologies, the population of defects occurring within cells is significant, perhaps amounting to roughly 50% of all defects (Ref. 2). Thus, it is important to ensure that you properly define fault models that target these "cell-internal" defects.

Cell-aware ATPG
To target the cell-internal defects, test engineers can now use the physical design of gate cells to drive ATPG. This involves performing a library characterization to determine where defects can occur and how they would affect the operation of each cell. The result of the characterization is a UDFM that describes all the cell inputs and responses necessary to detect the characterized defects. A cell-aware UDFM file would be
produced for a physical library for a particular technology. Then, any design using that technology library just needs the corresponding cell-aware UDFM file for cell-aware ATPG.

UDFM is a term that describes an ATPG tool capability that lets you custom-define fault models. You might want to use a UDFM for ATPG if there is a particular type of pattern that you want to apply to a library cell, to an instance, or between instances. The definition of a UDFM is similar to stuck-at and transition patterns: you state the values at the cell or instance inputs and indicate the expected response for any number of desired cycles. Once the ATPG tool loads the UDFM file, it can target the custom-defined faults. UDFM provides the framework for many types of custom fault types.
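Conceptually, each UDFM entry pairs required cell-input values with the expected defect-free response. The Python data structure below illustrates only that idea; it is not the actual UDFM file syntax, and the cell name, fault name, and field names are invented for illustration.

```python
# Hypothetical illustration of a UDFM-style fault entry as plain data.
# NOT the actual UDFM file format; all names here are invented.
udfm_entry = {
    "cell": "MUX3_X1",            # library cell containing the fault
    "fault": "bridge_R4_S1_D2",   # a characterized cell-internal defect
    "tests": [
        # Each test gives required values at the cell inputs and the
        # expected defect-free response at the output.
        {"inputs": {"S0": 0, "S1": 0, "D0": 0, "D2": 1}, "expect": {"Z": 0}},
        {"inputs": {"S0": 1, "S1": 0, "D1": 0, "D2": 1}, "expect": {"Z": 0}},
    ],
}

def stimulus_detects(observed_z, test):
    # The tester flags the defect when the observed output differs
    # from the expected defect-free response.
    return observed_z != test["expect"]["Z"]
```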

FIGURE 1. A cell-aware characterization generates a user-defined fault model for an ATPG flow. (Flow: cell layout GDS2 → layout extraction → SPICE parasitics netlist → analog fault simulation → defect matrix → cell-aware fault-model generation → UDFM; the UDFM then feeds cell-aware ATPG alongside the normal RTL-synthesis-to-test-system design flow.)

Cell-aware characterization flow
The first step in creating cell-aware tests is to characterize the cells within a technology library. First, you must perform extraction on the physical cell layout library. Then, you can use the parasitic capacitances and resistances to locate potential sites for bridges and opens. (Capacitors represent potential bridges, and resistors represent potential opens.) Next, you define the type of defects you want to model. For example, a basic hard short can be modeled by a 1-Ω resistive bridge at the capacitor locations. Studies have shown value in modeling several resistive bridge values (Refs. 3 and 4).

With the definitions in place, you can perform an analog fault simulation with the desired defects, such as a 1-Ω bridge. The simulation is performed on all possible input combinations with one defect site at a time. The results are compared to the defect-free responses. If any of the responses differ from those for the defect-free case, then that sequence is said to detect the particular defect. Once you perform the analog simulation for all cell-input sequences, for all defects being modeled, and for all cells in the library, you will have a defect matrix. Finally, you can use the defect matrix to generate the actual cell-aware UDFM file used by ATPG. Figure 1 shows the cell-aware characterization and ATPG flow.

Table 1. Logic table for 3:1 mux.
S0 S1 D0 D1 D2 | Z
 0  0  0  –  – | 0
 0  0  1  –  – | 1
 1  0  –  0  – | 0
 1  0  –  1  – | 1
 –  1  –  –  0 | 0
 –  1  –  –  1 | 1

Table 2. Cell-aware values necessary to detect a bridge at R4 (see Figure 2).
S0 S1 D0 D1 D2 | Z
 0  0  0  –  1 | 0
 1  0  –  0  1 | 0
 0  1  1  –  0 | 0
 1  1  –  1  0 | 0

Cell-aware ATPG makes a difference
Why is cell-aware ATPG necessary for finding defects that stuck-at and transition patterns presumably miss, if production tests based on stuck-at and transition patterns have been effective for many years? The need for cell-aware ATPG arises from the increased use of complex cells and the growing distribution of defects occurring within those cells.

Many library cells won't see any advantage from cell-aware ATPG compared to normal stuck-at or transition ATPG. For example, a buffer, an AND gate, or an OR gate needs no special inputs to detect cell-internal defects. But consider a 3:1 mux gate. Table 1 shows the logic table for the mux. These are the values that are needed to detect all stuck-at faults (stuck-at-1 and stuck-at-0 at each cell boundary pin).


Figure 2 shows the logical view that the ATPG uses along with the physical layout of the cell. In this layout, a bridge at location R4 could cause a short from S1 to D2. If a value on D2 dominates over S1 in the presence of a bridge, then the logic-test patterns might not detect the bridge at R4.

Although the pattern set in Table 1 will achieve 100% stuck-at coverage for the mux, it doesn't ensure that R4 or several other cell-internal bridges will be detected. In this case, the patterns in Table 2 would be needed to detect the R4 bridge. Other complex gates would have similar situations.

FIGURE 2. A 3:1 mux logical view (top) and layout (bottom) show a potential bridge defect. (The layout marks parasitic capacitances c1 through c5 and resistances R3 and R4; the R4 bridge shorts S1 to D2.)

Industrial results
Several IC companies have used cell-aware ATPG to improve defect detection. NXP Semiconductors used a cell-aware UDFM tool to perform cell-library characterization and reported the expected cell-internal detection improvement (Refs. 3 and 4). Published results can help you determine whether to model hard bridges, such as a 1-Ω resistance, or a variety of bridge values (Ref. 4). Cell-aware fault simulations on the 1-Ω bridge case showed an average of 1.2% cell-internal fault-coverage improvement compared to stuck-at tests for 10 designs.

Bridges are the most popular type of modeled defect, but there are many types of defects that you can model using the cell-aware characterization. Another defect that some users are modeling is the internal open (Ref. 4).

AMD published test results based on applying cell-aware patterns to 600,000 ICs fabricated on a 45-nm process (Ref. 5). The results showed that cell-aware patterns detected defects in 32 devices that passed stuck-at and transition patterns. That correlates to a 55 DPM improvement, which is significant for many production environments. More significant DPM improvements have been observed on a 32-nm process IC using slow-speed and at-speed cell-aware patterns.

Choosing the best tests
With cell-aware and other types of fault models and test types, you may have trouble deciding which, and how much, of each to use in production. Most IC test sets have stuck-at and transition patterns as a baseline. There are a few methods for choosing an effective pattern set. Effective and efficient production results require good data about the defect distribution and the effectiveness of tests. Often, such data is not clear, because defect distributions vary with technology nodes, operational frequencies, slack margins, and design-for-manufacturability rules.

Here are two methods for determining an effective test set. Each requires some investment to apply the tests and determine their value:

• Using field returns. Field returns are devices that passed production tests and were shipped as functional, but that later failed. If you have a population of such devices, you can use them to find the value of additional tests. As a first step, retest the parts to ensure they didn't break after shipment. You can apply a full set of tests for any type of potentially valuable test pattern. Then, use the percent detection and pattern-set size to equate a relative value of the test type. For example, if you have 300 field returns from a production run of 100,000 parts, then detecting 50 devices with cell-aware ATPG would imply that you could improve DPM by 500 DPM if cell-aware ATPG were part of production test. You can use a


similar approach if you have a thorough system-level test that finds defective parts that passed production test.
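The field-return arithmetic in the example above works out as follows (the numbers come from the text; the variable names are illustrative):

```python
# Worked version of the field-return DPM estimate (illustrative).
shipped = 100_000          # production volume
field_returns = 300        # devices that failed in the field
cell_aware_detects = 50    # field returns caught by cell-aware patterns

# Each detected field return is one escape that production test would
# have screened, so the DPM gain is detects per million parts shipped.
dpm_improvement = cell_aware_detects / shipped * 1_000_000
print(dpm_improvement)  # → 500.0
```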
• Adaptive tests for production. Another approach is to add a set of additional patterns to the existing stuck-at and transition pattern sets. Often, there is not much spare room to apply new pattern sets in production, but the additional patterns need not be complete sets; you can add, say, 1000 patterns for each pattern type that you are interested in. After some volume of production test, you can observe the number of unique detects from each of your additional patterns. You can use these results to grow the pattern types that are detecting more defects and shrink the less-effective pattern sets.
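As a hypothetical sketch of that reallocation step, the function below redistributes a fixed pattern budget in proportion to the unique detects each pattern type scored in production. The budget, type names, and counts are invented for illustration; real adaptive-test flows would be tool- and fab-specific.

```python
# Hypothetical sketch of the adaptive-test idea: grow the pattern types
# that score unique detects in production, shrink the rest.
def reallocate(budget, unique_detects):
    # unique_detects: {pattern_type: unique defect detections observed}
    total = sum(unique_detects.values())
    if total == 0:
        # No data yet: split the budget evenly
        return {t: budget // len(unique_detects) for t in unique_detects}
    # Allocate the pattern budget proportionally to detection value
    return {t: round(budget * d / total) for t, d in unique_detects.items()}

# e.g. after 1000-pattern samples of each type ran for a while:
print(reallocate(3000, {"timing_aware": 12, "cell_aware": 45, "bridge": 3}))
# → {'timing_aware': 600, 'cell_aware': 2250, 'bridge': 150}
```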
Data from these tests gives you some insight into the defect distributions, based on the DPM detection of the tests and their calculated test coverage. From that, you can extrapolate the value of a full pattern set or the detection value of using a smaller pattern set.
The test-pattern types that have shown the most promise beyond stuck-at and transition patterns are timing-aware and cell-aware. Gate-exhaustive tests apply every combination of inputs to each cell; they have good detection but produce unreasonably large pattern sets. Cell-aware is a subset of gate-exhaustive patterns that includes only the stimulus combinations that can cause the modeled defects to be detected.

The cell-aware ATPG flow allows test engineers to target subtle short and open defects internal to standard cells that are not adequately detected with the standard stuck-at or transition fault models. Cell-aware testing has been proved to increase the quality of manufacturing test by providing higher defect coverage and lower DPM. T&MW

REFERENCES
1. Lin, X., et al., "Timing-Aware ATPG for High Quality At-speed Testing of Small Delay Defects," Asian Test Symposium, 2006. www.ieeexplore.ieee.org.
2. Sharma, M., et al., "Faster defect localization in nanometer technology based on defective cell diagnosis," International Test Conference, 2007. www.ieeexplore.ieee.org.
3. Hapke, F., et al., "Defect-oriented cell-aware ATPG and fault simulation for industrial cell libraries and designs," International Test Conference, 2009. www.ieeexplore.ieee.org.
4. Hapke, F., et al., "Defect-oriented cell-internal testing," International Test Conference, 2010. www.ieeexplore.ieee.org.
5. Hapke, F., et al., "Cell-aware analysis for small-delay effects and production test results from different fault models," International Test Conference, 2011. www.ieeexplore.ieee.org.

Ron Press is the technical marketing manager of the Design for Test products at Mentor Graphics. The 25-year veteran of the test industry has presented seminars on DFT and test throughout the world. Press co-authored a patent on clock switching and reduced-pin-count testing and received the Raytheon Co. inventor's award. Press is a member of the International Test Conference Steering Committee, and he earned his BSEE from the University of Massachusetts. ron_press@mentor.com.
